Unveiling the Structure of Wide Flat Minima in Neural Networks

Carlo Baldassi, Clarissa Lauditi, Enrico M. Malatesta, Gabriele Perugini, and Riccardo Zecchina
Phys. Rev. Lett. 127, 278301 – Published 29 December 2021

Abstract

The success of deep learning has revealed the application potential of neural networks across the sciences and opened up fundamental theoretical problems. In particular, the fact that learning algorithms based on simple variants of gradient methods are able to find near-optimal minima of highly nonconvex loss functions is an unexpected feature of neural networks. Moreover, such algorithms are able to fit the data even in the presence of noise, and yet they have excellent predictive capabilities. Several empirical results have shown a reproducible correlation between the so-called flatness of the minima achieved by the algorithms and the generalization performance. At the same time, statistical physics results have shown that in nonconvex networks a multitude of narrow minima may coexist with a much smaller number of wide flat minima, which generalize well. Here, we show that wide flat minima arise as complex extensive structures, from the coalescence of minima around “high-margin” (i.e., locally robust) configurations. Despite being exponentially rare compared to zero-margin ones, high-margin minima tend to concentrate in particular regions. These minima are in turn surrounded by other solutions of smaller and smaller margin, leading to dense regions of solutions over long distances. Our analysis also provides an alternative analytical method for estimating when flat minima appear and when algorithms begin to find solutions, as the number of model parameters varies.
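The two central notions in the abstract, the margin of a solution and the flatness of the region around it, can be made concrete with a toy example. The sketch below (a minimal illustration, not the authors' analytical method) builds a binary perceptron with a planted teacher, computes per-pattern margins, and uses the fraction of random single-weight flips that preserve all classifications as a crude proxy for local flatness. All function names and the flip-based proxy are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary perceptron: N binary weights, P random +-1 patterns.
# Labels are planted by a teacher w_star, so w_star is a solution by construction.
N, P = 101, 40
X = rng.choice([-1, 1], size=(P, N))
w_star = rng.choice([-1, 1], size=N)
y = np.sign(X @ w_star)  # N odd, so X @ w_star is never zero

def margins(w, X, y):
    """Per-pattern stabilities y_mu * (w . x_mu) / sqrt(N)."""
    return y * (X @ w) / np.sqrt(len(w))

def is_solution(w, X, y, kappa=0.0):
    """True if w classifies every pattern with margin at least kappa."""
    return bool(np.all(margins(w, X, y) >= kappa))

def local_flatness(w, X, y, n_flips=1, trials=200):
    """Fraction of random n_flips-bit perturbations of w that remain
    solutions: a rough proxy for the local entropy (flatness) around w."""
    hits = 0
    for _ in range(trials):
        w2 = w.copy()
        idx = rng.choice(len(w), size=n_flips, replace=False)
        w2[idx] *= -1
        hits += is_solution(w2, X, y)
    return hits / trials

print("teacher is a solution:", is_solution(w_star, X, y))
print(f"min margin: {margins(w_star, X, y).min():.3f}")
print(f"1-flip flatness proxy: {local_flatness(w_star, X, y):.2f}")
```

In this language, a "high-margin" configuration is one for which `is_solution(w, X, y, kappa)` holds for large `kappa`, and the paper's claim is that such configurations sit at the core of dense regions where many nearby perturbed weights are still solutions, i.e. where a proxy like `local_flatness` stays close to 1.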

  • Received 2 July 2021
  • Revised 6 December 2021
  • Accepted 8 December 2021

DOI:https://doi.org/10.1103/PhysRevLett.127.278301

© 2021 American Physical Society

Physics Subject Headings (PhySH)

Statistical Physics & Thermodynamics

Authors & Affiliations

Carlo Baldassi1, Clarissa Lauditi2, Enrico M. Malatesta1, Gabriele Perugini1, and Riccardo Zecchina1

  • 1Artificial Intelligence Lab, Bocconi University, 20136 Milano, Italy
  • 2Department of Applied Science and Technology, Politecnico di Torino, 10129 Torino, Italy

Article Text (Subscription Required)

Supplemental Material (Subscription Required)

References (Subscription Required)
Issue

Vol. 127, Iss. 27 — 31 December 2021
