Finite size scaling of the Bayesian perceptron

Arnaud Buhot, Juan-Manuel Torres Moreno, and Mirta B. Gordon
Phys. Rev. E 55, 7434 – Published 1 June 1997

Abstract

We study numerically the properties of the Bayesian perceptron through a gradient descent on the optimal cost function. The theoretical distribution of stabilities is deduced. It predicts that the optimal generalizer lies close to the boundary of the space of (error-free) solutions. The numerical simulations are in good agreement with the theoretical distribution. The extrapolation of the generalization error to infinite input space size agrees with the theoretical results. Finite size corrections are negative and exhibit two different scaling regimes, depending on the training set size. The variance of the generalization error vanishes for N→∞, confirming the property of self-averaging.
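The kind of numerical experiment described above can be sketched as follows. This is a minimal illustration under standard assumptions (a teacher-student perceptron with isotropic random inputs, a generic smooth cost of the stabilities standing in for the optimal Bayesian cost, and a 1/√N finite-size correction for the extrapolation); it is not the authors' exact procedure or cost function.

```python
# Sketch: gradient descent on a smooth cost of the stabilities for a
# teacher-student perceptron, then extrapolation of the generalization
# error to infinite input dimension N.  The cost V(gamma) = exp(-gamma)
# is a hypothetical placeholder, not the optimal Bayesian cost.
import numpy as np

rng = np.random.default_rng(0)

def generalization_error(w, w_teacher):
    # For isotropic inputs, eps_g = arccos(overlap) / pi.
    r = w @ w_teacher / (np.linalg.norm(w) * np.linalg.norm(w_teacher))
    return np.arccos(np.clip(r, -1.0, 1.0)) / np.pi

def train(N, alpha, epochs=500, lr=0.05):
    P = int(alpha * N)                      # training set size
    w_teacher = rng.standard_normal(N)
    xi = rng.standard_normal((P, N))        # random input patterns
    tau = np.sign(xi @ w_teacher)           # teacher labels
    w = rng.standard_normal(N)              # student initial weights
    for _ in range(epochs):
        w_norm = np.linalg.norm(w)
        gamma = tau * (xi @ w) / w_norm     # stabilities of the examples
        dV = -np.exp(-gamma)                # V'(gamma) for the placeholder cost
        # d gamma_mu / d w = tau_mu xi_mu / |w| - gamma_mu w / |w|^2
        dgamma_dw = tau[:, None] * xi / w_norm - np.outer(gamma, w) / w_norm**2
        w -= lr * (dV @ dgamma_dw) / P      # gradient-descent step on the cost
    return generalization_error(w, w_teacher)

# Finite-size extrapolation: fit eps_g(N) = eps_inf + c / sqrt(N) at fixed alpha.
alpha = 2.0
sizes = [50, 100, 200, 400]
eps = np.array([np.mean([train(N, alpha) for _ in range(20)]) for N in sizes])
A = np.vstack([np.ones(len(sizes)), np.asarray(sizes, float) ** -0.5]).T
eps_inf, c = np.linalg.lstsq(A, eps, rcond=None)[0]
print(f"alpha={alpha}: eps_g(N->inf) ~ {eps_inf:.3f}")
```

Averaging each size over several samples and fitting the N dependence mirrors the extrapolation step; the assumed 1/√N correction is only one of the possible scaling forms, whereas the paper reports two distinct scaling regimes depending on the training set size.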

  • Received 12 February 1997

DOI:https://doi.org/10.1103/PhysRevE.55.7434

©1997 American Physical Society

Authors & Affiliations

Arnaud Buhot, Juan-Manuel Torres Moreno, and Mirta B. Gordon

  • Département de Recherche Fondamentale sur la Matière Condensée, CEA/Grenoble, 17 rue des Martyrs,

Issue

Vol. 55, Iss. 6 — June 1997
