Natural Gradient Descent for On-Line Learning

Magnus Rattray, David Saad, and Shun-ichi Amari
Phys. Rev. Lett. 81, 5461 – Published 14 December 1998

Abstract

Natural gradient descent is an on-line variable-metric optimization algorithm which utilizes an underlying Riemannian parameter space. We analyze the dynamics of natural gradient descent beyond the asymptotic regime by employing an exact statistical mechanics description of learning in two-layer feed-forward neural networks. For a realizable learning scenario we find significant improvements over standard gradient descent for both the transient and asymptotic stages of learning, with a slower power law increase in learning time as task complexity grows.
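
For readers unfamiliar with the method, the sketch below gives the schematic form of the on-line natural gradient update studied here: the Euclidean gradient of the per-example error is premultiplied by the inverse of the Fisher information metric on parameter space. The notation (weights $\mathbf{w}$, learning rate $\eta$, metric $G$, per-example error $e$) is illustrative and not taken from the abstract itself.

  $G_{ij}(\mathbf{w}) = \mathbb{E}\!\left[ \dfrac{\partial \log p(y \mid \mathbf{x}, \mathbf{w})}{\partial w_i}\, \dfrac{\partial \log p(y \mid \mathbf{x}, \mathbf{w})}{\partial w_j} \right], \qquad \mathbf{w}_{t+1} = \mathbf{w}_t - \eta\, G^{-1}(\mathbf{w}_t)\, \nabla_{\mathbf{w}}\, e(\mathbf{x}_t, y_t; \mathbf{w}_t).$

Setting $G$ to the identity recovers standard on-line gradient descent, the baseline against which the transient and asymptotic performance of natural gradient descent is compared.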

  • Received 15 May 1998

DOI: https://doi.org/10.1103/PhysRevLett.81.5461

©1998 American Physical Society

Authors & Affiliations

Magnus Rattray

  • Computer Science Department, University of Manchester, Manchester M13 9PL, United Kingdom

David Saad

  • Neural Computing Research Group, Aston University, Birmingham B4 7ET, United Kingdom

Shun-ichi Amari

  • Laboratory for Information Synthesis, RIKEN Brain Science Institute, Saitama, Japan

Issue

Vol. 81, Iss. 24 — 14 December 1998
