Gradient Learning in Spiking Neural Networks by Dynamic Perturbation of Conductances

Ila R. Fiete and H. Sebastian Seung
Phys. Rev. Lett. 97, 048104 – Published 28 July 2006

Abstract

We present a method of estimating the gradient of an objective function with respect to the synaptic weights of a spiking neural network. The method works by measuring the fluctuations in the objective function in response to dynamic perturbation of the membrane conductances of the neurons. It is compatible with recurrent networks of conductance-based model neurons with dynamic synapses. The method can be interpreted as a biologically plausible synaptic learning rule, if the dynamic perturbations are generated by a special class of “empiric” synapses driven by random spike trains from an external source.
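The core idea in the abstract — estimating a gradient by correlating injected random perturbations with the resulting fluctuations in an objective function — can be illustrated in a toy setting. The following Python sketch is purely illustrative: it uses a scalar weight and an invented quadratic objective, not the paper's conductance-based spiking network or its specific learning rule.

```python
import random

# Toy objective R(w), maximized at w = 3.0. This stands in for the
# performance of a network as a function of one synaptic weight; the
# function itself is an assumption made for this example.
def objective(w):
    return -(w - 3.0) ** 2

def estimate_gradient(w, sigma=0.1, trials=5000, seed=0):
    """Estimate dR/dw by perturbation: draw noise xi ~ N(0, sigma^2),
    measure the fluctuation R(w + xi) - R(w), and correlate it with xi.
    The average  <xi * (R(w+xi) - R(w))> / sigma^2  converges to the
    gradient for small sigma."""
    rng = random.Random(seed)
    base = objective(w)
    acc = 0.0
    for _ in range(trials):
        xi = rng.gauss(0.0, sigma)
        acc += xi * (objective(w + xi) - base)
    return acc / (trials * sigma ** 2)

# At w = 0 the true gradient is -2*(0 - 3) = 6; the estimate is close.
g = estimate_gradient(0.0)
```

In the paper's setting the perturbations are delivered dynamically to membrane conductances rather than to a weight directly, but the same correlation between perturbation and objective fluctuation drives the estimate.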

  • Received 19 January 2006

DOI:https://doi.org/10.1103/PhysRevLett.97.048104

©2006 American Physical Society

Authors & Affiliations

Ila R. Fiete1 and H. Sebastian Seung2

  • 1Kavli Institute for Theoretical Physics, University of California, Santa Barbara, California 93106, USA
  • 2Howard Hughes Medical Institute and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA

Article Text (Subscription Required)


References (Subscription Required)

Issue

Vol. 97, Iss. 4 — 28 July 2006
