Democratic reinforcement: A principle for brain function

Dimitris Stassinopoulos and Per Bak
Phys. Rev. E 51, 5033 – Published 1 May 1995

Abstract

We introduce a simple "toy" brain model. The model consists of a set of randomly connected, or layered, integrate-and-fire neurons. Inputs to and outputs from the environment are connected randomly to subsets of neurons. The connections between firing neurons are strengthened or weakened according to whether the action was successful or not. Unlike previous reinforcement learning algorithms, the feedback from the environment is democratic: it affects all neurons in the same way, irrespective of their position in the network and independent of the output signal. Thus no unrealistic backpropagation or other external computation is needed. This is accomplished by a global threshold regulation which allows the system to self-organize into a highly susceptible, possibly "critical," state with low activity and sparse connections between firing neurons. The low activity permits memory in quiescent areas to be conserved, since only firing neurons are modified when new information is being taught.
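The learning scheme lends itself to a compact simulation. The sketch below is a loose Python illustration, not the authors' actual model: the network size, learning rate, propagation depth, input clamping, success criterion, and threshold-regulation constants are all assumed for concreteness. It shows the two ingredients the abstract emphasizes: a single global reward broadcast identically to every synapse between firing neurons, and a global threshold nudged to keep activity sparse.

# A minimal sketch of "democratic" reinforcement, assuming (not taken from the
# paper): N = 64 binary integrate-and-fire units, 4 input and 4 output units,
# an arbitrary input->output target map, and hand-picked constants throughout.
import numpy as np

rng = np.random.default_rng(0)

N = 64                     # number of neurons (assumed size)
eta = 0.05                 # learning rate (assumed value)
T = 1.0                    # global firing threshold, regulated below
target_rate = 0.1          # desired (low) fraction of active neurons

w = rng.uniform(0.0, 0.5, size=(N, N))   # random excitatory connections
np.fill_diagonal(w, 0.0)

perm = rng.permutation(N)
inputs, outputs = perm[:4], perm[4:8]               # disjoint input/output subsets
task = dict(zip(inputs, rng.permutation(outputs)))  # arbitrary input->output map

def step(active, T):
    """One parallel integrate-and-fire update: a neuron fires when its
    summed input from currently firing neurons exceeds the threshold."""
    return (w @ active > T).astype(float)

for trial in range(2000):
    stim = rng.choice(inputs)        # environment activates one input neuron
    active = np.zeros(N)
    active[stim] = 1.0
    fired = active.copy()
    for _ in range(5):               # let activity propagate a few steps
        active = step(active, T)
        active[stim] = 1.0           # keep the input clamped on
        fired = np.maximum(fired, active)

    # Democratic feedback: one global scalar, the same for every synapse,
    # regardless of a neuron's position in the network.
    others = list(set(outputs) - {task[stim]})
    success = fired[task[stim]] > 0 and fired[others].sum() == 0
    r = 1.0 if success else -1.0

    # Strengthen or weaken only connections between neurons that fired;
    # synapses in quiescent regions are untouched, preserving old memories.
    mask = np.outer(fired, fired)
    np.fill_diagonal(mask, 0.0)
    np.clip(w + eta * r * mask, 0.0, 1.0, out=w)

    # Global threshold regulation: nudge T so activity stays sparse, keeping
    # the network in a low-activity, highly susceptible regime.
    T += 0.01 * (fired.mean() - target_rate)

Because the update touches only synapses between neurons that fired, quiescent regions of the network, and whatever they have stored, are left alone; this is the memory-conservation property noted in the abstract.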

  • Received 10 November 1994

DOI: https://doi.org/10.1103/PhysRevE.51.5033

©1995 American Physical Society

Authors & Affiliations

Dimitris Stassinopoulos and Per Bak

  • Brookhaven National Laboratory, Upton, New York 11973

Issue

Vol. 51, Iss. 5 — May 1995
