Interacting neural networks

R. Metzler, W. Kinzel, and I. Kanter
Phys. Rev. E 62, 2555 – Published 1 August 2000

Abstract

Several scenarios of interacting neural networks which are trained either identically or competitively are solved analytically. In the case of identical training, each perceptron receives the output of its neighbor. The symmetry of the stationary state as well as the sensitivity to the training algorithm used are investigated. Two competitive perceptrons trained on mutually exclusive learning aims, and a perceptron trained on the opposite of its own output, are examined analytically. An ensemble of competitive perceptrons is used as a decision-making algorithm in a model of a closed market (the El Farol Bar problem, or Minority Game, in which a set of agents must repeatedly make a binary decision); each network is trained on the history of minority decisions. This ensemble of perceptrons relaxes to a stationary state whose performance can be better than random.
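
The Minority Game setup described in the abstract lends itself to a compact numerical illustration. The Python sketch below is not the paper's analytical treatment, and the learning rule shown is only an assumed, plausible choice: each perceptron votes on the shared history of past minority decisions, the minority side wins, and every weight vector receives a Hebbian-style update toward the winning (minority) output. The parameters N_AGENTS, M, ETA, and STEPS are illustrative assumptions, not values taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    N_AGENTS = 31    # number of perceptrons; odd so a minority always exists (assumption)
    M = 8            # length of the common history window (assumption)
    ETA = 0.1        # learning rate (assumption)
    STEPS = 10000    # number of rounds to simulate (assumption)

    # Each agent is a simple perceptron acting on the shared history of minority decisions.
    weights = rng.normal(size=(N_AGENTS, M))
    history = rng.choice([-1.0, 1.0], size=M)   # last M minority decisions, coded as +/-1

    losses = 0
    for t in range(STEPS):
        # Each perceptron votes +1 or -1 based on the common history.
        outputs = np.sign(weights @ history)
        outputs[outputs == 0] = 1.0

        # The minority side wins this round.
        total = outputs.sum()
        minority = -np.sign(total) if total != 0 else rng.choice([-1.0, 1.0])

        # Hebbian-style update toward the minority decision (an assumed training rule,
        # not necessarily the exact one analyzed in the paper).
        weights += (ETA / M) * minority * history

        # Count how many agents ended up on the losing (majority) side.
        losses += np.count_nonzero(outputs != minority)

        # Shift the history window and append the new minority decision.
        history = np.roll(history, -1)
        history[-1] = minority

    print(f"average fraction on the losing side: {losses / (STEPS * N_AGENTS):.3f}")

A fraction of losers close to, or below, one half indicates performance comparable to or better than random guessing, which is the quantity the abstract refers to when it says the ensemble can perform better than random.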

  • Received 6 March 2000

DOI:https://doi.org/10.1103/PhysRevE.62.2555

©2000 American Physical Society

Authors & Affiliations

R. Metzler and W. Kinzel

  • Institut für Theoretische Physik, Universität Würzburg, Am Hubland, D-97074 Würzburg, Germany

I. Kanter

  • Minerva Center and Department of Physics, Bar Ilan University, 52900 Ramat Gan, Israel

Issue

Vol. 62, Iss. 2 — August 2000
