Difference between memory and prediction in linear recurrent networks

Sarah Marzen
Phys. Rev. E 96, 032308 – Published 11 September 2017

Abstract

Recurrent networks are often trained to memorize their input, in the hope that such training will improve their ability to predict. We show that networks designed to memorize input can be arbitrarily bad at prediction. We also find, for several types of inputs, that one-node networks optimized for prediction come close to the upper bounds on predictive capacity given by Wiener filters and are roughly equivalent in performance to randomly generated five-node networks. Our results suggest that maximizing memory capacity leads to very different networks than maximizing predictive capacity, and that optimizing recurrent weights can decrease reservoir size by half an order of magnitude.
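To make the distinction between the two capacities concrete, the following is a minimal sketch, not taken from the paper: it drives a small random linear reservoir with a correlated input and estimates memory capacity (how well past inputs can be linearly read out of the state) versus predictive capacity (how well future inputs can be). The reservoir size, spectral radius, input statistics, and delay range are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's code): memory vs. predictive capacity
# of a small random linear reservoir driven by an AR(1) input.
rng = np.random.default_rng(0)
N, T, washout = 5, 20000, 200          # reservoir size, time steps, samples to discard

# Correlated scalar input u_t (AR(1)); prediction is only nontrivial
# when the input has temporal structure.
u = np.zeros(T)
for t in range(1, T):
    u[t] = 0.9 * u[t - 1] + rng.standard_normal()

# Random linear reservoir x_{t+1} = W x_t + v u_t, rescaled to spectral radius 0.9.
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
v = rng.standard_normal(N)

x = np.zeros((T, N))
for t in range(T - 1):
    x[t + 1] = W @ x[t] + v * u[t]

X, U = x[washout:], u[washout:]

def capacity(states, target):
    """Squared correlation between target and its best linear readout of states."""
    w, *_ = np.linalg.lstsq(states, target, rcond=None)
    return np.corrcoef(states @ w, target)[0, 1] ** 2

# Memory capacity: sum over delays k of how well u_{t-k} is recovered from x_t.
MC = sum(capacity(X[k:], U[:len(U) - k]) for k in range(1, 20))
# Predictive capacity: same readout idea, but targeting future inputs u_{t+k}.
PC = sum(capacity(X[:len(U) - k], U[k:]) for k in range(1, 20))

print(f"memory capacity ~ {MC:.2f}, predictive capacity ~ {PC:.2f}")
```

Optimizing the readout alone, as above, already shows the two objectives pulling apart: with a temporally correlated input, a state that faithfully stores many past samples is not the same as one whose components are informative about the future.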

  • Received 29 June 2017
  • Revised 14 August 2017
  • Corrected 22 October 2018

DOI: https://doi.org/10.1103/PhysRevE.96.032308

©2017 American Physical Society

Physics Subject Headings (PhySH)

Interdisciplinary Physics

Corrections

22 October 2018: Erratum

Authors & Affiliations

Sarah Marzen*

  • Department of Physics, Physics of Living Systems, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA

  • *semarzen@mit.edu

Issue
Issue

Vol. 96, Iss. 3 — September 2017
