Policy-guided Monte Carlo: Reinforcement-learning Markov chain dynamics

Troels Arnfred Bojesen
Phys. Rev. E 98, 063303 – Published 4 December 2018; Erratum Phys. Rev. E 101, 039903 (2020)

Abstract

We introduce policy-guided Monte Carlo (PGMC), a computational framework using reinforcement learning to improve Markov chain Monte Carlo (MCMC) sampling. The methodology is generally applicable, unbiased, and opens up a path to automated discovery of efficient MCMC samplers. After developing a general theory, we demonstrate some of PGMC's prospects on an Ising model on the kagome lattice, including when the model is in its computationally challenging kagome spin ice regime. Here we show that PGMC can automatically learn efficient MCMC updates without a priori knowledge of the physics at hand.
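The core structural idea behind an unbiased policy-guided sampler is to let a parameterized policy choose which update to propose, and then compensate for the policy's bias in the Metropolis-Hastings acceptance ratio via the reverse-to-forward proposal probability ratio. The sketch below illustrates this for single-spin-flip updates on a 1D periodic Ising chain; it is not the paper's method (which trains the policy by reinforcement learning on a kagome-lattice model), and the softmax policy, its local-field feature, and the fixed parameter `theta` are hypothetical choices made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(spins, J=1.0):
    # Energy of a periodic 1D Ising chain: E = -J * sum_i s_i * s_{i+1}.
    return -J * np.sum(spins * np.roll(spins, 1))

def proposal_probs(spins, theta):
    # Illustrative policy: a softmax over single-site flips.
    # The feature s_i * h_i (spin times its local field) is a hypothetical
    # choice; in PGMC the policy would be learned, not hand-crafted.
    local_field = np.roll(spins, 1) + np.roll(spins, -1)
    logits = theta * spins * local_field
    logits -= logits.max()          # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def pgmc_step(spins, beta, theta):
    # One policy-guided Metropolis-Hastings update. Including the
    # ratio q(x'->x)/q(x->x') of proposal probabilities keeps the
    # chain unbiased even though the policy favors certain moves.
    p_fwd = proposal_probs(spins, theta)
    i = rng.choice(len(spins), p=p_fwd)
    new = spins.copy()
    new[i] *= -1
    p_rev = proposal_probs(new, theta)
    dE = energy(new) - energy(spins)
    ratio = np.exp(-beta * dE) * p_rev[i] / p_fwd[i]
    if rng.random() < min(1.0, ratio):
        return new, True
    return spins, False
```

For a uniform policy (`theta = 0`) this reduces to ordinary single-spin-flip Metropolis; a trained policy would instead concentrate proposals on moves that decorrelate the chain quickly, which is what the reinforcement-learning stage of PGMC automates.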

  • Received 27 August 2018

DOI:https://doi.org/10.1103/PhysRevE.98.063303

©2018 American Physical Society

Physics Subject Headings (PhySH)

Statistical Physics & Thermodynamics; Interdisciplinary Physics

Authors & Affiliations

Troels Arnfred Bojesen*

  • Department of Applied Physics, University of Tokyo, Hongo 7-3-1, Bunkyo-ku, Tokyo 113-8656, Japan

  • *troels.bojesen@aion.t.u-tokyo.ac.jp

Issue

Vol. 98, Iss. 6 — December 2018

