Abstract
The restricted Boltzmann machine is a basic machine learning tool able, in principle, to model the distribution of an arbitrary dataset. Its standard training procedure, however, appears delicate and obscure in many respects. We bring new insights to it by considering the situation where the data have low intrinsic dimension, which allows an exact treatment and reveals a fundamental failure of the standard training procedure. The reasons for this failure, such as the occurrence of first-order phase transitions during training, are clarified by a reformulation of the model in terms of Coulomb interactions. In addition, a convex relaxation of the original optimization problem is formulated, yielding a unique solution, obtained in precise numerical form on two study cases, while a constrained linear regression solution can be conjectured on the basis of an information theory argument.
- Received 19 March 2021
- Revised 22 July 2021
- Accepted 14 September 2021
DOI: https://doi.org/10.1103/PhysRevLett.127.158303
© 2021 American Physical Society