Abstract
Canonical transformation plays a fundamental role in simplifying and solving classical Hamiltonian systems. Intriguingly, it has a natural correspondence to normalizing flows with a symplectic constraint. Building on this key insight, we design a neural canonical transformation approach to automatically identify independent slow collective variables in general physical systems and natural datasets. We present an efficient implementation of symplectic neural coordinate transformations and two ways to train the model based either on the Hamiltonian function or phase-space samples. The learned model maps physical variables onto an independent representation where collective modes with different frequencies are separated, which can be useful for various downstream tasks such as compression, prediction, control, and sampling. We demonstrate the ability of this method first by analyzing toy problems and then by applying it to real-world problems, such as identifying and interpolating slow collective modes of the alanine dipeptide molecule and MNIST database images.
Received 13 October 2019
Revised 22 January 2020
Accepted 9 March 2020
DOI: https://doi.org/10.1103/PhysRevX.10.021020
Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article’s title, journal citation, and DOI.
Popular Summary
For centuries, physicists and astronomers have tackled the dynamics of complex interacting systems (such as the Sun-Earth-Moon system) with a mathematical tool known as a canonical transformation. Such a transformation changes the coordinates of the Hamiltonian equations (which describe the time evolution of the system) to simplify computation while preserving the form of the equations. However, despite being a deep concept, the canonical transformation has seen limited wider application because it traditionally requires cumbersome manual inspection and manipulation. By exploiting the inherent connection between canonical transformations and a modern machine-learning method known as the normalizing flow, we have constructed a neural canonical transformation that can be trained automatically using either the Hamiltonian function or data.
Normalizing flows are adaptive transformations, often implemented as deep neural networks, with many real-world applications such as speech synthesis and image generation. In essence, a normalizing flow is an invertible change of variables that deforms a complex probability distribution into a simpler one. Canonical transformations are normalizing flows, albeit with two crucial twists. First, they are flows in phase space, which contains both coordinates and momenta. Second, these flows satisfy the symplectic condition, a mathematical property that underlies the most intriguing features of classical mechanics.
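To make the symplectic condition concrete, here is a minimal numerical sketch (not from the paper): a nonlinear change of coordinates q → f(q) on a one-degree-of-freedom phase space becomes canonical when the momentum is rescaled by the inverse Jacobian, p → p/f′(q). The choice f(q) = q + tanh(q) is an arbitrary invertible example; we then verify the symplectic condition SᵀΩS = Ω on the map's Jacobian S by finite differences.

```python
import numpy as np

# Illustrative point transformation q -> f(q) = q + tanh(q), lifted to a
# canonical map of phase space by rescaling the conjugate momentum:
# (q, p) -> (f(q), p / f'(q)).
def flow(q, p):
    fq = q + np.tanh(q)              # invertible coordinate change f(q)
    dfq = 1.0 + 1.0 / np.cosh(q)**2  # its derivative f'(q)
    return fq, p / dfq

# Jacobian S of the map at a sample phase-space point (finite differences)
def jacobian(q, p, eps=1e-6):
    S = np.zeros((2, 2))
    base = np.array(flow(q, p))
    S[:, 0] = (np.array(flow(q + eps, p)) - base) / eps
    S[:, 1] = (np.array(flow(q, p + eps)) - base) / eps
    return S

# Symplectic condition: S^T Omega S = Omega, with the symplectic form Omega
Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])
S = jacobian(0.7, 1.3)
print(np.allclose(S.T @ Omega @ S, Omega, atol=1e-4))  # True
```

For one degree of freedom the condition reduces to det S = 1, i.e. the map preserves phase-space area; a neural canonical transformation enforces the same constraint for learned, high-dimensional flows.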
An immediate application of the neural canonical transformation is to simplify complex dynamics into independent nonlinear modes, thereby allowing one to identify a small number of slow modes that are essential for applications such as molecular dynamics and dynamical control. Meanwhile, our work also stands as an example of incorporating physical principles into the design of deep neural networks for better modeling of natural data.
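The simplest instance of "simplifying dynamics into independent modes" is linear: a quadratic Hamiltonian of coupled oscillators is decoupled by a normal-mode rotation, which is itself a canonical transformation. The sketch below (an illustration under assumed parameters, not the paper's neural model) decouples two coupled harmonic oscillators and exposes their separated slow and fast frequencies.

```python
import numpy as np

# Two coupled unit-mass oscillators:
# H = (p1^2 + p2^2)/2 + (q1^2 + q2^2)/2 + k*q1*q2  (coupling k is arbitrary)
k = 0.8
K = np.array([[1.0, k], [k, 1.0]])   # stiffness (potential) matrix

# Normal-mode rotation R diagonalizes K: eigenvalues are squared frequencies.
w2, R = np.linalg.eigh(K)
print("mode frequencies:", np.sqrt(w2))  # one slow, one fast mode

# The linear map (q, p) -> (Q, P) = (R^T q, R^T p) is canonical (orthogonal
# point transformation); in the new variables the Hamiltonian separates:
# H = sum_i (P_i^2 + w2_i * Q_i^2) / 2
q, p = np.array([0.3, -0.5]), np.array([0.1, 0.2])
Q, P = R.T @ q, R.T @ p
H_old = 0.5 * (p @ p) + 0.5 * (q @ K @ q)
H_new = 0.5 * (P @ P) + 0.5 * (w2 * Q**2).sum()
print(np.isclose(H_old, H_new))  # True: same energy, decoupled modes
```

The neural canonical transformation generalizes this idea to nonlinear flows: instead of a fixed rotation, a learned symplectic network maps the physical variables to coordinates in which modes of different frequencies evolve independently.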