• Open Access

Compressing deep neural networks by matrix product operators

Ze-Feng Gao, Song Cheng, Rong-Qiang He, Z. Y. Xie, Hui-Hai Zhao, Zhong-Yi Lu, and Tao Xiang
Phys. Rev. Research 2, 023300 – Published 8 June 2020

Abstract

A deep neural network is a parametrization of a multilayer mapping of signals in terms of many alternately arranged linear and nonlinear transformations. The linear transformations, which are generally used in the fully connected as well as convolutional layers, contain most of the variational parameters that are trained and stored. Compressing a deep neural network to reduce its number of variational parameters, but not its prediction power, is an important but challenging problem, both for training these parameters efficiently and for lowering the risk of overfitting. Here we show that this problem can be effectively solved by representing linear transformations with matrix product operators (MPOs), a tensor-network form originally proposed in physics to characterize the short-range entanglement in one-dimensional quantum states. We have tested this approach on five typical neural networks, including FC2, LeNet-5, VGG, ResNet, and DenseNet, on two widely used data sets, namely, MNIST and CIFAR-10, and found that the MPO representation indeed sets up a faithful and efficient mapping between input and output signals, which can keep or even improve the prediction accuracy with a dramatically reduced number of parameters. Our method greatly simplifies the representations used in deep learning and opens a possible route toward establishing a framework of modern neural networks that might be simpler and cheaper, but more efficient.
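To make the idea concrete, below is a minimal Python/NumPy sketch (not the authors' code) of how a fully connected weight matrix can be factorized into a chain of MPO local tensors by sequential truncated singular value decompositions. The dimension factorizations (4, 7, 7, 4) and (4, 4, 4, 4) and the bond dimension D = 16 are illustrative assumptions, not the settings used in the paper; the paper trains the local tensors directly rather than decomposing a pretrained matrix.

# Minimal sketch (illustrative assumptions): factorize a dense weight matrix
# into matrix product operator (MPO) cores via sequential truncated SVDs.
import numpy as np

def mpo_decompose(W, in_dims, out_dims, D):
    """Split W (prod(in_dims) x prod(out_dims)) into MPO cores of shape
    (bond_left, i_k, j_k, bond_right), truncating each bond to at most D."""
    n = len(in_dims)
    # Reshape to (i1,...,in, j1,...,jn) and interleave axes as (i1,j1, i2,j2, ...).
    T = W.reshape(*in_dims, *out_dims)
    order = [x for k in range(n) for x in (k, n + k)]
    T = T.transpose(order)
    cores, bond = [], 1
    for k in range(n - 1):
        # Merge (bond, i_k, j_k) into rows, the rest into columns, then SVD.
        T = T.reshape(bond * in_dims[k] * out_dims[k], -1)
        U, S, Vh = np.linalg.svd(T, full_matrices=False)
        chi = min(D, S.size)                      # truncate the bond dimension
        cores.append(U[:, :chi].reshape(bond, in_dims[k], out_dims[k], chi))
        T = S[:chi, None] * Vh[:chi]              # carry the remainder onward
        bond = chi
    cores.append(T.reshape(bond, in_dims[-1], out_dims[-1], 1))
    return cores

# Example: a 784 x 256 dense layer factorized as (4,7,7,4) x (4,4,4,4).
W = np.random.randn(784, 256)
cores = mpo_decompose(W, (4, 7, 7, 4), (4, 4, 4, 4), D=16)
print(f"dense parameters: {W.size}, MPO parameters: {sum(c.size for c in cores)}")

With these assumed settings the MPO cores hold roughly 15 000 parameters in place of about 200 000, which illustrates the kind of compression the abstract describes; the achievable accuracy then depends on how large a bond dimension the layer actually requires.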

  • Received 8 May 2019
  • Accepted 12 May 2020

DOI: https://doi.org/10.1103/PhysRevResearch.2.023300

Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.


Physics Subject Headings (PhySH)

  • Networks
  • Condensed Matter, Materials & Applied Physics
  • Quantum Information, Science & Technology
  • Statistical Physics & Thermodynamics

Authors & Affiliations

Ze-Feng Gao1, Song Cheng2,3,4, Rong-Qiang He1, Z. Y. Xie1,*, Hui-Hai Zhao5,†, Zhong-Yi Lu1,‡, and Tao Xiang2,4,§

  • 1Department of Physics, Renmin University of China, Beijing 100872, China
  • 2Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China
  • 3Center for Quantum Computing, Peng Cheng Laboratory, Shenzhen 518055, China
  • 4University of Chinese Academy of Sciences, Beijing 100049, China
  • 5RIKEN Brain Science Institute, Hirosawa, Wako-shi, Saitama 351-0106, Japan

  • *qingtaoxie@ruc.edu.cn
  • †huihai.zhao@riken.jp
  • ‡zlu@ruc.edu.cn
  • §txiang@iphy.ac.cn

Issue

Vol. 2, Iss. 2 — June - August 2020

Reuse & Permissions

It is not necessary to obtain permission to reuse this article or its components as it is available under the terms of the Creative Commons Attribution 4.0 International license. This license permits unrestricted use, distribution, and reproduction in any medium, provided attribution to the author(s) and the published article's title, journal citation, and DOI are maintained. Please note that some figures may have been included with permission from other third parties. It is your responsibility to obtain the proper permission from the rights holder directly for these figures.
