Abstract
If quantum computers in the noisy intermediate-scale quantum (NISQ) era are to perform useful tasks, they will need to employ powerful error mitigation techniques. Quasiprobability methods can permit perfect error compensation at the cost of additional circuit executions, provided that the nature of the error model is fully understood and sufficiently local both spatially and temporally. Unfortunately, these conditions are challenging to satisfy. Here we present a method by which the proper compensation strategy can instead be learned ab initio. Our training process uses multiple variants of the primary circuit where all non-Clifford gates are substituted with gates that are efficient to simulate classically. The process yields a configuration that is near optimal versus noise in the real system with its non-Clifford gate set. Having presented a range of learning strategies, we demonstrate the power of the technique both with real quantum hardware (IBM devices) and exactly emulated imperfect quantum computers. The systems suffer a range of noise severities and types, including spatially and temporally correlated variants. In all cases the protocol successfully adapts to the noise and mitigates it to a high degree.
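As a toy illustration of the quasiprobability idea (not the paper's learning protocol), the sketch below assumes a single-qubit depolarizing channel whose error rate p is known exactly. The channel inverse is expanded as a signed, quasiprobability-weighted combination of Pauli recovery operations, which exactly cancels the noise in the expectation value; all names (`depolarize`, `inverse_quasiprobabilities`) are illustrative.

```python
import numpy as np

# Single-qubit Pauli matrices
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarize(rho, p):
    """Single-qubit depolarizing channel with error probability p."""
    return (1 - p) * rho + (p / 3) * (X @ rho @ X + Y @ rho @ Y + Z @ rho @ Z)

def inverse_quasiprobabilities(p):
    """Quasiprobability weights of the channel inverse over {I, X, Y, Z}.

    The weights sum to 1 but one of them is negative, which is what makes
    this a quasiprobability rather than a probability distribution.
    """
    lam = 1 - 4 * p / 3            # depolarizing parameter
    q_I = (3 + lam) / (4 * lam)    # positive weight on the identity
    q_P = -(1 - lam) / (4 * lam)   # negative weight on each Pauli
    return [q_I, q_P, q_P, q_P]

# Prepare a test state Rx(0.7)|0> and corrupt it with depolarizing noise.
theta = 0.7
Rx = np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
               [-1j * np.sin(theta / 2), np.cos(theta / 2)]])
rho = Rx @ np.array([[1, 0], [0, 0]], dtype=complex) @ Rx.conj().T

p = 0.1
rho_noisy = depolarize(rho, p)

ideal = np.trace(Z @ rho).real
noisy = np.trace(Z @ rho_noisy).real

# Mitigated estimate: quasiprobability-weighted sum over Pauli recoveries.
qs = inverse_quasiprobabilities(p)
paulis = [I, X, Y, Z]
mitigated = sum(q * np.trace(Z @ P @ rho_noisy @ P).real
                for q, P in zip(qs, paulis))
```

In practice the signed sum is estimated by Monte Carlo sampling (sampling each recovery with probability proportional to |q| and tracking the sign), which is what incurs the extra circuit executions mentioned above; the learning-based approach of this paper finds the weights without an explicit noise model.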
- Received 21 May 2020
- Revised 1 March 2021
- Accepted 13 October 2021
DOI: https://doi.org/10.1103/PRXQuantum.2.040330
Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.
Popular Summary
Quantum computers are “noisy”: errors sneak in whenever more than a few qubits perform a calculation of any significant complexity. For near-term devices, we can try to mitigate errors, minimizing their impact so that the output is still useful. However, there is a problem. The most powerful error mitigation processes require the user to know the exact nature of the noise that the computer suffers from, and that knowledge is hard to get. The task of learning all about the noise in the system can itself become an insurmountable problem. We want to sidestep this requirement.
When machine learning is used to, for example, learn to recognize handwritten postal codes, it does so without ever having to actually describe the countless variations of human handwriting. In the present paper we find that we can also automatically learn the key features of quantum noise provided we have good training examples. We take the circuit we would like to run on the quantum device and create many training variants with the special property that they can be evaluated on a conventional computer, so that we know what the “right” output is.
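The training-variant idea can be sketched in a few lines. The snippet below uses a hypothetical circuit representation (a list of gate tuples, not the paper's actual implementation): each non-Clifford Rz rotation is replaced by a randomly chosen Clifford rotation (a power of the S gate), so the resulting variant can be evaluated efficiently with a stabilizer simulator on a conventional computer.

```python
import math
import random

# Rz angles that are Clifford gates (powers of the S gate); illustrative only.
CLIFFORD_ANGLES = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]

def make_training_variant(circuit, rng):
    """Replace every non-Clifford Rz gate with a random Clifford Rz.

    `circuit` is a list of (name, qubits, angle) tuples. Gates that are
    already Clifford (H, CNOT, S, ...) are left untouched, so the variant
    is classically simulable and its ideal output is known exactly.
    """
    variant = []
    for name, qubits, angle in circuit:
        is_clifford_rz = name == "rz" and any(
            math.isclose(angle % (2 * math.pi), a) for a in CLIFFORD_ANGLES
        )
        if name == "rz" and not is_clifford_rz:
            variant.append(("rz", qubits, rng.choice(CLIFFORD_ANGLES)))
        else:
            variant.append((name, qubits, angle))
    return variant

rng = random.Random(0)
primary = [("h", 0, None),
           ("rz", 0, 0.3),            # non-Clifford: will be substituted
           ("cx", (0, 1), None),
           ("rz", 1, math.pi / 2)]    # already Clifford: kept as-is
training = make_training_variant(primary, rng)
```

Generating many such variants yields a training set of circuits whose correct outputs are computable classically, which is what lets the mitigation strategy be optimized against the device's real noise.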
We prove that the lessons learned on the simplified circuits will work on the real one. To check, we use high-performance conventional computers to “pretend” to be quantum computers with various types of noise problem, and then we successfully apply our learning technique. Our approach can provide a new pathway toward the milestone of “quantum advantage,” the moment when quantum machines will do something really useful.