Abstract
Linear optical quantum computing (LOQC) seems attractively simple: Information is borne entirely by light and processed by components such as beam splitters, phase shifters, and detectors. However, this very simplicity leads to limitations, such as the lack of deterministic entangling operations, which are compensated for with substantial hardware overheads. Here, we quantify the resource costs for full-scale LOQC by proposing a specific protocol based on the surface code. With the caveat that our protocol can be further optimized, we report that the required number of physical components is at least 5 orders of magnitude greater than in comparable matter-based systems. Moreover, the resource requirements grow further if the per-component photon-loss rate or the per-component noise rate exceeds its respective threshold. We identify the performance of switches in the network as the single most influential factor in resource scaling.
Received 26 April 2015
DOI:https://doi.org/10.1103/PhysRevX.5.041007
This article is available under the terms of the Creative Commons Attribution 3.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article’s title, journal citation, and DOI.
Published by the American Physical Society
Synopsis
Optical Computing Under the Lens
Published 14 October 2015
A theoretical analysis quantifies the technical resources required to build a quantum computer based on photons.
Popular Summary
In the decades since Richard Feynman proposed the possibility of a quantum computer, physicists have examined many different ways to build this “ultimate machine.” One elegant possibility is to use particles of light—photons—as the sole means of representing and processing information stored as electromagnetic degrees of freedom, such as polarization. Calculations can then be performed using basic, well-understood components such as lenses, mirrors, beam splitters, and detectors. This setup represents linear optical quantum computing. But can such a seemingly simple solution be truly practical? Here, we conduct, for the first time, a detailed theoretical study aimed at shedding light on this question.
We focus on pure linear optical quantum computing, and we assume that single photons can be generated reliably. Such systems pay a unique price for their simplicity. When qubits are represented as photons, operations between qubits cannot be performed “deterministically”; there is always a substantial probability that an operation will fail. This possibility of failure can be overcome by attempting the same task many times in parallel and routing forward only the successful outcomes, which must be achieved while controlling photon losses (from beam splitters, detectors, and switches, for example) and noise. We find that photon-based devices must have approximately 100,000 times as many physical components as matter-based systems using trapped atoms or superconducting circuits.
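The retry-and-route idea above can be made concrete with a short calculation. This is a simplified sketch, not the paper's protocol: it assumes a per-attempt success probability p (the 25% used below is a representative figure for simple linear optical fusion gates, not a value from this work) and ignores photon loss and switch imperfections, which the full analysis must include.

```python
import math

def multiplexed_success(p: float, n: int) -> float:
    """Probability that at least one of n parallel gate attempts succeeds."""
    return 1.0 - (1.0 - p) ** n

def attempts_needed(p: float, target: float) -> int:
    """Smallest number of parallel attempts giving success probability >= target."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p))

# A gate succeeding 25% of the time needs many parallel copies (and the
# switching network to route out the winner) to look near-deterministic:
print(round(multiplexed_success(0.25, 16), 3))  # 0.99
print(attempts_needed(0.25, 0.999))             # 25
```

The hardware cost of near-determinism is thus multiplicative: every probabilistic gate is replaced by dozens of copies plus a switch, which is one intuition for why the component count grows so dramatically.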
Our results do not rule out all-optical quantum computers, but they do reveal how demanding the requirements are for achieving these “ultimate machines.” We anticipate that our work will invigorate the linear optical quantum computing community to meet these fidelity targets or to derive more efficient protocols.