Early Fault-Tolerant Quantum Computing

Over the past decade, research in quantum computing has tended to fall into one of two camps: near-term intermediate scale quantum (NISQ) and fault-tolerant quantum computing (FTQC). Yet, a growing body of work has been investigating how to use quantum computers in transition between these two eras. This envisions operating with tens of thousands to millions of physical qubits, able to support fault-tolerant protocols, though operating close to the fault-tolerant threshold. Two challenges emerge from this picture: how to model the performance of devices that are continually improving and how to design algorithms to make the most use of these devices? In this work we develop a model for the performance of early fault-tolerant quantum computing (EFTQC) architectures and use this model to elucidate the regimes in which algorithms suited to such architectures are advantageous. As a concrete example, we show that, for the canonical task of phase estimation, in a regime of moderate scalability and using just over one million physical qubits, the "reach" of the quantum computer can be extended (compared to the standard approach) from 90-qubit instances to over 130-qubit instances using a simple early fault-tolerant quantum algorithm, which reduces the number of operations per circuit by a factor of 100 and increases the number of circuit repetitions by a factor of 10,000. This clarifies the role that such algorithms might play in the era of limited-scalability quantum computing.


I. INTRODUCTION
Quantum computers were first proposed to efficiently simulate quantum systems [1]. It then took about a decade before it was discovered that quantum phenomena, such as superposition and entanglement, could be leveraged to provide an exponential advantage in performing tasks unrelated to quantum mechanics [2]. Although of no practical use, the Deutsch-Jozsa algorithm sparked interest in using a quantum computer to perform other tasks beyond simulating quantum systems [3, 4], the most famous case being Shor's algorithm [5]. Around the same time, the groundbreaking discovery of quantum error correcting codes (QECC) [6][7][8][9][10] set the stage for practical quantum computing. This showed that errors due to faulty hardware could not only be identified but also corrected. Two pieces of the puzzle were left, namely:
1. Could quantum computation be done in a fault-tolerant manner, i.e., could error-corrected qubits perform better than physical qubits?
2. Can one rigorously prove the existence of a threshold [11] below which error can be reduced exponentially in the time and memory overhead cost?
The first piece of the puzzle was tackled by Peter Shor [12] and later, building on his work, threshold theorems were proved assuming various kinds of error models [13][14][15]. For a specific quantum error correcting code and noise model, it then remains to prove the existence of, and estimate, an error threshold, with early works being [16][17][18][19]; this continues to be an active area of research [20][21][22][23].
Meanwhile, on the hardware side, astonishing progress has been made across various modalities (e.g., superconducting, ion trap, photonic, etc.) in terms of extending qubit coherence times and improving entangling operations [24][25][26][27][28][29][30]. Driven by such advances, a watershed moment occurred in 2016 when IBM put the first quantum computer on the cloud, giving the public access to quantum computers. This event spurred widespread interest in finding near-term quantum algorithms that do not need the full machinery of fault tolerance. These algorithms first formulate the problem as finding the ground state of some Hamiltonian, store a trial ansatz on the quantum processing unit (QPU), and use a classical optimizer to find the optimal parameters. The variational principle guarantees that the optimized parameters will produce a state whose energy upper bounds the ground state energy of the target Hamiltonian. These so-called hybrid quantum/classical algorithms allow one to use short-depth quantum circuits and reduce the need for high-quality quantum coherence. They have found application in areas of quantum chemistry [31], machine learning [32, 33], and optimization [34].
Despite this progress, there is still a need to reduce errors, and the area of quantum error mitigation arose as attempts were made to meet the needs of these applications [35][36][37][38][39]. This way of using a QPU is characteristic of the so-called NISQ era [40]. Although there is no strict definition of what constitutes a NISQ device, it can generally be assumed that NISQ devices are too large to be simulated classically, but also too small to implement quantum error correction. IBM's work [41] is in some sense the true dawn of the NISQ era, i.e., a quantum device where error mitigation is important and classical simulation is hard. A flurry of work [42][43][44] immediately arose pushing classical methods of simulation and claiming to reproduce IBM's results. This is a new phase in which NISQ devices will be put to the test by state-of-the-art classical simulators and vice versa. This back and forth will not last long, as the Hilbert space of quantum systems grows exponentially, and the NISQ device will eventually be the only viable simulation approach.
But an important question remains, and in a very obvious sense it is the elephant in the room: are NISQ devices and NISQ algorithms up to the task of realizing quantum advantage at utility scale? Work has been done in quantum chemistry, where the problem can be precisely posed, for example, as finding the ground state of large molecules. The best resource estimates so far suggest the variational quantum eigensolver (VQE) is not up to the task [45]. Other work suggests a possible quantum advantage for the quantum approximate optimization algorithm (QAOA) [46][47][48] in optimization, but it remains to be seen whether these claims can be confirmed in the presence of noise at scale.
Given these roadblocks, should our attitude be to wait for fully fault-tolerant devices? An area of research offers an intriguing possibility in the form of a trade-off: retain fault-tolerant quantum computing, but run smaller quantum circuits at the cost of requiring more sampling from the quantum device. Such a trade-off has been the focus of a substantial amount of research in the past few years [49][50][51][52][53][54][55][56][57][58]. However, in a regime where we are able to arbitrarily scale the number of physical qubits while maintaining quality fault-tolerant protocols, such a trade-off would not be favorable; by increasing the circuit size using methods such as quantum amplitude amplification [59], the additional overhead of efficient fault-tolerant protocols is negligible compared to the overall reduction in runtime. Accordingly, such a trade-off would be better suited to a setting in which the efficiency of fault-tolerant protocols worsens with increasing system size. If the ability to scale the number of physical qubits (i.e., the "scalability") is compromised by a worsening of the operations, then these diminishing returns will, in turn, limit the size of problems that can be solved. Such a regime of computation has been referred to as early fault-tolerant quantum computing (EFTQC) [60], a natural successor to the NISQ era. A field of research has emerged recently in which the proposed quantum algorithms enable this "circuit-size vs. sample-cost" trade-off [49, 51, 52, 55, 61-66]. Two questions are then placed before us:
1. Will this regime of limited-scale quantum computers exist in a meaningful way?
2. If so, will we be able to unlock intrinsic quantum value at scale in this regime?
The ultimate answers to these questions will depend on hard-to-predict factors, including hardware, QECC, and quantum algorithm advances, and improvements in competing classical hardware and algorithms. Rather than predicting the timeline of these advances, we propose a quantitative framework to track their progress. Figure 1 depicts the landscape in which this framework assesses the ability of a given hardware vendor to supply useful physical qubits, transitioning from NISQ to EFTQC to FTQC.
To address the first question we propose a very simple model (see Equation 3) to quantitatively discuss these regimes. This simple model describes how the quality of elementary quantum operations degrades as system size is increased; that is, we model the physical gate error rate as a function of physical qubit number. We dub this model the scalability of a device. For the second question, we quantify how recently-developed algorithms can extend the "reach" of quantum computers with limited scalability. This is an important step towards understanding what value such methods can provide. Two results that we will establish (see Equations 10 and 12) are that, according to the scalability model, the optimal number of physical qubits to use in the architecture is a simple function of the scalability parameter, N_phys^opt = (ε_th/ε_0)^s / e^2, and that the maximum problem size, measured in terms of the largest number of logical qubits, is predicted to be bounded by an expression involving the Lambert W function. The various parameters are defined in Section II B. We will ultimately explore so-called "EFTQC algorithms", which enable an increase in the maximum number of logical qubits, Q^max. We will explain how these expressions show 1) the importance of the scalability parameter s in governing the capabilities of a quantum hardware vendor and 2) the role played by the "fault-tolerance burden factor", which combines gate count, error correction, and algorithm robustness parameters. These elucidate multiple ways to improve a quantum computation towards solving utility-scale problem instances in the finite-scalability regime.

FIG. 1: This figure roughly demarcates the regimes of NISQ, EFTQC, and FTQC according to the scalability model introduced in Section II. The vertical axis quantifies the base error rate (i.e., that achievable for a single qubit), while the horizontal axis quantifies the ability of the architecture to maintain low error rates as it is scaled (i.e., its scalability). Contours indicate the maximum physical qubit number that the architecture is warranted in scaling to, as predicted by the scalability model of Equation 3. The NISQ-to-EFTQC transition is characterized by having enough qubits to implement fault-tolerant non-Clifford operations (e.g., T factories), while the EFTQC-to-FTQC transition is characterized by the ability to accommodate very large problem instances (e.g., encoding 10,000 logical qubits in 10^9 physical qubits). The red x corresponds to data presented in Appendix A, which estimates that a hardware vendor of today (IBM) has a scalability of 1.75 with ε_0 = 0.005. An editable version of the plot can be accessed here: https://www.desmos.com/calculator/9iphmmdjfp
The manuscript is organized as follows. In Section II we present the scalability model and apply it to an example resource estimation for the quantum phase estimation algorithm. In Section III we review progress in algorithms for early fault-tolerant quantum computers and then present an example of one such algorithm, showing how it can improve the capabilities of a device with limited scalability. Finally, in Section IV we discuss the implications of our findings and outline important future research directions.

A. Introduction to the scalability model
In this section we establish and discuss the precise sense in which a device can be an early fault-tolerant device. We first note the tension in the very phrase "early fault tolerance". Fault tolerance evokes the ability to ensure efficient suppression of errors despite the use of faulty operations [12]. The string of results [9, 13, 18, 67], collectively known as the threshold theorems, shows that in principle this can be achieved. In fact, thanks to these results we know [13] that under quite general assumptions, such as allowing for long-range correlations of noise and non-Markovianity, fault tolerance is still possible. These foundational works would put the threshold error rate around 10^-5 to 10^-6. However, more optimistic threshold predictions have been made using numerical investigations [68]. For the surface code [69], which is a leading contender for practical quantum computing [70], such simulations have led to the prediction of quite optimistic thresholds of ~1% [71], which have also been argued for analytically [72]. On the other hand, numerical thresholds are based on particular assumptions of noise and error that cannot fully capture the complexity of quantum architectures at scale. For example, an important assumption is that a single number can be used to capture the performance of operations and that this single number remains constant as larger code distances [73] are used [71]. Such thresholds have become the established targets for hardware developers [74][75][76][77].
The "early" in "early fault tolerance", on the other hand, suggests some kind of limited ability to achieve fault tolerance, i.e., to use a polynomial amount of resources to achieve exponential error suppression [78, 79]. This tension is what lies behind the motivation for this work.
A key insight towards resolving this tension is to realize what we might call the scalability requirement: In order to reap the benefits of being below any threshold, an approach to building a quantum architecture must be able to maintain each operation below the threshold error rate as larger and larger architectures are built.
The failure to achieve the scalability requirement implies the existence of scale-dependent errors. To motivate where these scale-dependent errors might come from, we consider the general setup used to prove the threshold theorem. It is assumed that we have a Hamiltonian of the form ℋ = ℋ_S + ℋ_B + ℋ_SB, where ℋ_S is the Hamiltonian governing the evolution of the system, which for our discussion can be the evolution corresponding to implementing the quantum gate, ℋ_B governs the evolution of some bath, and ℋ_SB entangles the bath with the qubits in the computation. The scale-dependent errors arise from the engineering details involved in implementing ℋ_S as larger and larger chips are developed. These engineering problems cannot be completely absorbed into ℋ_SB and yet would ultimately impact how easily we could stay below threshold as we try to scale up. For fixed-frequency qubits in superconducting architectures, the issue of "frequency crowding" affects the quality that any single two-qubit gate can achieve [80]. The number of frequencies that must be avoided when implementing the cross-resonance gate increases with the number of qubits in the chip; this makes targeting the required frequency harder and harder as you scale up. Another scale-dependent engineering difficulty can arise from unwanted interactions between control lines going into the chip. The calibration of these pulses is partly a classical problem that gets more complicated and cumbersome as the chip gets larger. In ion traps, the issue of "cross-mode coupling" affects the fidelity of the gate [81, 82]: the target is a specific motional mode, but unwanted couplings degrade the quality of the gate. This problem has a classical component that scales with the number of qubits. In the above cases, the physics of accurately addressing a qubit or pair of qubits is a problem that becomes harder with increasing qubit number and thus affects the quality of the gate operation. Recent works have explored the consequences of scale-dependent errors, which would most likely arise from the limited resources to control qubits and design good-quality operations [83, 84].
It is reasonable to believe that the assumption of scale-independent error rates may eventually become effectively true on account of modularity, as future quantum architectures will likely be made from repeated modular components.And while the holy-grail of (effectively) scale-independent, sub-threshold error rates may someday be realized, quantum architectures will necessarily undergo a transition from today's scale-dependent error to the future of scale-independent error.We will take this transition to be the defining characteristic of early fault-tolerant quantum computing.
Ultimately, this investigation is motivated by wanting to understand the prospects of using early fault-tolerant quantum computers to solve utility-scale problems. We take such machines to be characterized by a non-negligible degree of scale-dependent error. The standard approach to predicting the performance of fault-tolerant architectures for utility-scale problems is to assume that the error is scale independent [85][86][87]. Therefore, our approach can be seen as 1) a generalization that incorporates both the scale-independent and scale-dependent settings and 2) an attempt to bridge the observed scale-dependence of error in today's devices with the hoped-for scale-independence of error in future quantum architectures. We expect that the degree of scale-dependence will inform the capabilities of the architecture being modeled. Furthermore, scale-dependent error may warrant the development and use of quantum algorithms that are suited to this limitation. These considerations motivate the main question pursued in the remainder of this manuscript: how does the degree of scale-dependent error determine the capabilities of early fault-tolerant quantum computers? Next, we introduce a model to capture the degree of scale-dependent error.
We start by describing the particular setting in which we model scale-dependent error. Our model will center around the concept of scalability, the ability to maintain low error rates (e.g., sub-threshold) as larger architectures are requested. Our setting and model are driven by the need to answer the question: for a series of quantum computations of increasing size, how well will a hardware vendor be able to service the request to run the quantum computations? Accordingly, we will not consider the capabilities of a single quantum device or a single quantum architecture, as the hardware vendor might have several architectures to service computations of various sizes. Furthermore, we will not consider the capabilities of the hardware vendor as they improve over time, as our hypothetical test is used to assess capability at one moment in time.
In order to make this quantitative, we can consider a scalability profile: an empirically derived function that reports the worst-case error rate among the elementary operations of the device as a function of the requested number of physical qubits. For the case of today's IBM devices, we present data on their scalability profile in Appendix A. In lieu of scalability profile data for future quantum vendors, we propose a simple parameterized model for this function,

ε_phys(N_phys; v) = ε_0 N_phys^(1/s),    (3)

where N_phys is the number of physical qubits in the architecture and v labels the particular hardware vendor that is providing the qubits at any time. Parameters ε_0 and s capture the base error rate and the "scalability", respectively. It is helpful to view this model as a power-law fit of a scalability profile. In Appendix B we investigate the more optimistic case of a logarithmic model. The case of s = ∞ corresponds to scale-independent error, or infinite scalability, while any finite value of s corresponds to the case of finite or limited scalability. As we will show in the next section, in the context of fault-tolerant quantum computing, a finite scalability will result in a finite limit on the number of physical qubits being used before fault-tolerant protocols yield diminishing returns. We then explain how this limit on physical qubit number places a limit on the problem sizes that the architecture can accommodate.

FIG. 2: The concept of scalability captures the ability of a quantum architecture to maintain low physical error rates as the number of physical qubits of the architecture is increased. This figure shows the scalability profiles of different quantum architectures given by the scalability model (Eq. 3) for different scalability values (s = 0.5, 1.5, 2.5, 3.5, 4.5, ∞) and base error rate ε_0 = 10^-4. A finite scalability implies that beyond a certain physical qubit size, the architecture cannot maintain physical error rates below the error threshold (ε_th) of the fault-tolerant protocol. An editable version of the plot can be accessed here: https://www.desmos.com/calculator/jlmbygcqrp

Importantly, all of these considerations apply in the setting where fault-tolerant protocols are being used. This differs from the setting assumed for NISQ quantum computing [40], where physical qubits instead of logical qubits are used for computation. Before moving to the next section, we provide some perspective on the transition from the NISQ regime to the EFTQC regime. Specifically, in the rest of this subsection we estimate the minimal number for N_phys in an EFTQC computation assuming a simple surface code architecture.
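The scalability model is simple enough to evaluate directly. Below is a minimal sketch (the function name and sample values are ours) of the power-law form implied by the model's parameters, ε_phys(N_phys) = ε_0 · N_phys^(1/s), evaluated for a few requested qubit counts:

```python
import math

def physical_error_rate(n_phys: int, eps0: float, s: float) -> float:
    """Scalability model: worst-case physical error rate as a function of
    the requested number of physical qubits. s = infinity recovers the
    usual scale-independent assumption."""
    if math.isinf(s):
        return eps0
    return eps0 * n_phys ** (1.0 / s)

# With eps0 = 1e-4 and s = 2.5, the error rate grows as qubits are added,
# crossing the ~1% surface-code threshold near 10^5 qubits:
for n in (10, 1_000, 100_000):
    print(n, physical_error_rate(n, 1e-4, 2.5))
```

At s = ∞ the profile is flat, matching the standard scale-independent assumption used in most fault-tolerant resource estimates.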
The total number of physical qubits for a computation can be written as N_phys = N_comp + N_MSD, where N_comp is the number of physical qubits used to compute (i.e., storing and routing the logical data) and N_MSD is the number of physical qubits used for magic state distillation. To calculate the minimum number of qubits required for QEC, we will set N_comp = 2(d+1)^2 [86], corresponding to a single surface-code logical qubit, and pick the smallest distillation widgets that give an improvement on the error rate.
The most efficient distillation widgets known in the surface code are given in [88]. We have listed the smallest of these in Figure 3 (note that these do not give much of an improvement over the physical error rate). An important property of the magic state injection process is that it cannot produce error rates below the current logical level. Thus a magic state which is injected into a code with logical error rate ε_L can at best have error rate ε_L.

FIG. 3: Magic state distillation factories from [88]. ε_phys is the physical error rate, N_phys is the number of physical qubits required to create the factory, p_out is the probability that the output magic state is incorrect, N_min,EFTQC is a rough lower bound on the number of qubits in an EFTQC calculation, and ε_L is the logical failure rate in that lower-bound calculation. Note that in the case of superconducting qubits, the lower-bound EFTQC example does not decrease the logical error rate ε_L.

If we have a single logical qubit in the smallest non-trivial surface code (i.e., distance 3), then this minimum viable example of EFTQC will require at least 540 and 826 qubits for lower- and high-quality operations, respectively. Note that the magic state distillation factory dominates the number of qubits. As a result, the FTQC community has put a lot of work into decreasing the size of factories [88], improving injection protocols [89], or eliminating distillation entirely [90]. One would expect that the first EFTQC demonstrations will employ many of these techniques rather than the "pure FTQC" calculation presented above. In a more careful calculation to estimate a lower bound for the EFTQC range, one may want to take such techniques into account when calculating N_phys. Refining this estimate to clarify and lower the NISQ-to-EFTQC transition is important future work.
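The arithmetic behind this lower bound is easy to reproduce. In the sketch below (function names are ours; the factory footprints from Figure 3 are not reproduced here, so we only back out what the factory would have to occupy given the 540-qubit total quoted above), N_comp = 2(d+1)^2 counts the data-block qubits:

```python
def n_comp(d: int) -> int:
    """Physical qubits for one surface-code logical qubit of distance d,
    using the 2(d+1)^2 layout assumed in the text."""
    return 2 * (d + 1) ** 2

def min_eftqc_qubits(d: int, n_msd: int) -> int:
    """Rough lower bound N_phys = N_comp + N_MSD: one logical qubit
    plus the smallest useful magic-state distillation factory."""
    return n_comp(d) + n_msd

# A distance-3 data block uses only 32 physical qubits...
data_block = n_comp(3)
# ...so a 540-qubit minimum EFTQC example implies the factory
# dominates, occupying the remaining qubits:
factory = 540 - data_block
print(data_block, factory)  # 32 508
```

This makes concrete why shrinking factories (or eliminating distillation) moves the NISQ-to-EFTQC boundary so much.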

B. Example: quantum phase estimation compiled to the surface code
In the preceding subsection, we introduced Eq. 3 as a model for how physical operation error rates might increase with system size. To understand the implications of this model, we work through the example of using the quantum phase estimation (QPE) algorithm [59] to solve the phase estimation task. The task of phase estimation is to estimate the eigenphase of a unitary operator U with respect to an eigenstate |ψ⟩, assuming access to circuits that implement controlled-U (c-U) and prepare |ψ⟩. We review how to estimate the quantum resources required to perform this task under the scalability model and compare these to the ideal model case (i.e., s → ∞).
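To make the sampling side of this task concrete, the following pure-Python sketch (our illustration, not the paper's algorithm) simulates the measurement statistics of a Hadamard test, whose outcome probabilities encode the eigenphase. The statistical error of the estimate shrinks as 1/sqrt(shots), which is exactly the kind of circuit-size vs. sample-cost trade-off discussed later:

```python
import math
import random

def hadamard_test_samples(theta: float, shots: int, rng: random.Random) -> float:
    """Simulate the Hadamard test on an eigenstate with eigenphase theta:
    outcome 0 occurs with probability (1 + cos(theta)) / 2.
    Returns the observed fraction of 0 outcomes."""
    p0 = (1.0 + math.cos(theta)) / 2.0
    zeros = sum(rng.random() < p0 for _ in range(shots))
    return zeros / shots

rng = random.Random(7)
theta = 0.8                                       # true eigenphase
p0_hat = hadamard_test_samples(theta, 100_000, rng)
theta_hat = math.acos(2.0 * p0_hat - 1.0)         # invert the statistics
print(abs(theta_hat - theta))  # shrinks roughly as 1/sqrt(shots)
```

Textbook QPE instead concentrates 1/ε applications of c-U in one deep circuit; the EFTQC algorithms discussed later shift work from circuit depth back toward repetitions like these.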
A fault-tolerant resource estimation answers the question: how many physical qubits are needed per logical qubit to ensure that the logical error rates are sufficiently low to make the algorithm succeed (with some probability)?To answer this, we must 1) determine what logical error rates the algorithm deems as "sufficiently low" and 2) establish the relationship between logical error rate and quantum resources.
For 1), the QPE algorithm will succeed with sufficiently high probability as long as the total circuit error rate is below some value ε_c. We will set ε_c = 0.1, noting that, in the literature, this tolerable circuit error rate varies from 0.1 [86] to 0.01 [85], but can be made lower using alternative algorithms [91][92][93]. This tolerable circuit error rate, along with the number of operations per circuit, lets us bound the tolerable operation error rate. The quantum circuit will ultimately be compiled into a set of logical operations that are implemented using fault-tolerant protocols (e.g., initialization of |0⟩, measurement in the computational basis, the Hadamard gate, the CNOT gate, and the T gate). We define M_ops to be the number of elementary logical operations (including idling [94]) used by the circuit. To ensure that the circuit error rate is less than ε_c, it suffices [95] to ensure a logical error rate of ε_L ≤ ε_c / M_ops (by the union bound).
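The union-bound budgeting described above amounts to a single division; a tiny sketch (names ours) makes the scale of the resulting per-operation target concrete:

```python
def tolerable_logical_error(eps_circuit: float, m_ops: int) -> float:
    """Union bound: if each of m_ops logical operations fails with
    probability at most eps_circuit / m_ops, the whole circuit fails
    with probability at most eps_circuit."""
    return eps_circuit / m_ops

# A circuit error budget of 0.1 spread over 10^9 logical operations
# demands roughly a 1e-10 logical error rate per operation:
print(tolerable_logical_error(0.1, 10**9))
```

Per-operation targets this far below any physical error rate are what force the code distance, and hence the physical qubit count, upward.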
For the quantum phase estimation algorithm, M_ops is determined by the target accuracy and the number of operations per c-U. To yield an estimate of the phase angle to within ε of the true value requires using a circuit with 1/ε applications of c-U [96]. For our purposes we assume a model for M_ops obtained by fitting the data in Table II of [60] to a power law, M_ops = a Q^b, where the accuracy ε is set to be approximately half a percent of the total system energy, yielding a = 4.12 × 10^9 and b ≈ 0.515. Thus, the algorithm success is ensured (with high probability) by

ε_L ≤ ε_c / (a Q^b).    (5)

For simplicity, we'll assume that the number of logical qubits needed for magic state factories is accounted for in this model (see notes in the Desmos plot of Figure 4 for details of the assumptions and the relevant references), and we will assume that the physical qubit overhead is captured by the code distance used for the data qubits (though the factories typically have multiple layers of concatenation with differing code distances).

FIG. 4: The scalability model of Eq. 3 predicts that, for each finite value of the scalability parameter s, there is a maximum problem instance size that can be accommodated by the architecture. Each curve is a contour in the N_phys-Q plane of solutions to Eq. 9 for a particular value of the scalability parameter s (3, 3.5, 4, 4.5, ∞). The remaining parameters of Eq. 9 are set to ε_th = 10^-2, ε_0 = 10^-4, a = 4.12 × 10^9, and b = 0.515 following Table II in [60]. The transition from solid to faded dashed curves occurs when the physical qubit number reaches N_phys^opt = N_phys^max / e^2, beyond which increasing the code distance leads to diminishing returns. The diagonal black dotted lines show the physical qubit count for two fixed code distances: 7 (small distance) and 51 (large distance). Note that code distance is discrete, which, if taken into account, would result in the contours jumping from one fixed-code-distance line to the next. However, we have chosen to allow the distance parameter to be continuous, for ease of viewing the trends of the contours. An editable version of the plot can be accessed here: https://www.desmos.com/calculator/7mbziuf8gd
For 2) we will assume a model of error suppression based on simulations of the surface code in [97]. This model is ε_L = A (ε_phys / ε_th)^((d+1)/2), where [86, 97] estimate A = 0.1 and ε_th = 0.01. The number of physical qubits used to encode one logical qubit in the surface code is 2(d+1)^2, leading to N_phys = 2(d+1)^2 Q for Q logical qubits. In the case that ε_phys is independent of the number of physical qubits, ε_L can be made arbitrarily small, with cost (depending on code distance d) scaling as d ~ log(1/ε_L). However, if we replace ε_phys with the N_phys-dependent function ε_phys(N_phys) of Eq. 3 (i.e., the scalability model), the logical error rates cannot be made arbitrarily small. Error suppression ceases when ε_phys = ε_th, which occurs when N_phys = (ε_th/ε_0)^s; beyond this point, including more qubits (i.e., increasing the code distance) leads to an increase in the logical error rate. This number of physical qubits is therefore the maximal number of physical qubits that should be used under the scalability model:

N_phys^max = (ε_th/ε_0)^s.    (8)

So, for example, when ε_th = 0.01 and ε_0 = 0.001 (as is sometimes assumed for superconducting qubit resource estimates with the surface code [86]) we have N_phys^max = 10^s. A more optimistic setting of ε_0 = 0.0001 leads to N_phys^max = 10^(2s). Figure 1 depicts contours of this maximum physical qubit number. Putting these together, we can determine the number of physical qubits required to ensure that QPE returns an ε-accurate estimate (with high probability) as a function of the number of logical qubits Q (roughly corresponding to problem size). This relationship is expressed by the N_phys-Q pairs that ensure Eq. 5 is satisfied (i.e., that logical error rates are low enough for the algorithm to succeed),

A (ε_0 N_phys^(1/s) / ε_th)^(√(N_phys/(2Q)) / 2) ≤ ε_c / (a Q^b).    (9)

Before applying this result to the quantitative example that has been set up, we make a few general remarks that apply to any algorithm analyzed in this manner. First, we consider the right-hand side of this inequality. This function will determine an optimal value for N_phys, which we label N_phys^opt. Previously, we described a maximum value of N_phys as set by the condition of ε_phys being below threshold. However, the maximum allowed value of Q is now set by a function of N_phys; to increase this ceiling, we should maximize the right-hand side as a function of N_phys. This function achieves its maximum at a value of

N_phys^opt = (ε_th/ε_0)^s / e^2 = N_phys^max / e^2.    (10)

This is considered the optimal number of physical qubits in that it enables the use of the largest number of logical qubits. As an example, for ε_th = 0.01, ε_0 = 0.0001, and s = 3.5, the optimal number of physical qubits is N_phys^opt ≈ 1.35 × 10^6. The quantities N_phys^max and N_phys^opt can help us to quantify the scalability parameters ε_0 and s that are relevant to the NISQ-to-EFTQC and EFTQC-to-FTQC transitions. At the end of the previous subsection we described how the NISQ-to-EFTQC transition might occur in the range of 100 to 10,000 physical qubits. Considering Equations 8 and 10, this determines the (ε_0, s) pairs characteristic of this transition, shown as the red-to-green blend in Figure 1.
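The closed-form expressions of Equations 8 and 10 are easy to check numerically. The sketch below (function names ours) evaluates the maximal and optimal physical qubit numbers and reproduces the examples quoted in the text:

```python
import math

def n_phys_max(eps_th: float, eps0: float, s: float) -> float:
    """Eq. 8: largest qubit number before eps_0 * N^(1/s) exceeds eps_th."""
    return (eps_th / eps0) ** s

def n_phys_opt(eps_th: float, eps0: float, s: float) -> float:
    """Eq. 10: optimal qubit number, N_max / e^2."""
    return n_phys_max(eps_th, eps0, s) / math.e ** 2

# eps_th = 0.01 with eps0 = 0.001 gives N_max = 10^s:
print(n_phys_max(0.01, 0.001, 3.0))   # ~1e3
# the more optimistic eps0 = 0.0001 gives N_max = 10^(2s):
print(n_phys_max(0.01, 0.0001, 3.0))  # ~1e6
# and the worked example from the text: s = 3.5, eps0 = 1e-4
print(n_phys_opt(0.01, 1e-4, 3.5))    # ~1.35e6 physical qubits
```

Note how strongly both quantities depend on s: a half-unit change in scalability shifts the warranted qubit count by an order of magnitude.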
We motivate the idea that the transition from EFTQC to FTQC is characterized by how the quantum computations are bottlenecked. In the case of fault-tolerant quantum computing, it is envisioned that ever larger quantum computations can be run as long as the computations are not practically limited by resources such as time and energy. We propose that early fault-tolerant quantum computing be characterized by the regime in which the largest possible quantum computations are limited by the maximum number of physical qubits warranted in the architecture (N_phys^max or N_phys^opt). Viewing time as the limiting resource, if we assume that the quantum computation must finish within a month, then this limits the problem sizes that can be accommodated accordingly. Using the quantum chemistry resource estimations of [86] as a point of reference, problem instances that would take a month would require on the order of 10^7 physical qubits. There may be other classes of problems that become runtime-limited when fewer or more physical qubits are required. Thus, in Figure 1 we depict the transition from EFTQC to FTQC as the green-to-blue gradient ranging from 10^6 to 10^8 physical qubits.
Second, we consider the left-hand side of Equation 9. Most of the parameters are contained in the factor G/ε_c, where G is the number of operations per circuit and ε_c is the circuit error rate that the algorithm can tolerate. In Section III B we will explain the importance of this factor in quantifying the "burden" placed on the elementary fault-tolerant protocols. Equation 9 shows that decreasing this burden factor affords a decrease in the number of physical qubits Q_phys. Alternatively, when fixing the number of physical qubits, a reduction in the burden factor affords an increase in the number of logical qubits, and subsequently in the maximum problem size or "reach" of the quantum computer. The methods introduced in Section III A will be understood to reduce this burden factor, enabling algorithms to be run using fewer physical qubits, though at the cost of an increase in runtime.
Figure 4 shows the contours of solutions to Equation 9 for several scalability values α. The most striking feature is that, for finite values of scalability (α < ∞), there is a maximum-size instance (measured by the number of logical qubits Q_L) that the architecture can accommodate using the QPE algorithm. For example, in the case of α = 3.5, ε_0 = 0.0001, and ε_th = 0.01, we find that the largest instance that can be accommodated (i.e., the "reach" of the quantum architecture) is Q_L ≈ 90. The maximum number of logical qubits Q_L^max can be obtained by setting Q_phys = Q_phys^opt in Equation 9 and solving for Q_L in terms of the Lambert W function, where W(x) is the solution to W(x) exp(W(x)) = x. Using the upper bound W(x) ≤ ln(x), we can lower bound the maximum logical qubit number as in Equation 12, where we have used the expression for Q_phys^opt. This maximum solvable problem size motivates the question explored in the next section: with a fixed scalability, is it possible to extend the "reach" of a quantum architecture using algorithms designed for finite scalability?
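Since the bound above leans on the Lambert W function and the inequality W(x) ≤ ln(x) (valid for x ≥ e), both are easy to sanity-check numerically; the Newton-iteration helper below is an illustrative implementation, not part of the paper:

```python
import math

def lambert_w(x, iters=50):
    """Principal branch of W, solving w * exp(w) = x for x > 0 (Newton's method)."""
    w = math.log(x) if x > math.e else 1.0  # rough initial guess
    for _ in range(iters):
        ew = math.exp(w)
        w -= (w * ew - x) / (ew * (w + 1.0))
    return w

for x in (3.0, 10.0, 1e4):
    w = lambert_w(x)
    assert abs(w * math.exp(w) - x) < 1e-9 * x  # defining equation of W
    assert w <= math.log(x)                     # the bound used in the text (x >= e)
print("W(x) exp(W(x)) = x and W(x) <= ln(x) verified for x >= e")
```

The bound is what lets the implicit solution for Q_L^max be turned into the explicit logarithmic lower bound quoted in Equation 12.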
These algorithms have typically been developed with certain improvements in mind, including: the reduction of logical qubit number [62,108], the reduction of the number of operations per circuit [49,51,106], the reduction of expensive operations [60,63] (e.g., non-Clifford operations like T gates and Toffoli gates), and establishing or increasing robustness to error [51,92,93,99,110]. In many cases, achieving these improvements comes at a cost. The predominant cost is an increase in the number of circuit repetitions (also known as the "sample complexity", "number of samples", or "shots"), and, subsequently, runtime. Another cost is an increase in classical processing (e.g., converting the measurement outcome data from the many circuit repetitions into the estimate of the ground state energy). In the next subsection we will detail an example algorithm where these trade-offs can be easily understood.
One of the first algorithms suited for early fault-tolerant quantum computers was the so-called α-VQE method [49]. This method for solving the task of amplitude estimation enables a trade-off between the number of quantum operations per circuit, O(1/ε^α), and the number of circuit repetitions, Õ(1/ε^(2(1−α))) (where Õ indicates that we ignore polylog factors), set by a tunable parameter α ∈ [0, 1]. Later, Wang et al. introduced a variable-depth amplitude estimation algorithm that is robust to substantial amounts of circuit error [51]. Similar methods were explored in the context of quantum algorithms for finance [52,111], and some have been implemented on quantum hardware [112,113].
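To make this trade-off concrete, the quoted exponents imply a total runtime (operations per circuit times repetitions) scaling as 1/ε^(2−α), interpolating between the central-limit and Heisenberg limits; a quick numerical illustration, with constants and polylog factors dropped:

```python
import math

# Illustrative scaling counts for the alpha-VQE style trade-off (exponents
# from the text; constants and polylog factors ignored).
eps = 0.001
runtimes = {}
for a in (0.0, 0.5, 1.0):
    ops = (1 / eps) ** a                # operations per circuit ~ 1/eps^a
    reps = (1 / eps) ** (2 * (1 - a))   # circuit repetitions ~ 1/eps^(2(1-a))
    runtimes[a] = ops * reps            # total runtime ~ 1/eps^(2-a)
    print(f"a={a}: ops~{ops:.0f}, reps~{reps:.0f}, runtime~{runtimes[a]:.0f}")

# a = 1 recovers Heisenberg-limited runtime ~1/eps;
# a = 0 recovers central-limit runtime ~1/eps^2.
assert math.isclose(runtimes[1.0], 1 / eps)
assert math.isclose(runtimes[0.0], (1 / eps) ** 2)
```

Shallower circuits (smaller α) always cost more total runtime here, which is the recurring price of EFTQC-style circuit trading.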
Another thread in the development of quantum algorithms for early fault-tolerant quantum computing has focused on problems related to physical systems, such as electronic structure or condensed matter systems specified by their Hamiltonians. In this direction, one of the first papers to introduce the phrase "early fault-tolerant" was [60], where the author reduces the counts of expensive non-Clifford operations to make simulation of the Fermi-Hubbard model more amenable to smaller fault-tolerant quantum computers. Other methods have sought to reduce the logical qubit requirements. Previous approaches had mostly been based on the quantum phase estimation (QPE) algorithm [15], which uses additional ancilla qubits for reading out the phase [114]. A method for estimating the spectrum of a Hamiltonian without the use of QPE (and its ancilla qubit overhead) was introduced in [115]; instead, the spectrum is estimated by classically post-processing measurement outcome data from Hadamard tests of the c-U circuit. Then, one of the first papers to motivate their algorithm development in the context of early fault-tolerant quantum computers was [62]. The authors developed a novel post-processing technique for the measurement outcome data generated in [115] and carried out an analysis of their ground state energy estimation (GSEE) algorithm showing that they could achieve a runtime with Heisenberg-limited scaling of O(1/ε), compared to the O(1/ε^4) runtime of [115] (when improved and applied to the task of GSEE). This work solidified this new research direction in quantum algorithms and helped place earlier works in the context of early fault-tolerant quantum computing. By combining the insights of [62], linear combination of unitaries [116], and qDRIFT [117], the authors of [63] developed a method that exploits structure in the Hamiltonian to make the overall complexity of GSEE independent of the number of terms in the Hamiltonian. They also demonstrate that their method enables trading the number of operations per circuit for the number of circuit repetitions. A similar methodology has been applied to the task of estimating ground state properties [104], which is often required in industrially relevant quantum chemistry calculations [98,118].
The EFTQC algorithms developed for amplitude estimation [49,51,52] established a trade-off between operations per circuit and circuit repetitions. These tune the runtime between Heisenberg-limited scaling O(1/ε) and central-limit scaling O(1/ε^2). This raises the question of whether a similar trade-off can be established for the task of ground state energy estimation. Such circuit trading was established in [106]. They showed an exponential reduction in the number of operations per circuit in terms of accuracy dependence (i.e., a reduction from Õ(1/ε) to Õ(log 1/ε)). This reduction comes at the cost of an increase in circuit repetitions from Õ(log 1/ε) to Õ(1/ε^2). With this method, the minimal number of operations per circuit is Õ(1/Δ), where Δ is a lower bound on the spectral gap of the Hamiltonian H. Often Δ is larger than ε, enabling a reduction in the number of operations using this method [106]. Later, Ding and Lin [57] established a similar result using an approach based on numerically fitting a parameterized curve to a set of estimated expectation values, which they refer to as quantum complex exponential least squares (QCELS). They also showed that, assuming the ground state overlap |⟨φ|ψ⟩| = η is sufficiently close to 1, the circuit depth of energy estimation can be made arbitrarily small while still retaining Heisenberg-limited scaling.
These two methods that enabled a reduction in the number of operations per circuit [57,106] required a runtime and number of circuit repetitions scaling as O(1/η^4). In contrast, for methods using more operations per circuit, a runtime scaling of O(1/η^2) [119] (and even O(1/η) [55]) was shown to be possible. This motivated the search for algorithms that could improve the runtime scaling with respect to overlap to O(1/η^2) while also using few operations per circuit. The first work to achieve this was [66]. This algorithm uses the quantum computer in a very different manner from previous approaches. To estimate the ground state energy, a classical computer first generates uniformly random samples of energy values on an interval expected to contain the ground state energy. Then, for each sample, a quantum circuit is designed such that a binary measurement outcome from the quantum computer is used to decide whether that sample should be accepted or rejected. The accepted samples are proven to be drawn from a Gaussian distribution centered about the ground state energy, so the mean of these samples will be close to this value. The number of operations per circuit can be tuned anywhere from O(1/ε) to O(1/Δ), where the consequence is a broadening of the Gaussian peak width (requiring more samples to achieve the same accuracy). Later, [107] also developed a ground state energy estimation algorithm with O(1/Δ) operations per circuit and O(1/η^2) circuit repetitions based on a Gaussian-filter variant of quantum phase estimation. Although this variant uses additional ancilla qubits, like QPE, [107] shows that the number of operations per circuit is reduced compared to previous methods.
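The accept/reject scheme described above can be caricatured with a purely classical toy: below, the binary quantum measurement is replaced by a simulated coin whose acceptance probability traces a Gaussian centered at a hidden energy E0. This is an illustrative sketch of the sampling logic only, not the algorithm of [66]; all numerical values are made up.

```python
import math, random, statistics

random.seed(0)
E0 = -1.27    # hidden "ground state energy" (made-up value)
sigma = 0.05  # width of the Gaussian acceptance profile

accepted = []
for _ in range(200_000):
    E = random.uniform(-2.0, 0.0)  # uniformly random energy sample
    # Stand-in for the binary quantum measurement: accept with a
    # probability that traces a Gaussian centered at E0.
    if random.random() < math.exp(-((E - E0) ** 2) / (2 * sigma ** 2)):
        accepted.append(E)

estimate = statistics.mean(accepted)
print(estimate)  # close to E0 = -1.27
```

The accepted samples are (approximately) Gaussian-distributed about E0, so their mean is a good estimator; widening sigma (the analogue of shortening the circuits) broadens the peak and requires more samples for the same accuracy.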
Another thread of research in ground state energy estimation has drawn on methods from numerical linear algebra to classically post-process quantum measurement outcome data in a more efficient and robust manner [50,53,58,103,105,120]. These methods employ techniques like filter diagonalization [50,53], Lanczos methods [58], and dynamic mode decomposition [103]. Some of these methods [58,120] have been shown to require a number of operations per circuit scaling only as Õ(1/Δ), similar to [106]. However, in the case of [58], the runtime upper bound scales as Õ(1/Δ^2), compared to the Õ(1/Δ) runtime scaling of [106]. An important direction for future work will be to carry out empirical studies that give more realistic estimates of the runtimes, required operations per circuit, and robustness of these algorithms.
As mentioned above, all ground state energy estimation methods have a runtime that depends on the overlap between the input trial state and the ground state, ⟨φ|ψ⟩ = η. The runtime of these algorithms can be improved by using a ground state preparation method to boost this overlap before running the energy estimation algorithm. For papers analyzing the interplay between ground state energy estimation and state preparation, see [121,122]. An excellent overview of ground state preparation algorithms is presented in Table II of [55]. In the regime of early fault-tolerant quantum computing, it may be advantageous to use ground state preparation methods that reduce the number of operations per circuit. One such method was proposed in [56], where an approximate Gaussian filter of varying width is used to suppress high-energy states and boost the overlap with the ground state. More recently, [108] introduced a novel ground state preparation method based on Lindblad dynamics, engineering a process that has the ground state as its unique steady state.
Finally, we discuss another important thread of research for early fault-tolerant quantum computing: robustness. As shown in Section II B, ε_c, the circuit error rate that the algorithm can tolerate, plays a role in determining the fault-tolerant overhead. An increase in the robustness (i.e., in ε_c) reduces the fault-tolerant overhead. A canonical reference on the robustness of quantum algorithms is [91], which introduced the robust phase estimation algorithm. The authors showed that their variant of quantum phase estimation was able to tolerate a substantial circuit error rate of ∼35%. Such robustness analysis is especially important when reduction of the fault-tolerant overhead is essential, as in quantum algorithms suited for early fault-tolerant quantum computers.
One of the first works to analyze the robustness of a quantum algorithm in the EFTQC setting was [92]. Here, the simple randomized Fourier estimation algorithm was introduced and its robustness was analyzed with respect to two different models of algorithmic noise (i.e., models of how the measurement outcome probabilities are impacted). See Section III B for a discussion of these algorithmic noise models. A variant of the randomized Fourier estimation algorithm that enables circuit trading was developed in [99]. There, the robustness of the algorithm is analyzed with respect to the exponential decay model (see Section III B). Most recently, [110] developed an algorithm for ground state energy estimation that is provably robust with respect to the exponential decay model.
The works presented above form a foundation for an increasingly important research direction: the development of quantum algorithms suited to the capabilities of finite-scalability quantum computers. They are built to reduce logical qubit counts, to enable a trade-off between gate count and sample cost, and to be robust to error in the circuit. All of these features contribute to reducing the burden placed on fault-tolerant operations and thus to reducing fault-tolerant overheads. This helps to run larger problem instances on earlier quantum computers or, in other words, to "extend the reach" of a finite-scalability quantum architecture. In the following subsection we make these concepts clearer through an example.

B. Example: randomized Fourier estimation under finite scalability
Section II B ended with the question of how we might extend the reach of finite-scalability quantum computers. The previous subsection overviewed a host of quantum algorithms suited for addressing this question. In this section we take one quantum algorithm from the previous section and quantitatively investigate its ability to extend the reach of a finite-scalability quantum computer for the task of phase estimation. We use as our example the randomized Fourier estimation (RFE) algorithm, as introduced in [92] and adapted for trading circuit repetitions for number of operations per circuit in [99]. The RFE algorithm solves the task of phase estimation introduced in Section II B. It is an alternative to the standard quantum phase estimation (QPE) algorithm [123] and related algorithms such as robust phase estimation (RPE) [91].
We consider the RFE algorithm to be a prototypical quantum algorithm suited for early fault-tolerant quantum computing given that it has the following features:
• Qubit conservation: the (high-level) circuit conserves qubit count by using just one ancilla qubit.
• Circuit trading: the number of operations per circuit is tuned by the input parameter K, enabling a trade-off between this quantity and the required number of circuit repetitions.
• Robustness: the algorithm is robust to circuit error and this robustness can be understood in terms of a signal corrupted by a noise floor.
As we will show, and like many of the other EFTQC algorithms introduced in Section III A, these features equip the algorithm to accommodate the limited scalability of the early fault-tolerant quantum computing regime. Furthermore, RFE is very simple, which helps facilitate discussion of the algorithmic concepts relevant to early fault-tolerant quantum computing.
We will (a) briefly review randomized Fourier estimation, and then investigate how (b) trading circuit repetitions for a decrease in operations per circuit and (c) robustness to error each help to increase the problem instance size (i.e., the "reach") that can be solved with a finite-scalability architecture.

FIG. 5: An archetypal circuit template used by many EFTQC algorithms, built around the controlled-U^k operation. The measurement outcome probabilities depend on |ψ⟩ and β as Pr(±1|k, β) = ½(1 ± cos(kθ + β)). In the case of the randomized Fourier estimation (RFE) algorithm, the measurement outcomes are processed to estimate the phase θ. The parameter k is uniformly randomly chosen from {0, . . . , K − 1} for each circuit repetition; K then controls the maximal circuit depth and is used to reduce the number of operations per circuit. The boxed elements in blue can be collectively interpreted as a measurement with respect to the observable M_β = cos(β) X − sin(β) Y, where X and Y are the conventional Pauli operators and Z(β) = diag(1, exp(iβ)).

a. RFE Intro
The RFE algorithm relies on the Hadamard test circuit depicted in Figure 5. Each Hadamard test circuit is parameterized by a circuit depth k and a phase parameter β. The output measurement probabilities correspond to an oscillatory function that encodes θ: Pr(±1|k, β) = ½(1 ± cos(kθ + β)). It is convenient to view the expected value of the outcome z, which is g(k) = cos(kθ + β), as the true signal encoding θ. The phase θ is then estimated from measurement outcome data in a manner similar to estimating the frequency of a noisy estimate of g(k). The parameters k and β are chosen uniformly at random for each sample, with k ∈ {0, . . . , K − 1} and β ranging between 0 and 2π. Each measurement outcome z obtained from the circuit is used to form an unbiased estimator f̂_j = 2z e^(−i2πjk/J) e^(−iβ) of the discrete Fourier transform of the signal g(k), where J is an algorithm parameter that sets the grid size of the Fourier spectrum. The estimate of the Fourier signal can be made more accurate by averaging over multiple samples. By accumulating enough measurement outcomes, one can estimate θ accurately (i.e., to within ε) with high probability (i.e., at least 1 − δ) by locating the tallest peak (the point of largest magnitude) in the estimate of the discrete Fourier transform. The algorithm's accuracy is limited by the parameter J, which is set to ensure that the Fourier resolution matches the desired accuracy.

The algorithm has three parameters:

J: sets the Fourier-domain grid spacing.

K: sets the maximum number of c-U calls per circuit.

M: sets the number of circuit repetitions (i.e., samples).
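A minimal classical simulation of RFE ties these pieces together. It assumes the Hadamard-test outcome probabilities Pr(±1|k, β) = ½(1 ± cos(kθ + β)) and the per-shot estimator 2z e^(−i2πjk/J) e^(−iβ) as read from the text; all parameter values below are illustrative, and the quantum circuit is replaced by direct sampling from the outcome distribution.

```python
import cmath, math, random

random.seed(7)
theta = 1.0                  # phase to be estimated (ground truth for the toy)
J, K, M = 200, 50, 100_000   # Fourier grid size, max depth, circuit repetitions

# Accumulate per-depth sums of 2 z e^{-i beta}; the j-dependence factors out,
# so the J-point spectrum can be formed after the sampling loop.
S = [0j] * K
for _ in range(M):
    k = random.randrange(K)
    beta = random.uniform(0.0, 2 * math.pi)
    p_plus = 0.5 * (1 + math.cos(k * theta + beta))  # Pr(z = +1 | k, beta)
    z = 1 if random.random() < p_plus else -1
    S[k] += 2 * z * cmath.exp(-1j * beta)

# Averaged Fourier estimate and its peak location.
F = [sum(S[k] * cmath.exp(-2j * math.pi * j * k / J) for k in range(K)) / M
     for j in range(J)]
j_star = max(range(J), key=lambda j: abs(F[j]))
theta_hat = 2 * math.pi * j_star / J
print(theta_hat)  # within roughly one grid spacing (2*pi/J ~ 0.03) of theta
```

Averaging over the random β cancels the unwanted e^(−i(kθ + 2β)) term, leaving a spectrum peaked near j ≈ θJ/(2π), which is why locating the tallest peak recovers θ.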
b. Circuit trading We now describe how this algorithm is able to trade the number of operations per circuit for circuit repetitions. The maximum number of operations per circuit (in expectation) is (K − 1) G_U, where G_U is the number of operations in a single c-U. In the quantum phase estimation algorithm, 1/ε calls are made to c-U, corresponding to setting K ≈ 1/ε. In RFE we can reduce the number of operations per circuit by setting K to any value less than 1/ε. This reduction in K reduces the burden factor in Equations 9 and 12 proportionally. Figure 6 shows how varying reductions in the burden factor lead to an increase in the problem size that RFE can accommodate. Equation 12 predicts that the accommodated problem size grows with the squared logarithm of the inverse burden factor. For the specific example considered, the largest problem instance can be increased from 90 to over 200 logical qubits by decreasing K by a factor of 100,000.
As mentioned previously, circuit trading means that a decrease in operations per circuit comes at the cost of an increase in the number of circuit repetitions. This trade-off can be understood as follows. Decreasing K increases the width of the peak in the discrete Fourier spectrum. With the spectrum flatter near the peak, smaller amounts of noise in the signal are able to shift the peak location by more than ε (leading to algorithm failure). This statistical sampling noise must then be reduced by taking more samples. The analytic relationship, given in the appendix of [99], describes the nature of the trade-off between operations per circuit and circuit repetitions.
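The peak-broadening mechanism can be seen directly in the noiseless expected spectrum, which under the uniform-k sampling described above is a Dirichlet-kernel-like peak whose width scales roughly as 1/K; a sketch with illustrative parameters:

```python
import cmath, math

def expected_spectrum(theta, J, K):
    """Noiseless expectation of the RFE spectrum: |average over depths
    k = 0..K-1 of e^{ik(theta - 2*pi*j/J)}| on the J-point grid."""
    return [abs(sum(cmath.exp(1j * k * (theta - 2 * math.pi * j / J))
                    for k in range(K)) / K)
            for j in range(J)]

theta, J = 1.0, 200
widths = {}
for K in (50, 5):
    spec = expected_spectrum(theta, J, K)
    peak = max(spec)
    # Count grid points above half maximum: a crude proxy for peak width.
    widths[K] = sum(1 for s in spec if s > peak / 2)
    print(f"K={K}: half-max width ~ {widths[K]} grid points")

# The K=5 peak is far broader and flatter, so the same statistical noise
# can displace its location further; hence more shots are needed.
assert widths[5] > widths[50]
```

This is the quantitative content of the trade-off: a 10x reduction in K broadens the peak roughly 10x, and the sample count must grow to compensate.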
c. Robustness The RFE algorithm has been analyzed in previous work with respect to three different algorithmic noise models: adversarial noise and Gaussian noise [92], and exponential decay noise [99]. We give brief explanations of how the Gaussian noise and the exponential decay noise impact the algorithm performance, and thus explain the robustness of the RFE algorithm to a particular model of noise. In [92], the Gaussian noise model is analyzed, wherein it is assumed that, for each circuit (labeled by k), the output probability has been corrupted by a small perturbation ξ_k drawn from a Gaussian distribution with mean zero and standard deviation σ. How does this impact the performance of the algorithm? The ξ_k can be understood to corrupt the expected value of z (i.e., the signal g(k)). This impacts the Fourier spectrum by adding a "noise floor" related to the Fourier transform of the ξ_k. The algorithm can still succeed as long as this noise floor does not shift the location of the peak by more than ε. The authors of [92] proved that if σ is below a certain quantity (dependent on ε and δ) then the algorithm can succeed with more than 1 − δ probability.

FIG. 6: This plot shows that, under the scalability model, the EFTQC algorithm randomized Fourier estimation (RFE) can extend the reach of the quantum computation from 90 logical qubits to over 200 logical qubits. This is achieved by either reducing the number of c-U calls used per circuit or increasing the tolerable circuit error rate ε_c in the RFE algorithm. Both of these reduce the burden factor G/ε_c appearing in Equation 9. This increase in the "reach" of the quantum computer comes at the cost of an increase in the runtime (roughly by the burden factor), which is a combination of the decrease in time per circuit and the increase in the number of circuit repetitions. Here we take the scalability to be α = 3.5 with ε_0 = 10^−4, which implies an optimal number of physical qubits of Q_phys^opt ≈ 1.35 × 10^6.
In [99], the exponential decay model is derived from a lower-level noise model. The exponential decay model assumes that the likelihood function includes a factor that decreases exponentially in k, with decay parameter λ. Experiments [112,113] show that this model is accurate for small systems. This exponential decay factor causes the expected value of z (i.e., the underlying signal g(k)) to attenuate as k is increased. In the Fourier domain, this attenuation translates into an attenuation of the peak. As with the peak broadening due to reducing K, a smaller amount of statistical noise is then sufficient to shift the location of the estimated peak by more than ε. Accordingly, more samples must be taken to sufficiently reduce this statistical noise. Under the assumption that the exponential decay model holds exactly, [99] shows that, even with arbitrarily large decay parameter λ, the algorithm can generate an ε-accurate estimate with probability greater than 1 − δ. In other words, the algorithm can be made arbitrarily robust. The reason is that the exponential decay error does not shift the location of the peak in the Fourier spectrum of the expected signal. This increase in robustness translates into a decrease in the burden: allowing the circuit error rate ε_c to increase towards 1 increases the allowed logical error rate ε_L, decreasing the burden factor. Consider a reduction in the burden on account of an increase in the tolerable circuit error rate ε_c, which quantifies the robustness of the algorithm. Note that in the case where ε_c is close to 1, a better approximation than the union bound can be used to replace ε_c with ln(1/(1 − ε_c)), which grows to infinity as ε_c → 1. We remark that in the case of the exponential decay model, the circuit error rate is ε_c = 1 − e^(−λ), which leads to ln(1/(1 − ε_c)) = λ. Therefore, as we allow for an increase in λ, the burden factor is reduced proportionally (keeping in mind that, for small values of ε_c, ln(1/(1 − ε_c)) is approximately equal to ε_c).
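The key robustness property invoked above, that exponential decay attenuates the Fourier peak without moving it, can be checked numerically on the noiseless expected signal; this is an illustrative sketch (the signal model and all parameter values are assumptions of the sketch):

```python
import cmath, math

theta, J, K, lam = 1.0, 200, 50, 0.05  # illustrative values

def peak_bin(decay):
    """Peak location of |DFT| of the (possibly attenuated) expected signal
    e^{-decay * k} e^{ik theta}, evaluated on a J-point Fourier grid."""
    F = [abs(sum(math.exp(-decay * k) *
                 cmath.exp(1j * k * (theta - 2 * math.pi * j / J))
                 for k in range(K)))
         for j in range(J)]
    return max(range(J), key=lambda j: F[j])

# Decay reduces the peak height but leaves its location unchanged.
assert peak_bin(0.0) == peak_bin(lam)
print("peak unshifted by exponential decay; only its height is reduced")
```

Because the peak does not move, only the signal-to-noise ratio degrades, and more repetitions restore the estimate, which is what underlies the "arbitrarily robust" claim of [99].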
We previously discussed Figure 6 in the context of circuit trading. This figure can also be used to demonstrate the impact of increased robustness. Considering an increase in ε_c to be the cause of the burden factor reduction, Figure 6 shows how the reach of the quantum computer is increased accordingly. As with circuit trading, there is a price paid for this extended reach: for the RFE algorithm, [99] shows that the runtime grows exponentially in λ for λ ≥ 1/2 (where K is set to its minimum value of 2). Therefore, in practice, there may be an upper limit to the degree of robustness, beyond which the runtime becomes too large to be practical. This is an issue that many error mitigation techniques face [124]. The similarity is not surprising in that the way RFE accommodates error is itself a type of error mitigation.
In practice, the exponential decay model is not exact. Instead, we expect that, for any given device and compilation of c-U, the likelihood function will include some deviation (possibly varying over time) from the exponential decay model likelihoods. While in the exact exponential decay model the Fourier peak location is unchanged, deviations from this model can shift the location of the peak. This sets a lower limit on the achievable accuracy ε, a feature also found in the bounded adversarial noise model and the Gaussian noise model of [92].
We have demonstrated how the RFE algorithm, as an archetypal EFTQC algorithm, enables a reduction in the burden placed on the fault-tolerant protocols. Figure 6 demonstrates how larger problem instance sizes can be accommodated either by reducing the number of operations per circuit (decreasing K) or by increasing the robustness of the algorithm (increasing ε_c). This is because the burden factor G/ε_c incorporates both of these quantities. For both ways of reducing the burden factor, there is an increase in the runtime of the algorithm. Although the RFE algorithm enables parallelizing the circuit repetitions over multiple quantum computers to reduce runtime, the runtime is expected to be a bottleneck for many applications. Therefore, the runtime costs of reducing the fault-tolerance burden must be carefully considered. See [99] for a quantitative account of such runtime costs for RFE. We leave a thorough investigation of the runtime costs of decreasing the fault-tolerance burden for EFTQC algorithms to future work.

IV. DISCUSSION AND OUTLOOK
In this perspective we investigated the regime between NISQ and FTQC, which is referred to as "early fault-tolerant quantum computing". To understand the prospects for utility in this regime, we proposed a simple computational model to quantitatively capture the performance of quantum architectures within these three regimes. The scalability model characterizes the ability of a quantum hardware vendor to provide systems with low physical error rates as the requested number of physical qubits is increased. This differs from previous approaches that assume a scale-independent performance for their quantum architectures [85-87]. We demonstrated that the QPE algorithm [59] compiled to the surface code [97] has a limit on the problem size that can be accommodated by a vendor with finite scalability, according to our model. Unsurprisingly, this is due to scale-dependent error rates (Equation 3) combined with the diminishing returns of fault-tolerant protocols as the error rates of the device approach the numerically estimated threshold value [71]. Next, we showed that by using an algorithm suited to finite scalability (the randomized Fourier estimation algorithm [92]), when granted the same scalability, the problem size limit can be extended from around 90 logical qubits (for QPE) to around 130 logical qubits (using the same number of physical qubits). This comes at the cost of roughly a 100-fold increase in runtime.
The scalability model enabled us to quantitatively discuss the transition from NISQ to EFTQC to FTQC. At the end of Section II A we described how the nature of the transition from NISQ to EFTQC is difficult to predict; future advances might allow for implementing certain fault-tolerant components far sooner than current methods would enable. However, we mentioned some of the technical considerations that might govern the transition and, accordingly, depict this transition in Figure 1 as occurring in the range of Q_phys^max from 100 to 10,000. Regarding the transition from EFTQC to FTQC, we described in Section II A how each regime might be characterized by different bottlenecks; EFTQC is characterized by the largest solvable problem instances being bottlenecked by the number of available physical qubits (or, better, Q_phys^opt), whereas FTQC is characterized by the largest solvable problem instances being bottlenecked by runtime. Accordingly, we explain how this transition might occur in the range of Q_phys^max from 10^6 to 10^8. Different factors, such as hardware, algorithmic, and fault-tolerance advances, play a dominant role in characterizing the EFTQC regime. The recent work of [125] provides evidence for the utility of noisy quantum devices in the pre-fault-tolerant era and emphasizes the role of hardware advances in achieving this. Moreover, many works have highlighted the importance of quantum algorithm development in leveraging the capabilities of quantum devices to their maximum potential [49,51,112,126]. Recent works have also explored the effect of noise on the performance of quantum algorithms and highlighted the need to use QEC prudently [99,111,127]. This work is a first attempt to incorporate all the aforementioned factors (hardware, algorithmic, and fault-tolerance advances) in order to validate the assumption that there is a meaningful regime of early fault-tolerant quantum computing methods, which is usually assumed in papers on
the subject [54,57,62]. What remains to be determined is how rapidly quantum hardware will progress through this regime, or, in other words, how the scalability of quantum hardware vendors will increase over time.
To put these results into context, recent resource estimates for a variety of molecules relevant to Li-ion electrolyte chemistry [128] show that more than 100 logical qubits would be necessary to tackle such systems. This indicates that extending the problem size limit from 90 logical qubits to over 130 within the framework discussed here might have interesting implications, i.e., allowing the study of problems of interest before the realization of the FTQC regime. Our results suggest that the EFTQC regime could exist in a meaningful way, i.e., using the same quantum resources as FTQC (number of physical qubits and scalability model) while affording the use of a larger number of logical qubits (Fig. 6).
This work explored the usefulness of the EFTQC regime for a specific quantum algorithm and QEC model, namely RFE [92] and the surface code [97]. The underlying methodology, however, can easily be extended to other algorithms and fault-tolerant protocols, using the suggested or alternative scalability models. Although the proposed model of scalability is quite general, we do not expect it to perfectly fit the scalability profile of vendors over many orders of magnitude; however, we anticipate that it can capture the qualitative behavior over at least a few orders of magnitude. Moreover, we showed in Appendix B that, even when a more optimistic model is used (specifically, a logarithmic model), the qualitative finding remains: there is an upper limit on the size of the quantum computation.
Future work could explore other models of scalability that, for example, might be given directly by the hardware provider and accommodate the features of the architecture as it is scaled. Another interesting direction is to adapt the scalability model to address the interplay between quantum error mitigation and quantum error correction [129,130], which will help drive the transition from NISQ to EFTQC. Moreover, the proposed framework could be applied to other combinations of algorithms and quantum error correcting codes and be used to examine the utility of the EFTQC regime for other potential application fields of quantum computing.
Our work provides evidence for the utility of the EFTQC regime within a framework that includes crucial factors of quantum computing, such as hardware, algorithm, and fault-tolerance advances.
To incorporate the hardware advances, we have introduced a simple scalability model to capture the performance of devices that are continually improving. As it is yet unclear how exactly quantum devices will scale up to incorporate millions or billions of physical qubits [40], the proposed model of scalability is just a first attempt to bridge the gap between NISQ and FTQC. Future works in these directions could help move beyond the NISQ-FTQC dichotomy and further explore how EFTQC might deliver practical quantum advantage at scale.

FIG. 7: We plot the worst two-qubit gate error of two IBM quantum devices on the cloud as a function of the number of qubits. The power-law fit (blue line) suggests that today's scalability is α = 1.75 with ε_0 = 0.005.
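The fit quoted in the caption can be reproduced in miniature, since a power law ε(Q) = ε_0 Q^(1/α) (the assumed functional form of the scalability model in this sketch) is linear in log-log coordinates. The two data points below are hypothetical stand-ins (the actual device data are not given here), generated to be consistent with the quoted fit:

```python
import math

# Hypothetical (qubit count, worst two-qubit error) pairs, generated to be
# consistent with the quoted fit alpha = 1.75, eps_0 = 0.005.
alpha_true, eps0_true = 1.75, 0.005
data = [(q, eps0_true * q ** (1 / alpha_true)) for q in (27, 127)]

# Power law is linear in log-log coordinates:
#   ln eps = ln eps_0 + (1/alpha) * ln Q.
(x1, y1), (x2, y2) = [(math.log(q), math.log(e)) for q, e in data]
slope = (y2 - y1) / (x2 - x1)
alpha_fit = 1 / slope
eps0_fit = math.exp(y1 - slope * x1)
print(alpha_fit, eps0_fit)  # recovers 1.75 and 0.005
```

With real (noisy) device data one would replace the two-point solve with a least-squares fit in log-log space, but the extracted parameters play the same role.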