Visibility-based hypothesis testing using higher-order optical interference

Many quantum information protocols rely on optical interference to compare datasets with efficiency or security unattainable by classical means. Standard implementations exploit first-order coherence between signals whose preparation requires a shared phase reference. Here, we analyze and experimentally demonstrate binary discrimination of visibility hypotheses based on higher-order interference for optical signals with a random relative phase. This provides a robust protocol implementation primitive when a phase lock is unavailable or impractical. With the primitive cost quantified by the total detected optical energy, optimal operation is typically reached in the few-photon regime.

Optical systems, in addition to being the workhorse of modern telecommunication, provide a natural platform to implement quantum-enhanced protocols for information transfer and processing between distant parties. Quantum strategies can provide authentication or reduce the communication complexity of certain tasks, in which large distributed datasets need to be processed to infer a relatively small amount of information [1,2]. Examples include quantum digital signatures [3] and quantum fingerprinting [4]. These protocols share a primitive which consists in imprinting the input data onto the modal structure of transmitted fields, e.g. in the form of phase patterns, and interfering the received signals, as shown in Fig. 1. Different hypotheses, e.g. the instances of identical and unequal inputs, are mapped onto distinct ranges of the interference visibility, which can therefore serve as the basis for hypothesis testing. Strikingly, optical signals sufficient to realize the quantum scheme may not have the capacity to carry information necessary to implement the classical protocols with the matching confidence level. This enhancement, stemming from the interplay between wave and particle properties of light exploited in quantum protocols, can advantageously change the scaling of resources required to perform the task as well as ensure security.
As recently pointed out [5,6] and demonstrated experimentally [7][8][9][10], the protocol primitive described above can be realized efficiently with coherent light beams and first-order interference. This implementation uses laser light sources and is robust against attenuation introduced by the optical channels transmitting the signals, but it requires phase stability between the sending parties. In certain scenarios a shared phase reference may be unavailable or very difficult to furnish. An alternative may be to resort to Hong-Ou-Mandel interference between single photons, which has been exploited in proof-of-principle demonstrations of quantum communication complexity protocols [11,12]. However, a practical implementation may require single photon sources with long coherence times and would be inefficient for high channel attenuation. The latter impairment also affects a realization based on weak classical states with a random global phase [13].
In this paper we present a strategy to carry out optical hypothesis testing based on the visibility of higher-order interference between classical fields with a random relative phase. This approach concurrently benefits from conventional optical signal generation techniques, removes the need for a shared phase reference, and ensures robustness against channel attenuation. The performance is characterized using the average error probability, whose asymptotic behavior is investigated with the help of a refined Chernoff bound [14,15].

Figure 1. Input data in possession of two parties A and B are mapped onto phase patterns φ^A_1, φ^A_2, ..., φ^A_m and φ^B_1, φ^B_2, ..., φ^B_m used to modulate sequences of m pulses described by a family of normalized temporal waveforms u_1(t), u_2(t), ..., u_m(t). The generated optical signals can be viewed as prepared in collective modes described respectively by the phase-modulated superpositions of these waveforms. The signals are brought to interference at a 50/50 beam splitter whose output ports are monitored by photodetectors. The outcome of a single repetition of the interferometric measurement is a pair of integers k, k' specifying the number of counts registered by each of the detectors over the duration of the signals.

Figure 2. Discrimination between a pair of hypotheses encoded in interference visibilities V_1, V_2 for the coherent and the random-phase scenario. (a) Information gained from a single photodetection event for a fixed phase between interfering signals. (b) Maximum information per one detected photon C^rnd/(ηn̄) for signals with a random global phase. (c) Optimal ηn̄* maximizing the ratio C^rnd/(ηn̄). White squares on the diagonal in (b), (c) represent the case |V_1| = |V_2| when the two hypotheses are indistinguishable.
Interestingly, we show that when the protocol cost is quantified in terms of the total transmitted optical energy, the optimal strategy is to realize multiple repetitions of the interference visibility measurement in the few-photon regime with a determination of the complete photocount statistics.
Let us first consider interference between two mutually coherent optical signals. Each signal has the form of a pulse sequence depicted in Fig. 1 and carries optical energy n̄/2 expressed in photon number units. The receiver combines the signals on a balanced beam splitter. The time-integrated light intensity at the two output ports of the beam splitter, labeled with indices '+' and '−', can be written as I_±(V) = ηn̄(1 ± ReV)/2, where η is the channel transmission for each of the signals [16]. Here V is the interference visibility which carries information about the relation between the input datasets. In the ideal case it is equal to the overlap V = ∫ dt u(t)v*(t) between the normalized waveforms u(t) and v(t) describing the two received signals. For identical inputs V = 1, which corresponds to completely destructive interference at the '−' output port. Hence registering a photocount at that port unambiguously indicates that the inputs were unequal. This observation underpins the quantum fingerprinting protocol, which aims at deciding whether datasets in possession of two parties are identical or different while revealing the smallest possible amount of information to the external referee. The protocol employs classical error correction to guarantee that for any pair of unequal inputs the visibility remains below a certain threshold value. Given that experimental imperfections, such as detector dark counts and misalignment of optical beams, lower the effective visibility [16], the hypotheses of identical or unequal inputs correspond to two distinct ranges of the visibility parameter separated by a gap. In order to perform a practical test between these two hypotheses one needs to devise a decision rule based on the measured photocount statistics.
Let the detectors at the output ports of the beam splitter be able to resolve up to K photocounts over the signal duration. The probability p_k^±(V) of registering k photocounts on one detector reads p_k^±(V) = exp[−I_±(V)][I_±(V)]^k/k! for k = 0, 1, ..., K − 1, while the highest outcome accumulates the remaining events, p_K^±(V) = 1 − Σ_{k=0}^{K−1} p_k^±(V). Non-unit efficiency of the detectors can be included in the channel transmission η. Suppose now that the signal pairs are received with a promise that the visibility takes only one of two equiprobable values V_1 or V_2. For the fingerprinting protocol one value corresponds to identical inputs, while the second one can be taken as the highest visibility occurring in the case of unequal inputs. The task is to discriminate between the two visibility hypotheses on the basis of the photocount sample collected in N repetitions of the interferometric measurement. The probability ε of erroneously identifying the actual visibility is upper bounded by the so-called Chernoff bound ε ≤ exp(−NC)/2 [14], where C stands for the Chernoff information given explicitly by

C = − min_{0≤s≤1} log Σ_{k,k'} [P_{kk'}(V_1)]^s [P_{kk'}(V_2)]^{1−s}.   (1)

In the above expression, the summation is carried out over all possible measurement outcomes, which in our setup have the form of two integers k and k' specifying the numbers of counts registered by the individual detectors, and P_{kk'}(V) denotes the probability of obtaining a specific combination kk' for the visibility V.
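The Chernoff information of Eq. (1) can be evaluated numerically for any pair of discrete outcome distributions by a direct search over the exponent. The following is a minimal Python sketch (the function names are ours, and the Poissonian check at the end is a purely illustrative toy case):

```python
import math

def chernoff_information(p, q, grid=1001):
    """Chernoff information C = -min_{0<=s<=1} log sum_x p(x)^s q(x)^(1-s),
    evaluated by a dense grid search over the exponent s."""
    best = float("inf")
    for i in range(grid):
        s = i / (grid - 1)
        total = sum(pi**s * qi**(1 - s)
                    for pi, qi in zip(p, q) if pi > 0 and qi > 0)
        best = min(best, total)
    return -math.log(best)

def poisson(lam, K=30):
    """Truncated Poissonian photocount distribution."""
    return [math.exp(-lam) * lam**k / math.factorial(k) for k in range(K)]

# Toy check against the closed form for two Poissonian distributions,
# C = max_s [s*lam1 + (1-s)*lam2 - lam1^s * lam2^(1-s)] ~ 0.540 here.
C = chernoff_information(poisson(2.0), poisson(6.0))
```

For two Poissonian distributions the sum over outcomes can be resummed analytically, which provides the closed-form cross-check used in the comment above.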
For the coherent signal scenario considered so far, the probability of registering respectively k and k' counts has the product form P^coh_{kk'}(V) = p_k^+(V) p_{k'}^−(V). Assuming full photon number resolution with K → ∞, the Chernoff information can be simplified to

C^coh = ηn̄ [1 − min_{0≤s≤1} Σ_± ((1 ± ReV_1)/2)^s ((1 ± ReV_2)/2)^{1−s}].   (2)

It is seen that the Chernoff information is proportional to the received optical energy ηn̄. The proportionality factor given by the ratio C^coh/(ηn̄) can be interpreted as the amount of information gained from the detection of one photon. In Fig. 2(a) we depict this factor as a function of the real parts of the visibilities ReV_1 and ReV_2. Generally, it pays off to maintain a large distance between the visibilities, with the maximum information attained for ReV_1 = −ReV_2 = ±1. The above picture becomes much more nuanced if the sending parties have no access to a shared phase reference, which implies that the signals arrive with a random relative phase. However, in each individual realization the signals are described by coherent waveforms whose overlap is given by V up to an overall phase factor. In such a scenario, the joint photocount distribution reads

P^rnd_{kk'}(V) = ∫_0^{2π} (dϕ/2π) p_k^+(e^{iϕ}V) p_{k'}^−(e^{iϕ}V).   (3)

The explicit analytical expression for P^rnd_{kk'}(V) is derived in the Supplemental Material [16]. Obviously, after averaging over the global phase only the absolute value |V| of the visibility parameter is relevant. The above probability distribution can be used to calculate the respective Chernoff information C^rnd according to Eq. (1). As before, the ratio C^rnd/(ηn̄) has the interpretation of the amount of information gained per one received photon.
In Fig. 3 we depict C^rnd/(ηn̄) as a function of the received optical energy ηn̄ for an exemplary pair of visibilities V_1 = 0.98 and V_2 = 0.56. The linear scaling of the ratio C^rnd/(ηn̄) with ηn̄ for ηn̄ ≪ 1 is explained by the fact that for very weak signals the detection of at least two photons in a single realization of the measurement is necessary to obtain any meaningful information [13]. Consequently, in this regime the leading term of the Chernoff information C^rnd is proportional to (ηn̄)², which gives unfavorable quadratic scaling with the channel transmission. Beyond the two-photon regime corresponding to low optical energies, the ratio C^rnd/(ηn̄) exhibits a well pronounced maximum in ηn̄. This observation can be used to draw the following operational conclusion. Suppose that the total optical energy available at the transmitters is n̄_tot. If n̄ photons are used in a single realization of the interferometric measurement, one can afford N = n̄_tot/n̄ repetitions. Let us rewrite the Chernoff bound on the error probability as exp(−NC^rnd)/2 = exp[−ηn̄_tot C^rnd/(ηn̄)]/2. Assuming a fixed n̄_tot, which can be taken as the overall cost of implementing the communication primitive, it is beneficial to optimize C^rnd/(ηn̄) for a single realization.
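The dependence of C^rnd/(ηn̄) on the received energy can be reproduced numerically by averaging the product of the Poissonian port statistics over the relative phase and inserting the result into Eq. (1). A self-contained sketch with our own truncation and grid choices (not the code used for the figures):

```python
import math

def p_joint_rnd(eta_n, V, K=60, phases=128):
    """Phase-averaged joint photocount distribution for signals with a
    random relative phase and visibility modulus V."""
    P = [[0.0] * K for _ in range(K)]
    for i in range(phases):
        phi = 2 * math.pi * i / phases
        Ip = eta_n * (1 + V * math.cos(phi)) / 2
        Im = eta_n * (1 - V * math.cos(phi)) / 2
        pk = [math.exp(-Ip) * Ip**k / math.factorial(k) for k in range(K)]
        qk = [math.exp(-Im) * Im**k / math.factorial(k) for k in range(K)]
        for k in range(K):
            for kp in range(K):
                P[k][kp] += pk[k] * qk[kp] / phases
    return P

def chernoff(P1, P2, grid=201):
    """Chernoff information, Eq. (1), by grid search over the exponent."""
    best = float("inf")
    for i in range(grid):
        s = i / (grid - 1)
        tot = sum(a**s * b**(1 - s)
                  for r1, r2 in zip(P1, P2)
                  for a, b in zip(r1, r2) if a > 0 and b > 0)
        best = min(best, tot)
    return -math.log(best)

# Information per unit detected energy for V1 = 0.98, V2 = 0.56 at three
# exemplary energies: deep in the two-photon regime, near the optimum,
# and well above it.
ratio = {en: chernoff(p_joint_rnd(en, 0.98), p_joint_rnd(en, 0.56)) / en
         for en in (0.2, 6.0, 30.0)}
```

The suppression of the ratio at very low energies and its decline at high energies bracket the few-photon maximum discussed above.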
Remarkably, the optimum of C^rnd/(ηn̄) occurs for ηn̄ in the few-photon range and the information needed for hypothesis testing is distributed in a non-trivial manner across the entire joint photocount statistics. To illustrate this point, in Fig. 3 we also depict the noticeably lower ratio C^rnd/(ηn̄) calculated for detection that could resolve only up to K = 2 photocounts over the signal duration. Further, using only the marginal distribution for the photocount number difference, P_Δk(V) = Σ_k P_{k,k+Δk}(V), significantly reduces the Chernoff information, as also shown in Fig. 3. The above observations are universal as long as one of the two visibilities is sufficiently high, which is the case for the quantum protocols motivating this study. In Fig. 2(b) we plot the maximum C^rnd/(ηn̄) as a function of the absolute values of the visibilities |V_1| and |V_2| to be discriminated between, along with the optimal average photon number that should be used in a single realization shown in Fig. 2(c). Generally, the amount of Chernoff information per unit optical energy is lower than in the coherent scenario depicted in Fig. 2(a), which is easily explained by the lack of the phase reference. Nevertheless, the available information also scales linearly with the optical energy, which implies that the scaling advantage over classical protocols should be analogous to the coherent case.
We performed a proof-of-principle experimental demonstration of binary hypothesis testing for a pair of visibilities V_1 = 0.98 and V_2 = 0.56 using a collinear interferometric setup presented in Fig. 4(a). We employed a continuous-wave 800 nm laser diode attenuated by a series of neutral-density filters down to ≈ 10^−14 W of power, followed by a polarizer ensuring a well-defined linear polarization. The beam is subsequently sent through a combination of a quarter- and a half-wave plate whose respective rotation angles θ and φ define the normalized intensities after the Wollaston polarizer as I_± = (1 ± Re[e^{4iφ−2iθ} cos(2θ)])/2. Hence our experimental setup can be viewed as a fully equivalent simulation of a standard interferometer with the complex visibility tunable over the entire phase and absolute value range by an appropriate rotation of the wave plates. To realize the random phase scenario we collected data for 50 half-wave plate angles φ uniformly probing a full period of the visibility phase. Both output beams were monitored by free-running avalanche photodiodes (APDs) connected to a time tagger based on a field-programmable gate array architecture, which registered photocounts with 3.3 ns temporal resolution. The time-tagged counts for each of the two detectors were grouped over 80 µs-long time intervals. With the input power used, this interval corresponds to the mean photocount number ηn̄ = 6.3, which gives the partitioning of the total optical energy that nearly maximizes the information per one detected photon. The numbers of photodetection events accumulated over an individual interval yield the single-realization outcome (k, k'). The 50 ns dead time of the detectors used in the setup did not noticeably distort the measured photon statistics. We collected approx. 1.5 × 10^6 pairs (k, k') for each of the two visibilities. This allowed us to determine the joint probability distributions P^rnd_{kk'}(V_{1,2}) depicted in Fig. 4(b), which within the resolution of the graphs match perfectly the theoretical values given by Eq. (3). A detailed analysis is presented in the Supplemental Material [16].
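The wave-plate parametrization of the complex visibility quoted above can be checked with a few lines of code; this is a sketch of the mapping, with function names of our choosing:

```python
import cmath
import math

def waveplate_visibility(theta, phi):
    """Complex visibility simulated by the collinear setup: a quarter-wave
    plate at angle theta and a half-wave plate at angle phi give
    V = exp(i(4*phi - 2*theta)) * cos(2*theta)."""
    return cmath.exp(1j * (4 * phi - 2 * theta)) * math.cos(2 * theta)

def output_intensities(theta, phi):
    """Normalized intensities after the Wollaston polarizer,
    I_pm = (1 +/- Re V)/2."""
    V = waveplate_visibility(theta, phi)
    return (1 + V.real) / 2, (1 - V.real) / 2

# The quarter-wave plate sets the visibility modulus, |V| = cos(2*theta),
# e.g. |V| = 0.98 requires 2*theta = acos(0.98); sweeping phi over a
# quarter rotation probes the full 2*pi period of the visibility phase.
theta_098 = math.acos(0.98) / 2
```
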
In order to experimentally determine the error probability of binary hypothesis testing one needs to repeat the test procedure multiple times, feeding it with independent sets of experimental data obtained for a fixed visibility. We realized this by selecting from the experimental results an ensemble of M = 1.5 × 10^4 datasets [(k,k')_1, ..., (k,k')_N]_1, [(k,k')_1, ..., (k,k')_N]_2, ..., [(k,k')_1, ..., (k,k')_N]_M, each consisting of N photocount pairs. We applied the Neyman-Pearson test [14] to each dataset, selecting as the test result the visibility yielding the higher likelihood of observing the photocount group, i.e. V_1 if

Π_{i=1}^N P^rnd_{k_i k'_i}(V_1) ≥ Π_{i=1}^N P^rnd_{k_i k'_i}(V_2),

and V_2 otherwise. The probability of error was evaluated as the ratio of erroneous hypothesis determinations to the number of groups M used for testing. That way we estimated the conditional error ε(V_1|V_2) of inferring visibility V_1 when V_2 was the true one and the reverse error ε(V_2|V_1).
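This estimation procedure can be emulated end to end on synthetic data: draw a random relative phase, sample Poissonian counts at each port, and apply the likelihood-ratio rule under the phase-averaged model. A minimal sketch with parameters of our own choosing (N = 10 pairs per dataset, 200 simulated datasets):

```python
import math
import random

random.seed(7)
ETA_N, V1, V2 = 6.0, 0.98, 0.56

def poisson_sample(lam):
    """Knuth's multiplication method; adequate for few-photon means."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def draw_pair(V):
    """One interferometric realization with a uniformly random phase."""
    phi = random.uniform(0, 2 * math.pi)
    return (poisson_sample(ETA_N * (1 + V * math.cos(phi)) / 2),
            poisson_sample(ETA_N * (1 - V * math.cos(phi)) / 2))

def log_likelihood(sample, V, phases=128):
    """Log-likelihood of photocount pairs under the phase-averaged model."""
    total = 0.0
    for k, kp in sample:
        acc = 0.0
        for i in range(phases):
            phi = 2 * math.pi * i / phases
            Ip = ETA_N * (1 + V * math.cos(phi)) / 2
            Im = ETA_N * (1 - V * math.cos(phi)) / 2
            acc += (math.exp(-Ip - Im) * Ip**k * Im**kp
                    / (math.factorial(k) * math.factorial(kp))) / phases
        total += math.log(acc)
    return total

# Estimate the conditional error eps(V2|V1): data generated under V1,
# decision by the Neyman-Pearson likelihood comparison.
N, trials, errors = 10, 200, 0
for _ in range(trials):
    sample = [draw_pair(V1) for _ in range(N)]
    if log_likelihood(sample, V2) > log_likelihood(sample, V1):
        errors += 1
err_rate = errors / trials
```
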
In Fig. 4(c) we compare the average error probability determined from experimental data, ε = [ε(V_1|V_2) + ε(V_2|V_1)]/2, with both the standard Chernoff bound for the random phase scenario and the refined Chernoff bound [15] derived explicitly in the Supplemental Material [16]. In accordance with theoretical predictions, the experimental error remains below the upper bound provided by the Chernoff bound [14] and approaches its refined version for an asymptotically large number N of outcomes used for hypothesis testing [15]. For the fingerprinting protocol, the case of unequal inputs would hold the laxer promise of the visibility V_2 ≤ 0.56. The shaded grey region in Fig. 4(c) indicates the range of error values obtained from Monte Carlo simulated photon count statistics with V_1 = 0.98 and 0 ≤ V_2 ≤ 0.56, processed using the Neyman-Pearson test designed for V_2 = 0.56. It is seen that the decision rule works also in this more general scenario.
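The standard and refined bounds can be compared numerically for arbitrary discrete distributions. The sketch below follows the tilted-distribution construction detailed in the Supplemental Material; the three-outcome toy distributions are our own and the exact prefactor conventions of Ref. [15] may differ:

```python
import math

def refined_chernoff_bound(p, q, N, grid=2001):
    """Standard and refined Chernoff bounds on the average error
    probability of an N-copy Neyman-Pearson test between strictly
    positive distributions p and q."""
    def coeff(a):
        return sum(pi**(1 - a) * qi**a for pi, qi in zip(p, q))
    # Optimal tilting exponent alpha* found by grid search.
    alpha = min((i / (grid - 1) for i in range(grid)), key=coeff)
    rho = coeff(alpha)                       # rho = exp(-C)
    C = -math.log(rho)
    # Tilted distribution p* and variance of y = log(q/p) under it.
    ps = [pi**(1 - alpha) * qi**alpha / rho for pi, qi in zip(p, q)]
    y = [math.log(qi / pi) for pi, qi in zip(p, q)]
    mean = sum(w * yi for w, yi in zip(ps, y))    # vanishes at alpha*
    var = sum(w * (yi - mean)**2 for w, yi in zip(ps, y))
    sigma = math.sqrt(var)
    Q = lambda x: 0.5 * math.erfc(x / math.sqrt(2))   # Gaussian tail
    g1, g2 = sigma * alpha, sigma * (1 - alpha)
    refined = 0.5 * rho**N * (
        math.exp(N * g1**2 / 2) * Q(g1 * math.sqrt(N))
        + math.exp(N * g2**2 / 2) * Q(g2 * math.sqrt(N)))
    standard = 0.5 * math.exp(-N * C)
    return standard, refined
```

For a large number of repetitions the Gaussian-tail correction makes the refined bound markedly tighter than the standard one.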
Let us close by discussing the parameter regime required to demonstrate a quantum advantage for the fingerprinting protocol based on the primitive presented here. For input datasets n bits long, in the classical scenario it is necessary to reveal at least O(√n) bits of information [17]. As shown in the Supplemental Material [16], in the absence of an external phase reference the strategy presented here makes it possible to maintain the exponential enhancement in the number of revealed bits, scaling as O(log₂ n), analogously to the coherent protocol [5]. For the error probability ε = 10^−4 our protocol beats the best currently known classical protocol [18] for n ≥ 2.3 × 10^5 and the ultimate classical limit [10] for n ≥ 6.3 × 10^8 bits. It is assumed here that for identical inputs the deviation of the visibility V_1 = 0.98 from one stems from experimental imperfections, while unequal inputs are guaranteed to produce a visibility of at most V_2 = 0.56 with the same contribution from imperfections. In this scenario the attainable code rate for mapping input datasets onto binary phase patterns is R = 0.12, which implies that the quantum advantage can be observed for pattern lengths exceeding 1.9 × 10^6 and 5.2 × 10^9 to beat the best known classical protocol and the classical limit respectively. If the optical signals are modulated with the 100 GHz bandwidth available for standard LiNbO₃ electro-optic modulator technology [19], one would require laser sources correspondingly with a kHz or a few-Hz linewidth to ensure phase stability over the signal duration. While the former requirement can be met by commercial single-frequency lasers, in the latter case more sophisticated, yet available, laser systems would be needed [20,21].
In conclusion, we described and verified experimentally a strategy to identify the modal overlap between two optical signals with a random relative phase using higher-order interference. It can be viewed as an implementation primitive for a number of quantum-enhanced protocols, when a shared phase reference is not available. As illustrated by the quantum fingerprinting example, this approach offers analogous scaling advantage compared to classical protocols as schemes utilizing first-order coherence. The experimental demonstration of the quantum advantage should be within the reach of current technology.
We thank E. Kashefi, N. L. Lütkenhaus, F. Xu, and Q. Zhang for insightful discussions. This work was supported by the Foundation for Polish Science under the TEAM project "Quantum Optical Communication Systems" co-financed by the European Union under the European Regional Development Fund. M. Jachura was supported by the Foundation for Polish Science.
Supplemental Material for "Visibility-based hypothesis testing using higher-order optical interference"

This document provides supplementary information to "Visibility-based hypothesis testing using higher-order optical interference". We present a derivation of the interference visibility of light in two partially overlapping modes and show how dark counts can be incorporated into the effective visibility. We also derive an analytic expression for the photocount probabilities in the random phase scenario and compare it with the experimentally measured statistics. Additionally, we derive a refined Chernoff bound for the error probability in binary hypothesis testing. Finally, we describe the optical quantum fingerprinting protocol for the coherent and incoherent scenarios and show that the lack of a shared phase reference does not destroy the exponential advantage in communication complexity over classical protocols.

Interference visibility
Consider two optical signals with amplitudes α and β, described by normalized complex waveforms in the temporal domain u(t) and v(t),

∫ dt |u(t)|² = ∫ dt |v(t)|² = 1.   (4)

The signals are combined at a balanced beam splitter. The time-integrated intensities at the output ports of the beam splitter can be written as:

I_± = (1/2) ∫ dt |α u(t) ± β v(t)|² = (1/2) [|α|² + |β|² ± 2Re(αβ* ∫ dt u(t)v*(t))].   (5)

This expression has the form I_±(V) = ηn̄(1 ± ReV)/2 given in the main text with |α|² + |β|² = ηn̄ and

V = (2αβ*/(|α|² + |β|²)) ∫ dt u(t)v*(t).   (6)

When the signals have equal amplitudes, α = β, the visibility parameter V is given directly by the scalar product between the normalized signal waveforms. If the signals are misaligned at the beam splitter, e.g. in the transverse spatial degree of freedom, V is additionally multiplied by a spatial integral characterizing the overlap between the spatial field distributions.

If the detectors employed to determine the photocount statistics exhibit dark counts characterized by Poissonian statistics with the mean n̄_dark over the signal duration, the time-integrated intensities need to be replaced by I_± → I_± + n̄_dark. It is straightforward to show that in this scenario they can also be cast into the standard form with the following substitutions:

ηn̄ → ηn̄ + 2n̄_dark,   V → V ηn̄/(ηn̄ + 2n̄_dark).   (7)

Thus dark counts additionally reduce the effective visibility by the factor ηn̄/(ηn̄ + 2n̄_dark).

For phase-keyed signals composed of sequences of m pulses with imprinted phase patterns φ^A_1, φ^A_2, ..., φ^A_m and φ^B_1, φ^B_2, ..., φ^B_m, the waveforms can be written as

u(t) = (1/√m) Σ_{j=1}^m e^{iφ^A_j} u_j(t),   (8)

v(t) = (1/√m) Σ_{j=1}^m e^{iφ^B_j} u_j(t),   (9)

where u_1(t), u_2(t), ..., u_m(t) are normalized waveforms describing the individual pulses in the sequence. Assuming that the individual pulse waveforms are mutually orthogonal, the overlap between the signal waveforms reads

∫ dt u(t)v*(t) = (1/m) Σ_{j=1}^m e^{i(φ^A_j − φ^B_j)}.   (10)

Quantum fingerprinting with binary phase shift keyed signals

An edifying example of a protocol to which our discrimination strategy can be directly applied is quantum fingerprinting. In the quantum fingerprinting protocol the Referee needs to decide whether n-bit long strings x, y in the possession of two separate parties, Alice and Bob, are identical or different. It is known that classically the communication complexity of such a task is O(√n), i.e. both Alice and Bob need to reveal O(√n) bits to the external Referee [17]. On the other hand, it can be shown [4] that by using quantum communication it is sufficient to reveal only O(log n) bits of information, which is an exponential improvement over the classical case.
In the first step of the protocol Alice and Bob convert their strings into m-bit codewords using an error correcting code E, which ensures that the relative Hamming distance δ between any two different codewords is greater than or equal to some minimal value δ_min. The relation between the lengths of the codewords m and the input bit strings is characterized by the rate of the code, r = n/m. The Gilbert-Varshamov bound [22] states that the maximum attainable code rate for a given δ_min is given by

r = 1 − h(δ_min),   (11)

where h(x) = −x log₂ x − (1 − x) log₂(1 − x) is the binary entropy function.
In the optical realization of quantum fingerprinting, Alice and Bob send to the Referee a sequence of coherent light pulses with the information about the codewords encoded in the phase patterns of the light field. For instance, they may use binary phase shift keyed (BPSK) signals [5] in which each bit of the codeword is encoded in one of two phases of the coherent pulses, 0 → 0, 1 → π. The Referee then interferes the sequences received from Alice and Bob on a balanced beam splitter and monitors the two output ports of the beam splitter using single photon detectors. If the initial bit strings x and y are equal then all respective pulses of both sequences have the same phases and therefore no counts are observed in one of the output ports. On the other hand, for unequal input bit strings, some pulses have different phases, which results in the possibility of observing photocounts in both output ports. In optical terminology, the difference between codewords affects the interference visibility. For BPSK modulation the number of pulses is equal to the length of the codewords m. The phase of the jth pulse is given by e^{iφ_j} = (−1)^{E_j(x)}, where E_j(x) denotes the value of the jth bit in the codeword E(x), and analogously for y. Plugging this into Eq. (6) and Eq. (10) gives the visibility

V = (1/m) Σ_{j=1}^m (−1)^{E_j(x)+E_j(y)} = 1 − 2δ,   (12)

where δ is the relative Hamming distance between E(x) and E(y). The fingerprinting task is thus converted into the determination of the interference visibility.
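The mapping from codewords to visibility can be written out directly; a small sketch (the helper name is ours):

```python
def bpsk_visibility(cw_a, cw_b):
    """Interference visibility of BPSK pulse trains encoding codewords
    cw_a, cw_b (lists of bits): V = (1/m) sum_j (-1)^(a_j + b_j),
    which equals 1 - 2*delta for relative Hamming distance delta."""
    m = len(cw_a)
    return sum((-1)**(a + b) for a, b in zip(cw_a, cw_b)) / m
```
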
In the scenario with a random relative phase only the absolute value of the visibility can be measured, which leads to the ambiguity |V| = |−V|, meaning in particular that identical (δ = 0) and maximally different (δ = 1) codewords yield the same interference result. Ref. [13] proposed a modification of the error correcting code by appending additional bits with the same values for every codeword. For any pair of different codewords this restricts the possible values of the relative Hamming distance ∆ to the range ∆_min ≤ ∆ ≤ 1 − ∆_min, where ∆_min is expressed by the parameters of the original code as ∆_min = δ_min/(1 + δ_min). The cost is an increased codeword length, m → m(1 + δ_min). Consequently, the rate of such a modified code is lower than the original code rate given in Eq. (11) and reads

R = (1 − h(δ_min))/(1 + δ_min).   (13)

The interference visibility for modified codewords with the relative Hamming distance ∆ is given by an expression analogous to Eq. (12):

V = 1 − 2∆.   (14)

In the case of experimental imperfections, the right hand side should be multiplied by a factor that includes the effects of beam misalignment, dark counts, etc.
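The rate bookkeeping above is easy to verify numerically. The sketch below (function names ours) reproduces r ≈ 0.25 for δ_min = 0.21 and gives R ≈ 0.13 for the δ_min that yields ∆_min = 0.21, close to the R = 0.12 quoted in the text (the small difference may come from rounding of δ_min):

```python
import math

def binary_entropy(x):
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def gv_rate(delta_min):
    """Gilbert-Varshamov bound on the code rate, r = 1 - h(delta_min)."""
    return 1 - binary_entropy(delta_min)

def modified_rate(delta_min):
    """Rate of the padded code used in the random-phase scenario,
    R = (1 - h(delta_min)) / (1 + delta_min)."""
    return gv_rate(delta_min) / (1 + delta_min)

def relative_distance_after_padding(delta_min):
    """Delta_min = delta_min / (1 + delta_min) for the padded code."""
    return delta_min / (1 + delta_min)

r = gv_rate(0.21)              # coherent protocol
d = 0.21 / (1 - 0.21)          # delta_min that gives Delta_min = 0.21
R = modified_rate(d)           # incoherent protocol
```
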
The actual task of the Referee in the fingerprinting protocol is to distinguish between two alternative hypotheses: identical inputs with associated visibility V_1, or unequal inputs with associated visibility less than or equal to V_2 < V_1. Assuming that the deviation of V_1 from unity stems from experimental imperfections that have the same effect in the case of unequal inputs, V_2 is given by V_2 = V_1(1 − 2∆_min). As noted in the main manuscript, this discrimination can be efficiently accomplished using the Neyman-Pearson test for binary hypothesis testing, taking as the second visibility the maximum allowed value V_2.

Experimental vs theoretical joint photocount distribution
Below we derive the explicit expression for the theoretical phase-averaged distribution of joint photocounts P^rnd_{kk'}(V) as well as present its quantitative comparison with the experimental data.
Let us begin with the formula describing the output intensities I_± for the visibility V = e^{iϕ}|V|:

I_±(ϕ) = ηn̄(1 ± |V| cos ϕ)/2.   (15)

The phase-averaged distribution is given by the integral:

P^rnd_{kk'}(V) = ∫_0^{2π} (dϕ/2π) P_{kk'}(e^{iϕ}|V|).   (16)

Since the count statistics on one photodetector is given by the Poissonian distribution p_k^±(V) = exp[−I_±(V)][I_±(V)]^k/k!, the joint phase-averaged distribution can be expressed as:

P^rnd_{kk'}(V) = ∫_0^{2π} (dϕ/2π) p_k^+(e^{iϕ}|V|) p_{k'}^−(e^{iϕ}|V|).   (17)

After plugging the explicit formulas for the intensities (15) into the integrand, the RHS of Eq. (17) becomes:

e^{−ηn̄} (ηn̄/2)^{k+k'}/(k! k'!) ∫_0^{2π} (dϕ/2π) (1 + |V| cos ϕ)^k (1 − |V| cos ϕ)^{k'},   (18)

which after applying binomial expansions can be written as:

e^{−ηn̄} (ηn̄/2)^{k+k'}/(k! k'!) ∫_0^{2π} (dϕ/2π) Σ_{j=0}^{k} Σ_{l=0}^{k'} C(k,j) C(k',l) (−1)^l |V|^{j+l} cos^{j+l} ϕ,   (19)

where C(a,b) denotes the binomial coefficient. We can simplify the expression by reordering the integral and the double sum:

e^{−ηn̄} (ηn̄/2)^{k+k'}/(k! k'!) Σ_{j=0}^{k} Σ_{l=0}^{k'} C(k,j) C(k',l) (−1)^l |V|^{j+l} ∫_0^{2π} (dϕ/2π) cos^{j+l} ϕ.   (20)

The phase-averaged integer powers of the cosine function can be calculated analytically, ∫_0^{2π} (dϕ/2π) cos^p ϕ = C(p, p/2)/2^p for even p and zero for odd p, yielding a compact solution:

P^rnd_{kk'}(V) = e^{−ηn̄} (ηn̄/2)^{k+k'}/(k! k'!) Σ_{j+l even} C(k,j) C(k',l) (−1)^l (|V|/2)^{j+l} C(j+l, (j+l)/2).   (21)

Suppose that the joint photocount statistics is truncated at K photocounts. Since all pairs kk' with one or both photocount numbers at or above this threshold contribute to the probabilities P_{Kk'}, P_{kK}, or P_{KK}, the determined distribution P̃_{kk'}(V) is given by one of four expressions:

P̃_{kk'} = P^rnd_{kk'} for k, k' < K;   P̃_{Kk'} = Σ_{k≥K} P^rnd_{kk'} for k' < K;   P̃_{kK} = Σ_{k'≥K} P^rnd_{kk'} for k < K;   P̃_{KK} = Σ_{k≥K} Σ_{k'≥K} P^rnd_{kk'}.   (22)

To avoid information loss resulting from the distribution truncation we carefully adjusted the beam intensity and the count grouping time to resolve virtually all photocounts.
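The compact binomial-sum solution can be cross-checked against a direct numerical phase average; a short self-consistency sketch (both function names ours):

```python
import math

def p_rnd_analytic(k, kp, eta_n, V):
    """Closed-form phase-averaged joint photocount probability: binomial
    double sum with <cos^p> = C(p, p/2)/2^p for even p, zero for odd p.
    V is the visibility modulus, assumed non-negative."""
    pref = (math.exp(-eta_n) * (eta_n / 2)**(k + kp)
            / (math.factorial(k) * math.factorial(kp)))
    total = 0.0
    for j in range(k + 1):
        for l in range(kp + 1):
            p = j + l
            if p % 2:
                continue
            avg_cos = math.comb(p, p // 2) / 2**p
            total += (math.comb(k, j) * math.comb(kp, l)
                      * (-1)**l * V**p * avg_cos)
    return pref * total

def p_rnd_numeric(k, kp, eta_n, V, phases=4096):
    """Direct phase average of the product of the two Poissonian port
    distributions; since exp(-I_+ - I_-) is phase-independent, the
    integrand is a trigonometric polynomial and the rectangle rule is
    essentially exact."""
    acc = 0.0
    for i in range(phases):
        phi = 2 * math.pi * i / phases
        Ip = eta_n * (1 + V * math.cos(phi)) / 2
        Im = eta_n * (1 - V * math.cos(phi)) / 2
        acc += (math.exp(-Ip) * Ip**k / math.factorial(k)
                * math.exp(-Im) * Im**kp / math.factorial(kp)) / phases
    return acc
```
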
In Fig. 5 we present a comparison between the experimentally measured numbers N_{kk'} of observed photocount pairs (k, k') [Fig. 5(a)] and the theoretical prediction given by P^rnd_{kk'}(V) N_total [Fig. 5(b)], where N_total = Σ_{k,k'=0}^{15} N_{kk'} stands for the total number of registered pairs. The point-to-point difference between the experimental statistics and the theoretical distribution presented in Fig. 5(c) does not generally exceed two standard deviations of the measured count numbers, given by √N_{kk'}.

Refined Chernoff bound
Here we present a derivation of a refined version of the Chernoff bound following [15]. Assume that we want to distinguish between two equiprobable hypotheses, characterized by probability distributions p(x) and q(x), based on N repetitions of the experiment in a way that minimizes the average error. In the Neyman-Pearson test [14] we choose as correct the hypothesis that yields the larger likelihood. The average probability of error is then given by

ε = (1/2) Pr[q(x_1)...q(x_N) ≥ p(x_1)...p(x_N) | p] + (1/2) Pr[p(x_1)...p(x_N) ≥ q(x_1)...q(x_N) | q],

where Pr[q(x_1)...q(x_N) ≥ p(x_1)...p(x_N) | p] denotes the probability of obtaining values x_1, ..., x_N for which q(x_1)...q(x_N) ≥ p(x_1)...p(x_N) assuming the correct hypothesis is given by p(x), and analogously for Pr[p(x_1)...p(x_N) ≥ q(x_1)...q(x_N) | q]. Let us consider the first probability. We may take the condition q(x_1)...q(x_N) ≥ p(x_1)...p(x_N) and divide both sides by the right hand side, which after taking the natural logarithm and dividing by N translates into

(1/N) Σ_{i=1}^N log[q(x_i)/p(x_i)] ≥ 0.   (23)

Let us introduce a new random variable y = f(x) = −log[p(x)/q(x)]. Since x is distributed according to p(x), y is distributed according to the induced probability distribution P_y(y) = p(f^{−1}(y)). The moment generating function of y is given by

ϕ_y(t) = Σ_y e^{ty} P_y(y) = Σ_x e^{tf(x)} p(x) = Σ_x [p(x)]^{1−t} [q(x)]^t.   (24)

We may now write Eq. (23) as a condition for the mean value of y,

(1/N) Σ_{i=1}^N y_i ≥ 0.   (25)

According to [15], the probability that the mean value is larger than zero can be written as

Pr[(1/N) Σ_{i=1}^N y_i ≥ 0 | p] ≈ ρ^N e^{Nγ²/2} Q(γ√N),   (26)

where Q(x) = (2π)^{−1/2} ∫_x^∞ e^{−t²/2} dt is the Gaussian tail function, ρ = ϕ_y(τ), γ = στ with σ² = ϕ''_y(τ)/ϕ_y(τ), and τ is defined implicitly by the equation ϕ'_y(τ)/ϕ_y(τ) = 0. Using Eq. (24) we may rewrite the definition of τ as

Σ_x p*(x) log[q(x)/p(x)] = D(p*||p) − D(p*||q) = 0,   (27)

where p*(x) = [p(x)]^{1−τ}[q(x)]^τ / (Σ_x [p(x)]^{1−τ}[q(x)]^τ) and D(p||q) = Σ_x p(x) log[p(x)/q(x)] is the relative entropy.

Figure 6. Information revealed in a fingerprinting protocol as a function of the input bit string length n for the error probability ε = 10^−4 and visibilities V_1 = 0.98 and V_2 = 0.56.
Coherent protocol -green; incoherent protocol -black; lower bound on any classical protocol -blue, dashed; best known classical protocol -blue, solid. Gray shaded region represents the performance worse than the best known classical protocol.
The solution to the above equation is given by the coefficient τ = α* optimizing the Chernoff information C = −min_{0≤α≤1} log Σ_x [q(x)]^α [p(x)]^{1−α} [14]. Using this result, the other quantities required in Eq. (26) are given by

ρ = e^{−C},   σ² = Σ_x p*(x) (log[q(x)/p(x)])².   (28)

Repeating the calculation for the second probability, Pr[p(x_1)...p(x_N) ≥ q(x_1)...q(x_N) | q], and adding the results we eventually obtain a refined bound on the average error probability,

ε ≈ (ρ^N/2) [e^{Nσ²α*²/2} Q(σα*√N) + e^{Nσ²(1−α*)²/2} Q(σ(1−α*)√N)].   (29)

Quantum advantage

The amount of information revealed by Alice and Bob in the coherent and incoherent quantum protocols using BPSK is upper bounded by the capacity of a lossless bosonic 2m-mode channel with n̄ average photons [23], where m is the number of pulses used by each of Alice and Bob. Note that since the code rates of the two protocols differ, given by Eq. (11) and Eq. (13), the pulse sequence length for the incoherent protocol differs from that in the coherent scenario. For large m the capacity is approximately equal to n̄ log₂ m. Since m is proportional to the length of the input string, m = n/R, for long input strings the revealed number of bits scales as O(log₂ n) also for the protocol with the random relative phase. In the following we assume unit channel transmission, η = 1.
In Fig. 6 we plot the revealed information in various fingerprinting protocols. The assumed probability of error is ε = 10^−4 and the visibilities are the same as in the main text of the article, V_1 = 0.98 and V_2 = 0.56, which corresponds to the minimum Hamming distances δ_min = ∆_min = 0.21 for the coherent and incoherent protocols respectively. The respective code rates are given by r = 0.25 and R = 0.12. The average number of photons in the signal for the incoherent protocol was taken to be n̄ = 6.6, which maximizes the Chernoff information per photon. It is seen that both the coherent and incoherent protocols have the same complexity scaling O(log₂ n), although naturally the former requires less information to be revealed. To compare our scheme with a classical scenario, in the plot we also present two standard benchmarks, i.e. a lower bound on any possible classical protocol [10] and the actual performance of the best known classical protocol [18]. The former reads I_cl = (1 − 2√ε)(√(n/(2 ln 2)) − 1), while the latter is equal to I_best = 4⌈(1/2) log₂(1/ε)⌉√n, which for our probability of error yields I_best = 28√n. It is seen that the incoherent protocol outperforms the best known classical one for about n = 2.3 × 10^5 bits, which corresponds to a sequence of about m = 1.9 × 10^6 time bins on each side. Such an operating regime can easily be attained using commercially available lasers and electro-optic modulators. Beating the lower bound on any classical protocol is more demanding, as it requires at least n = 6.3 × 10^8 bits, which corresponds to sequences of m = 5.2 × 10^9 time bins. Although this regime is still within the reach of current technology, it would require modulation rates of at least tens of GHz and significantly longer laser coherence times [20,21].