Protocols for estimating multiple functions with quantum sensor networks: geometry and performance

We consider the problem of estimating multiple analytic functions of a set of local parameters via qubit sensors in a quantum sensor network. To address this problem, we highlight a generalization of the sensor symmetric performance bounds of Rubio et al. [J. Phys. A: Math. Theor. 53 344001 (2020)] and develop a new optimized sequential protocol for measuring such functions. We compare the performance of both approaches to one another and to local protocols that do not utilize quantum entanglement, emphasizing the geometric significance of the coefficient vectors of the measured functions in determining the best choice of measurement protocol. We show that, in many cases, especially for a large number of sensors, the optimized sequential protocol results in more accurate measurements than the other strategies. In addition, in contrast to the sensor symmetric approach, the sequential protocol is known to always be explicitly implementable. The sequential protocol is very general and has a wide range of metrological applications.


I. INTRODUCTION
It is well-established that entanglement in quantum metrology often facilitates more accurate measurements than are possible with unentangled probes [1-5]. This fact has been demonstrated exhaustively for the cases of measuring a single parameter [6] or a single analytic function of many parameters [7-14] using quantum sensor networks, which are highly general models of quantum metrology. In these models, one considers an array of d quantum sensors, each coupled to a local parameter. One then seeks to optimally measure these local parameters directly (or some functions thereof) by selecting an initial state ρ_0 for the sensors, a unitary evolution U by which the local parameters are encoded in the state, and a choice of measurement specified by a positive operator-valued measure (POVM).
While measuring a single analytic function of multiple parameters in this setting is a bona fide multi-parameter problem, the fact that one seeks a single quantity makes finding the information-theoretic optimum for the variance of the desired quantity easier than in a more general multi-parameter problem; in particular, one can make clever use of rigorous bounds originally derived for the single-parameter case [7,11,12]. However, when one genuinely seeks to estimate multiple quantities, one must solve the general problem of designing provably optimal protocols for multi-parameter quantum estimation. This has proven to be a challenging problem and has attracted a large amount of interest both theoretically [5,10,17-36] and experimentally [37-39]. Despite these extensive research efforts, the general problem has not yet been solved. Here, we take another step towards this goal; in particular, we consider the case of measuring n ≤ d analytic functions with a quantum sensor network of d qubit sensors and develop a protocol that outperforms previously proposed protocols in many cases. We also emphasize the geometric aspects of this problem, meaning the orientations of the vectors of coefficients associated with our functions, and how this geometry determines protocol performance.
We begin by noting that, analogous to Ref. [11], one can reduce the problem of measuring n analytic functions of the parameters to that of measuring n linear functions. In particular, one can consider spending some asymptotically (in total time t) vanishing time t_1 measuring the local parameters to which the sensors are coupled, and then the rest of the time t_2 = t − t_1 measuring the n linear combinations that result from a Taylor expansion of each analytic function about the values of the local parameters estimated in the first step. While provably optimal in the single-function case (n = 1), this reduction from analytic functions to linear functions is not necessarily optimal in the multi-function case. We conjecture that the optimality of this reduction does generalize to the multi-function case; however, as we do not claim general optimality of the protocols in this work, the reduction may be freely made without having to prove this conjecture.
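As a minimal numerical sketch of this two-step reduction (with a hypothetical analytic function and made-up numbers), the second step targets the linear function whose coefficients are the gradient of f at the first-step estimate:

```python
import numpy as np

# Sketch of the two-step reduction: linearize a hypothetical analytic function
# f(theta) = theta_1 * theta_2 about rough first-step estimates; the second
# step then targets the linear function alpha . theta with alpha = grad f.
f = lambda th: th[0] * th[1]
grad_f = lambda th: np.array([th[1], th[0]])

theta_true = np.array([0.52, 1.31])   # unknown true parameters (illustrative)
theta_hat = np.array([0.50, 1.30])    # first-step local estimates

alpha = grad_f(theta_hat)             # coefficients of the linear function
f_lin = f(theta_hat) + alpha @ (theta_true - theta_hat)

# the linearization error is second order in the first-step estimation error
err = abs(f_lin - f(theta_true))
```

Because the first-step estimation error vanishes asymptotically in t, the quadratic linearization error becomes negligible compared to the final estimation variance.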
Having made this reduction to the problem of measuring multiple linear functions in a quantum sensor network, we can connect to previous works addressing the same problem, subject to various simplifying constraints [8,10,36]. Leaving the details of these previous approaches for after we have introduced more mathematical formalism, we note that we may qualitatively divide protocols for this problem into three classes: local, global, and sequential [10]. In a local estimation protocol, one optimizes only over unentangled input states and local measurements of the sensors. In a global protocol, one simultaneously estimates all the desired functions by optimizing over all (possibly entangled) input states and all (possibly non-local) measurements. Finally, in a sequential protocol, we divide the experiment into n steps, where in each step we measure a single function (which may be a linear combination of the original set {f_1, …, f_n}), preparing a new (optimal) initial state and performing a new measurement in each step. See Fig. 1 for diagrammatic representations of these different protocol types.
For the special case of measuring n = d orthogonal, linear functions, it has been known for some time that the functions can be measured optimally with a local protocol [8,10], but for general functions, proofs of optimal protocols are lacking. In fact, the only entanglement-enhanced approach in the literature for measuring n > 1 general linear functions in a quantum sensor network is given in Ref. [36]. The bound on performance given there is for global protocols and is derived from the quantum Cramér-Rao bound [15,16,40,41] subject to the restriction that one considers only a special set of so-called sensor symmetric states. However, even within this restriction, beyond the case of d = 2, it is an open question whether the states and measurements (POVMs) required to saturate the derived bound exist for all problems [42].
Here, we highlight a generalization of this approach by deriving similar bounds using so-called signed sensor symmetric states. However, the generalized version also does not guarantee that the optimal states and measurements exist in general. Targeted at this shortcoming, we also consider an alternative, sequential protocol, subject to different restrictions, for which we can explicitly describe an implementation that achieves its theoretical performance. In addition to presenting this alternative protocol, we lay out how the precise geometric features of a given problem impact the performance of this sequential protocol compared to the signed sensor symmetric approach and the simple local protocol.

II. PROBLEM SETUP
With the general approach established, we now present the rigorous formulation of the problem. We consider a quantum sensor network of d qubit sensors prepared in some initial state ρ_0. We then encode the local parameters via the Hamiltonian

    Ĥ(t) = (1/2) Σ_{i=1}^d θ_i σ^z_i + Ĥ_c(t),   (1)

where σ^{x,y,z}_i denote the Pauli operators acting on the i-th qubit and θ_i is the local parameter measured by the i-th sensor. The term Ĥ_c(t) is a time-dependent control Hamiltonian that may include coupling to ancilla qubits. When measuring a single function, this time-dependent control is not necessary to achieve an optimal protocol [6,7], but one may use such control to design optimal protocols with simpler requirements on the choice of input state ρ_0 [7]. Using this setup, our goal is to optimally measure a set of functions f(θ) = (f_1(θ), …, f_n(θ))^T of the local parameters. In the following, we use i, j = 1, …, d to label qubits and ℓ, m = 1, …, n to label functions. Boldface is used to denote vectors.
To compare the accuracy of the different approaches and to eventually optimize them, we employ a standard figure of merit, which we denote as M, given as

    M = Σ_{ℓ=1}^n w_ℓ Var(f̃_ℓ),   (2)

where the f̃_ℓ are estimators of the functions f_ℓ and w = (w_1, …, w_n)^T is a vector of weights. Since an accurate protocol should yield small variances, we seek to minimize M. In this context, given a total evolution time t, a protocol is defined by a choice of initial state ρ_0, control Hamiltonian Ĥ_c(t), measurements, and estimators f̃ for f. The figure of merit M is lower bounded via the Helstrom quantum Cramér-Rao bound [15,16,40,41], which yields

    M ≥ (1/N) Tr[W F_Q(f)^{-1}],   W = diag(w_1, …, w_n),   (3)

where N is the number of trials (which from now on we set to one for concision, considering just the single-shot Fisher information) and F_Q(f) is the quantum Fisher information matrix with respect to the functions f. While this bound is not generally saturable, in the setting of Eq. (1) it is [43].
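To make the weighted trace bound concrete, a small numerical sketch (the Fisher information matrix below is made up purely for illustration):

```python
import numpy as np

# Illustration of the figure of merit M = sum_l w_l Var(f_l) and its
# quantum Cramer-Rao lower bound Tr[W F_Q^{-1}] (single shot); the Fisher
# information matrix here is a made-up positive-definite example.
w = np.array([0.5, 0.5])
F_Q = np.array([[4.0, 1.0],
                [1.0, 3.0]])
bound = float(np.trace(np.diag(w) @ np.linalg.inv(F_Q)))

# any achievable estimator covariance satisfies Sigma >= F_Q^{-1} as a
# matrix inequality, so the weighted sum of variances cannot beat the bound
Sigma = np.linalg.inv(F_Q) + 0.1 * np.eye(2)   # some achievable covariance
M = float(w @ np.diag(Sigma))
```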
While saturable in the setting considered, the right-hand side of Eq. (3) is not easily evaluated in general. However, it has been proven [7] that, if we seek to measure a single linear function f(θ) = α · θ of the parameters θ, we may evaluate this bound and obtain that the minimum (asymptotically in time t and number of trials) attainable variance of an estimator f̃ of f(θ) over all quantum protocols is

    Var(f̃) = ||α||²_∞ / t².   (4)

This bound can be explicitly saturated by the protocols given in Ref. [7]. As previously described, if f(θ) is a more general analytic function, one may attain a similar bound using a two-step protocol. In the first (asymptotically negligible) step, one makes local estimates θ̃ of each of the parameters θ. In the second step, one uses the rest of the time to optimally measure the Taylor expansion of f(θ) about this estimate to linear order in θ [11].
For the case of measuring multiple functions f_1, …, f_n, we assume without loss of generality that the f_ℓ are linear functions of the parameters θ, because more general analytic functions could be similarly linearized in asymptotically negligible time. We parameterize the linear functions by real coefficient vectors α_ℓ such that

    f_ℓ(θ) = α_ℓ · θ,   ℓ = 1, …, n.   (5)
Defining the matrix elements A_{ℓi} = (∂f_ℓ/∂θ_i)|_θ = (α_ℓ)_i, i.e., α_ℓ^T is the ℓ-th row of A, we can phrase the problem as that of optimally measuring the n-component vector

    f(θ) = Aθ.   (6)

Without loss of generality, we assume normalization of the coefficient vectors, ||α_ℓ|| = 1, because any non-unit length can be absorbed into the weights w in Eq. (2).
Recall that the problem of measuring n = d linear functions of independent parameters with quantum sensor networks has been considered in the literature for the case where the n functions are orthogonal (in which case local, global, and sequential protocols are equivalent) [8,10], and for general linear functions with global protocols when the input states ρ_0 are restricted to be sensor symmetric [36]. Here, we generalize the sensor symmetric approach and derive a performance bound when using so-called signed sensor symmetric input states (defined rigorously below). We refer to the variance obtained by the signed sensor symmetric protocol as M_ss.
In this work, we also introduce an optimized sequential protocol for solving the n-function estimation problem. We divide our protocol into n sequential steps where, within each step, the protocol is provably information-theoretically optimal (i.e., saturates the quantum Cramér-Rao bound). In particular, in each step ℓ ∈ {1, …, n}, taking time t_ℓ, we measure a single function optimally using the protocols from Refs. [7,11]. We cannot, however, prove that the full protocol is optimal in an information-theoretic sense. The naive version of this protocol is to measure the n given functions {f_1, …, f_n} one after another with some optimal choice of the time t_ℓ spent on each function. We denote the figure of merit of the naive sequential protocol by M_naive.
However, the naive sequential protocol is not the only option for sequentially measuring multiple functions. Indeed, the coefficient vectors {α_1, …, α_n} span a linear subspace of R^d, and we may instead sequentially measure any set of linear functions whose coefficient vectors {α'_1, …, α'_n} span the same subspace, and then (after the measurements) calculate the original functions {f_1, …, f_n}. This approach is depicted visually in Fig. 2 for n = 2 functions and d = 3 sensors. We denote the figure of merit obtained via this method by M_opt.
To be explicit, define the n × n matrix C encoding the change of linear functions via

    A = C T,   (10)

where T is the matrix whose rows are the coefficient vectors α'_m of the new linear functions we measure. The variance of measuring any individual function with coefficient vector α'_m is given by the optimal linear protocol [7] as µ_m²/t_m², where we introduce

    µ_m = ||α'_m||_∞ = || Σ_ℓ (C^{-1})_{mℓ} α_ℓ ||_∞.   (11)

Note that this corresponds to Eq. (4) for every m. We denote by µ the vector with entries µ_m, and by µ̃ the analogous vector for the original functions [obtained by setting C = I in Eq. (11)]. The figure of merit for estimating the original functions f with the optimized sequential protocol is then formally given by

    M_opt = min_{C, {t_m}: Σ_m t_m = t} Σ_{ℓ=1}^n Σ_{m=1}^n w_ℓ C²_{ℓm} µ_m² / t_m²,   (12)

which takes into account optimization over C and over the division of the total time into time steps t_m; the factor C²_{ℓm} comes from the standard expression for the variance of a linear combination of independent estimates and accounts for the linear change of functions. A more practical form of M_opt will be derived below. If the naive sequential protocol were optimal, then the minimum of M_opt would be attained at C = I. However, we will show in the following that choosing a suitable C ≠ I often gives a significant improvement. This matches one's intuitive expectations: for example, if the coefficient vectors of all the functions are nearly aligned, we might expect that the optimal approach is to spend most of the time measuring a single function whose coefficient vector is in that general direction, and the rest of the time measuring functions with orthogonal coefficient vectors to distinguish the small differences between the functions we care about. We will see that this intuition is correct.
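The effect of the change of functions can be sketched numerically. The snippet below implements the sequential figure of merit as we read Eq. (12) (all coefficient vectors and times are illustrative, and the times are picked by hand rather than optimized):

```python
import numpy as np

# Sequential figure of merit: measure new functions g (rows of C^{-1} A),
# recover f = C g, and combine Var(g_m) = mu_m^2 / t_m^2 via the C_lm.
def M_seq(C, A, w, t_steps):
    mu = np.abs(np.linalg.solve(C, A)).max(axis=1)   # mu_m = ||(C^-1 A)_m||_inf
    return float(np.sum(w[:, None] * C**2 * (mu**2 / t_steps**2)[None, :]))

# two nearly aligned unit-norm functions of d = 4 parameters (illustrative)
A = np.array([[0.5, 0.5, 0.5, 0.5],
              [0.6, 0.45, 0.45, 0.48]])
A /= np.linalg.norm(A, axis=1, keepdims=True)
w = np.array([0.5, 0.5])
t = 1.0

# naive choice: C = I (measure f1 then f2), equal times
M_naive = M_seq(np.eye(2), A, w, np.array([t / 2, t / 2]))

# better: measure the average and difference combinations, spending most of
# the time on the average direction (times picked by hand here)
C = np.array([[1.0, 1.0],
              [1.0, -1.0]])
M_better = M_seq(C, A, w, np.array([0.9 * t, 0.1 * t]))
```

For these nearly aligned functions, the average/difference choice already beats the naive one by roughly a factor of two, even before optimizing C and the times.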
Furthermore, we note that this approach does not take advantage of the potential parallelization that may arise for certain choices of functions to measure, in particular, for sets of functions that depend on completely disjoint sets of sensors. More formally, when one chooses functions to measure such that A is a direct sum of matrices representing linear functions on disjoint sets of qubits, one could simultaneously measure the functions that depend on disjoint sets of sensors, and thus spend more time measuring them, improving the accuracy. Therefore, purposefully choosing functions to measure that allow for such parallelization could potentially (although not necessarily) perform better than our protocol, which does not take this possibility into account. However, improved performance via parallelization is not guaranteed, as Eq. (12) depends on both the time t_ℓ spent measuring a function and the infinity-norm of the coefficient vector, µ_ℓ = ||α_ℓ||_∞; whereas parallelization improves the former, it may worsen the latter.
At this point, we have commented on four approaches to our problem: (1) the local strategy with variance M_local (defined in Eq. (13) below); (2) the (global) signed sensor symmetric strategy generalized from Ref. [36] with variance M_ss; (3) the naive sequential strategy with variance M_naive; and (4) the optimized sequential strategy with variance M_opt. Importantly, none of these strategies is optimal in general. Depending on the geometry of the linear functions to be measured, each of these strategies could be the preferable one (excluding the naive strategy, which, in the best case, has M_naive = M_opt). The term "geometry" here refers to the absolute and relative orientations of the coefficient vectors {α_ℓ}. The question of the ultimate information-theoretic limit on M for multiple linear functions remains open. Here, we demonstrate cases in which each of these known strategies is preferable, with an emphasis on the geometric interpretation. We emphasize that, in many instances, both the signed sensor symmetric and the optimized sequential strategy can outperform the local unentangled strategy, which is of great importance for practical applications.

III. THE STRATEGIES
In this section, we determine the figure of merit M for the four strategies considered in this work. We emphasize that, while the local and sequential strategies have explicit protocols that obtain the corresponding figures of merit, the figure of merit for the signed sensor symmetric strategy is not proven to always be attainable beyond d = 2.

A. Local Strategy
First, we consider the local strategy, which does not utilize entanglement. Since we can measure each local parameter θ_i simultaneously, each with a variance of 1/t² [44], we arrive at

    M_local = Σ_{ℓ=1}^n w_ℓ ||α_ℓ||² / t² = N / t²,   (13)

where we used the normalization of the α_ℓ and introduce

    N ≡ Σ_{ℓ=1}^n w_ℓ.   (14)

We emphasize that the performance of the local protocol is independent of the geometry of the measured linear functions.
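This geometry independence is easy to verify numerically; with random unit-norm coefficient vectors (illustrative numbers), the local figure of merit always collapses to the weight sum over t²:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, t = 5, 3, 2.0

# random unit-norm coefficient vectors: geometry should not matter locally
A = rng.normal(size=(n, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)
w = np.array([0.2, 0.3, 0.5])

# each theta_i is estimated with variance 1/t^2, so Var(f_l) = ||alpha_l||^2 / t^2
M_local = float(np.sum(w * np.sum(A**2, axis=1) / t**2))
```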

B. Signed Sensor Symmetric Strategy
Next, we review the results of Ref. [36] for the sensor symmetric approach using our notation, and emphasize a generalization of their approach to what we call signed sensor symmetric states. We emphasize that, given the restriction to (signed) sensor symmetric states, this approach gives a rigorous lower bound on the figure of merit M. However, as previously discussed, unlike for the local or sequential strategies, for d > 2 one cannot guarantee that the figure of merit M_ss obtained via this approach is saturable [36].
Define the generators of translations in parameter space as

    K_i = i (∂U/∂θ_i) U†   (15)

for evolution under the unitary U. Following Ref. [36], for this strategy we specifically consider the Hamiltonian in Eq. (1) with Ĥ_c(t) = 0, so that U = exp(−iĤt) and K_i = σ^z_i t/2. This restriction of Eq. (1) to evolution under a time-independent Hamiltonian is not necessary for the sequential protocols considered later. However, the single-linear-function results of Ref. [7], which we use as a subroutine of our sequential protocol, include two protocols, one that matches this restriction and one that does not (see Section IV therein). Therefore, when explicitly comparing the sequential protocol to the signed sensor symmetric approach, we assume we are considering the former.
Given the generators of translations K_i, we define the inter-sensor correlations [8,22] by

    J_ij = (⟨K_i K_j⟩ − ⟨K_i⟩⟨K_j⟩) / v   (16)

for i ≠ j, where we have used

    v = ⟨K_i²⟩ − ⟨K_i⟩²,   (17)

assumed equal for all i. Given this definition, we define sensor symmetric states as those such that, for all i ≠ j,

    ⟨K_i K_j⟩ − ⟨K_i⟩⟨K_j⟩ = c   (18)

for some constant c; specifically, for evolution under the time-independent version of Eq. (1), we have J_ij = J = c/v for all i ≠ j. The authors of Ref. [36] define such states in analogy with path-symmetric states in optical interferometry [22,45], which, in addition to the analytic accessibility provided by such states, motivates this construction. The case of uncorrelated sensors is, of course, included via J = 0.

Now we turn to a generalization of the sensor symmetric states considered in Ref. [36] that we call signed sensor symmetric states. This generalization is natural because the (unsigned) sensor symmetric construction of Ref. [36] picks out functions with coefficient vectors α_ℓ aligned along the vector of all ones 1 = (1, 1, …, 1)^T as being favorable, but the positive orthant is not special: one can immediately generalize from 1 being the favorable orientation to any ω ∈ {−1, 1}^d (of which 1 is just one example). The reason such functions are most favorable is also intuitively clear: entanglement is most helpful when one measures global, average-like quantities, which is precisely what functions with coefficient vectors aligned along some ω are. We emphasize that this generalization is very direct, as one can map any problem using a general ω to the case of Ref. [36] merely by applying a Pauli-X operator on every qubit sensor corresponding to a negative element of ω and correspondingly flipping the signs of the corresponding coefficients of each α_ℓ. However, to fairly compare to the sequential protocol, it is important that we consider all such ω, as different choices can lead to an improved figure of merit. Therefore, we relax the restriction on the numerator of J_ij as presented in Ref. [36] by defining

    c_ij ≡ ⟨K_i K_j⟩ − ⟨K_i⟩⟨K_j⟩   (19)

and then restricting our consideration to states such that c_ij = c Ω_ij with Ω_ij = ω_i ω_j, where ω ∈ {−1, 1}^d is a vector with all entries ±1 and c is a constant. The entries of Ω_ij are also ±1, and so c_ij = ±c. We keep the definition J = c/v for our newly defined c, but note that now J_ij = J ω_i ω_j. When restricted to the (unsigned) sensor symmetric initial states, i.e., when ω = 1 with 1 = (1, …, 1)^T the vector of all ones, the authors of Ref. [36] were able to evaluate the quantum Cramér-Rao bound and determine the minimal achievable value of M, given the requirement of sensor symmetric input states. For the signed sensor symmetric states, the calculation is similar to that in Ref. [36], so we only state the result for our generalized approach here and present the details in Appendix A.
First, define the ω-dependent geometry parameter G(ω), which encodes the geometric relationship between the coefficient vectors {α_ℓ} of the n linear functions and the vector ω. We have

    G(ω) = (d/N) Σ_{ℓ=1}^n w_ℓ cos²φ_{ω,ℓ} − 1,   (20)

where φ_{ω,ℓ} is the angle between the vectors α_ℓ and ω.
Again, we note that the relevance of this geometric quantity is intuitively clear, as entanglement provides the biggest benefit when measuring functions aligned along some ω, that is, those functions for which φ_{ω,ℓ} ≈ 0. The ω-dependent lower bound on the figure of merit is found to be

    M_ss(ω, J) = (N/t²) [1 + (d − 2 − G)J] / [(1 − J)(1 + (d − 1)J)],   (21)

where we have used 4v = t², as in Ref. [36], to obtain the lowest bound. Under this condition on v, and the assumption that J ∈ (1/(1 − d), 1), so that the quantum Fisher information is invertible, the minimum is attained for

    J_opt = [√(1 + G(d − 2 − G)/(d − 1)) − 1] / (d − 2 − G).   (22)

One can then obtain the theoretical best performance for a signed sensor symmetric strategy as

    M_ss = min_{ω ∈ {−1,1}^d} M_ss(ω, J_opt(ω)).   (23)

Importantly, the obtainable accuracy is intimately related to the geometry of the linear functions we seek to measure. In particular, one finds the best performance for this strategy when G is approximately d − 1, that is, when φ_{ω,ℓ} ≈ 0. This corresponds to the situations in which the sensor symmetric states have the largest inter-sensor correlations J_opt (i.e., are most entangled). We emphasize again that there is no guarantee that this performance is always achievable, although in Ref. [36] it was proven for d = 2 and demonstrated for a large set of problems for d > 2.
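The minimization over ω can be carried out by brute force for small d. The sketch below assumes the weighted-cosine reading of the geometry parameter, G(ω) = (d/N) Σ_ℓ w_ℓ cos²φ_{ω,ℓ} − 1 (unit-norm α_ℓ and ||ω||² = d), and checks that aligned functions achieve the favorable value G = d − 1:

```python
import numpy as np
from itertools import product

# Brute-force search over the 2^d sign vectors omega for the geometry
# parameter (assumed weighted-cosine form; unit-norm rows of A).
def G(omega, A, w):
    d = len(omega)
    cos2 = (A @ omega) ** 2 / d   # cos^2 of angle between alpha_l and omega
    return d * np.sum(w * cos2) / np.sum(w) - 1.0

d = 4
A = np.ones((2, d)) / np.sqrt(d)  # both functions aligned with the all-ones vector
w = np.array([0.5, 0.5])

best = max(G(np.array(s, dtype=float), A, w) for s in product([-1, 1], repeat=d))
```

For larger d the exhaustive search over 2^d sign vectors becomes expensive; in practice one would seed it with ω_i = sgn of the (weighted) average coefficient vector, as done in Example 1 below.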

C. Naive Sequential Strategy
In the naive sequential protocol, we sequentially measure the n linear functions {f_1, …, f_n} using an optimal single-linear-function protocol [7]. For this, we determine the optimal times t_ℓ spent measuring the ℓ-th function by minimizing Eq. (12) for C = I with respect to {t_1, …, t_n} under the constraint Σ_ℓ t_ℓ = t. The solution to this Lagrange multiplier problem, presented in Appendix B, reads

    t_ℓ = t (w_ℓ µ̃_ℓ²)^{1/3} / Σ_m (w_m µ̃_m²)^{1/3},   M_naive = (1/t²) [Σ_ℓ (w_ℓ µ̃_ℓ²)^{1/3}]³.   (24)

As an important example, consider equal weights, w_ℓ ≡ N/n. Then we have

    N n²/(t² d) ≤ M_naive ≤ N n²/t².   (25)

Indeed, the upper bound is obtained for unfavourable functions {f_ℓ} such that µ̃ = 1_n ("worst case"), with 1_n the n-component vector of ones, whereas the lower bound is obtained for favourable functions {f_ℓ} with µ̃ = 1_n/√d ("best case"). These are the two extreme possible cases. Compared to the local protocol figure of merit of N/t² for any choice of w_ℓ, we see that, in the worst case, the local protocol is always superior to the naive sequential protocol. Furthermore, even in the best case, we must have d > n² to obtain an advantage from the naive sequential protocol over the local protocol, implying a relatively large number of sensors. This shows that the naive sequential protocol, with C = I, is not very competitive. On the other hand, as we show now, by optimizing over C, a significant gain in accuracy over the local protocol can be achieved.
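The Lagrange-multiplier time allocation can be checked numerically: with t_ℓ ∝ (w_ℓ µ_ℓ²)^{1/3}, the resulting figure of merit collapses to the cubed sum of cube roots and never loses to an equal split (the weights and infinity-norms below are illustrative):

```python
import numpy as np

# Optimal time split for sequentially measuring n functions, minimizing
# sum_l w_l mu_l^2 / t_l^2 subject to sum_l t_l = t (Lagrange multipliers):
# t_l proportional to (w_l mu_l^2)^(1/3).
w = np.array([1.0, 1.0, 1.0]) / 3
mu = np.array([1.0, 0.4, 0.2])        # infinity-norms of coefficient vectors
t = 1.0

s = (w * mu**2) ** (1 / 3)
t_opt = t * s / s.sum()
M_opt_split = float(np.sum(w * mu**2 / t_opt**2))   # equals (sum_l s_l)^3 / t^2

M_equal = float(np.sum(w * mu**2 / (t / 3) ** 2))   # equal-split baseline
```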

D. Optimal Sequential Strategy
Finally, we consider the optimal sequential protocol. The minimization over the measurement times proceeds as in the naive case but with a general C. Therefore, again leaving details to Appendix B, we obtain for the optimal sequential protocol that

    M_opt = min_C (1/t²) [Σ_{m=1}^n (W_m µ_m²)^{1/3}]³,   W_m ≡ Σ_ℓ w_ℓ C²_{ℓm},   (26)

with the optimal time to measure the m-th function given by

    t_m = t (W_m µ_m²)^{1/3} / Σ_k (W_k µ_k²)^{1/3}.   (27)

Inserting the definition of µ_m from Eq. (11), we arrive at

    M_opt = min_C (1/t²) [Σ_m (Σ_ℓ w_ℓ C²_{ℓm})^{1/3} ||Σ_k (C^{-1})_{mk} α_k||_∞^{2/3}]³.   (28)

Note that, due to the appearance of both C and C^{-1} in this expression with the same powers, the result is invariant under a change in the normalization of the columns of C. Therefore, we may fix these column normalizations and introduce the constraint that

    Σ_ℓ w_ℓ C²_{ℓm} = 1   (29)

for each m. Under this constraint, we obtain the simpler expression

    M_opt = min_C (1/t²) [Σ_{m=1}^n µ_m^{2/3}]³,   (30)

with optimal time per function given by

    t_m = t µ_m^{2/3} / Σ_k µ_k^{2/3}.   (31)

Geometrically, the constraint in Eq. (29) corresponds to restricting the columns of C to the surface of an (n − 1)-dimensional ellipsoid (or an (n − 1)-sphere if w_m = N/n for all m). The columns of C can then be efficiently parametrized by elliptical (or spherical) coordinates, and the optimization amounts to finding the best choice of the corresponding angular variables. We emphasize that this choice of normalization can be made without loss of generality.
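The remaining minimization over C is a low-dimensional numerical problem. A minimal sketch (equal weights, t = 1, illustrative coefficient vectors), using the simplified cost (Σ_m µ_m^{2/3})³ with column-normalized C and a simple greedy random search in place of a full elliptical-coordinate parametrization:

```python
import numpy as np

# Numerical minimization over the change-of-functions matrix C, using the
# scale invariance in C to normalize its columns before evaluating the cost.
rng = np.random.default_rng(0)
d, n = 6, 2
abar = np.ones(d) / np.sqrt(d)
A = np.stack([abar, abar + 0.05 * np.eye(d)[0]])   # two nearly aligned functions
A /= np.linalg.norm(A, axis=1, keepdims=True)

def obj(x):
    C = x.reshape(n, n)
    if abs(np.linalg.det(C)) < 1e-8:               # penalize near-singular C
        return 1e9
    C = C / np.linalg.norm(C, axis=0)              # fix column normalization
    mu = np.abs(np.linalg.solve(C, A)).max(axis=1)
    return float(np.sum(mu ** (2 / 3)) ** 3)

x0 = np.eye(n).ravel()                             # start from the naive C = I
best_x, best_f = x0, obj(x0)
for _ in range(2000):                              # greedy local random search
    cand = best_x + 0.1 * rng.normal(size=best_x.size)
    fc = obj(cand)
    if fc < best_f:
        best_x, best_f = cand, fc
```

For nearly aligned functions, the search drives C toward an average/difference structure and improves substantially on the naive choice.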
We have now fully characterized our optimized sequential protocol. In particular, one can numerically perform the minimization over matrices C in Eq. (28) subject to the constraint in Eq. (29). However, while for practical purposes we have solved the problem, many questions of a more general nature arise at this point. In particular, what kind of advantage is provided by the optimized sequential protocol over the naive one? What geometries of coefficient vectors correspond to the best performance for the sequential protocol? How does it compare to the signed sensor symmetric approach? These questions will be addressed in the following section. All of the figures of merit calculated in this section are summarized in Table I.

IV. PERFORMANCE AND GEOMETRY
To compare the performance of the different strategies, we first study some analytically accessible limits and then turn to a numerical analysis of the related optimization problem.

[Fig. 3: Depiction of the geometrically symmetric limit. The opening angle of the cone is given by φ_ω, and the angular displacement from φ_ω for a particular α_ℓ is specified by ε_{ω,ℓ}, as defined in Eq. (32).]

A. Geometrically Symmetric Limit
We begin by considering what we refer to as the geometrically symmetric limit of the signed sensor symmetric strategy. This limit will be useful for comparing to the optimized sequential protocol in the following subsections. For this, we consider a situation where the coefficient vectors α_ℓ all make approximately the same angle φ_ω with some ω, which we recall is a vector with all elements ±1. This results in a particularly useful simplification of the expression for the geometry parameter G. We then define the parameters

    ε_{ω,ℓ} = φ_{ω,ℓ} − φ_ω,   (32)

so that the ε_{ω,ℓ} may be treated as small parameters for a perturbative expansion; see Fig. 3. The geometry parameter of the signed sensor symmetric strategy then reads

    G(ω) = G^{(1)}(φ_ω) + O(ε_{ω,ℓ}),   (33)

where we expand in powers of ε_{ω,ℓ} and define

    G^{(1)}(φ) = d cos²φ − 1,   (34)

the geometry parameter for measuring a single function at an angle φ from ω. How small ε_{ω,ℓ} needs to be depends on φ_ω, but for any particular problem we can determine the necessary condition. In general, as long as ε_{ω,ℓ} ≪ 1/√d, the corrections will be negligible. Next, we consider Eq. (21) in the large-d limit and obtain

    M_ss(ω) ≈ (N/t²) [(d − 1 − G(ω))/d + (1 + G(ω))/d²]   (35)

for arbitrary values of ω. We substitute Eq. (33) and obtain, to leading order in the geometrically symmetric limit and for large d, that

    M_ss ≈ (N/t²) (sin²φ_ω + cos²φ_ω/d).   (36)

Note that, for φ_ω = 0, i.e., when all functions are nearly aligned with ω, this reduces to the expected optimal scaling N/(t²d).

[Table I: summary of the figures of merit for the Local, Naive Sequential, Signed Sensor Symmetric, and Optimized Sequential strategies.]
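The geometrically symmetric construction itself is easy to verify: placing unit coefficient vectors exactly on a cone of half-angle φ about ω makes the weighted-cosine geometry parameter (as assumed above) equal d cos²φ − 1 with no correction (illustrative d and φ):

```python
import numpy as np

# Unit vectors on a cone of half-angle phi about omega: exact geometric
# symmetry, so the geometry parameter should equal d*cos^2(phi) - 1.
d, phi = 8, 0.3
omega = np.ones(d)
u = omega / np.sqrt(d)

# build two unit directions orthogonal to u (Gram-Schmidt)
v1 = np.eye(d)[0] - (np.eye(d)[0] @ u) * u
e1 = v1 / np.linalg.norm(v1)
v2 = np.eye(d)[1] - (np.eye(d)[1] @ u) * u
v2 -= (v2 @ e1) * e1
e2 = v2 / np.linalg.norm(v2)

A = np.array([np.cos(phi) * u + np.sin(phi) * e1,
              np.cos(phi) * u + np.sin(phi) * e2])
w = np.array([0.5, 0.5])

cos2 = (A @ omega) ** 2 / d
G = d * np.sum(w * cos2) / w.sum() - 1.0
```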
We will use these results in the following sections as we compare the signed sensor symmetric strategy to the optimized sequential strategy.

B. Nearly Overlapping Functions
Next, consider the case when all the vectors α_ℓ are "close" in each component, i.e., we consider measuring a set of n nearly identical functions. Intuitively, one would expect the optimal sequential strategy in this case to spend almost all of the time measuring the linear combination pointing along the average of these functions, and then a small amount of time measuring other directions in order to distinguish the small variations among the functions. We find that this intuition is rigorously true. We also find that, in this case, we can analytically determine a scaling advantage (in d) for this protocol relative to the signed sensor symmetric strategy (and, of course, the unentangled strategy). Finally, we consider a particular example from Ref. [36] and find that its implication about the role of entanglement in protocol performance, namely that it can be disadvantageous in certain circumstances, is limited to the consideration of just the (unsigned) sensor symmetric strategy and is not generally true.
To formally define what we mean by "nearly overlapping," consider angles δ_ℓ associated with each vector of coefficients α_ℓ, specified by

    cos δ_ℓ = α_ℓ · ā,   (37)

where ā is a vector with Euclidean norm equal to 1, chosen such that the average angle n^{-1} Σ_{ℓ=1}^n δ_ℓ is minimized. For δ_ℓ sufficiently small for all ℓ, we have α_ℓ ≈ ā for all ℓ and, accordingly,

    A_{ℓi} = (α_ℓ)_i ≈ ā_i.   (38)

Therefore, with δ = max_ℓ δ_ℓ, we may evaluate Eq. (30) for a change of functions C whose first column aligns the first measured function with ā, the remaining measured functions then having coefficient vectors of length O(δ). Leaving the somewhat tedious details to Appendix C, we find that this reduces to the expected result that

    M_opt ≈ N ||ā||²_∞ / t² = (N/t²) max_i ā_i².   (40)

Note that, in general, δ ≪ 1/√d ensures that this is a good leading-order approximation. This is a reduction in the variance by a factor of approximately (to order δ²) max_i ā_i² ∈ [1/d, 1] compared to the local protocol in Eq. (13), or, compared to the naive sequential protocol in Eq. (24), a reduction in the variance by a factor of order O(1/n²).
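This limit can be checked numerically. The sketch below (equal weights summing to N = 1, t = 1, randomly perturbed coefficient vectors) evaluates the sequential cost with optimal times for a hand-picked average/difference change of functions and compares it to the leading-order prediction max_i ā_i² and the local value N:

```python
import numpy as np

# Nearly overlapping limit: the sequential cost with an average/difference
# change of functions should approach max_i(abar_i^2), versus N locally.
rng = np.random.default_rng(1)
d, n, N = 50, 2, 1.0
abar = rng.normal(size=d)
abar /= np.linalg.norm(abar)
A = np.array([abar + 1e-3 * rng.normal(size=d) for _ in range(n)])
A /= np.linalg.norm(A, axis=1, keepdims=True)

C = np.array([[1.0, 1.0],
              [1.0, -1.0]])                      # f = C g: average and difference
mu = np.abs(np.linalg.solve(C, A)).max(axis=1)
W = (N / n) * (C**2).sum(axis=0)                 # W_m = sum_l w_l C_lm^2
M_seq = float(np.sum((W * mu**2) ** (1 / 3)) ** 3)  # optimal time allocation

M_local = N                                       # N / t^2 at t = 1
M_pred = N * np.max(abar**2)                      # leading-order prediction
```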
To compare to the signed sensor symmetric protocol, we note that this nearly overlapping case is merely a special case of the nearly geometrically symmetric case of the sensor symmetric protocol (provided δ is sufficiently small). In particular, δ is the relevant expansion parameter for our asymptotic approximations, as ε_{ω,ℓ} ≤ δ for all ℓ. Therefore, to compare, we may simply use the previous results from Section IV A, with corrections upper bounded by taking ε_{ω,ℓ} → δ.
Furthermore, we note that, to leading order, M_ss = N M_ss^{(n=1)}, and similarly, Eq. (40) also has the leading-order expression M_opt = N M_opt^{(n=1)}, where the right-hand sides are the weight sum N times the corresponding single-function estimation figures of merit. Therefore, in order to compare the accuracy of both protocols for nearly overlapping functions, it is sufficient to compare their performance for single-function estimation.
Of course, for a single function, the "sequential" strategy is provably optimal, as we have reduced it to the case of Ref. [7]. So, at best, the signed sensor symmetric strategy will perform the same as the "sequential" strategy for a single function. For example, we note that in the best case for both strategies, where all functions are oriented along some ω to order O(δ), both approaches have a cost of N/(t²d) to leading order, which is superior to the local protocol by a factor of 1/d. Also, for d = 2, the time-independent protocol of Ref. [7] happens to utilize initial states that are themselves sensor symmetric, and therefore, for all choices of functions with d = 2 (where both approaches provide explicitly saturable bounds), the two protocols are identical and optimal.
For d > 2, on the other hand, as previously discussed, there may not exist physical states that obtain the figure of merit provided by the signed sensor symmetric strategy. However, even if we assume the figure of merit M_ss is attainable, we shall see that the optimized sequential strategy can often be the superior choice. In this context, we consider two examples. First, we demonstrate a scaling advantage in d for the sequential protocol in this nearly overlapping limit. Then, we revisit the example from Eq. (38) of Ref. [36] and demonstrate that the implication made there, that entanglement can be detrimental, is an artifact of the (unsigned) sensor symmetric approach, and that for the better-performing sequential protocol, as well as for the more general signed sensor symmetric approach, entanglement is useful.
Example 1: To demonstrate a scaling advantage of the sequential protocol over the signed sensor symmetric strategy, suppose we have n nearly overlapping functions with δ ≪ 1/√d relative to the vector of coefficients

    ā = (x, …, x, y, …, y)^T / √(κx² + (d − κ)y²),   (42)

where the first κ elements are (up to normalization) x ∈ R and the last d − κ elements are y ∈ R. We assume x, y = O(1) and κ = O(d^β) for β ∈ [0, 1). Without loss of generality, suppose x > y > 0. In this case, the cost of the optimized sequential strategy is straightforwardly obtained from Eq. (40) to be

    M_opt ≈ (N/t²) x²/(κx² + (d − κ)y²) ≈ (N/t²) x²/(d y²) [1 + O(κ/d)],   (43)

where the second expression comes from expanding in powers of κ/d. For the signed sensor symmetric strategy on the same problem, we pick ω such that ω_i = sgn(ā_i), which minimizes the angle between ā and ω. In the large-d limit, we may then use Eq. (36) with

    cos²φ_ω = (ā · ω)²/d = (κx + (d − κ)y)² / [d(κx² + (d − κ)y²)].   (44)

We can expand the numerator of Eq. (44) in powers of κ/d as

    (κx + (d − κ)y)² = d²y² [1 + 2κ(x − y)/(dy) + O(κ²/d²)],   (45)

and expand the denominator as

    d(κx² + (d − κ)y²) = d²y² [1 + κ(x² − y²)/(dy²)].   (46)

We then have

    sin²φ_ω = 1 − cos²φ_ω ≈ κ(x − y)²/(dy²),   (47)

which we may plug into Eq. (36) for the signed sensor symmetric strategy to obtain

    M_ss ≈ (N/t²) κ(x − y)²/(dy²),   (48)

which demonstrates a scaling advantage by a factor of O(κ^{-1}) = O(d^{-β}) for the optimized sequential protocol in this problem.
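The κ/d expansion of the cone angle in Example 1 can be checked directly (illustrative values of d, κ, x, y):

```python
import numpy as np

# Numerical check of the kappa/d expansion: abar has kappa entries x and
# d - kappa entries y (up to normalization), and sin^2(phi) between abar and
# omega = sign(abar) should be approximately kappa*(x - y)^2 / (d*y^2).
d, kappa, x, y = 10_000, 100, 2.0, 1.0
abar = np.concatenate([x * np.ones(kappa), y * np.ones(d - kappa)])
abar /= np.linalg.norm(abar)
omega = np.sign(abar)

cos2 = (abar @ omega) ** 2 / d
sin2 = 1.0 - cos2
approx = kappa * (x - y) ** 2 / (d * y**2)
```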
Example 2: Now we consider the example of a single function from Eq. (38) of Ref. [36] for d = 3 sensors and the coefficient vector of Eq. (49) [46]. The example was chosen in Ref. [36] such that, for ω = 1, G(ω) = 0, and thus Jopt(ω) = 0, which in turn implies that the optimal (unsigned) sensor symmetric state is unentangled. Equation (21) then implies a figure of merit that is larger than the true optimal figure of merit, which is obtained by the "sequential" protocol. We also note that, even within the framework of sensor symmetric strategies, the result obtained from Ref. [36] is not the best one can do. If we extend to the signed sensor symmetric approach, one can consider ω = (1, 1, −1)^T and do better. In particular, in this case, one obtains a figure of merit that is only slightly worse than the true optimum and, crucially, also involves entanglement. Therefore, from this example, we learn that (a) entanglement is helpful for measuring the function in Eq. (49), just not when we restrict to (unsigned) sensor symmetric states, and (b) accuracy is (unsurprisingly) potentially decreased when restricting ourselves to sensor symmetric states.
For convenience, we summarize the analytic results comparing the signed sensor symmetric and optimized sequential strategies in Table II.

C. Numerical Results
In the previous sections, we found that the optimized sequential and signed sensor symmetric strategies perform identically (and optimally) when measuring many functions whose coefficient vectors {α_ℓ} are aligned along a particular ω. More generally, the optimized sequential protocol always performs at least as well as, and typically outperforms, the signed sensor symmetric strategy when measuring many functions with nearly overlapping coefficient vectors; in fact, one can obtain a scaling advantage in d for certain problems (Example 1). However, while informative, the nearly overlapping limit considered above is precisely where the optimized sequential strategy performs best. Therefore, it is of interest to consider a broader class of examples and to ask when the signed sensor symmetric strategy outperforms the optimized sequential strategy.
Unfortunately, a full analytic comparison between the different approaches appears to be out of reach, so for a general problem one must compare the two approaches explicitly to see which is the correct choice for a given situation. Here, to better understand the expected performance in such cases, we turn to numerics on random problem instances. Our key result is to demonstrate that, generically, for large d, many problems are best approached using our optimized sequential protocol as opposed to the sensor symmetric or local strategies.
Numerically, the optimization over C in Eq. (30), subject to Eq. (29), to obtain the cost of the optimized sequential protocol can be fairly costly in terms of computation time, as the optimization is non-convex and in a high-dimensional parameter space. This is not necessarily an issue for particular applications, where only a limited number of such optimizations must be performed. As an example, consider n = 2 functions, d ≥ n sensors, and equal weights in the figure of merit (w1 = w2 = 1). The normalization condition (29) implies that the columns of the 2 × 2 matrix C have unit length. We can parametrize this by two angles ϕ1, ϕ2. Given the coefficient vectors α1,2 of the two functions to be estimated, the numerical optimization over ϕ1, ϕ2 is accomplished straightforwardly. For n = 3 functions, six angles ϕ1, . . ., ϕ6 are needed, making the optimization more challenging for larger n.
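As a rough illustration of this parametrization, the sketch below builds the 2 × 2 matrix C from two angles, so that each column automatically has unit length, and then grid-searches over the angles. The cost function here is a hypothetical stand-in (the matrix condition number), since the actual figure of merit of Eq. (30) is not reproduced in this excerpt; only the angle parametrization and the brute-force optimization loop mirror the procedure described in the text.

```python
import numpy as np

def build_C(phi1, phi2):
    """2x2 matrix C whose columns are unit vectors parametrized by two
    angles, automatically enforcing the normalization condition (29)."""
    return np.array([[np.cos(phi1), np.cos(phi2)],
                     [np.sin(phi1), np.sin(phi2)]])

def placeholder_cost(C):
    """Hypothetical stand-in cost (condition number); NOT Eq. (30)."""
    return np.linalg.cond(C)

# Brute-force grid search over the two angles, as one would do for the
# non-convex optimization over C for a particular problem instance.
phis = np.linspace(0.0, 2.0 * np.pi, 181)
best_cost, best_p1, best_p2 = min(
    (placeholder_cost(build_C(p1, p2)), p1, p2)
    for p1 in phis for p2 in phis
)
```

For this stand-in cost the search settles on orthonormal columns of C; for the actual figure of merit the optimum depends on the coefficient vectors α1,2.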
The two functions, represented by the two normalized coefficient vectors α1,2, depend on 2(d − 1) real parameters. In this context, we randomly sample coefficients for the two functions from a uniform distribution and calculate the cost of the signed sensor symmetric strategy and the optimized sequential strategy. For d = 2^k with k ∈ {1, . . ., 6}, we consider 1000 such problems, where for simplicity we assume that α1,2 are sampled from the positive orthant so that the optimal ω is necessarily 1, and plot the results in Fig. 4.
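A minimal sketch of the sampling step just described (the cost comparison itself requires the figures of merit, which are not reproduced in this excerpt); the seed and the choice d = 8 are illustrative:

```python
import numpy as np

rng = np.random.default_rng(seed=1234)  # illustrative seed
d = 8  # one of the d = 2**k sensor counts used in the text

# Sample two coefficient vectors uniformly from the positive orthant
# and normalize them to unit length.
alpha1 = rng.uniform(0.0, 1.0, d)
alpha2 = rng.uniform(0.0, 1.0, d)
alpha1 /= np.linalg.norm(alpha1)
alpha2 /= np.linalg.norm(alpha2)

# With all coefficients positive, the sign choice omega_i = sgn(abar_i)
# reduces to the all-ones vector, as asserted in the text.
abar = alpha1 + alpha2
omega = np.sign(abar)
```

Repeating this 1000 times per value of d and evaluating both costs on each instance reproduces the structure of the experiment behind Fig. 4.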
We observe that the signed sensor symmetric strategy is never worse than the local protocol, whereas the optimized sequential protocol can be at small d. In the particular case of n = d = 2, the sequential strategy is never better than the signed sensor symmetric strategy. As previously mentioned, it is well known that, for this problem, when the two functions are orthogonal, a local protocol obtains the optimal variance (that is, M = N/t^2 is optimal) [8, 10]. In this case, as demonstrated in Ref. [36], the sensor symmetric strategy matches this known optimal result. In particular, the sensor symmetric strategy predicts an optimal geometry parameter G(ω) = 0, corresponding to no inter-sensor correlations and, therefore, a local protocol. We observe this behavior in panel (a) of Fig. 4, where the G = 0 points correspond to Mss = Mlocal = 2. Note that the cases of G ≈ 0 that correspond to nearly orthogonal coefficient vectors are only those points where Mopt ≈ 4, as can be concluded from Fig. 5, where we plot Mopt versus α1 · α2. As d increases, however, the optimized sequential protocol is almost always superior to both the local and signed sensor symmetric strategies for these randomized problem instances.

Setting | Signed Sensor Symmetric | Optimized Sequential
Best case | Geometrically symmetric limit (large d): functions aligned along some ω | Nearly overlapping limit (same as geometrically symmetric limit): functions aligned along ā
Example 1 (scaling) | Functions aligned along Eq. (42) | Scaling advantage for Mopt

V. CONCLUSION AND OUTLOOK
In this work, we explored the potential of sequential protocols to measure multiple functions with quantum sensor networks. We highlighted both analytical and numerical aspects, and compared the protocol to a generalized version of the sensor symmetric bounds for the same problem from Ref. [36]. We find that, when d is large, the sequential protocol is typically superior for generic problem instances. The sequential strategy also has the advantage of an explicit protocol that achieves its stated performance, whereas beyond d = 2, while shown to be saturable in certain cases [36], the lower bound when restricted to signed sensor symmetric states is not guaranteed to be attainable. However, for a particular problem, one should compare both strategies, as neither is always superior.
Our results, together with those in Ref. [36], point to an intriguing interplay between the geometric configuration of the functions to be measured and the performance of various protocols. In particular, our optimized sequential protocol performs best with nearly overlapping functions; the signed sensor symmetric approach performs best when the set {α_ℓ} is nearly aligned along some ω. Beyond carefully tuned examples, we note that for most problems where we seek to estimate a collection of analytic functions of local field amplitudes, our protocol is the best known choice, especially with more than a small number of sensors d.
Our sequential protocol could be directly extended to the case where the sensors are each coupled to correlated field amplitudes, as in recent work by some of the authors [13]; that is, instead of considering independent field amplitudes θ_i coupled to the sensors, one could consider the case where θ is specified by a known analytic parameterization in terms of some set of k ≤ d parameters.
Our sequential protocol could also be extended to other physical settings beyond qubit sensors; namely, for any quantum sensor network where one may measure a single linear combination of field amplitudes, one can apply our sequential approach. For example, a collection of d Mach-Zehnder interferometers could replace the qubit sensors, where the role of the local fields is played by interferometer phases [28, 47-51]. Here, the limiting resource is the number of photons N available to distribute among the interferometers, as opposed to the total time t. In this context, it was conjectured in Ref. [9] that one could measure a single function with variance M = ‖w‖₁²/N²; this replaces Eq. (4), and otherwise everything remains the same. However, there are subtleties in the case where the average number of photons is not known [52], which we do not consider here. Another relevant setting is the measurement of linear combinations of field-quadrature displacements, as considered using an entanglement-enhanced continuous-variable protocol in Ref. [29]. A variation of this protocol was experimentally implemented in Ref. [53]. One could also consider a combination of these settings where some field amplitudes are coupled to qubits, some to Mach-Zehnder interferometers, and some to field-quadrature displacements.
While the importance of geometry is striking, the general question of the information-theoretic optimal strategy for this problem, that is, the strategy minimizing the cost derived from the quantum Fisher information, remains a pressing open question. Additionally, our results are asymptotic and ignore the potential effects of decoherence. Understanding the performance of the sequential protocol in the non-asymptotic regime (e.g., via a Bayesian analysis as considered in Ref. [36]) and under the effects of decoherence remains a question of great importance. These limitations aside, our findings advance the understanding of measuring multiple functions with quantum sensor networks and provide an alternative protocol that performs better in practice than previously considered schemes in many instances.

VI. ACKNOWLEDGMENTS
Appendix C

In this appendix, we demonstrate that the explicit calculation of the inverse of the quantum Fisher information in Ref. [36] for sensor symmetric states can be extended to the signed sensor symmetric states of Eq. (19). The calculation largely follows that reference.
Begin by defining the symmetric matrix Ω = ωω^T, for ω a vector with all elements ±1, as defined in the main text. Now, given an orthonormal basis {ê_i}, i ∈ [1, d], for the real space where our vectors of coefficients {α_i} are defined, we can write the quantum Fisher information for pure signed sensor symmetric states. As 1^⊥ is a subspace geometrically represented as a hyperplane through the origin, it necessarily intersects the ellipsoid (centered on the origin) specified by Eq. (29), forming an ellipsoid of dimension n − 1. Therefore, we can satisfy all constraints and saturate Eq. (C2), and so we have confirmed we may obtain the equality as in Eq. (C6).
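As a concrete check of the definition Ω = ωω^T, the following sketch uses the sign vector ω = (1, 1, −1)^T from Example 2 purely as an illustration:

```python
import numpy as np

# A sign vector with all elements +/-1 (here the one from Example 2).
omega = np.array([1, 1, -1])

# The symmetric matrix Omega = omega omega^T defined in the appendix.
Omega = np.outer(omega, omega)

# Omega is symmetric with entries +/-1, and omega itself is an
# eigenvector with eigenvalue d, since Omega @ omega = omega (omega . omega).
```

These properties hold for any sign vector ω of length d, not just this example.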

FIG. 1 .
FIG. 1. The protocols for measuring n ≤ d linear functions {f1(θ), . . ., fn(θ)} of d parameters θ = (θ1, . . ., θd) considered in this work can be classified into three groups: (a) Local protocols do not utilize entanglement and measure the parameters locally, allowing for large parallelization. (b) Global protocols simultaneously estimate all functions. (c) Sequential protocols divide the problem into n parts, where each part is optimized to estimate a single function from the set {f̃1, . . ., f̃n}, which may consist of linear combinations of the original set {f1, . . ., fn}.

FIG. 2 .
FIG. 2. A visualization, for n = 2 functions and d = 3 sensors, of how we can optimally select a set of functions to measure whose coefficient vectors {α̃_ℓ} span the same subspace as the coefficient vectors {α_ℓ} of the functions we care about. The vectors shown are the coefficient vectors, and the planes indicate the subspace they span. The axes are labeled by the standard basis unit vectors {e1, e2, e3}.

FIG. 3 .
FIG. 3. (a) A visualization, for n = 2 functions and d = 3 sensors, of geometrically symmetric functions. In particular, the coefficient vectors lie near the surface of a cone centered on some ω. (b) The opening angle of the cone is given by φ, and the angular displacement from φ for a particular α_ℓ is specified by δ_{ω,ℓ}, as defined in Eq. (32).

FIG. 4 .
FIG. 4. Mss versus Mopt for 1000 random samples of α1, α2 from the positive orthant, with n = 2 and w1 = w2 = 1, for different numbers of sensors d. Dashed lines correspond to Mlocal. Colors correspond to the geometry parameter for the problem instance. Observe that the signed sensor symmetric approach is never worse than the local protocol, whereas the optimized sequential protocol can be. However, as d increases, the optimized sequential protocol is almost always superior. Also recall that, for d > 2, Mss is generically just a lower bound, and it is not guaranteed that one can achieve this figure of merit with physical states. Therefore, one can think of Mss as a best-case scenario for a physically realized signed sensor symmetric protocol.

Furthermore, we can confirm that this choice of C also satisfies the constraints.

TABLE I .
Summary of figures of merit. Recall that, for all strategies other than the signed sensor symmetric strategy, we have an explicit physical protocol that achieves the given figure of merit. For the signed sensor symmetric strategy, beyond d = 2, we are not necessarily guaranteed that a state exists that achieves the figure of merit; it is therefore a lower bound, given the signed sensor symmetric state restriction.

TABLE II .
Summary of analytic results comparing the signed sensor symmetric strategy and optimized sequential strategy.