A Quantum-inspired Algorithm for General Minimum Conical Hull Problems

A wide range of fundamental machine learning tasks that are addressed by maximum a posteriori estimation can be reduced to a general minimum conical hull problem. The best-known solution for tackling general minimum conical hull problems is the divide-and-conquer anchoring learning scheme (DCA), whose runtime complexity is polynomial in the input size. However, big data is pushing these polynomial algorithms to their performance limits. In this paper, we propose a sublinear classical algorithm that tackles general minimum conical hull problems when the input is stored in a sample-based low-overhead data structure. The algorithm's runtime complexity is polynomial in the rank and polylogarithmic in the input size. The proposed algorithm achieves an exponential speedup over DCA and therefore provides advantages for high-dimensional problems.


Introduction
Maximum a posteriori (MAP) estimation is a central problem in machine and statistical learning [5,22]. The general MAP problem has been proven to be NP-hard [33]. Despite the hardness of the general case, there are two fundamental learning models, matrix factorization and the latent variable model, that enable the MAP problem to be solved in polynomial runtime under certain constraints [24,25,29,30,32]. The algorithms developed for these learning models have been used extensively in machine learning with competitive performance, particularly on tasks such as subspace clustering, topic modeling, collaborative filtering, structure prediction, feature engineering, motion segmentation, sequential data analysis, and recommender systems [15,25,30]. A recent study demonstrates that MAP problems addressed by matrix factorization and latent variable models can be reduced to the general minimum conical hull problem [41]. In particular, the general minimum conical hull problem transforms problems solved by these two learning models into a geometric problem, whose goal is to identify a set of extreme data points with the smallest cardinality in a dataset Y such that every data point in a dataset X can be expressed as a conical combination of the identified extreme data points. Unlike matrix factorization and latent variable models, whose optimizations generally suffer from local minima, a unique global solution is guaranteed for the general minimum conical hull problem [41]. Driven by the promise of a global solution and the broad applicability, it is imperative to seek algorithms that can efficiently solve the general minimum conical hull problem with theoretical guarantees.
The divide-and-conquer anchoring (DCA) scheme is among the best currently known solutions for addressing general minimum conical hull problems. The idea is to identify all k extreme rays (i.e., k extreme data points) of a conical hull from a finite set of real data points with high probability [41]. The discovered extreme rays form the global solution of the problem with explainability, and thus the scheme generalizes better than conventional algorithms such as expectation-maximization [10]. DCA's strategy is to decompose the original problem into distinct subproblems. Specifically, the original conical hull problem is randomly projected onto different low-dimensional hyperplanes to ease computation. Such a decomposition is justified by the fact that the geometry of the original conical hull is partially preserved after a random projection. However, a weakness of DCA is that it requires a polynomial runtime complexity with respect to the size of the input. This complexity heavily limits DCA's use in many practical situations, given that massive-scale datasets are now ubiquitous [38]. Hence, more effective methods for solving general minimum conical hull problems are highly desired.
To address the above issue, we propose an efficient classical algorithm that tackles general minimum conical hull problems in polylogarithmic time with respect to the input size. Consider two datasets X and Y that are stored in a specific low-overhead data structure, i.e., a sample-based data structure that supports length-square sampling operations [34]. Let the maximum rank, the Frobenius norm, and the condition number of the two given datasets be k, ‖H‖_F, and κ, respectively. We prove that the runtime complexity of our algorithm is Õ(k^6 κ^12 ‖H‖_F^6 / ε^6) for a tolerable error ε. The achieved sublinear runtime complexity indicates that our algorithm can benefit numerous learning tasks that can be mapped to the general minimum conical hull problem, e.g., the MAP problems addressed by latent variable models and matrix factorization.
Two core ingredients of the proposed algorithm are the 'divide-and-conquer' strategy and the reformulation of the minimum conical hull problem as a sampling problem. We adopt the 'divide-and-conquer' strategy to acquire a favorable property from DCA. In particular, all subproblems, i.e., the general minimum conical hull problems on different low-dimensional random hyperplanes, are independent of each other. Therefore, they can be processed in parallel. In addition, the total number of subproblems is only polynomially proportional to the rank of the given dataset [40,41]. An immediate observation is that the efficiency of solving each subproblem governs the efficiency of tackling the general minimum conical hull problem. To this end, our algorithm converts each subproblem into an approximated sampling problem and obtains the solution in sublinear runtime. Through advanced sampling techniques [16,34], the runtime complexity of preparing the approximated sampling distribution that corresponds to each subproblem is polylogarithmic in the size of the input. To ensure that our algorithm has an end-to-end sublinear runtime, we propose a general heuristic post-selection method to efficiently sample the solution from the approximated distribution, whose computational cost is also polylogarithmic in the input size.
Our work creates an intriguing aftermath for the quantum machine learning community. The overarching goal of quantum machine learning is to develop quantum algorithms that quadratically or even exponentially reduce the runtime complexity of classical algorithms [4]. Numerous quantum machine learning algorithms with provable quantum speedups have been proposed in the past decade [17,20,28]. However, the polylogarithmic runtime complexity of our algorithm implies that a rich class of quantum machine learning algorithms do not, in fact, achieve these quantum speedups. More specifically, if a quantum algorithm aims to solve a learning task that can be mapped to the general minimum conical hull problem, its quantum speedup will collapse. In our examples, we show that the quantum speedups collapse for the following quantum algorithms: recommendation systems [21], matrix factorization [13], and clustering [1,27,37].
Related Work. We make the following comparisons with previous studies on maximum a posteriori estimation and quantum machine learning.
1. The mainstream methods for tackling MAP problems prior to the study [41] can be separated into two groups. The first group includes expectation-maximization, sampling methods, and matrix factorization [9,14,31], whose learning procedures severely suffer from local optima. The second group contains the method of moments [3], which suffers from large variance that may lead to the failure of the final estimation. The study [41] effectively alleviates the difficulties encountered by both groups. However, a common weakness of all existing methods is the polynomial runtime with respect to the input size.
2. There are several studies that collapse quantum speedups by proposing quantum-inspired classical algorithms [7,8,16]. For example, the study [34] removes the quantum speedup for recommendation systems; the study [35] eliminates the quantum speedup for principal component analysis; the study [11] collapses the quantum speedup for support vector machines. The correctness of a branch of quantum-inspired algorithms is validated by the study [2]. The studies [34] and [11] can be treated as special cases of our result, since both recommendation systems and support vector machines can be efficiently reduced to general conical hull problems [26,40]. In other words, our work is a more general methodology for collapsing the speedups achieved by quantum machine learning algorithms.
The rest of this paper proceeds as follows. A formal outline of the general minimum conical hull problem is given in Section 2. The computational complexity of our algorithm is explained and analyzed in Section 3. Section 4 discusses the algorithm's correctness. We conclude the paper in Section 5.

Notations
Throughout this paper, we use the following notations. We denote {1, 2, ..., n} as [n]. The notation e_i always refers to the i-th unit basis vector. For a nonzero vector v ∈ R^n, we define P_v as the probability distribution in which the index i of v is chosen with probability P_v(i) = (|v_i|/‖v‖)^2 for any i ∈ [n]. A sample from P_v refers to an index i of v, drawn with probability P_v(i). Given a matrix X ∈ R^{n×m}, X_ij represents the (i, j)-th entry of X with i ∈ [n] and j ∈ [m]. X(i, :) and X(:, j) represent the i-th row and the j-th column of X, respectively. The transpose of a matrix X (a vector v) is denoted by X^T (v^T). The Frobenius norm and the spectral norm of X are denoted by ‖X‖_F and ‖X‖_2, respectively. The condition number κ_X of a positive semidefinite X is κ_X = ‖X‖_2/σ_min(X), where σ_min(X) refers to the minimum singular value of X. R_+ refers to the positive real numbers. Given two sets A and B, the set difference is denoted A \ B. The cardinality of a set A is denoted |A|. We use the notation Õ(k) as shorthand for O(k log(n)).
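The length-square distribution P_v above is the basic object sampled throughout the paper. A minimal sketch (function names are illustrative, not the paper's notation):

```python
import numpy as np

def length_square_distribution(v):
    """P_v(i) = (|v_i| / ||v||)^2 for a nonzero vector v."""
    v = np.asarray(v, dtype=float)
    norm_sq = float(v @ v)
    assert norm_sq > 0, "v must be nonzero"
    return v ** 2 / norm_sq

def sample_index(v, rng):
    """Draw an index i of v with probability P_v(i)."""
    p = length_square_distribution(v)
    return int(rng.choice(len(p), p=p))

p = length_square_distribution([3.0, 4.0])   # [9/25, 16/25]
```

For v = (3, 4), the distribution is (0.36, 0.64), i.e., larger entries are proportionally more likely to be sampled.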

General minimum conical hull problem
[Figure 1: The projection onto a one-dimensional hyperplane. We show how to obtain the anchor A_t as the solution of Eqn. (4). The dataset Y is composed of red and green nodes. The dataset X forms a cone composed of blue and light blue nodes. B_t represents a random one-dimensional projection hyperplane. On the hyperplane B_t, the light blue node X(j*, :) refers to the result of max_{j∈[n_X]} ‖X_t(j, :)‖ in Eqn. (4). The green node Y(A_t, :) corresponds to the solution of Eqn. (4), where A_t refers to the anchor at the t-th random projection. Geometrically, on the hyperplane B_t, the green node is the nearest node among all nodes in Y relative to the light blue node that has a larger magnitude.]
A cone is a non-empty convex set that is closed under the conical combination of its elements. Mathematically, given a set R = {r_i}_{i=1}^k with r_i being a ray, the cone formed by R is defined as cone(R) = {Σ_{i=1}^k α_i r_i : α_i ∈ R_+}. A ray r_i ∈ R
is an extreme ray (or an anchor) if it cannot be expressed as a conical combination of the elements in R \ {r_i}. A fundamental property of the cone and the conical hull is separability, namely, whether a point in cone(R) can be represented as a conical combination of a certain subset of the rays that define cone(R). This separability can be generalized to the matrix form as follows [41].
Definition 1 (General separability condition [41]). Let X ∈ R^{n_X×m} be a matrix with n_X rows, and let Y ∈ R^{n_Y×m} be a matrix with n_Y rows. X and Y satisfy the general separability condition if every row of X lies in the cone generated by a subset A of the rows of Y. In other words, the general separability condition states that, ∀i ∈ [n_X], we have X(i, :) ∈ cone(Y_A), where the set Y_A = {Y(i, :)}_{i∈A}. Under this definition, the general minimum conical hull problem aims to find the minimal set A, the so-called anchor set, from the rows of Y.
Definition 2 (General minimum conical hull problem [41]). Given two matrices X ∈ R^{n_X×m} and Y ∈ R^{n_Y×m} with n_X and n_Y rows, respectively, the general minimum conical hull problem MCH(Y, X) finds a minimal subset of rows in Y whose conical hull contains cone(X): A = arg min_{A⊆[n_Y]} |A| s.t. cone(X) ⊆ cone(Y_A), where cone(Y_A) is induced by the rows A of Y.
We remark that the general separability condition is reasonable for many learning tasks; i.e., any learning task that can be solved by the matrix factorization model or the latent variable model possesses general separability [41].

Divide-and-conquer anchoring scheme for the general minimum conical hull problem
Efficiently locating the anchor set A of a general minimum conical hull problem is challenging. To resolve this issue, the divide-and-conquer anchoring (DCA) learning scheme was proposed [40,41]. This scheme adopts the following strategy. The original minimum conical hull problem is first reduced to a small number of subproblems by projecting the original cone onto low-dimensional hyperplanes. These subproblems on low-dimensional hyperplanes are then tackled in parallel. The anchor set of the original problem can be obtained by combining the anchor sets of the subproblems, because the geometric information of the original conical hull is partially preserved after projection. Moreover, the efficiency of DCA is guaranteed because the anchors of the subproblems can be determined effectively in parallel and the total number of subproblems is modest. DCA is composed of two steps, the divide step and the conquer step. Following the notation of Definition 2, given two sets of points (rows) X ∈ R^{n_X×m} and Y ∈ R^{n_Y×m}, the goal of DCA is to output an anchor set A such that X(i, :) ∈ cone(Y_A) for all i ∈ [n_X].
Divide step. A set of projection matrices {B_t}_{t=1}^p with B_t ∈ R^{m×d} is sampled from a random ensemble M, e.g., the Gaussian random matrix ensemble, the ensemble composed of standard unit vectors e_i or real data vectors, or various sparse random matrix ensembles [41]. The general minimum conical hull problem for the t-th subproblem is to find an anchor set A_t for the projected matrices X_t = XB_t and Y_t = YB_t, i.e., A_t = MCH(Y_t, X_t). (2)
Conquer step. Each row index i of Y receives a score ĝ_i = (1/p) Σ_{t=1}^p 1_{A_t}(Y(i, :)), (3) where Y(i, :) is the i-th row of Y, and 1_{A_t}(Y(i, :)) is the indicator function that outputs '1' when the index i is in A_t and zero otherwise. The anchor set A of size k is constructed by selecting the k indexes with the largest ĝ_i. It has been proved that, with high probability, solving p = O(k log k) subproblems is sufficient to find all anchors in A, where k refers to the number of anchors [41]. The total runtime complexity of DCA is Õ(max{poly(n_X), poly(n_Y)}) when parallel processing is allowed.

Reformulation as a sampling problem
Interestingly, each subproblem in Eqn. (2) can be reduced to a sampling problem when d = 1. This observation is one of the crucial components that make our algorithm exponentially faster than conventional DCA schemes [41].
Fix d = 1. Following the result of [39], Eqn. (2) reduces to
A_t = arg min_{i∈[n_Y]} ( |Y_t(i)| − max_{j∈[n_X]} |X_t(j)| )_+ , (4)
where (x)_+ restricts the minimization to the indexes i for which the expression is non-negative. We give an intuitive explanation of Eqn. (4) in Figure 1. Note that Eqn. (4) can then be written as
A_t = arg min_{i∈[n_Y]} ( P_{Y_t}(i) − ξ_t max_{j∈[n_X]} P_{X_t}(j) )_+ , (5)
where P_{X_t} and P_{Y_t} refer to the length-square distributions of X_t and Y_t, and ξ_t = ‖X_t‖²/‖Y_t‖² is a constant at the t-th random projection that rescales the value max_{j∈[n_X]} P_{X_t}(j). We will explain how to efficiently approximate ξ_t in Section 3.2.
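Under this reading of Eqns. (4) and (5), both selection rules pick the same index: the smallest projected Y-point that still dominates the largest projected X-point. A toy check (the arrays are illustrative, and ξ_t = ‖X_t‖²/‖Y_t‖² is the rescaling used above):

```python
import numpy as np

X_t = np.array([0.2, 0.9, 0.4])          # projected rows of X (d = 1)
Y_t = np.array([0.5, 1.3, 1.0, 0.7])     # projected rows of Y

# Eqn (4): the smallest |Y_t(i)| that still dominates max_j |X_t(j)|.
x_max = np.abs(X_t).max()
feasible = np.where(np.abs(Y_t) >= x_max)[0]
A_t = int(feasible[np.argmin(np.abs(Y_t[feasible]))])

# Eqn (5): the same index recovered from length-square distributions,
# with xi_t = ||X_t||^2 / ||Y_t||^2 rescaling the X-side maximum.
P_X = X_t ** 2 / (X_t @ X_t)
P_Y = Y_t ** 2 / (Y_t @ Y_t)
xi_t = (X_t @ X_t) / (Y_t @ Y_t)
gap = P_Y - xi_t * P_X.max()
cand = np.where(gap >= 0)[0]
A_t_sampling = int(cand[np.argmin(gap[cand])])
```

Because gap(i) equals (Y_t(i)² − max_j X_t(j)²)/‖Y_t‖², the two minimizations agree index by index.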

Main Algorithm
Our algorithm generates two distributions P_{X̂_t} and P_{Ŷ_t} to approximate the target distributions P_{X_t} and P_{Y_t} defined in Eqn. (5) for any t ∈ [p]. The core of our algorithm consists of the following major steps. In the preprocessing step, we reconstruct two matrices, X̃ and Ỹ, to approximate the original matrices X and Y, so that the t-th projected subproblem can be replaced by X̃_t = X̃B_t and Ỹ_t = ỸB_t with little disturbance. The main tool for this approximation is the subsampling method, supported by the length-square sampling operations [34]. This step also appears in [8,16,34].
All subproblems are processed in parallel, following the divide-and-conquer principle. The divide step employs two sampling subroutines that allow us to efficiently generate two distributions P_{X̂_t} and P_{Ŷ_t} to approximate P_{X̃_t} and P_{Ỹ_t} (equivalently, P_{X_t} and P_{Y_t}). We then propose the general heuristic post-selection rule to identify the target index A_t by substituting (P_{Y_t}(i) − ξ_t max_{j∈[n_X]} P_{X_t}(j))_+ in Eqn. (5) with (P_{Ŷ_t}(i) − ξ̂_t max_{j∈[n_X]} P_{X̂_t}(j))_+. Lastly, we employ the selection rule in Eqn. (3) to form the anchor set A for the original minimum conical hull problem. Before elaborating the details of the main algorithm, we first emphasize the innovations of this work, i.e., the reformulation of the general conical hull problem as a sampling problem and the general heuristic post-selection rule. The sampling version of the general conical hull problem is the precondition for introducing the advanced sampling techniques that reduce the computational complexity.
The general heuristic post-selection rule is the central component that guarantees that the solution of the general conical hull problem can be obtained in sublinear runtime. In particular, the intrinsic mechanism of the general conical hull problem enables us to employ the general heuristic post-selection rule to query a specific element from the output in polylogarithmic runtime.
In this section, we introduce the length-square sampling operations in Subsection 3.1. The implementation of our algorithm is shown in Subsection 3.2. The computational analysis is given in Subsection 3.3. We give the proofs of Theorems 5 and 6 in supplementary materials C and E, respectively.

Length-square sampling operations
The promising performance of various sampling algorithms for machine learning tasks is guaranteed when the given dataset supports length-square sampling operations [8,16,34]. We first give the definition of the access cost to facilitate the description of such sampling operations.
Definition 3 (Access cost). Given a matrix W ∈ R^{n×m}, we denote the cost of querying an entry W(i, j) or querying the Frobenius norm ‖W‖_F by Q(W), the cost of querying the ℓ2 norm of W(i, :) by N(W), and the cost of sampling a row index i ∈ [n] of W with probability ‖W(i, :)‖²/‖W‖_F², or sampling an index j ∈ [m] with probability P_{W(i,:)}(j), by S(W). We denote the overall access cost of W as L(W) := S(W) + Q(W) + N(W).
We use an example to illustrate the difference between query access and sampling access. Given a vector v ∈ R^n, the problem is to find a hidden large entry v(i). It takes Ω(n) cost to find v(i) with query access alone, whereas the computational cost is O(1) with both query and sample access.
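The gap can be made concrete with a dense vector whose norm is dominated by one entry (the sizes and the hidden index below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
v = np.ones(n)
hidden = 123456
v[hidden] = 1.0e4                       # one hidden large entry

# Query access only: a full scan over all n entries is required.
found_by_scan = int(np.argmax(np.abs(v)))

# Sample access: the large entry carries ~99% of ||v||^2, so a constant
# number of length-square samples reveals it with high probability.
p = v ** 2 / (v @ v)
draws = rng.choice(n, size=5, p=p)
```

Here P_v(hidden) ≈ 0.99, so each length-square sample returns the hidden index with near certainty, independent of n.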
The length-square sampling operations, also called ℓ2 norm sampling operations, are defined as follows.
Proposition 4 (ℓ2 norm sampling operation). Given an input matrix H ∈ R^{n×m} with s non-zero entries, there exists a data structure storing H in space O(s log² m) that supports the query and sampling operations of Definition 3 in time O(poly(log(mn))). The overall cost of accessing H is therefore L(H) = O(poly(log(mn))).
Remark. The ℓ2 norm sampling operations can be efficiently fulfilled if the input data are stored in a low-overhead data structure, e.g., the binary tree structure (BNS) [21] (more details about BNS are given in supplementary material A).

The implementation of the algorithm
Our algorithm consists of three steps: the preprocessing step, the divide step, and the conquer step. The first step prepares an efficient description of X and Y. The second step locates the anchors {A_t}_{t=1}^p by solving p subproblems in parallel. The last step obtains the anchor set A.
Preprocessing step. The preprocessing step aims to efficiently construct X̃ and Ỹ such that the matrix norms ‖X̃ − X‖_2 and ‖Ỹ − Y‖_2 are small. This step employs the subsampling method [34] to construct an approximated left singular matrix Ṽ_H of H, where H ∈ R^{n_H×m_H} can be either X or Y, so that H̃ = Ṽ_H Ṽ_H^T H. If no confusion arises, the subscript H is dropped.
We summarize the subsampling method in Algorithm 1 to detail the acquisition of Ṽ. We build the matrix R ∈ R^{n×s} by sampling s columns from H and then build the matrix C ∈ R^{s×s} by sampling s rows from R. After obtaining R and C, we implicitly define the approximated left singular matrix Ṽ ∈ R^{n×k} as Ṽ(:, i) = Rω^{(i)}/σ^{(i)} for i ∈ [k], where {σ^{(i)}}_{i=1}^k and {ω^{(i)}}_{i=1}^k refer to the k largest singular values and the corresponding right singular vectors of C.
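A compact sketch of this column-then-row subsampling; the rescaling factors and the rank cutoff below are illustrative choices, not the paper's exact parameters:

```python
import numpy as np

def approx_left_singular(H, s, rng):
    """Sample s columns of H (length-square over column norms, rescaled) to
    form R, then s rows of R to form the small core C, SVD the core, and
    lift its right singular vectors back through R to approximate the left
    singular vectors of H."""
    n, m = H.shape
    col_p = (H ** 2).sum(axis=0)
    col_p = col_p / col_p.sum()
    cols = rng.choice(m, size=s, p=col_p)
    R = H[:, cols] / np.sqrt(s * col_p[cols])          # n x s
    row_p = (R ** 2).sum(axis=1)
    row_p = row_p / row_p.sum()
    rows = rng.choice(n, size=s, p=row_p)
    C = R[rows, :] / np.sqrt(s * row_p[rows])[:, None]  # s x s core
    _, sig, WT = np.linalg.svd(C)
    k = int((sig > 1e-8).sum())                         # numerical rank cutoff
    return R @ WT[:k].T / sig[:k]                       # V~(:, i) = R w^(i) / sigma^(i)
```

For an exactly low-rank H, the columns of the returned matrix lie in the column space of H, since they are linear combinations of (rescaled) columns of H.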
[Algorithm 1: The subsampling method; its result is the singular value decomposition of C.]
Divide step. Here we focus on locating the anchor A_t for the t-th subproblem, since the subproblems are independent and each is solved in the same way. The divide step employs Eqn. (5) to locate A_t. In particular, we first prepare two distributions P_{X̂_t} and P_{Ŷ_t} to approximate P_{X_t} and P_{Y_t}, and then sample these two distributions to locate A_t.
The preparation of the two distributions P_{X̂_t} and P_{Ŷ_t} is achieved by exploiting two sampling subroutines, the inner product subroutine and the thin matrix-vector multiplication subroutine [34]. We detail these two subroutines in supplementary material B. Recall that the two approximated matrices at the t-th subproblem are X̃_t = Ṽ_X(Ṽ_X^T XB_t) and Ỹ_t = Ṽ_Y(Ṽ_Y^T YB_t); denote q_{X,t} = Ṽ_X^T XB_t and q_{Y,t} = Ṽ_Y^T YB_t. Instead of directly computing q_{X,t} and q_{Y,t}, we construct their approximated vectors q̂_{X,t} and q̂_{Y,t} using the inner product subroutine to ensure a low computational cost, followed by the thin matrix-vector multiplication subroutine to prepare the probability distributions P_{X̂_t} and P_{Ŷ_t}, where X̂_t ≡ Ṽ_X q̂_{X,t} and Ŷ_t ≡ Ṽ_Y q̂_{Y,t}. The closeness between P_{X̂_t} (resp. P_{Ŷ_t}) and P_{X_t} (resp. P_{Y_t}) is controlled by the number of samplings s, as analyzed in Section 4.
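The thin matrix-vector multiplication step can be sketched as rejection sampling. The column-weighted proposal below is one common variant of such subroutines (cf. [34]), not necessarily the paper's exact procedure:

```python
import numpy as np

def sample_from_Vq(V, q, rng, max_iters=100_000):
    """Output an index l with probability proportional to (Vq)_l^2, using
    only per-column length-square access to the thin matrix V.  Propose a
    column j with weight (q_j ||V(:,j)||)^2, propose l from P_{V(:,j)},
    and accept with probability (V(l,:) q)^2 / (k * sum_j (q_j V_{lj})^2),
    which is at most 1 by the Cauchy-Schwarz inequality."""
    k = V.shape[1]
    col_w = (q * np.linalg.norm(V, axis=0)) ** 2
    col_w = col_w / col_w.sum()
    for _ in range(max_iters):
        j = rng.choice(k, p=col_w)
        col = V[:, j]
        l = rng.choice(V.shape[0], p=col ** 2 / (col @ col))
        num = float(V[l] @ q) ** 2
        den = k * float(((q * V[l]) ** 2).sum())
        if rng.random() < num / den:
            return int(l)
    raise RuntimeError("rejection sampling did not terminate")
```

The accepted indexes follow P_{Vq} exactly, while each loop touches only one column of V, which is what keeps the cost independent of n up to logarithmic factors.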
The rescale parameter ξ_t defined in Eqn. (5) can be efficiently approximated by employing the inner product subroutine. Recall that ξ_t = ‖X_t‖²/‖Y_t‖². We approximate ξ_t by ξ̂_t = ‖X̂_t‖²/‖Ŷ_t‖²; hence an efficient method of approximating ‖X̂_t‖ and ‖Ŷ_t‖ is sufficient to acquire ξ̂_t. Let H stand for either X or Y. The ℓ2 norm of Ĥ_t can be efficiently estimated by using the inner product subroutine. Intuitively, the squared ℓ2 norm of Ĥ_t can be expressed as the inner product of Ĥ_t with itself, i.e., ‖Ĥ_t‖² = Ĥ_t^T Ĥ_t. Recall that Ĥ_t has the explicit representation Ĥ_t = Ṽ_H q̂_{H,t}, and the efficient access cost for Ṽ_H and q̂_{H,t} enables us to use the inner product subroutine to efficiently obtain ‖Ĥ_t‖.
Given P_{X̂_t}, P_{Ŷ_t}, and ξ̂_t, we propose the general heuristic post-selection method to determine A_t. Following the sampling version of the general minimum conical hull problem defined in Eqn. (5), we first sample the distribution P_{X̂_t} N_X times to obtain the value max_{j∈[n_X]} P_{X̂_t}(j). We next sample the distribution P_{Ŷ_t} N_Y times to find an index Â_t approximating the solution of Eqn. (5). The following theorem quantifies the number of samplings required to guarantee Â_t = A_t, where A_t is defined in Eqn. (5).
Theorem 5 (General heuristic post-selection, informal). Assume that P_{X̂_t} and P_{Ŷ_t} are multinomial distributions. Suppose that the largest probability of P_{X̂_t} exceeds the second largest by a small constant ε, and that |P_{Ŷ_t}(A_t) − P_{Ŷ_t}(j)| > ε for all j ≠ A_t. Then for any δ > 0, with probability at least 1 − δ, we have Â_t = A_t with N_X, N_Y ∼ O(κ_X polylog(max{n_X, n_Y})).
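The sampling requirement in the theorem can be illustrated with a toy multinomial: once the number of samples is large relative to 1/ε² for the probability gap ε, the empirical mode matches the true mode (a sketch, with illustrative names and parameters):

```python
import numpy as np

def empirical_argmax(p, n_samples, rng):
    """Identify the most probable index of a multinomial from samples alone.
    Reliable once n_samples >> 1/gap^2, where gap is the separation between
    the largest and second-largest probabilities (the epsilon of Theorem 5)."""
    counts = np.bincount(rng.choice(len(p), size=n_samples, p=p),
                         minlength=len(p))
    return int(np.argmax(counts))
```

With p = (0.5, 0.3, 0.2) and 10^4 samples, the gap is 0.2 and the empirical mode is the true mode with overwhelming probability, mirroring the role of N_X and N_Y above.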
Conquer step. After the divide step, a set of potential anchors {A_t}_{t=1}^p with p = O(k log(k)) is collected, where k refers to the number of anchors and k ∼ min{log(n_X), log(n_Y)} [13,39,41]. We adopt the selection rule defined in Eqn. (3) to determine the anchor set A. This step can be accomplished by various sorting algorithms with runtime O(poly(k log(k))) [41].
We summarize our algorithm as follows.

Computation complexity of the algorithm
The complexity of the proposed algorithm is dominated by the preprocessing step and the divide step. Specifically, the computational complexity is concentrated in four operations: finding the left singular matrix Ṽ (Line 5 in Algorithm 2), estimating q_t by q̂_t (Line 7 in Algorithm 2), preparing the approximated probability distribution P_Ĥ (Line 8 in Algorithm 2), and estimating the rescale factor ξ̂_t (Line 10 in Algorithm 2). We evaluate the computational complexity of these four operations separately and obtain the following theorem.
Theorem 6. Given two datasets X ∈ R^{n_X×m} and Y ∈ R^{n_Y×m} that satisfy the general separability condition and support the length-square sampling operations, denote the rank and condition number of X (resp. Y) by k_X (resp. k_Y) and κ_X (resp. κ_Y), and the tolerable error by ε. With k = max{k_X, k_Y} and κ = max{κ_X, κ_Y}, the runtime complexity of Algorithm 2 is Õ(k^6 κ^12 ‖H‖_F^6 / ε^6), which is polylogarithmic in the input size.
[Algorithm 2: A sublinear runtime algorithm for the general minimum conical hull problem.]

Correctness of The Algorithm
In this section, we present the correctness of our algorithm. We also briefly explain how the results are derived and validate our algorithm by numerical simulations. We provide the detailed proof of Theorem 7 in supplementary material D. The correctness of our algorithm is determined by the closeness between the approximated distribution and the analytic distribution. The closeness is evaluated by the total variation distance [34]: given two vectors v ∈ R^n and w ∈ R^n, the total variation distance between the distributions P_v and P_w is ‖P_v, P_w‖_TV := (1/2) Σ_{i=1}^n |P_v(i) − P_w(i)|. The following theorem states that ‖P_{X̂_t} − P_{X_t}‖_TV and ‖P_{Ŷ_t} − P_{Y_t}‖_TV are controlled by the number of samplings s.
Theorem 7. Given a matrix H ∈ R^{n×m} with ℓ2 norm sampling operations, let R ∈ R^{n×s} and C ∈ R^{s×s} be the matrices sampled from H following Algorithm 1, and let P_{Ĥ_t} be the distribution prepared by the proposed algorithm. For a sufficiently large number of samplings s, with probability at least 1 − η, we always have ‖P_{Ĥ_t} − P_{H_t}‖_TV ≤ ε.
We apply the proposed algorithm to near-separable nonnegative matrix factorization (SNMF) to validate the correctness of Theorem 7 [23,40]. SNMF, which has been extensively applied to hyperspectral imaging and text mining, can be treated as a special case of the general minimum conical hull problem [41]. Given a non-negative matrix X, SNMF aims to find a decomposition X = FX(R, :) + N, where the basis matrix X(R, :) is composed of r rows of X, F is the non-negative encoding matrix, and N is a noise matrix [12].
The synthetic data matrix used in the experiments is generated in the form Y ≡ X = FX_A + N_G, where F = [I_k; U], the entries of the noise N_G are sampled from the Gaussian distribution N(0, µ) with noise level µ, and the entries of X_A and U are first generated i.i.d. uniformly in [0, 1] and their rows are then normalized to have unit ℓ1 norm. Analogous to DCA, we use the recovery rate as the metric to evaluate the precision of anchor recovery. The anchor index recovery rate ρ is defined as ρ = |A ∩ Â|/|A|, where Â refers to the anchor set obtained by our algorithm or DCA.
We set the dimensions of X to 500 × 500 and set k to 10. We generate four datasets with different noise levels, µ = 0, 0.1, 0.5, 2. The number of subproblems is set to p = 100. We use nine different settings for the number of samplings s in our algorithm, ranging from 500 to 8000. We compare our algorithm with DCA in determining the anchor set A. The simulation results are shown in Figure 2. For the case µ = 0, both our algorithm and DCA accurately locate all anchors. With increased noise, the recovery rate continuously decreases for both our algorithm and DCA, since the separability assumption is no longer preserved. As shown in Figure 2(b), the reconstruction error, which is evaluated by the Frobenius norm ‖X − X̃‖_F, continuously decreases with increased s in the noiseless case. In addition, the variance of the reconstruction error, illustrated by the shaded region, continuously shrinks with increased s. For the high noise level case, the collapse of the separability assumption means that the reconstruction error is unrelated to s.
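The synthetic construction and the recovery-rate metric can be sketched as follows; the dimensions here are smaller than the 500 × 500 experiments, and A_hat stands in for the output of either solver:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k, mu = 200, 100, 10, 0.0           # mu is the noise level
X_A = rng.uniform(size=(k, m))
X_A /= X_A.sum(axis=1, keepdims=True)     # anchors: unit l1-normalized rows
U = rng.uniform(size=(n - k, k))
U /= U.sum(axis=1, keepdims=True)
F = np.vstack([np.eye(k), U])             # F = [I_k; U]
X = F @ X_A + mu * rng.normal(size=(n, m))

A_true = set(range(k))                    # anchors occupy the first k rows
A_hat = set(range(k))                     # placeholder for a solver's output
rho = len(A_true & A_hat) / len(A_true)   # anchor index recovery rate
```

In the noiseless case every row of X is a convex combination of the anchor rows, which is why both algorithms recover all anchors when µ = 0.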

Conclusion
In this paper, we have proposed a sublinear runtime classical algorithm to solve the general minimum conical hull problem. We first reformulated the general minimum conical hull problem as a sampling problem. We then exploited two sampling subroutines and proposed the general heuristic post-selection method to achieve a low computational cost. We theoretically analyzed the correctness and the computational cost of our algorithm. The proposed algorithm benefits numerous learning tasks that can be mapped to the general minimum conical hull problem, especially tasks that need to process datasets at a huge scale. There are two promising future directions. First, we will explore other advanced sampling techniques to further improve the polynomial dependence in the computational complexity. Second, we will investigate whether there exist other fundamental learning models that can be reduced to the general minimum conical hull problem. One of the strongest candidates is the semidefinite programming solver.

Note Added
Recently we became aware of an independent related work [6], which employs the divide-and-conquer anchoring strategy to solve separable nonnegative matrix factorization problems. Since a major application of the general conical hull problem is matrix factorization, their result can be treated as a special case of our study. Moreover, our study adopts more advanced techniques and provides better upper complexity bounds than theirs.
We organize the supplementary material as follows. In Section A, we detail the binary tree structure that supports the length-square sampling operations. We detail the inner product subroutine and the thin matrix-vector multiplication subroutine in Section B. We then provide the proof of Theorem 5 in Section C. Because the proof of Theorem 6 employs the results of Theorem 7, we give the proof of Theorem 7 in Section D and leave the proof of Theorem 6 to Section E.
A The Binary Tree Structure for Length-square Sampling Operations
As mentioned in the main text, a feasible solution for fulfilling the ℓ2 norm sampling operations is the binary tree structure (BNS) for storing data [21].
Here we give the intuition of how a BNS is constructed for a vector. For ease of notation, we assume the given vector has size 4, v ∈ R^4. As demonstrated in Figure 3, the root node records the squared ℓ2 norm of v. The i-th leaf node records the square of the i-th entry of v together with its sign, i.e., {|v(i)|², sign(v(i))}. Each internal node contains the sum of the values of its two immediate children. Such an architecture enables the ℓ2 norm sampling operations.
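A minimal sketch of such a tree, assuming power-of-two padding; the class name and method names are illustrative:

```python
import numpy as np

class BNS:
    """Binary tree over the squared entries of v: the root stores ||v||^2,
    internal nodes store the sum of their children, and the i-th leaf stores
    |v(i)|^2 together with sign(v(i)).  Sampling costs O(log n)."""
    def __init__(self, v):
        v = np.asarray(v, dtype=float)
        n = 1
        while n < len(v):
            n *= 2                          # pad the leaf level to a power of two
        self.n = n
        self.sign = np.sign(v)
        self.tree = np.zeros(2 * n)
        self.tree[n:n + len(v)] = v ** 2
        for i in range(n - 1, 0, -1):
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def norm_sq(self):
        return self.tree[1]                 # the root: squared l2 norm

    def sample(self, rng):
        # Walk down from the root, branching left or right with probability
        # proportional to the corresponding subtree sums.
        i = 1
        while i < self.n:
            i = 2 * i if rng.random() * self.tree[i] < self.tree[2 * i] else 2 * i + 1
        return i - self.n
```

Updating one entry touches only the O(log n) nodes on its root-to-leaf path, which is what keeps the data structure low-overhead.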

B Two Sampling Subroutines
Here we introduce two sampling subroutines, the inner product subroutine and the thin matrix-vector multiplication subroutine [34], used in the proposed algorithm.

B.1 Inner product subroutine
In our algorithm, the inner product subroutine is employed to obtain each entry of q̂_t in parallel; i.e., q̂_t(i) estimates q_t(i) ≡ ṽ(i)^T H_t, where ṽ(i) ∈ R^{n×1}, H_t ≡ HB_t, H ∈ R^{n×m}, and B_t ∈ R^{m×1}. Let Z be the random variable that, for an index j ∈ [n] sampled with probability P_{H_t}(j), takes the value Z = ṽ(i)(j)‖H_t‖²/H_t(j), so that E[Z] = ṽ(i)^T H_t. We can estimate q_t(i) using Z as follows [8]. Fix ε, δ > 0. Let N_Z = N_p N_q. We first sample the distribution of Z N_Z times to obtain a set of samples {z_i}_{i=1}^{N_Z}, and then divide them into N_p groups {S_1, ..., S_{N_p}}, where each group S_i contains N_q samples. Let z̄_i = (1/N_q) Σ_{z∈S_i} z be the empirical mean of the i-th group S_i, and let z* be the median of {z̄_i}_{i=1}^{N_p}. Then [16, Lemma 12] and [8, Lemma 7] guarantee that, with probability at least 1 − δ,
|z* − q_t(i)| ≤ ε. (9)
The computational complexity of the inner product subroutine is as follows.
Lemma 8 ([16, Lemma 12] and [8, Lemma 7]). Assume that the overall access cost of H is L(H) and the query costs of B_t and Ṽ are Q(B_t) and Q(Ṽ), respectively. The runtime complexity to yield Eqn. (9) with precision ε and success probability at least 1 − δ is O((L(H) + Q(B_t) + Q(Ṽ)) ‖H‖_F² ‖ṽ(i)‖² ‖B_t‖_F² log(1/δ)/ε²).
Proof. We first recall the main result of Lemma 12 in [16]. Given the overall access to H and query access to the matrices Ṽ and B_t with complexities Q(Ṽ) and Q(B_t), the inner product q_t(i) ≡ ṽ(i)^T H_t can be estimated to precision ε‖H‖_F‖ṽ(i)‖‖B_t‖_F with probability at least 1 − δ in time O((L(H) + Q(B_t) + Q(Ṽ)) log(1/δ)/ε²). Setting the precision to ε instead of ε‖H‖_F‖ṽ(i)‖‖B_t‖_F, it can easily be inferred that the runtime complexity to estimate q_t(i) ≡ ṽ(i)^T H_t to precision ε with probability at least 1 − δ is as stated in the lemma.
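The median-of-means scheme above can be sketched directly, assuming length-square sample access to v itself (the paper instead samples through H and B_t); the estimator z = w(j)‖v‖²/v(j) for j ~ P_v is unbiased for ⟨v, w⟩:

```python
import numpy as np

def inner_product_estimate(v, w, n_groups, group_size, rng):
    """Median-of-means estimate of <v, w> from length-square samples of v.
    For j ~ P_v, z = w(j) * ||v||^2 / v(j) satisfies E[z] = <v, w>; the
    median over group means suppresses the failure probability
    exponentially in n_groups."""
    p = v ** 2 / (v @ v)                  # zero entries are never sampled
    group_means = []
    for _ in range(n_groups):
        idx = rng.choice(len(v), size=group_size, p=p)
        group_means.append(np.mean(w[idx] * (v @ v) / v[idx]))
    return float(np.median(group_means))
```

The group size controls the precision ε (variance of a group mean) while the number of groups controls the confidence δ, matching the N_q and N_p roles above.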

B.2 Thin matrix-vector multiplication subroutine
Given a matrix Ṽ ∈ R^{n×k} and a vector q̂_t ∈ R^k with ℓ2 norm sampling access, the thin matrix-vector multiplication subroutine aims to output a sample from P_{Ṽq̂_t}. The implementation of this subroutine is as follows [16].
For each loop:
1. Sample a column index j ∈ [k] uniformly.
2. Sample a row index l ∈ [n] from P_{Ṽ(:,j)}.
3. Output l with probability (Ṽ(l, :)q̂_t)² / (‖q̂_t‖² Σ_{j'=1}^k Ṽ(l, j')²), which is at most 1 by the Cauchy-Schwarz inequality (recall that the columns of Ṽ are approximately normalized); otherwise continue.
We execute the above loop until a sample l is successfully output. The complexity of the thin matrix-vector multiplication subroutine, namely the expected number of loops, is as follows.
Lemma 9 ([16, Lemma 12] and [34, Proposition 6.4]). Let Ṽ ∈ R^{n×k} and q̂_t ∈ R^k. Given ℓ2 norm sampling access to Ṽ, we can length-square sample from P_{Ṽq̂_t} in expected runtime O(k C(Ṽ, q̂_t) L(Ṽ)), where C(Ṽ, q̂_t) = Σ_{j=1}^k ‖q̂_t(j)Ṽ(:, j)‖² / ‖Ṽq̂_t‖².

C General Heuristic Post-selection (Proof of Theorem 5)
Recall that the anchor A_t is defined as in Eqn. (5). The procedure of the general heuristic post-selection is as follows.
1. Sample P_Xt N_X times and order the sampled items to obtain X = {I_{X,1}, I_{X,2}, ..., I_{X,w_X}} with w_X ≤ n_X;
An immediate observation is that, as N_X, N_Y → ∞, we have Â_t = A_t. The general heuristic post-selection is guaranteed by the following theorem (the formal description of Theorem 5).
Theorem 10 ((Formal) General heuristic post-selection). Assume that P_Xt and P_Ŷt are multinomial distributions, and that P_Xt(J_{X,1}) ≥ ε_T and P_Xt(J_{X,1}) − P_Xt(J_{X,2}) > ε.

Remark. The physical meaning of ε is a threshold for 'near-anchors': when the distance between a data point and the anchor point after projection is within the threshold ε, the data point can be treated as an anchor. The real anchor set A should therefore be expanded to include these near-anchors. In other words, ε and ε_T are manually controllable. The parameter ξ̃_t is bounded as follows.
In this work, we set the parameters accordingly. An equivalent statement of Theorem 10 is:

Problem 11. How many samples N_X and N_Y are required to guarantee I_{X,1} = J_{X,1} and I_{Y,w_Y} = A_t?

We use the Bretagnolle-Huber-Carol inequality [36] to prove Theorem 10.

Lemma 12 (Bretagnolle-Huber-Carol inequality [36]). Let P_D be a multinomial distribution with l event probabilities {P_D(i)}_{i=1}^l. Randomly sample N events from P_D and let N_i be the number of occurrences of event i. Then, for any δ > 0, the following holds with probability at least 1 − δ: Σ_{i=1}^l |N_i/N − P_D(i)| ≤ sqrt(2 ln(2^l/δ)/N).

Proof of Theorem 10. The proof is composed of two parts. The first part proves that the index I_{X,1} with I_{X,1} = J_{X,1} can be determined with sampling complexity O(1/(ε² ε_T²)). The second part proves that the index I_{Y,w_Y} with I_{Y,w_Y} = A_t can be determined with sampling complexity O(1/(ε² ε_T²)). For the first part, we split the set X into two subsets X_1 = {I_{X,1}} and X_2 = {I_{X,2}, ..., I_{X,w_X}}. This decomposition of X into two subsets is equivalent to setting l = 2 in Eqn. (10).
The assumption P_Xt(J_{X,1}) ≥ ε_T guarantees that J_{X,1} ∈ X after sampling P_Xt O(1/ε_T) times. In addition, since we have assumed P_Xt(J_{X,1}) − P_Xt(J_{X,2}) > ε, it can be easily inferred that, when N_X ≥ (2/ε²) log(4/δ), with probability at least 1 − δ there is one and only one value in the ε-neighborhood of P_Xt(I_{X,1}). We therefore conclude that I_{X,1} = J_{X,1} is guaranteed by sampling P_Xt N_X ≥ max{(2/ε²) log(4/δ), O(1/ε_T)} times. For the second part, we split the set Y into three subsets Y_1, Y_2, and Y_3 = {..., I_{Y,w_Y}}. Analogous to the above case, the decomposition of Y into three subsets for the case of sampling P_Ŷt is equivalent to setting l = 3 in Eqn. (10). The resulting inequality implies that the choice δ = 4 exp(−N_Y λ²/2) suffices.
Since we have assumed P_Xt(J_{X,1}) ≥ ε_T, with probability at least 1 − δ there is one and only one value in the corresponding neighborhood. Combining the results of the two parts, it can be easily inferred that, with the sampling complexity Õ(max{1/ε, 1/(ξ̃_t ε_T)}), we have A_t = Â_t with probability 1 − δ.
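For intuition, the post-selection procedure can be simulated end to end when the distributions are explicit. This toy sketch is ours: in the algorithm, P_Xt and P_Ŷt are accessible only through sampling and single-entry queries, whereas here they are passed in as arrays.

```python
import numpy as np
from collections import Counter

def heuristic_post_selection(p_x, p_y, xi, n_x, n_y, rng=None):
    """Toy simulation of the general heuristic post-selection.

    p_x and p_y play the roles of the multinomial distributions P_Xt and
    P_Yt; xi is the rescaling factor."""
    rng = np.random.default_rng() if rng is None else rng
    # Step 1: sample P_Xt N_X times and take the most frequent index I_{X,1}.
    x_counts = Counter(rng.choice(len(p_x), size=n_x, p=p_x))
    i_x1 = max(x_counts, key=x_counts.get)
    # Step 2: query P_Xt at I_{X,1} to obtain C*_X.
    c_star = p_x[i_x1]
    # Steps 3-4: sample P_Yt N_Y times and return the index whose empirical
    # frequency is closest to the rescaled mass xi * C*_X.
    y_counts = Counter(rng.choice(len(p_y), size=n_y, p=p_y))
    return min(y_counts, key=lambda z: abs(y_counts[z] / n_y - xi * c_star))
```

When the gap conditions of Theorem 10 hold, a few thousand samples already single out the anchor index with high probability.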

D Proof of Theorem 7
In this section, we give the proof of Theorem 7. We leave the detailed proofs of Theorems 14 and 15 to Subsections D.1 and D.2, respectively.
Proof of Theorem 7. Recall that Ĥ_t = Ṽ q̃_{H,t}, where q̃_{H,t} is an approximation of q_{H,t} = Ṽᵀ H_t. The triangle inequality yields ‖P_Ĥt − P_Ht‖_TV ≤ ‖P_Ĥt − P_H̃t‖_TV + ‖P_H̃t − P_Ht‖_TV, where ‖·‖_TV denotes the total variation distance. In the following, we bound the two terms on the right-hand side of Eqn. (16) respectively. Correctness of ‖P_H̃t − P_Ht‖_TV. The goal here is to prove Eqn. (17), i.e., ‖P_H̃t − P_Ht‖_TV ≤ ε/2. By Lemma 13 below, Eqn. (17) follows if the inequality ‖H̃_t − H_t‖ ≤ (ε/4)‖H̃_t‖ in Eqn. (18) holds. Finally, the inequality in Eqn. (18) is guaranteed to hold because of Theorem 14.
Lemma 13 (Lemma 6.1, [34]). For x, y ∈ R^n satisfying ‖x − y‖ ≤ ε, the corresponding distributions P_x and P_y satisfy ‖P_x − P_y‖_TV ≤ 2ε/‖x‖.

Theorem 14. Let the rank and the condition number of H ∈ R^{n×m} be k and κ, respectively. Fix ε, η > 0. Then, Algorithm 2 yields ‖H̃_t − H_t‖ ≤ (ε/4)‖H̃_t‖ with probability at least 1 − η.
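Lemma 13 is easy to verify numerically. The helper below, a self-contained check of ours rather than part of the paper's algorithm, computes the total variation distance between the length-square distributions of two vectors.

```python
import numpy as np

def tv_distance(x, y):
    """Total variation distance between the length-squared distributions
    P_x(i) = x_i^2 / ||x||^2 and P_y(i) = y_i^2 / ||y||^2."""
    px = x ** 2 / np.dot(x, x)
    py = y ** 2 / np.dot(y, y)
    return 0.5 * np.sum(np.abs(px - py))

# Perturb x slightly; Lemma 13 promises TV(P_x, P_y) <= 2 ||x - y|| / ||x||.
rng = np.random.default_rng(1)
x = rng.standard_normal(100)
y = x + 0.01 * rng.standard_normal(100)
```

This is exactly the step used in the proof: closeness of the vectors H̃_t and H_t in ℓ2 norm transfers to closeness of the distributions they induce.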
Correctness of ‖P_Ĥt − P_H̃t‖_TV. Analogous to the above part, we bound ‖Ĥ_t − H̃_t‖ to yield Eqn. (19), i.e., ‖P_Ĥt − P_H̃t‖_TV ≤ ε/2. Eqn. (19) can be obtained by the following theorem.
Theorem 15. Let the rank of H ∈ R^{n×m} be k. Set the number of samplings in the inner product subroutine accordingly. Then, Algorithm 2 yields ‖Ĥ_t − H̃_t‖ ≤ (ε/4)‖H̃_t‖ with probability at least 1 − δ.

D.1 Proof of Theorem 14
Due to H_t = V Vᵀ H B_t, we have ‖H̃_t − H_t‖ ≤ ‖Σ_{i=1}^k ṽ(i)ṽ(i)ᵀ − Π(H)‖_2 ‖H B_t‖ ≤ ‖Σ_{i=1}^k ṽ(i)ṽ(i)ᵀ − Π(H)‖_2 ‖H‖_F, where Π(H) = Σ_i v(i)v(i)ᵀ and the v(i) are the left singular vectors of H. The first inequality of Eqn. (21) is obtained by exploiting the submultiplicative property of the spectral norm [18], i.e., for any matrix M ∈ R^{n×m} and any vector z ∈ R^m, we have ‖Mz‖ ≤ ‖M‖_2‖z‖. The second inequality of Eqn. (21) comes from the submultiplicative property of the spectral norm and ‖B_t‖ = 1. To achieve ‖H̃_t − H_t‖ ≤ (ε/4)‖H_t‖ in Eqn. (18), Eqn. (21) indicates that the approximated left singular matrix Ṽ should satisfy Eqn. (22). The spectral norm ‖Σ_{i=1}^k ṽ(i)ṽ(i)ᵀ − Π(H)‖_2 can be quantified as follows.

Theorem 16. Suppose that the rank of H is k and that ṽ(i) refers to an approximated singular vector of H such that ṽ(i)ᵀṽ(j) = δ_ij ± α. Then, the bound in Eqn. (23) holds.

The proof of Theorem 16 is given in Subsection D.1.1. Theorem 16 implies that, to achieve Eqn. (22) (or equivalently, Eqn. (18)), we should bound α as in Eqn. (25). We use the following lemma to give an explicit representation of α in terms of the sampled matrices R and C.

Lemma 17. Suppose that ω(l) refers to the right singular vectors of C such that Π(C) = Σ_l ω(l)ω(l)ᵀ and ω(i)ᵀCᵀCω(j) = δ_ij(σ(i))², where (σ(i))² ≥ 4/(5κ²). Suppose that the ranks of both R and C are k. Let ṽ(l) := Rω(l)/σ(l); then the representation in Eqn. (26) holds.

The proof of Lemma 17 is presented in Subsection D.1.2.
In conjunction with Eqn. (23) and Eqn. (26), we set α = (5γκ²)/4 and rewrite Eqn. (25) accordingly. In other words, when γ ≤ 3ε/(85kκ²), Eqn. (18) is achieved so that ‖H̃_t − H_t‖ ≤ (ε/4)‖H_t‖. Recall that γ is quantified by ‖RᵀR − CᵀC‖_2 as defined in Eqn. (26); we use the following theorem to bound γ.

Theorem 18. Given a nonnegative matrix H ∈ R^{n×m}, let R ∈ R^{n×s} and C ∈ R^{s×s} be the sampled matrices obtained by Algorithm 1, with s set as in the theorem statement. Then, with probability at least 1 − η, the stated bound on ‖RᵀR − CᵀC‖_2 holds.

The proof of Theorem 18 is given in Subsection D.1.3. Combining the results of Theorem 16 and Lemma 17, we know that, by sampling s rows of H, the approximated distribution is ε/2-close to the desired result, i.e., ‖P_H̃t − P_Ht‖_TV ≤ ε/2.

D.1.1 Proof of Theorem 16
We first introduce a lemma to facilitate the proof of Theorem 16.

Lemma 19 (Adapted from Lemma 5, [16]). Let A be a matrix of rank at most k, and suppose that W has k columns that span the row and column spaces of A.

D.1.3 Proof of Theorem 18
We introduce the following lemma to facilitate the proof.
Lemma 20 (Adapted from Theorem 4.4, [19]). Given any matrix R ∈ R^{n×s}, let C ∈ R^{s×s} be obtained by length-squared sampling of the rows of R. Then, for s ≥ 4 ln(2n/η)/ε², the stated concentration bound holds with probability at least 1 − η.

Proof of Theorem 18. Lemma 20 indicates that the sample complexity s determines the deviation ‖RᵀR − CᵀC‖_2. Let the right-hand side of the above inequality be η; the claim then follows, where the inequality comes from ‖R‖_2 ≤ ‖R‖_F and ‖R‖_F = ‖H‖_F. Therefore, the theorem is obtained by setting s accordingly.
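The length-squared row sampling that Lemma 20 analyzes can be checked empirically. The sketch below is our own illustration with explicit matrices: it draws s rescaled rows so that E[CᵀC] = RᵀR, and the deviation shrinks roughly as 1/√s.

```python
import numpy as np

def row_sketch(R, s, rng=None):
    """Length-squared row sampling: pick s rows of R with probability
    ||R(j,:)||^2 / ||R||_F^2 and rescale so that E[C^T C] = R^T R."""
    rng = np.random.default_rng() if rng is None else rng
    p = np.sum(R ** 2, axis=1) / np.sum(R ** 2)
    rows = rng.choice(R.shape[0], size=s, p=p)
    # Rescale each sampled row by 1/sqrt(s * p_j) to make the estimator unbiased.
    return R[rows, :] / np.sqrt(s * p[rows])[:, None]
```

For a 200 × 10 Gaussian R and s = 5000, the relative spectral deviation ‖RᵀR − CᵀC‖_2/‖R‖_F² is typically a few percent, in line with the 1/√s concentration rate.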

D.2 Proof of Theorem 15
Proof of Theorem 15. We first give the upper bound of the term ‖Ĥ_t − H̃_t‖. The first inequality comes from the submultiplicative property of the spectral norm.
An observation of the above equation is that the bound in Eqn. (39) suffices. We use the result of the inner product subroutine to quantify the required number of samplings to achieve Eqn. (39); the conclusion of Lemma 8 then yields the required number of samplings.

E The Complexity of the Algorithm (Proof of Theorem 6)

Proof of Theorem 6. As analyzed in the main text, the complexity of the proposed algorithm is dominated by four operations in the preprocessing step and the divide step: finding the left singular vectors Ṽ, estimating the inner product to build q̃_t, preparing the approximated probability distribution P_Ĥ, and estimating the rescaling factor ξ̃_t. We evaluate the computational complexity of these four operations separately and then give the overall computational complexity of our algorithm.
Due to the same reconstruction rule, we use a general setting H ∈ R^{n×m}, where H can be either X or Y, to evaluate the computational complexity of the four parts.
Complexity of Finding Ṽ. Supported by the ℓ2-norm sampling operations, the matrix C can be efficiently constructed following Algorithm 1, where O(2s log₂(mn)) query complexity is sufficient. Applying SVD to C ∈ R^{s×s} incurs O(s³) runtime complexity. Once we obtain the SVD of C, the approximated left singular vectors Ṽ can be implicitly represented, as guaranteed by the following lemma.

Lemma 21 (Adapted from [34]). Let the given dataset support the ℓ2-norm sampling operations along with the description of Ṽ ∈ R^{n×s}. We can sample from any ṽ(t) in O(Ks²) expected queries with K = κ‖H‖_F², and query any particular entry Ṽ(i, j) in O(s) queries.

Complexity of Estimating q_t by q̃_t. The runtime complexity to estimate q_t by q̃_t obeys the following corollary.

Corollary 22. Let B_t ∈ R^{m×1} be the input vector, H ∈ R^{n×m} be the input matrix with rank k, and Ṽ ∈ R^{n×k} be the approximated left singular matrix. We can estimate q_t = Ṽᵀ H B_t by q̃_t to the stated precision with probability at least 1 − δ using O(4k(1 + ε)(1/ε²) log(1/δ)) queries.

Following the result of Lemma 8, the runtime complexity to obtain q̃_t(i) is given in Eqn. (41). Since ‖ṽ(i)‖ ≤ √(1 + ε) for any ṽ(i), as indicated by Eqn. (37), we rewrite Eqn. (41) as O(‖H‖_F(1 + ε)(1/ε²) log(1/δ)).
Since each entry can be computed in parallel, the runtime complexity to obtain the whole vector q̃_t is of the same order.
Complexity of Sampling from P_Ĥt. Recall the definition Ĥ_t = Ṽ q̃_t. We first evaluate the computational complexity of obtaining one sample from P_Ĥt. From Lemma 9, we know the expected runtime complexity to sample from P_Ĥt is O(k (‖q̃_t‖²/‖Ṽ q̃_t‖²)(S(Ṽ) + kQ(Ṽ))). Specifically, we bound ‖q̃_t‖, where the first inequality employs the triangle inequality, the second inequality employs the submultiplicative property of the spectral norm, and the third inequality utilizes ‖H_t‖ ≤ ‖H‖_2‖B_t‖ ≤ ‖H‖_F‖B_t‖ with ‖B_t‖ = 1. Concurrently, we have ‖Ṽ q̃_t‖ = Ω(1). Employing Lemma 21 to quantify S(Ṽ) and Q(Ṽ), the complexity of obtaining a sample from P_Ĥt follows. Substituting s with its explicit representation in Theorem 7 gives the final complexity.

Figure 2 :
Figure 2: Finding the anchor set A of a 500 × 500 matrix of rank 10 under five noise levels and six different settings of the number of samples. In Figure (a), the label 'Ours x' refers to the noise level µ = x, e.g., 'Ours 2' represents µ = 2. Similarly, the label 'Noise x' in Figure (b) refers to the noise level µ = x. The shaded region refers to the variance of the error, e.g., the green region refers to the variance of the reconstruction error with µ = 0.5.

2. Query the distribution P_Xt to obtain the value C*_Xt with C*_Xt = P_Xt(x = I_{X,1});
3. Sample P_Ŷt N_Y times and order the sampled items to obtain Y = {I_{Y,1}, I_{Y,2}, ..., I_{Y,w_Y}} with w_Y ≤ n_Y;
4. Locate Â_t with Â_t = I_{Y,v*} and I_{Y,v*} = arg min_{z∈Y} |N_{Y,z}/N_Y − ξ̃_t C*_Xt|.
A_t is the submatrix of Y whose rows are indexed by A_t ⊂ [n_Y].

Divide step. The obtained approximated left singular matrices Ṽ_X ∈ R^{n_X×k_X} and Ṽ_Y ∈ R^{n_Y×k_Y} enable us to employ advanced sampling techniques to locate the potential anchors {A_t}_{t=1}^p.

Conquer step. This step yields the anchor set A by employing the following selection rule to manipulate the collected {A_t}_{t=1}^p. First, we compute, ∀i ∈ [n_Y], the selection score of Eqn. (3).
Algorithm 1:
1. Independently sample s column indices [i_s] according to the probability distribution P_H;
2. Set R ∈ R^{n×s} as the matrix formed by H(:, i_t)/√(s P_{H(:,i_t)}) with i_t ∈ [i_s];
3. Sample a column index t ∈ [s] uniformly and then sample a row index j ∈ [n] distributed as P_{R(j,t)}; sample a total number of s row indices [j_s] in this way.

Input: the datasets X and Y, and {B_t}_{t=1}^p with B_t ∈ R^{m×1}; X, Y, and B_t all support ℓ2-norm sampling operations; the number of anchors k.
Result: Output the set of anchors A with |A| = k.
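Under the assumption of explicit data access, the three sampling steps of Algorithm 1 can be sketched directly in NumPy (a toy illustration of ours; the real algorithm performs these steps through the ℓ2-norm sampling data structure). Note that every column of R has the same squared norm ‖H‖_F²/s, so step 3's uniform-column-then-row sampling coincides with length-squared row sampling of R.

```python
import numpy as np

def algorithm1_sketch(H, s, rng=None):
    """Build the sketches R (n x s) and C (s x s) of H by length-squared
    sampling, following the three steps of Algorithm 1."""
    rng = np.random.default_rng() if rng is None else rng
    n, m = H.shape
    fro2 = np.sum(H ** 2)
    # Step 1: sample s column indices with probability ||H(:,i)||^2 / ||H||_F^2.
    p_col = np.sum(H ** 2, axis=0) / fro2
    cols = rng.choice(m, size=s, p=p_col)
    # Step 2: rescale the sampled columns to form R.
    R = H[:, cols] / np.sqrt(s * p_col[cols])
    # Step 3: every column of R has squared norm ||H||_F^2 / s, so uniform
    # column choice + in-column sampling equals length-squared row sampling.
    p_row = np.sum(R ** 2, axis=1) / np.sum(R ** 2)
    rows = rng.choice(n, size=s, p=p_row)
    C = R[rows, :] / np.sqrt(s * p_row[rows])[:, None]
    return R, C
```

A useful sanity check is that the rescaling preserves the Frobenius norm exactly: ‖R‖_F = ‖C‖_F = ‖H‖_F.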
1 (Preprocessing step) Construct the sketches of X and Y via Algorithm 1;
5 Implicitly define Ṽ_X and Ṽ_Y by employing Eqn. (6);
6 (Divide step) Set t = 0;
7 while t < p (computing in parallel) do
10   Collect Â_t by sampling P_Xt and P_Ŷt via the general heuristic post-selection method;
11   t ← t + 1;
12 end
13 (Conquer step) Output the anchor set A using the selection rule defined in Eqn. (3);

The runtime complexity of the proposed algorithm for solving the general minimum conical hull problem is given in Theorem 6.

The goal of the general heuristic post-selection is to approximate A_t by sampling the distributions P_Xt and P_Ŷt O(polylog(max{n_X, n_Y})) times, since the acquisition of the explicit forms of P_Xt and P_Ŷt requires O(poly(n_X, n_Y)) computational complexity and collapses the desired speedup. Let {x_i}_{i=1}^{N_X} be N_X examples independently sampled from P_Xt with P_Xt(x = z) = |X̃_t(z)|²/‖X̃_t‖², and let N_{X,z} be the number of examples taking the value z ∈ [n_X], with Σ_{z=1}^{n_X} N_{X,z} = N_X. Similarly, let {y_i}_{i=1}^{N_Y} be N_Y examples independently sampled from P_Ŷt with P_Ŷt(y = z) = |Ŷ_t(z)|²/‖Ŷ_t‖², and let N_{Y,z} be the number of examples taking the value z ∈ [n_Y], with Σ_{z=1}^{n_Y} N_{Y,z} = N_Y. Denote by w_X the total number of distinct indices after sampling P_Xt N_X times, and by I_{X,v} and J_{X,v} the indices corresponding to the v-th largest value among {N_{X,z}}_{z=1}^{w_X} and P_Xt, respectively, with w_X ≤ n_X and v ∈ [w_X]. Similarly, denote by w_Y the total number of distinct indices after sampling P_Ŷt N_Y times, and by I_{Y,v} and J_{Y,v} the indices corresponding to the v-th largest value among {N_{Y,z}}_{z=1}^{w_Y} and P_Ŷt, respectively, with w_Y ≤ n_Y and v ∈ [w_Y].