Spectral theory for the stability of dynamical systems on large oriented locally tree-like graphs

We develop a mathematical theory for the linear stability of stationary states in large dynamical systems modelled by a set of randomly coupled differential equations on a locally tree-like network. Our approach provides analytical expressions for the leading eigenvalue of random matrices that describe the interactions between the degrees of freedom; the sign of the leading eigenvalue characterizes the system stability. We illustrate this approach on oriented random graphs with a prescribed degree distribution and find that the leading eigenvalue is universal in the sense that it only depends on a few ensemble parameters, including the mean degree and a degree correlation coefficient. In addition, we also characterize the unstable mode of the system of interest by deriving analytical expressions for the statistics of the components of the right and left eigenvectors associated with the leading eigenvalue. Finally, we briefly discuss how this approach can be extended to models with diagonal disorder and non-oriented couplings.


I. INTRODUCTION
Scientists use networks to characterize the causal interactions between the constituents of large dynamical systems [1][2][3][4][5]. An important question is how network architecture affects the stability of stationary states in large dynamical systems. This question is crucial to understand, inter alia, systemic risk in financial markets, stability of ecosystems, or power outages in power grids. Indeed, the spreading of debt between financial institutions is affected by the architecture of the network of liabilities between these institutions [6][7][8]; ecologists aim to understand how the occurrence of major changes in ecological communities [9][10][11][12][13] -such as the microbiome community in the human gut [14] -is affected by the architecture of food webs; and engineers study how the topology of a power-grid network affects the risk of power outages [15]. In these examples, a stable stationary state is beneficial and associated with a well-functioning system, such as, a flourishing economy or a healthy individual, as is the case of the human gut example. On the other hand, the instability of the stationary state is associated with a period of economic crisis or disease. Hence, if we identify network features that stabilise large dynamical systems, then we could use these insights to reduce risk and instability in these systems.
A theory for the stability of large systems of interacting degrees of freedom in the vicinity of a stationary state has been introduced by May [16,17]. In May's approach, one considers n degrees of freedom y(t) = (y_1(t), y_2(t), ..., y_n(t)) ∈ R^n that evolve according to a set of randomly coupled linear differential equations

∂_t y(t) = A_n y(t),   (1)

where t ≥ 0 is the time variable and A_kj are the entries of a square random matrix A_n of size n. Notice that the null vector y(t) = 0 is a fixed point, or stationary state, of the dynamics (1). We consider random matrices instead of deterministic matrices because we are interested in the typical behaviour of an ensemble of systems rather than in the dynamics of one given system. Since Eqs. (1) are linear, the dynamics of y(t) is governed by the eigenvalues λ_j(A_n) (j = 1, ..., n) and their associated right eigenvectors R_j and left eigenvectors L_j,

y(t) = Σ_{j=1}^{n} e^{λ_j t} (L_j · y(0)) R_j.   (2)

If all eigenvalues have negative real parts, then lim_{t→∞} y(t) = 0 and the stationary state is stable. On the other hand, if there exists at least one eigenvalue with a positive real part, then the stationary state is unstable. Hence, the question of stability of the stationary state y(t) = 0 boils down to verifying whether the eigenvalue with the largest real part of a random matrix is negative.
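This stability criterion is straightforward to verify numerically. The following minimal sketch (our own illustration; the matrices and parameters are arbitrary toy choices, not members of the ensembles studied in this paper) integrates Eqs. (1) through the matrix exponential and checks that the deviation y(t) decays exactly when the leading eigenvalue has a negative real part.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def leading_real_part(a):
    """Largest real part among the eigenvalues of a."""
    return np.max(np.linalg.eigvals(a).real)

def deviation_norm(a, y0, t):
    """Norm of y(t) = expm(a t) y0, the solution of dy/dt = a y."""
    return np.linalg.norm(expm(a * t) @ y0)

n = 3
y0 = rng.standard_normal(n)

# A stable example: weak random couplings plus a strong negative diagonal.
stable = rng.standard_normal((n, n)) * 0.1 - 5.0 * np.eye(n)
# An unstable example: the same couplings with a positive diagonal shift.
unstable = stable + 6.0 * np.eye(n)

assert leading_real_part(stable) < 0 < leading_real_part(unstable)
# y(t) decays for the stable matrix and grows for the unstable one.
decays = deviation_norm(stable, y0, 5.0) < np.linalg.norm(y0)
grows = deviation_norm(unstable, y0, 5.0) > np.linalg.norm(y0)
```

The diagonal shift plays the same role as the parameter d introduced below: it moves all eigenvalues horizontally in the complex plane without changing the eigenvectors.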
Random matrix theory provides mathematical methods to study the properties of the eigenvalue λ_1(A_n) with the largest real part for n ≫ 1, which we call the leading eigenvalue. Fortunately, λ_1(A_n) often converges to a deterministic value λ_1 for n → ∞ [17,18]. This implies that an ensemble of dynamical systems of the form (1) may exhibit for large n a phase transition between a stable and an unstable phase: if the asymptotic value λ_1 is negative, then the dynamical system is linearly stable, whereas if λ_1 is positive, then the system is linearly unstable. The random-matrix-theory approach to the stability analysis of large dynamical systems aims to compute the asymptotic eigenvalue λ_1 as a function of the parameters that define the random matrix ensemble A_n.
Random matrices are also useful to investigate the linear stability of stationary states in a set of randomly coupled non-linear differential equations [17]. According to the Hartman-Grobman theorem [19,20], Eqs. (1) yield a very good approximation for the dynamics of n degrees of freedom x(t) = (x_1(t), ..., x_n(t)) in a nonlinear dynamical system ∂_t x(t) = f[x(t)] in the vicinity of a fixed point x*, for which f[x*] = 0, with f a generic function that couples the degrees of freedom. In this setting, A is the Jacobian of f evaluated at x*, and y(t) = x(t) − x* is the deviation vector. Randomly coupled non-linear differential equations have been used to model neural networks [21][22][23], ecological communities [24][25][26], protein signalling networks [27,28], financial markets [29], and synchronization of coupled oscillators [30]. Models of this type often contain a large number of fixed points [31,32], and Eq. (2) describes the dynamics in the vicinity of one given fixed point x*.
One of the simplest random-matrix models, used by May in his original paper [17], is composed of off-diagonal entries A_kj that are independent and identically distributed (i.i.d.) random variables with a probability distribution p_A(a), while the diagonal entries are set to A_jj = −d, where d is a real number that may depend on n. We call this random-matrix ensemble the i.i.d. random matrix model. In this model, the leading eigenvalue λ_1 is given by [33][34][35][36][37][38]

λ_1 = −d + n⟨A⟩ [1 + o_n(1)]   if ⟨A⟩ > 0,
λ_1 = −d + √(n⟨A²⟩) [1 + o_n(1)]   if ⟨A⟩ < 0,   (3)

where o_n(1) denotes the little-o notation, see section 3.1 in Ref. [39]. The leading eigenvalue thus only depends on the mean value ⟨A⟩ = ∫ da a p_A(a) and the second moment ⟨A²⟩ = ∫ da a² p_A(a) of the distribution p_A(a), exhibiting a high degree of universality. The result (3) describes how interactions between degrees of freedom can destabilise a large complex system. There exist two qualitatively different regimes: for ⟨A⟩ > 0, λ_1 is an outlier and it is proportional to n; for ⟨A⟩ < 0, λ_1 is located at the boundary of the continuous spectrum and it is proportional to √n. The random-matrix-theory approach to the linear stability of large complex systems has gained significant traction in recent years, mainly in the fields of ecology and neuroscience. With random matrices one can study how statistical properties of the interactions in a system affect its stability; this approach is complementary to mathematical models that rely on a low-dimensional representation of a large complex system. For example, the i.i.d. random matrix model has been generalized in order to describe how the stability of ecosystems depends on predator-prey interactions [12], hierarchical interactions [40], modularity [41], and species abundances [42]. In neuroscience, the i.i.d.
random matrix model has been generalized in order to study how the asymptotic dynamics of neural networks is influenced by Dale's principle [43], balance conditions on the excitatory and inhibitory synaptic connections to a neuron [43,44], cell-type specific interactions [45][46][47], and partial random network structure [48]. Other applications are phase separation in multiple component fluids [49] and the stability of a large economy [50]. Note that all of the models mentioned so far share the common feature that they are defined on a dense graph, in the sense that the average number of nonzero elements in each row or column of A n diverges as a function of n.
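Before turning to sparse matrices, it is instructive to see the dense i.i.d. scaling at work numerically. The sketch below (our own illustration; the Gaussian p_A and all parameters are arbitrary choices) diagonalizes May's i.i.d. model and recovers the two regimes quoted above: an outlier of order n⟨A⟩ − d when ⟨A⟩ > 0, and a bulk edge near √(n⟨A²⟩) − d when ⟨A⟩ < 0.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 1000, 0.0

def iid_matrix(mean, std):
    """May's i.i.d. model: off-diagonal entries drawn from a Gaussian
    p_A with the given mean and std, diagonal entries set to -d."""
    a = rng.normal(mean, std, size=(n, n))
    np.fill_diagonal(a, -d)
    return a

def lam1(a):
    """Leading eigenvalue: largest real part in the spectrum."""
    return np.max(np.linalg.eigvals(a).real)

# <A> > 0: lam1 is an outlier near n<A> - d (up to an O(1) correction).
mean_pos = 0.2
lam_pos = lam1(iid_matrix(mean_pos, 1.0))

# <A> < 0: lam1 sits at the edge of the bulk, near sqrt(n<A^2>) - d.
lam_neg = lam1(iid_matrix(-0.2, 1.0))
```

With these parameters the outlier sits near n⟨A⟩ = 200, far outside the bulk of radius ≈ √n ≈ 32; flipping the sign of the mean removes the positive outlier and leaves the leading eigenvalue at the bulk edge.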
The random-matrix-theory approach for the linear stability of dynamical systems, although clearly powerful, has been criticized since the original paper of May. First, there is the problem that complex systems defined on dense graphs are unstable if the number of degrees of freedom n is large enough [16,17], since the leading eigenvalue diverges as a function of n. This behaviour is unrealistic, since real systems are often large and stable [9]. A second critique is that the i.i.d. random matrix model, and the extensions discussed in the previous paragraph, can only account for random networks formed by nodes that interact with a finite probability, independently of the system size n. These models cannot account for the nonrandom features observed in real systems [2,11], such as degree distributions that may have power-law tails [51][52][53][54].
A natural approach to resolve these two issues is to consider sparse random matrices A_n. Each row and column of a sparse random matrix contains a finite number of non-zero elements, even in the limit n → ∞, such that A_n is composed of a total number O(n) of nonzero matrix entries. Sparse random matrices can take into account the nonrandom structures observed in real-world systems, such as networks with a prescribed degree distribution [55][56][57][58][59][60][61][62] or with recurrent motifs [63][64][65][66]. Constraints on the degree distribution of a network are incorporated through constraints on the number of nonzero matrix entries in the columns and rows of A_n. As an important consequence, dynamical systems associated with sparse random matrices can be stable even for large values of n: the leading eigenvalue λ_1 is finite since any degree of freedom interacts with a finite number of others. Hence, differential equations coupled through sparse random matrices can describe real-world networks, and their dynamics can be stable in the limit of large n.
In the present paper we focus on the development of exact mathematical methods to study the stability of large dynamical systems defined on sparse random graphs. To this aim, we use the spectral theory for sparse non-Hermitian random matrices [57,60,62]. For sparse random matrices, the eigenvalue distribution is not universal and must be computed with a numerical procedure [57,62]. However, the leading eigenvalue, as well as the statistics of the components of its associated right and left eigenvectors, exhibits universal properties and can be treated analytically [60]. Since the stability of large dynamical systems is governed by the leading eigenvalue, sparse random matrices provide a useful avenue for studying how network architecture affects the stability of large systems.
The aim of the present paper is to provide a better understanding of the theory for the leading eigenvalue of sparse random matrices introduced in [60], both from a theoretical and from a more practical point of view. On the theoretical side, we derive explicit analytical expressions for the leading eigenvalue and for the first moment of its associated right and left eigenvectors in the case of oriented random graphs with prescribed degree distributions that may allow for correlations between indegrees and outdegrees; this is a generalization of the results presented in [60], which are valid in the absence of degree correlations. We obtain these results from a set of recursion relations for the components of right eigenvectors and left eigenvectors, which we derive using the Schur formula. From a practical point of view, we illustrate the theory with a large body of data obtained from numerical experiments, and we also challenge the theoretical results by considering adjacency matrices of graphs with power-law degree distributions and adjacency matrices of graphs with a small mean degree. Subsequently, we apply the theory to the linear stability of large dynamical systems described by a set of randomly coupled differential equations and identify which network properties stabilize the stationary points of these systems. Finally, we discuss extensions of our theory beyond the setup of oriented random matrices.
The outline of the paper is the following. In Sec. II we define the random matrices and spectral quantities we study in this paper, and in Sec. III we present the main results of this paper for the random matrices defined in Sec. II. In Sec. IV we derive the main results and we also present the theory we use to derive these results. In Sec. V we compare the theoretical results with numerical data for large matrices, while in Sec. VI we apply the theory by analysing the stability of stationary states in networked systems. In Sec. VII we discuss extensions of the theory, presented in Sec. IV, to the cases of adjacency matrices with diagonal disorder and adjacency matrices of non-oriented graphs. Finally, in Sec. VIII we present a discussion of the main results. Appendix A details the algorithm we use to generate graphs with a prescribed degree distribution, and in Appendix B, we discuss the percolation theory for the largest strongly connected component of a directed graph. In Appendix C we use the Schur formula to derive a set of recursive relations for the components of right (left) eigenvectors of a random matrix with a tree-like topology.

A. Notation
We use lower case symbols for deterministic variables, e.g., x and y. We write (column) vectors as x and y, while their adjoint vectors are x† and y†. Matrices are written in boldface, e.g., x and y. If we want to emphasize the dependency on the matrix size n, then we write x_n and y_n. We write random variables in upper case, e.g., X and Y. The probability distribution of a random variable X is denoted by p_X(x). There are a few exceptions to the use of upper case letters to represent random quantities. For example, we use the notation λ_j(A) to denote the j-th eigenvalue of a random matrix A, and we write p_X(x; A) for the probability distribution of a random variable X that depends on the matrix A. We denote averages with respect to the distribution p_A(a) by ⟨·⟩.

II. SYSTEM SET UP AND DEFINITIONS
In this section we define the random matrices and their spectral properties that we study in this paper.
A. Adjacency matrices of weighted, oriented, and simple random graphs with a prescribed degree distribution

In this paper we study the spectral properties of random matrices of the form

A_n = −d 1_n + J_n ∘ C_n,   (4)

where 1_n is the identity matrix, J_n is a square matrix with real entries J_jk ∈ R that are i.i.d. random variables drawn from an arbitrary probability distribution p_J, and where C_n is the adjacency matrix of an oriented simple random graph G with a prescribed degree distribution [4,67,68]. The parameter d is a real, constant number and ∘ denotes the Hadamard product, i.e., [J_n ∘ C_n]_jk = J_jk C_jk. The indices j and k fulfil j, k ∈ [n], where [n] = {1, 2, ..., n}.
Since the graph is simple, the entries of its adjacency matrix satisfy C_jk ∈ {0, 1} and C_jj = 0. We use the convention that if C_jk = 1, then the graph G has an edge directed from node j to node k. Therefore, the indegree K^in_j of the j-th node equals the number of nonzero elements in the j-th column, and the outdegree K^out_j is given by the number of nonzero elements in the j-th row,

K^in_j = Σ_{k=1}^{n} C_kj,   K^out_j = Σ_{k=1}^{n} C_jk.

The in-neighbourhood ∂^in_j and the out-neighbourhood ∂^out_j of node j are defined by

∂^in_j := {k ∈ [n] : C_kj = 1},   ∂^out_j := {k ∈ [n] : C_jk = 1},

and ∂_j := ∂^in_j ∪ ∂^out_j is the neighbourhood of node j. A directed graph is oriented when C_jk C_kj = 0 for any pair of nodes. We say that G is a random graph with a prescribed degree distribution if (i) the degrees (K^in_j, K^out_j) are i.i.d. random variables with a joint probability distribution p_{K^in,K^out}(k^in, k^out) and with the additional constraint Σ_{j=1}^{n} K^in_j = Σ_{j=1}^{n} K^out_j; (ii) given a certain degree sequence {(K^in_j, K^out_j)}_{j=1}^{n}, the nodes are connected randomly and hence the edges of G are generated by the configuration model [4,67,68]. In Appendix A we describe in detail the algorithm we use to sample random graphs with a prescribed degree distribution.
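The stub-matching idea behind the configuration model can be sketched in a few lines. The sketch below is our own simplified illustration, not the algorithm of Appendix A: the Poisson degree distribution is an arbitrary choice, and the rejection-free pairing allows self-loops, multi-edges, and reciprocal edges, so it only approximates a simple oriented graph for large n.

```python
import numpy as np

rng = np.random.default_rng(2)
n, c = 500, 3.0

# Sample i.i.d. in- and outdegrees and enforce equal stub counts by
# incrementing the degrees of randomly chosen nodes.
k_in = rng.poisson(c, size=n)
k_out = rng.poisson(c, size=n)
diff = k_in.sum() - k_out.sum()
if diff > 0:
    np.add.at(k_out, rng.integers(0, n, size=diff), 1)
elif diff < 0:
    np.add.at(k_in, rng.integers(0, n, size=-diff), 1)

# Configuration model: match out-stubs to in-stubs uniformly at random.
out_stubs = np.repeat(np.arange(n), k_out)
in_stubs = np.repeat(np.arange(n), k_in)
rng.shuffle(in_stubs)

# Count matrix of directed edges j -> k (entries > 1 are multi-edges,
# which the algorithm of Appendix A would reject).
counts = np.zeros((n, n), dtype=int)
np.add.at(counts, (out_stubs, in_stubs), 1)
```

By construction, the row and column sums of the count matrix reproduce the prescribed out- and indegree sequences exactly.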
In the specific case of J_jk = 1 and d = 0, the random matrices defined by Eq. (4) are the adjacency matrices of oriented and simple random graphs [2,69,70]. More generally, the variables J_jk are the weights associated with the links of the graph with adjacency matrix C_n, and hence A_n is the adjacency matrix of a weighted graph. The constant parameter d affects the spectral properties of A_n only in a trivial manner, but it is important when discussing the stability of dynamical systems on graphs.

B. Spectral observables
Here we define the spectral observables of random matrices that are relevant for the study of the stability of dynamical systems.
The eigenvalues {λ_α(A)}_{α∈[n]} are defined as the complex roots of the algebraic equation det(A − λ 1_n) = 0. We sort the eigenvalues in decreasing order of their real parts, i.e.,

Re[λ_1] ≥ Re[λ_2] ≥ ... ≥ Re[λ_n].

If there exists a degenerate eigenvalue, then it appears multiple times in this sequence. If there are two or more eigenvalues with the same real part, then we sort them based on their imaginary part. We call λ_1 the leading eigenvalue and λ_2 the subleading eigenvalue.
A right eigenvector R_α(A) and a left eigenvector L_α(A) associated with λ_α fulfil

A R_α = λ_α R_α   and   L_α^† A = λ_α L_α^†.   (10)

We use the notation R_α,j and L_α,j, with j ∈ [n], for the components (or entries) of the right and left eigenvectors, respectively. In order to uniquely define the right and left eigenvectors associated with a nondegenerate eigenvalue, we take the convention that the eigenvectors are biorthonormal,

L_α^† R_β = δ_αβ,

and we set Σ_{j=1}^{n} |R_α,j|² = n.
Note that in this convention the norm n j=1 |L α,j | 2 and the complex phase of n j=1 L α,j are functions of the entries of A.
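These conventions can be reproduced numerically for a small matrix. The sketch below (our own illustration on an arbitrary random 6×6 matrix) uses scipy.linalg.eig to obtain both eigenvector families, verifies biorthogonality for a simple spectrum, and then rescales the eigenvectors so that ‖R_α‖² = n while the norm of L_α is left to be whatever the convention dictates.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(3)
n = 6
a = rng.standard_normal((n, n))  # generic matrix: simple spectrum w.h.p.

# vl[:, k]^H a = w[k] vl[:, k]^H and a vr[:, k] = w[k] vr[:, k];
# scipy returns left and right eigenvectors in matching order.
w, vl, vr = eig(a, left=True, right=True)

# Biorthogonality: for a simple spectrum vl^H vr is diagonal.
overlap = vl.conj().T @ vr
off_diag = overlap - np.diag(np.diag(overlap))

# Impose the convention of the text: ||R_alpha||^2 = n and
# L_alpha^H R_alpha = 1; the norm of L_alpha is then determined.
vr = vr * np.sqrt(n) / np.linalg.norm(vr, axis=0)
d = np.einsum('ij,ij->j', vl.conj(), vr)
vl = vl / d.conj()
```

After the rescaling, the norms of the left eigenvectors are fixed by the matrix itself, exactly as noted above.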
The spectrum σ(A_n) is the set of eigenvalues of A_n. For finite n, σ(A_n) is discrete, whereas for large n, σ(A_n) often converges to a deterministic set σ which can contain continuous and discrete parts. We now specify the different parts that the spectrum σ can have. The discrete part can consist of outlier eigenvalues, eigenvalues with infinite multiplicity, and a discrete spectrum that is dense in a region of the complex plane. We will be mainly interested in outlier eigenvalues, which are defined as follows. Let b(λ*, ε) := {λ ∈ C : |λ* − λ| < ε} be the open ball of radius ε centered at the element λ* of the complex plane. We say that λ_isol ∈ σ is an outlier eigenvalue if there exists an ε > 0 such that σ ∩ b(λ_isol, ε) = {λ_isol} and if the algebraic multiplicity of λ_isol is finite. The continuous part of the spectrum can be decomposed into an absolutely continuous part σ_ac, which is a set of non-zero Lebesgue measure, and a singular continuous part, which is a set of zero Lebesgue measure. Note that the different parts of the spectrum σ are defined by applying the Lebesgue decomposition theorem to the empirical spectral distribution [71,72].
We also study the statistics of the components R_α,i and L_α,i of the right and left eigenvectors, respectively. To this aim, we define the random variables R_α and L_α, which are sampled uniformly at random from the entries of R_α and L_α, respectively. When we consider the properties of R_α and L_α for an arbitrary eigenvalue, we omit the rank α and simply write R_α = R and L_α = L. If R and L refer to an outlier, then we use the notation R_isol and L_isol; if R and L refer to an eigenvalue located at the boundary of σ_ac, then we use R_b and L_b.
The distributions of the random variables R and L are

p_R(r|A) = (1/n) Σ_{j=1}^{n} δ(r − R_j)   and   p_L(l|A) = (1/n) Σ_{j=1}^{n} δ(l − L_j),

respectively, where δ(z) is the Dirac-delta distribution in the complex plane. In the limit n → ∞, the distributions p_R(r|A) and p_L(l|A) converge to deterministic limits p_R(r) and p_L(l). We denote the moments of the limiting distributions p_R(r) and p_L(l) by

⟨R^m⟩ = ∫ d²r r^m p_R(r)   and   ⟨L^m⟩ = ∫ d²l l^m p_L(l),

where d²r = dRe(r) dIm(r) and d²l = dRe(l) dIm(l).

C. Ensemble parameters and universality of spectral quantities
The random matrix ensemble (4) depends on the following parameters: the distribution p J of weights, the joint distribution p K in ,K out of indegrees and outdegrees, the real number d, and the size n.
We often use the moments of p_J and p_{K^in,K^out} to specify a random matrix model. The m-th moment of p_J is defined by

⟨J^m⟩ = ∫ dj j^m p_J(j),

and the (m, ℓ)-th moment of p_{K^in,K^out} is

⟨(K^in)^m (K^out)^ℓ⟩ = Σ_{k^in, k^out} (k^in)^m (k^out)^ℓ p_{K^in,K^out}(k^in, k^out).

Important quantities are the mean degree

c := ⟨K^in⟩ = ⟨K^out⟩   (21)

and the degree correlation coefficient

ρ := (⟨K^in K^out⟩ − c²)/c².   (22)

The mean degree is the average number of edges that enter or leave a random vertex in the graph. The quantity c⟨J⟩ is a measure of the average interaction strength felt by a degree of freedom in a dynamical system defined by Eq. (1). The degree correlation coefficient ρ characterises the correlations between the indegrees and outdegrees of a random vertex in the graph; note that this quantity is similar to the assortativity coefficient that considers the correlations between the degrees of a randomly drawn edge in the graph, see section 8.7 on assortative mixing in [2].
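For concreteness, the two ensemble parameters can be estimated from a degree sequence as follows. This is a minimal sketch with a made-up sequence of degree pairs; the normalization of ρ is chosen so that c(ρ + 1) equals the branching ratio ⟨K^in K^out⟩/c that controls percolation, as used throughout this paper.

```python
import numpy as np

# A made-up joint degree sequence (K_in, K_out), for illustration only.
degrees = np.array([(1, 2), (2, 1), (3, 3), (2, 2), (2, 2)])
k_in, k_out = degrees[:, 0], degrees[:, 1]

# Mean degree: identical for indegrees and outdegrees, since every
# directed edge leaves one node and enters another.
c = k_in.mean()

# Degree correlation coefficient: zero when indegrees and outdegrees
# are uncorrelated, so that c*(rho + 1) = <K_in K_out>/c.
rho = (np.mean(k_in * k_out) - c**2) / c**2
```

For this toy sequence c = 2 and ρ = 0.05, so the branching ratio c(ρ + 1) = 2.1 slightly exceeds the mean degree because nodes with a large indegree tend to have a large outdegree.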
Here we say that a spectral quantity of a random matrix is universal if it converges, for n → ∞, to a deterministic limit that just depends on the first few moments of p J and p K in ,K out .

III. MAIN RESULTS
In this section we present analytical results for the following spectral properties of random matrices defined by Eq. (4) in the limit of large n: the eigenvalue outliers λ_isol, the boundary λ_b ∈ ∂σ_ac of the continuous part of the spectrum, the eigenvalue λ_1 with the largest real part, and the first moments of the right (left) eigenvectors associated with these eigenvalues.
Our theoretical results hold for infinitely large oriented random matrices with a prescribed degree distribution provided that c(ρ + 1) > 1 and the moments of the distributions p K in ,K out and p J are finite. The condition c(ρ + 1) > 1 is required because otherwise the spectrum of A n converges to a pure-point spectrum, which follows from the fact that oriented random graphs with c(ρ + 1) < 1 do not have a giant strongly connected component and therefore Tr[A m ] = 0 for all m ∈ N. Indeed, c(ρ + 1) = 1 is the critical percolation point for the strongly connected component of oriented graphs (see Appendix B or Ref. [73] for more details).
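The percolation condition can be probed numerically. In the sketch below (our own illustration) we use a directed Erdős–Rényi graph as a stand-in for the configuration model; its Poisson indegrees and outdegrees are independent, so ρ ≈ 0 and the threshold for the giant strongly connected component sits at c = 1. The largest strongly connected component is extracted with scipy.sparse.csgraph.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(4)
n = 2000

def giant_scc_fraction(c):
    """Fraction of nodes in the largest strongly connected component
    of a directed Erdos-Renyi graph with mean degree c."""
    a = rng.random((n, n)) < c / n
    np.fill_diagonal(a, False)
    _, labels = connected_components(csr_matrix(a), connection='strong')
    return np.bincount(labels).max() / n

above = giant_scc_fraction(3.0)   # c(rho+1) = 3 > 1: giant SCC exists
below = giant_scc_fraction(0.5)   # c(rho+1) = 0.5 < 1: only O(1) SCCs
```

Above the threshold a finite fraction of the nodes belongs to the strongly connected component, and this is precisely the regime in which the spectrum of A_n develops a continuous part.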
The moments of the degree distribution p K in ,K out and of the distribution of weights p J are required to be finite, since otherwise the spectral quantities defined in Sec. II B may not have a well-defined limit. In fact, if the tail of the degree distribution is a power-law characterized by a sufficiently small exponent, then the first two moments of λ 1 may diverge for n → ∞.

A. Outlier eigenvalue
If c(ρ+1) > 1 and ⟨J²⟩ < c(ρ+1)⟨J⟩², then the matrix ensemble (4) has one real outlier located at

λ_isol = c(ρ+1)⟨J⟩ − d,   (23)

and the corresponding entries of the eigenvectors R_isol and L_isol are real. Moreover, the first moment of R_isol satisfies the closed-form expression (24), which involves, besides c and ρ, the higher-order degree correlation coefficients ρ^in_2 and ρ^out_2; the mean value of L_isol is given by an analogous equation. Notice that ρ = 0 and ρ^in_2 = ρ^out_2 = 0 for random graphs with uncorrelated indegrees and outdegrees, and therefore in this special case we recover the results in [60].

B. Eigenvalues at the boundary of the continuous part of the spectrum
If c(ρ+1) > 1, then the spectrum σ of the model (4) has a continuous part. The boundary ∂σ_ac of the continuous part consists of the points λ_b that obey

|λ_b + d|² = c(ρ+1)⟨J²⟩.   (28)

Regarding the components of the eigenvectors associated with λ_b ∈ ∂σ_ac, we need to distinguish between the cases λ_b ∉ R and λ_b ∈ R. In the former case, R_b and L_b are complex random variables that fulfil

⟨R_b⟩ = ⟨R_b²⟩ = 0   and   ⟨L_b⟩ = ⟨L_b²⟩ = 0.

If λ_b ∈ R, then the eigenvector components are real-valued random variables that fulfil ⟨R_b⟩ = ⟨L_b⟩ = 0, with second moments ⟨R_b²⟩ = 1 and ⟨L_b²⟩ > 0. Recall that these values are fixed by the normalization convention we have chosen in Sec. II B.

C. The leading eigenvalue
From the results in Secs. III A and III B we readily obtain expressions for the leading eigenvalue λ_1. If c(ρ+1) > 1, then the leading eigenvalue is

λ_1 = −d + c(ρ+1)⟨J⟩   if ⟨J⟩ > 0 and c(ρ+1) > ⟨J²⟩/⟨J⟩²,
λ_1 = −d + √(c(ρ+1)⟨J²⟩)   otherwise.   (33)

Thus, λ_1 can be either an outlier or an eigenvalue located at the boundary of the continuous part of the spectrum: for a positive mean value ⟨J⟩ > 0, λ_1 is an outlier if c(ρ+1) > ⟨J²⟩/⟨J⟩², and λ_1 ∈ ∂σ_ac otherwise. Notice that, if the leading eigenvalue is an outlier, then its value is independent of ⟨J²⟩, whereas if the leading eigenvalue is located at ∂σ_ac, then its value depends on ⟨J²⟩. This will be an important feature when discussing the stability analysis of dynamical systems. Let us consider the behaviour of Eq. (33) in a few specific cases. If we set c(ρ+1) = n and ⟨J⟩ = ⟨A⟩, then Eq. (33) recovers the expression (3) for i.i.d. random matrices. However, note that the formula (33) holds for graphs with c ∈ O_n(1) and therefore the correspondence holds only formally. For oriented random matrices without correlations between indegrees and outdegrees (ρ = 0), Eq. (33) reduces to the expression for λ_1 derived in Ref. [60]. Finally, in the case of adjacency matrices of oriented random graphs, where J_jk = 1 and d = 0, λ_1 is always an outlier given by λ_1 = c(ρ+1). In the limit c(ρ+1) → 1⁺, where the giant strongly connected component vanishes, the outlier coalesces with the continuous part of the spectrum.
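Equation (33) can be checked quickly by direct diagonalization. The sketch below (our own illustration) uses a directed Erdős–Rényi graph made oriented by deleting reciprocal edge pairs; its indegrees and outdegrees are essentially independent Poisson variables, so ρ ≈ 0, and with unit weights and d = 0 the leading eigenvalue should be the outlier c(ρ + 1), well outside the bulk of radius ≈ √(c(ρ + 1)).

```python
import numpy as np

rng = np.random.default_rng(5)
n, c = 1500, 8.0

# Directed Erdos-Renyi graph, made oriented by deleting reciprocal pairs.
a = (rng.random((n, n)) < c / n).astype(float)
np.fill_diagonal(a, 0.0)
reciprocal = (a * a.T) > 0
a[reciprocal] = 0.0  # removes both directions of every 2-cycle

eigs = np.linalg.eigvals(a)
lam1 = np.max(eigs.real)

# Empirical ensemble parameters of the sampled graph.
c_emp = a.sum() / n
k_in, k_out = a.sum(axis=0), a.sum(axis=1)
rho_emp = (np.mean(k_in * k_out) - c_emp**2) / c_emp**2

# Theory: with J = 1 and d = 0 the leading eigenvalue is the outlier
# c(rho + 1), and the bulk edge sits at sqrt(c(rho + 1)).
prediction = c_emp * (1.0 + rho_emp)
```

For these parameters the outlier sits near 8 while the bulk edge sits near 2.8, so the spectral gap is large and the outlier is easy to identify numerically.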
We also consider the first moments ⟨R_1⟩ and ⟨L_1⟩ of the eigenvectors associated with λ_1. Since either λ_1 = λ_isol or λ_1 = λ_b, we readily obtain

⟨R_1⟩ = ⟨R_isol⟩   if ⟨J⟩ > 0 and c(ρ+1) > ⟨J²⟩/⟨J⟩²,
⟨R_1⟩ = 0   otherwise.   (34)

An analogous expression holds for the left eigenvector.

D. Spectral gap
The spectral gap is the difference λ_1 − Re[λ_2] between the leading eigenvalue and the real part of the subleading eigenvalue. From the results in Secs. III A, III B and III C we readily obtain expressions for the spectral gap. If c(ρ+1) > 1, then

λ_1 − Re[λ_2] = c(ρ+1)⟨J⟩ − √(c(ρ+1)⟨J²⟩)   if ⟨J⟩ > 0 and c(ρ+1) > ⟨J²⟩/⟨J⟩²,

and

λ_1 − Re[λ_2] = 0   otherwise.

E. Relation with the Perron-Frobenius theorem
Here we discuss how our results are related to the celebrated Perron-Frobenius theorem [74], which states that the eigenvalue λ 1 of a nonnegative matrix, and the components of its right (left) eigenvector, are nonnegative numbers. In other words, the Perron-Frobenius theorem implies that R 1,j ≥ 0 for all j = 1, 2, . . . , n.
Interesting conclusions about the localization of eigenvectors of A can be drawn if we combine the Perron-Frobenius theorem with the result (34). If c(ρ+1) ≤ ⟨J²⟩/⟨J⟩² and c(ρ+1) > 1, such that λ_1 is part of ∂σ_ac, then ⟨R_1⟩ = 0 and ⟨R_1²⟩ = 1, see Eq. (13). Since according to the Perron-Frobenius theorem R_1,j ≥ 0, we obtain that R_1 = 0 holds with probability one. The two conditions lim_{n→∞} ⟨R_1(A_n)⟩ = 0 and lim_{n→∞} ⟨R_1²(A_n)⟩ = 1 can be simultaneously valid only if a few components of the eigenvector carry a weight that diverges with n. Hence, Eq. (34) and the Perron-Frobenius theorem imply that for nonnegative matrices for which the conditions c(ρ+1) ≤ ⟨J²⟩/⟨J⟩² and c(ρ+1) > 1 are fulfilled, the right eigenvector R_1 associated with the leading eigenvalue is localized on a few nodes.

IV. DERIVATION OF THE MAIN RESULTS
The cavity method, as applied to the present problem, consists of three steps. First, we derive a set of recursion relations for the components of right (left) eigenvector of adjacency matrices of tree-like graphs. In a second step, we obtain a set of recursion relations for the eigenvector distributions p R and p L of infinitely large random matrices that are locally tree-like. Finally, we obtain the main results (23-36) from the solutions of certain fixed-point equations for the eigenvector moments, which follow from the recursive distributional equations for p R and p L .
In the next subsection, we explain concepts, such as tree matrices and locally tree-like matrices, which are important for the cavity method. In the subsequent subsections, we implement the aforementioned steps of the cavity method. Without loss of generality, we can focus on the right eigenvectors, since the left eigenvectors of A are the right eigenvectors of A^T.
A. Tree matrices and locally tree-like matrices

Let G be an undirected graph represented by a given symmetric adjacency matrix. The graph G is a tree if it is connected and does not contain cycles [69], and G is a forest if it is the union of several isolated trees [69].
Let A_n be a matrix and let C_n be its associated adjacency matrix, i.e., C_kj = 1 when A_kj ≠ 0 and C_kj = 0 when A_kj = 0. We define the matrix C̃_n with entries C̃_jk = max{C_jk, C_kj}. Note that C̃_n is the adjacency matrix of an undirected simple graph. We say that the matrix A_n is a tree matrix if C̃_n is the adjacency matrix of a tree.
We say that a sequence {A n } n∈N of matrices is locally tree-like if in the limit n → ∞ each finite neighbourhood of a node, chosen uniformly at random, is a tree with probability one [58].

B. Recursion relations for the eigenvector elements of oriented tree-like matrices A_n
Let λ be an eigenvalue of the matrix A_n and let R be the right eigenvector associated with λ. Equation (10) implies that

(λ + d) R_j = Σ_{k ∈ ∂^out_j} J_jk R_k   (37)

for all j ∈ {1, 2, ..., n}. In general, the random variables R_k are correlated with the entries J_jk and the degree K^out_j, and therefore Eq. (37) is not useful to derive a self-consistent distributional equation. However, if A_n is an oriented tree matrix, or a large oriented locally tree-like matrix, then R_k is statistically independent of J_jk and K^out_j, and the relation (37) can be closed. The statistical independence between R_k and J_jk can be understood using a recursive argument. Let A^(j)_{n−1} be the submatrix obtained from A_n by deleting its j-th column and row, and let R^(j) be the right eigenvector of A^(j)_{n−1} associated with λ. Then, for oriented tree matrices [60] (see also Appendix C)

R_k = R^(j)_k   for all k ∈ ∂^out_j,   (38)

where R^(j)_k is the k-th element of the right eigenvector R^(j). Note that we have assumed that λ is an eigenvalue of both A_n and A^(j)_{n−1}, which is reasonable when n is large enough and λ is not inside σ_ac.
The relations (37) and (38) imply that

(λ + d) R^(j)_k = Σ_{l ∈ ∂^out_k} J_kl R^(k)_l   (39)

for all k ∈ [n] and j ∈ ∂^in_k. Since we are interested in the statistics of R, we will also use the relation

(λ + d) R_j = Σ_{k ∈ ∂^out_j} J_jk R^(j)_k,   (40)

which also follows from (37) and (38).
In the next subsection we use the relations (39) and (40) to derive distributional equations for the components of the right eigenvectors.

C. Recursive distributional equations

We apply the recursion relations (40) and (39) to random matrices of the form of Eq. (4) in the limit n → ∞. In this limit, the random matrices of Eq. (4) are locally tree-like.
Since we are interested in the limit where A_n becomes infinitely large, it is useful to introduce the distributions of right-eigenvector elements

p_R(r|A) = (1/n) Σ_{j=1}^{n} δ(r − R_j)

and

q_R(r|A) = (1/(cn)) Σ_{k=1}^{n} Σ_{j ∈ ∂^in_k} δ(r − R^(j)_k),

where c is the mean degree. We obtain the distribution p_R(r|A) by selecting uniformly at random a node j and asking what is the corresponding eigenvector element R_j, whereas we obtain the distribution q_R(r|A) by selecting uniformly at random an edge j → k and asking what is the eigenvector element R^(j)_k. The limiting distributions p_R and q_R for large n solve the recursive distributional equations

q_R(r) = Σ_{k^in, k^out} (k^in/c) p_{K^in,K^out}(k^in, k^out) ∫ Π_{ℓ=1}^{k^out} [d²r_ℓ q_R(r_ℓ) dJ_ℓ p_J(J_ℓ)] δ(r − (λ + d)^{−1} Σ_{ℓ=1}^{k^out} J_ℓ r_ℓ)   (43)

and

p_R(r) = Σ_{k^in, k^out} p_{K^in,K^out}(k^in, k^out) ∫ Π_{ℓ=1}^{k^out} [d²r_ℓ q_R(r_ℓ) dJ_ℓ p_J(J_ℓ)] δ(r − (λ + d)^{−1} Σ_{ℓ=1}^{k^out} J_ℓ r_ℓ).   (44)

Equations (43) and (44) are obtained from the recursion relations (39) and (40), respectively. We have used the fact that the random variables on the right-hand side of (39) and (40) are independent and that random graphs as defined in Sec. II have no boundary, which implies that all nodes are equivalent [77]. Notice that Eqs. (43) and (44) do not apply to large tree graphs because of the presence of a (large) boundary. In the special case where the indegrees and outdegrees of a node are independent, we recover the results in [60] because p_R(r) = q_R(r). We are interested in solutions to the relations (43) and (44) that are normalizable, i.e., for which ∫ d²r p_R(r) = ∫ d²r q_R(r) = 1. The relations (43) and (44) admit two types of normalizable solutions: those associated with eigenvalue outliers λ = λ_isol and those associated with values λ = λ_b located at the boundary of the continuous part of the spectrum. As a consequence, we can obtain expressions for the outliers λ_isol and for the boundary ∂σ_ac by identifying the values of λ for which the relations (43) and (44) admit normalizable solutions. This is the program we will pursue in the next subsection.

D. Solutions to the recursion relations
In this subsection we obtain analytical results for the boundary of the continuous part of the spectrum, λ_b ∈ ∂σ_ac, and for the outlier eigenvalue λ_isol, by identifying the values of λ for which the relations (43) and (44) admit a normalizable solution.
Since Eqs. (43) and (44) are linear distributional equations, we can derive a set of fixed-point equations for the lower-order moments of R and L. In order to distinguish averages with respect to p_R and q_R, we introduce the definitions (47) and (48), where f is an arbitrary function. From Eq. (44) we obtain the relations (49)–(51), and from Eq. (43) we obtain the relations (52)–(54). The relations (49)–(54) admit three kinds of solutions. The first type of solution is obtained when ⟨R⟩_q ≠ 0. We denote this solution by λ = λ_isol and R = R_isol, since it identifies the outliers of the random matrix ensemble. In this case, Eq. (49) implies the result (23) for the outlier eigenvalue.
Since λ_isol ∈ ℝ, it holds that R_isol ∈ ℝ. Consequently, we obtain Eq. (24) for R_isol by solving Eqs. (49)–(54). The second type of solution is obtained when ⟨R⟩_q = 0 and λ ∉ ℝ. We denote this solution as λ = λ_b and R = R_b. Solving (51), we obtain a relation that leads to Eq. (28) once we use the degree correlation coefficient ρ defined in (22). In this case, R_b is a complex random variable and its first two moments are zero.
The third type of solution is obtained when ⟨R⟩_q = 0 and λ ∈ ℝ, and we denote this solution as λ = λ_r and R = R_r. Solving (49), we find that ⟨R_r⟩ = 0, while the value of ⟨R_r²⟩ ≠ 0 depends on the normalization of R_r.

V. EXAMPLES
The theoretical results in Sec. III are conjectured to hold for ensembles of the type (4) provided that c(ρ+1) > 1, the moments of p_{K^in,K^out} and p_J are finite, and n is large enough. In this section we compare theoretical results with direct diagonalization results for matrices of finite size n ∼ O(10³). Such numerical experiments reveal the magnitude of finite-size effects, which is important for applications because real-world systems are finite. Moreover, in order to better understand the limitations of the theory, we also consider ensembles for which c(ρ + 1) < 1 or for which the moments of the degree distribution diverge, such that the theory in Sec. III does not apply. In these cases we are interested in the deviations between the formulas in Sec. III and the results of numerical experiments.
Our numerical experiments are designed as follows. First, we sample random matrices from an ensemble of the type (4) using the algorithm presented in Appendix A. Subsequently, we diagonalize the matrix samples with the subroutine gsl_eigen_nonsymmv from the GNU Scientific Library, which computes the eigenvalues of a matrix and the entries of the corresponding right eigenvectors (see https://www.gnu.org/software/gsl/ for more information). Then, in order to test the theory in Sec. III, we compute for each matrix sample the leading eigenvalue λ_1(A), the real part of the subleading eigenvalue λ_2(A), and the observable R(A) of Eq. (58), which quantifies the mean value of the components of the right eigenvector associated with λ_1(A). Before we compute R(A), we rotate all the elements R_{1,j}(A) by a constant phase e^{iθ}, such that the empirical mean Σ_{j=1}^n R_{1,j}(A) is a positive real number, in accordance with our conventions in Eq. (12). Finally, we compute the mean values λ_1, λ_2, and R of the sampled populations, together with the standard deviations of each quantity.
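As an illustration, the per-sample measurements can be sketched in a few lines of Python, with numpy.linalg.eig standing in for gsl_eigen_nonsymmv; the final normalization is an assumption standing in for the convention of Eq. (12), which is not reproduced here.

```python
import numpy as np

def leading_eigenpair_stats(A):
    # Eigenvalues ordered by descending real part: lambda_1 is the leading one.
    vals, vecs = np.linalg.eig(A)
    order = np.argsort(-vals.real)
    lam1, lam2 = vals[order[0]], vals[order[1]]
    R1 = vecs[:, order[0]].astype(complex)
    # Rotate by a constant phase so the empirical mean of the components
    # is a positive real number, as in the text.
    R1 = R1 * np.exp(-1j * np.angle(R1.mean()))
    # Normalization: an assumption, standing in for Eq. (12).
    R1 = R1 / np.sqrt(np.mean(np.abs(R1) ** 2))
    return lam1, lam2.real, R1.mean().real
```

For a sampled population of matrices, one would call this routine per sample and accumulate means and standard deviations of the three returned quantities.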
The present section is organized into three subsections.
In Secs. V A and V B we consider adjacency matrices of oriented random graphs with negative degree correlations (ρ < 0) and positive degree correlations (ρ > 0), respectively. In these subsections we focus on random graph ensembles with a prescribed degree distribution having finite moments, namely, Poissonian and geometric random graphs. For such ensembles we expect that the theoretical results of Section III will be well corroborated by direct diagonalization results as long as c(ρ + 1) > 1. In Subsection V C we apply the theoretical results of Section III to adjacency matrices of oriented random graphs with power-law degree distributions that have divergent moments and, therefore, we expect to observe deviations between numerical experiments and theoretical results, which we aim to understand.
A. Adjacency matrices of oriented random graphs with negative degree correlations (ρ ≤ 0)

Here we apply the theoretical results of Sec. III to adjacency matrices of oriented random graphs with negative correlations between indegrees and outdegrees, i.e., for ρ ≤ 0.
We consider Poissonian random graphs (also called Erdős-Rényi random graphs) and geometric random graphs with mean degree c > 0 and degree correlation coefficient ρ ∈ [−1, 0]. For Poissonian random graphs, the prescribed degree distribution is built from the truncated Poisson distribution p_p(k; c) = c^k/(k! N_p), where k^in, k^out ∈ {0, 1, . . . , n − 1} and N_p = Σ_{k=0}^{n−1} c^k/k!. For n → ∞, p_p(k; c) is the Poisson distribution with mean degree c and N_p = e^c. For geometric random graphs, the prescribed degree distribution is built from the truncated geometric distribution p_g(k; c) = (c/(1+c))^k/N_g, where k^in, k^out ∈ {0, 1, . . . , n − 1} and N_g = Σ_{k=0}^{n−1} (c/(1+c))^k. For n → ∞, p_g(k; c) is the geometric distribution with mean degree c and N_g = c + 1.
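For concreteness, degrees can be drawn from these truncated distributions as in the following sketch; the truncation at k = n − 1 and the normalizations N_p and N_g are implemented exactly as defined above, and for large n both distributions have mean degree close to c.

```python
import numpy as np

def sample_truncated_degrees(dist, c, n, rng):
    # Truncated Poisson p(k) = c^k/(k! N_p) or geometric p(k) = (c/(1+c))^k / N_g,
    # with k restricted to {0, 1, ..., n-1} as in the text.
    k = np.arange(n)
    if dist == "poisson":
        logw = k * np.log(c) - np.cumsum(np.log(np.maximum(k, 1)))  # log(c^k / k!)
    else:  # geometric
        logw = k * np.log(c / (1.0 + c))
    w = np.exp(logw - logw.max())   # numerically stable up to normalization
    return rng.choice(k, size=n, p=w / w.sum())
```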
Throughout this subsection, we consider unweighted graphs for which J_jk = 1 for all j ≠ k, and thus ⟨J⟩ = ⟨J²⟩ = 1. Since a nonzero d results in a constant shift of all eigenvalues by −d, i.e., λ_j → λ_j − d, we set d = 0 for simplicity. In Fig. 1 we compare direct diagonalization results with the theoretical results given by Eqs. (23), (28), (24), (33), (34), and (35). The results in Fig. 1 show how the degree correlation coefficient ρ affects the spectral properties of adjacency matrices of oriented random graphs with mean degree c = 2.
In Figs. 1(a)-1(b), we provide a global picture of the spectra of adjacency matrices of Poissonian random graphs: we compare the spectra of a matrix with ρ = 0 and a matrix with ρ = −0.3. We observe how negative degree correlations contract the spectrum: for ρ = −0.3 the leading eigenvalue is smaller and the spectrum concentrates closer to the origin. In the bulk of the spectrum, we observe two types of eigenvalues, namely, those that are located in the center of the spectrum and have a regular spacing, and those that are located along the rim of the spectrum and are randomly spaced. In the limit of large n, the former give rise to the pure-point spectrum, while the eigenvalues along the rim yield the continuous spectrum. We observe that the pure-point spectrum is smaller in Fig. 1(a) than in Fig. 1(b), which is consistent with the fact that the size of the giant strongly connected component increases as a function of ρ.
In Fig. 1(c) we present a more detailed analysis of the behaviour of the leading eigenvalue λ_1 and of the subleading eigenvalue λ_2 as a function of ρ. As discussed in Sec. III, for the adjacency matrices of unweighted random graphs with c(ρ+1) > 1, it holds that λ_1 = λ_isol and Re[λ_2] = max {Re[λ_b] : λ_b ∈ ∂σ_ac}, which is well corroborated by the numerical results in Fig. 1(c). Oriented random graphs do not have a giant strongly connected component when c(ρ + 1) < 1 (see Appendix B), and therefore Eqs. (23), (28), (33), and (35) do not apply in this regime. In Fig. 1(c), we observe that λ_1 = λ_2 at the critical percolation threshold ρ = −1 + 1/c = −0.5, as predicted by the theory. In the regime ρ < −0.5, we observe large sample-to-sample fluctuations in λ_1 because the generated graph is a disconnected union of a large number of small isolated subgraphs.
In Fig. 1(d) we present a systematic study of the first moment ⟨R_1⟩ of the eigenvector R_1 associated with the leading eigenvalue, which is an outlier for ρ ≥ −0.5. The theoretical result (24) is well corroborated by direct diagonalization results for the observable R, defined in Eq. (58). Interestingly, we observe that R behaves as the order parameter of a continuous phase transition, reminiscent of spin models: for c(ρ + 1) > 1, we obtain ⟨R_1⟩ > 0 and the system can be considered ferromagnetic, whereas for c(ρ + 1) < 1 we have ⟨R_1⟩ = 0 and the system can be considered spin-glass-like. A similar type of behaviour has been found in sparse symmetric random matrices [81][82][83][84].
Taken together, the results in Fig. 1 illustrate how the leading eigenvalue of the adjacency matrix of an oriented random graph increases as a function of ρ. Thus, λ_1 can be significantly reduced by rewiring a graph in such a manner that the correlation between indegrees and outdegrees decreases. As a consequence, we can already expect that dynamical systems coupled through oriented graphs with negative values of ρ are more stable than those coupled through random graph models with ρ > 0.
In the next subsection we apply the theoretical results of Sec. III to the adjacency matrices of weighted random graphs with a positive ρ.
B. Adjacency matrices of oriented weighted random graphs with positive degree correlations (ρ ≥ 0)

We illustrate the theoretical results of Sec. III for the adjacency matrices of weighted oriented random graphs with a positive ρ. We consider again Poissonian and geometric random graphs. The Poissonian ensemble with positive ρ has a prescribed degree distribution with ρ ∈ [0, 1/c], where p_p is the truncated Poisson distribution defined by Eq. (60). The geometric ensemble has a prescribed degree distribution with ρ ∈ [0, (c+1)/c], where p_g is the truncated geometric distribution defined by Eq. (62). The off-diagonal matrix entries J_jk are i.i.d. random variables drawn either from a Gaussian distribution or from a bimodal distribution with the parametrization x_0 = √(µ² + v²) and 2b = 1 + µ/x_0. The parameters µ and v denote, respectively, the mean and the standard deviation of each of the distributions defined above.
Again, without loss of generality, we can set d = 0.
In Fig. 2 we analyze how positive values of ρ affect the spectral properties of adjacency matrices of oriented weighted random graphs. We compare the spectral properties for different values of ρ and fixed parameters c = 2, µ = 1, and v = 1.2. We compare theoretical results from Sec. III (lines) with direct diagonalization results for matrices of size n = 4000 (markers).
In Figs. 2(a) and 2(b) we provide a global picture of the spectra of Poissonian random graphs by comparing the spectrum of a graph without degree correlations (ρ = 0, Fig. 2(a)) with the spectrum of a graph with positive degree correlations (ρ = 0.5, Fig. 2(b)). In the latter case, the correlations are perfect in the sense that K_j^in = K_j^out for each node j. The direct diagonalization results corroborate well the formula (28) for the boundary of the continuous part of the spectrum. We also note that the leading eigenvalue λ_1(A) increases as a function of ρ. Moreover, λ_1(A) is located at the boundary ∂σ_ac for ρ = 0 (Fig. 2(a)), whereas λ_1(A) is an outlier for ρ = 0.5 (Fig. 2(b)).
In Figs. 2(c) and 2(d) we provide a more detailed view of the eigenvalues λ_1 and λ_2. We observe that both λ_1 and λ_2 are monotonically increasing functions of ρ, and that there is a continuous transition from a gapless phase for ρ < ⟨J²⟩/(c⟨J⟩²) − 1 ≈ 0.22 to a gapped phase for ρ > ⟨J²⟩/(c⟨J⟩²) − 1. We observe that the values of λ_1 and λ_2 are universal, in the sense that they depend on the distributions p_J and p_{K^in,K^out} only through the parameters c, ρ, ⟨J⟩, and ⟨J²⟩. Theoretical results are again well corroborated by direct diagonalization results, although finite-size effects are more significant for the spectral gap.
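The quoted threshold can be checked directly from the ensemble parameters. Assuming the expressions λ_isol = c(1+ρ)⟨J⟩ − d for the outlier and |λ_b + d| = √(c(1+ρ)⟨J²⟩) for the boundary (a reconstruction consistent with the transition condition above; Eqs. (23) and (28) are not reproduced here), the outlier detaches from the boundary at ρ* = ⟨J²⟩/(c⟨J⟩²) − 1:

```python
def gap_threshold(c, mu, v):
    # rho* at which lambda_isol = c(1+rho)<J> - d meets the boundary
    # |lambda_b + d| = sqrt(c(1+rho)<J^2>); both expressions are reconstructions.
    J2 = mu ** 2 + v ** 2            # <J^2> for couplings with mean mu, std v
    return J2 / (c * mu ** 2) - 1.0

# The parameters of Fig. 2 (c = 2, mu = 1, v = 1.2) give rho* = 2.44/2 - 1 = 0.22.
```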
Finally, Figs. 2(e) and 2(f) compare the theoretical result ⟨R_1⟩ of Sec. III with the sample average ⟨R⟩ of the quantity R, as defined in Eq. (58). In the gapless phase, we have ⟨R_1⟩ = 0, while in the gapped phase we obtain ⟨R_1⟩ > 0, which is once more reminiscent of a continuous phase transition between a spin-glass phase and a ferromagnetic phase. We observe that in the gapped phase direct diagonalization results are in very good agreement with the theoretical expressions, whereas in the gapless phase there are significant deviations between theory and direct diagonalization. These deviations are due to finite-size effects, which are significant because of our convention to normalize the eigenvectors with Eq. (12). In spite of this, we see that direct diagonalization results slowly converge to the theoretical values as the matrix size n increases.

C. Adjacency matrices of random graphs with power-law degree distributions
In this subsection we analyze the spectral properties of the adjacency matrices of power-law random graphs. Note that the theoretical results in Sec. III are conjectured to hold for random matrices for which the moments of the degree distribution are finite. For power-law random graphs, the moments of the degree distribution diverge and it is therefore a priori not clear whether we can apply the theory of Sec. III in this case.
We consider two ensembles of power-law random graphs, namely, an ensemble without correlations between indegrees and outdegrees (ρ = 0), and an ensemble with perfect degree correlations (ρ > 0), where K_j^in = K_j^out for all nodes j. The ensemble without degree correlations has a prescribed degree distribution with k^in, k^out ∈ {1, 2, . . . , n − 1}, while the ensemble with perfect degree correlations has a prescribed degree distribution with k^in, k^out ∈ {1, 2, . . . , (n − 1)/2} and the normalization M_pow = Σ_{k=1}^{(n−1)/2} k^{−a}. The mean degree is given by c = ζ(a−1)/ζ(a), provided that a > 2 and n → ∞, with ζ(x) the Riemann zeta function. The ensemble of Eq. (68) has a finite mean degree c if a > 2 and n → ∞, while the ensemble of Eq. (69) has a finite degree correlation coefficient ρ if a > 3 and n → ∞.
Power-law random graphs are interesting from a practical point of view, since degree distributions of real-world systems often have tails that are well fitted by power-law distributions [51][52][53][54]. From a theoretical point of view, we expect that the analytic expressions in Sec. III will not describe well the spectral properties of random matrices with power-law degree distributions when a is small enough, since these random graph models display finite-size effects and large fluctuations in the properties of their local neighbourhoods.
We now resort to direct diagonalization in order to gain a better understanding of the statistics of the leading eigenvalue of power-law random graphs. In Fig. 3(a) we plot the sample mean λ_1 of the leading eigenvalue λ_1(A) and the sample mean Re[λ_2] of the real part of the subleading eigenvalue λ_2(A) for the ensemble defined by Eq. (68), with ρ = 0. We observe that the theoretical expressions (23) and (28) for λ_isol and |λ_b|, which are the leading and the subleading eigenvalue, respectively, are in very good correspondence with direct diagonalization results when a ≳ 3. In the regime a ≲ 3, we observe significant deviations between theory and numerical experiments. Such deviations are expected, since c → ∞ for a → 2⁺, and therefore the theoretical expressions for λ_isol and |λ_b| also diverge for a → 2⁺. Analogously, in Fig. 3(b) we present results for λ_1 and Re[λ_2] for the ensemble defined by Eq. (69), with ρ > 0. In this case, the theory works well when a ≳ 4, whereas for a ≲ 4 we observe significant deviations. Indeed, for a → 3⁺ the degree correlation coefficient ρ diverges, and therefore the theoretical expressions for λ_isol and |λ_b| also diverge. Overall, these results show that the relations (23) and (28) work remarkably well for power-law random graphs.
Finally, in Figs. 3(c) and 3(d) we compare the theoretical expression for R_isol, shown in Eq. (24), with the empirical mean R obtained from numerically diagonalizing matrices of sizes n = 2000 and n = 4000. There is a reasonable correspondence between theoretical results and numerical experiments, considering that power-law random graphs exhibit significant finite-size effects and fluctuations. Interestingly, when decreasing a, and thus increasing c, the normalized mean ⟨R_isol⟩/√⟨R²_isol⟩ vanishes at a = 3 and a = 4 for the ensembles defined by the degree distributions (68) and (69), respectively. Since the Perron-Frobenius theorem applies to this ensemble, this is a transition from a delocalized phase (⟨R_isol⟩/√⟨R²_isol⟩ > 0) to a localized phase (⟨R_isol⟩/√⟨R²_isol⟩ = 0), as argued in Sec. III E. In other words, the leading eigenvector is localized when the exponent a that characterizes the decay of the power-law degree distribution is small enough.

VI. STABILITY OF COMPLEX SYSTEMS
We use the results from Sec. III to analyse the stability of stationary states in large networked systems whose Jacobian A is modeled by random matrices of the form (4). In this case, Eq. (1) takes the form (73), where d represents the strength of self-regulation at each node of the network. By requiring that d > 0, the stationary state y = 0 is stable in the absence of interactions between the degrees of freedom. However, when the constituents of the system interact strongly enough, small perturbations or fluctuations around the fixed point y = 0 can propagate through the system and the stationary state can become unstable. In this section, we present a quantitative study of how the architecture of the network of interactions between the constituents of the system affects its stability.
The stability of a networked system can be studied with spectral methods. Indeed, the solution of the linear Eq. (73) is given by (2), which implies that the fixed point y = 0 is stable if Re[λ_j] < 0 for all j ∈ {1, . . . , n}. Hence, the long-time behaviour of y(t) is governed by the leading eigenvalue λ_1 and its associated right and left eigenvectors: if Re[λ_1] < 0, then lim_{t→∞} y(t) = 0 and the stationary state is stable. On the other hand, if Re[λ_1] > 0, then y(t) = e^{λ_1 t}(R_1 · y(0)) L_1 + O(e^{(λ_2−λ_1)t}) and y = 0 is unstable. Notice that the mode that destabilizes the system takes the form of the left eigenvector L_1. The right and left eigenvectors associated with the leading eigenvalue thus contain valuable information about the nature of the modes that destabilize the system. For instance, if the eigenvector L_1 has a positive mean ⟨L_1⟩ > 0, then the instability is reminiscent of a ferromagnetic phase, whereas if ⟨L_1⟩ = 0, then the instability is reminiscent of a spin-glass phase [75,76,85].
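The spectral criterion itself is a one-liner; the sketch below simply checks the sign of the largest real part of the Jacobian spectrum.

```python
import numpy as np

def is_linearly_stable(A):
    # The fixed point y* = 0 of dy/dt = A y is linearly stable
    # iff Re[lambda_j] < 0 for every eigenvalue lambda_j of A.
    return np.linalg.eigvals(A).real.max() < 0.0
```

For example, a 2×2 Jacobian with strong self-regulation on the diagonal (−2 on each node) and weak couplings is stable, while weak self-regulation (−0.5) with couplings of strength 1 is not.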
We study here the stability of large systems coupled through oriented networks defined by random matrices of the type (4), using the analytical expressions for λ_1 and ⟨R_1⟩ given by Eqs. (33) and (34) of Sec. III, respectively. First of all, note that the leading eigenvalue of a networked system converges to a finite value when n diverges, in contrast to the leading eigenvalue (3) of the mean-field model studied by May [17], which diverges for increasing n. As a consequence, networked models are stable in the limit of large n, which seems to resolve the diversity-stability debate [9]. However, it remains of interest to study how network architecture affects the stability of large dynamical systems, since the leading eigenvalue λ_1 depends on the network structure.
Interestingly, for the interaction networks defined in Sec. III, the stability of the stationary state is solely governed by three parameters that characterise the network architecture: the effective mean degree c(1 + ρ), which characterizes the effective number of degrees of freedom each node in the network interacts with; the coefficient of variation v_J := √(⟨J²⟩ − ⟨J⟩²)/⟨J⟩, which characterizes the fluctuations in the coupling strengths between the constituents of the system; and the effective interaction strength α := ⟨J⟩/d, which quantifies the relative strength of the interactions with regard to the rate d of self-regulation. Remarkably, the leading eigenvalue λ_1 that characterizes the system stability depends only on these three parameters, and thus enjoys a high degree of universality.
In order to better understand how the three parameters c(1 + ρ), v_J, and α govern the stability of dynamical systems on oriented graphs, we present in Fig. 4 the phase diagram of the system in the (v_J, c(1 + ρ)) plane for fixed values of α ∈ [0, 1] and for c(1 + ρ) > 1. We choose these parameter regimes because for α > 1 there exists no stable phase, and for c(1 + ρ) < 1 the graph does not have a giant strongly connected component; in the latter regime the system falls apart, in the sense that it is a union of a large number of small isolated subsystems, and thus we are no longer considering the linear stability of a large system of interacting degrees of freedom.
The phase diagram shows the critical connectivity c* (black lines) that separates the stable phase (Re[λ_1] < 0), for systems at low connectivity c(1+ρ), from the unstable phase (Re[λ_1] > 0), for systems at high connectivity c(1+ρ). The critical line is determined by the function (74), which provides the effective connectivity c(ρ + 1) at Re[λ_1] = 0 as a function of α and v_J; in formula (74) we have used the symbol v*² = (1−α²)/α². Since the critical connectivity is finite for all values of α and v_J, it follows that for large enough c(1 + ρ) any dynamical system is unstable, which is consistent with the result of May [17] stating that any large enough, fully connected system is unstable. However, as we see from Eq. (74) and Fig. 4, the phase transition to the stable phase at low connectivities exhibits three qualitatively different regimes, which we discuss in the following paragraphs.
The critical value v* separates a regime at v_J > v*, which has no stable phase, from a regime at v_J < v*, which has a stable phase at low enough connectivities c(ρ + 1) > 1. Hence, for small enough fluctuations in the interaction strengths (v_J < v*), it is possible to stabilize the system by rewiring edges in the graph until the negative correlations between indegrees and outdegrees are large enough. Stabilizing the system by rewiring edges is, however, not possible when v_J > v*.
Moreover, the regime v_J < v* consists of two distinct regimes: a gapped regime, which appears when the fluctuations in the interaction strengths are small (v_J² < 1/α − 1), and a gapless regime, which appears when the fluctuations in the interaction strengths are large (v_J² > 1/α − 1). In Fig. 4, these two regimes are separated by the red dotted line. In the gapped regime the leading eigenvalue is an outlier and the critical connectivity c* is independent of v_J². This implies that fluctuations do not affect the system stability when the leading eigenvalue is an outlier. On the other hand, in the gapless regime the leading eigenvalue is part of the boundary of the continuous spectrum and the critical connectivity c* decreases as 1/v_J². In this regime fluctuations in interaction strengths render the system less stable. The differences between the gapped and gapless regimes can be understood in terms of the nature of the destabilizing mode. In the gapped regime, the mode that destabilizes the system is ferromagnetic, i.e., ⟨L_1⟩ > 0, whereas in the gapless regime, the mode that destabilizes the system is spin-glass-like, i.e., ⟨L_1⟩ = 0. Hence, increasing the fluctuations v_J for fixed values of the mean strength α does not affect the ferromagnetic mode, which gives an intuitive understanding of why the location of the outlier is independent of v_J.

[FIG. 4: Stability diagram of systems coupled through random matrices of the form (4). The stability diagram is universal and only depends on three parameters: the effective connectivity c(ρ + 1), the coefficient of variation v_J = √(⟨J²⟩ − ⟨J⟩²)/⟨J⟩, and the mean interaction strength α = −⟨J⟩/d. Black lines separate the unstable phase at large effective connectivity c(ρ + 1) from the stable phase at small connectivity c(ρ + 1) for a given value of α. The red line separates the gapped phase at small v_J from the gapless phase at high v_J.]
Finally, we can quantify the overall stability of systems coupled through random matrices (4) in terms of a single parameter a_stab, defined as the area of the region in Fig. 4 where the system is stable and c(1 + ρ) > 1. The area a_stab is a monotonically decreasing function of α, which approaches a_stab → 0 as α → 1 and a_stab → ∞ as α → 0. Thus, increasing the average interaction strength between the elements of a networked system, in the sense that ⟨J⟩ approaches d, makes the system less stable.
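Since formula (74) is not reproduced here, the sketch below uses a reconstructed critical line (constant, c* = 1/α, in the gapped regime; decaying as 1/v_J² in the gapless regime; the two branches match at v_J² = 1/α − 1) to evaluate c* and the area a_stab numerically. The precise expressions are an assumption consistent with the limits stated in the text.

```python
import math

def c_star(vJ, alpha):
    # Reconstructed critical connectivity c*(vJ; alpha): independent of vJ in
    # the gapped regime and decaying as 1/vJ^2 in the gapless regime (assumed).
    if vJ ** 2 < 1.0 / alpha - 1.0:
        return 1.0 / alpha                         # gapped: outlier crosses zero
    return 1.0 / (alpha ** 2 * (1.0 + vJ ** 2))    # gapless: boundary crosses zero

def a_stab(alpha, steps=5000):
    # Area of the stable region with c(1+rho) > 1 in the (vJ, c(1+rho)) plane,
    # integrated up to v* with v*^2 = (1 - alpha^2)/alpha^2.
    v_star = math.sqrt(1.0 / alpha ** 2 - 1.0)
    dv = v_star / steps
    return sum(max(c_star((i + 0.5) * dv, alpha) - 1.0, 0.0)
               for i in range(steps)) * dv
```

Consistent with the text, a_stab computed this way decreases monotonically in α and vanishes as α → 1.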

VII. EXTENSIONS
Here we extend the theory of Sec. IV to random matrices with diagonal disorder and to non-oriented random matrices. We present relations for their outliers, for the boundary of the continuous part of the spectrum, and for the associated right (left) eigenvectors.

A. Random matrices with diagonal disorder
We consider random matrices of the form (75), where J_n and C_n are defined in exactly the same way as in (4), but where D_n is now a diagonal matrix with entries [D_n]_jj = D_j that are i.i.d. random variables with probability distribution p_D(x). We assume that the support of p_D is a compact subset of the real line. In this case, the theory of Sec. IV applies with some slight modifications. We first derive a set of relations that are equivalent to (39) and (40), but that take into account the fact that the diagonal elements in (75) are not constant. The eigenvector elements R_k^(j) now satisfy the relation (76), for all j ∈ ∂_k^in and k ∈ [n], and the relation (77), for all j ∈ [n]. Note that if D_k = d, then (76) and (77) are identical to (39) and (40).
Following the same ensemble-averaging procedure as laid out in Sec. IV D, we obtain a set of relations that extend (49)–(51) to the case with diagonal disorder; in the specific case p_D(x) = δ(x + d) we recover (49)–(51). The outliers of (75) solve the corresponding outlier equation, the boundary of the continuous part of the spectrum consists of the values λ_b ∈ C for which the corresponding condition admits a normalizable solution, and the moments of the right eigenvectors associated with either λ = λ_isol or λ = λ_b are given by the relations (82)–(84). The relations (82)–(84) generalize the relations (52)–(54), which hold for constant D = d, and the relations derived in [60] for graphs without degree correlations, i.e., for p_{K^in,K^out}(k^in, k^out) = p_{K^in}(k^in) p_{K^out}(k^out).

B. Non-oriented random matrices
We consider random matrices of the form (85), where C̃_n is the adjacency matrix of a symmetric random graph with a prescribed degree distribution p_deg(k), and where J̃_n is a random matrix with zero entries on the diagonal and with off-diagonal pairs (J̃_jk, J̃_kj) that are i.i.d. random variables with distribution p_{J̃1,J̃2}(x, y); p_{J̃1,J̃2}(x, y) has the symmetry property p_{J̃1,J̃2}(x, y) = p_{J̃1,J̃2}(y, x). Note that if p_{J̃1,J̃2}(x, y) = (1/2) p_J(x)δ(y) + (1/2) p_J(y)δ(x), then (85) is a special case of the oriented model (4) with degree distribution (86), whereas if p_{J̃1,J̃2}(x, y) = δ(x − y) p_J(x), then A_n is a symmetric matrix.
As derived in Appendix C, the eigenvector entries R_k^(j) satisfy the relations (87) and (88), for all k ∈ [n] and j ∈ ∂_k, where G_k^(j) is the k-th diagonal element of the resolvent matrix of A_{n−1}^(j). Analogously, the entries R_j of a right eigenvector of the matrix A_n satisfy the relations (89) and (90). Note that in the special case of oriented random matrices, G_k^(j) = G_k = 1/(−λ − d), and therefore for oriented matrices the relations (87) and (89) are equivalent to the relations (39) and (40).
In order to take the limit n → ∞, we define the joint distributions (91) and (92). Following an approach analogous to that of Sec. IV C, we use the recursion relations (87) and (88) to derive the recursive distributional equation (93). Analogously, we use the relations (89) and (90) to obtain the distributional equation (94). Eqs. (93) and (94) can be solved with a population dynamics algorithm, as described in Refs. [76,86,87]. As before, the outliers λ_isol and the boundary λ_b of the continuous part of the spectrum are found as the values of λ for which the relations (93) and (94) admit a normalizable solution. Moreover, for a given value of λ ∈ ∂σ_ac, the relations (93) and (94) provide us with the distribution p_R(r) = ∫ d²g p_{G,R}(g, r) of the entries of the right eigenvector associated with λ.
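A population dynamics solver for equations of this type can be sketched as follows. Since Eqs. (93) and (94) are not reproduced here, the recursion below is a simplified oriented toy model, R = λ⁻¹ Σ_{i=1}^K J_i R_i with K Poisson-distributed: its continuous-spectrum boundary |λ_b| = √(c⟨J²⟩) is recovered as the point where the average log-growth of ⟨|R|²⟩ under the iteration vanishes.

```python
import numpy as np

def growth_rate(lam, c, J_sampler, pop=4000, sweeps=15, seed=1):
    # Population dynamics for the toy recursion R = (1/lam) sum_{i<=K} J_i R_i,
    # K ~ Poisson(c). Returns the average log-growth per sweep of <|R|^2>;
    # the boundary of the continuous spectrum sits where this rate crosses zero.
    rng = np.random.default_rng(seed)
    R = rng.standard_normal(pop) + 1j * rng.standard_normal(pop)
    R /= np.sqrt(np.mean(np.abs(R) ** 2))          # start from a normalized population
    growth = 0.0
    for _ in range(sweeps):
        K = rng.poisson(c, size=pop)
        new = np.empty(pop, dtype=complex)
        for a in range(pop):
            idx = rng.integers(0, pop, size=K[a])  # draw K random population members
            new[a] = (J_sampler(rng, K[a]) * R[idx]).sum() / lam
        norm2 = np.mean(np.abs(new) ** 2)
        growth += 0.5 * np.log(norm2)
        R = new / np.sqrt(norm2)                   # re-normalize after every sweep
    return growth / sweeps
```

For unweighted couplings (J = 1) and c = 2, the rate is positive for |λ| < √2 and negative for |λ| > √2, locating the boundary of the disk.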

VIII. DISCUSSION
Random matrices have been used to study the linear stability of large dynamical systems of interacting degrees of freedom [12,17,[40][41][42][43][44][45][46][47][48][49][50]. A common feature of these models is that each constituent interacts with a number of degrees of freedom that increases with system size, and therefore the system is unstable when it is large enough. It is however more realistic to consider systems defined on sparse graphs for which each constituent interacts with a finite number of other constituents, independent of the system size. For models on sparse graphs, system stability is independent of system size and the question that arises is how network architecture affects system stability. In this paper we have developed a mathematical method to address this problem.
For dynamical systems defined on oriented random graphs with a prescribed degree distribution, as defined in Sec. II, we have shown that system stability is governed by only three network parameters: the effective mean degree c(ρ + 1), the coefficient of variation v_J = √(⟨J²⟩ − ⟨J⟩²)/⟨J⟩, and the relative interaction strength α = ⟨J⟩/d. This result follows from the analytical expression (33) for the leading eigenvalue of the adjacency matrix of the graph of interactions between the constituents of the system.
From the phase diagram we obtain the following conclusions. First, negative correlations between indegrees and outdegrees stabilise large dynamical systems, whereas the mean coupling strength α and the fluctuations v_J in the couplings render systems less stable. Second, when the fluctuations v_J of the coupling strengths are small enough, the stability is controlled by an outlier and is independent of v_J. On the other hand, when v_J is large enough, the leading eigenvalue is determined by the boundary of the continuous part of the spectrum and the system stability decreases as a function of v_J. Moreover, in the first scenario the unstable mode is ferromagnetic (⟨L_1⟩ > 0), whereas in the second scenario it is spin-glass-like (⟨L_1⟩ = 0). Finally, systems with fluctuations v_J larger than the critical value v* = √((1−α²)/α²) do not have a stable phase, no matter how strong the negative correlations between indegrees and outdegrees are.
Our results rely on a spectral theory for the eigenvalue outliers and the boundary of the continuous part of the spectrum of large sparse non-Hermitian random matrices. This theory also provides us with the statistics of the entries of the right and left eigenvectors associated with outlier eigenvalues or with eigenvalues located at the boundary of the continuous spectrum. Because spectra of directed graphs appear in various research areas, the spectral theory presented in this paper can also be applied to problems other than the linear stability analysis of randomly coupled differential equations. A first example of an application is the stability of dynamical systems in discrete time [88], which are relevant for the systemic risk of networks of banks connected through financial contracts [8]. For discrete-time systems the stability is controlled by the spectral radius r(A) = max {|λ_1|, |λ_2|, . . . , |λ_n|}: when r(A) > 1 the system is unstable, and when r(A) < 1 it is stable. A second application is the analysis of spectral algorithms that use the right or left eigenvector associated with the (sub)leading eigenvalue, e.g., spectral clustering algorithms [89,90], centrality measures based on eigenvectors [91][92][93], or the low-rank matrix estimation problem [94]. Moreover, detectability thresholds of spectral algorithms often depend on the location of the leading and subleading eigenvalues [90,[95][96][97]. A third application is the analysis of continuous phase transitions on networks for which the leading eigenvalue or the spectral radius of a nonsymmetric matrix determines the phase transition threshold; examples are the threshold for the onset of a susceptible-infected-susceptible epidemic [98] or the percolation transition [99,100].
A fourth application is the analysis of stochastic processes: the stationary state of a Markov process is the right (or left) eigenvector associated with the leading eigenvalue of a Markov matrix [101], and the values of the cumulant generating function of a time-additive observable can be expressed as the leading eigenvalue of a Markov matrix [102-106]. Finally, we remark that the subleading eigenvalue provides information about the finite-time dynamics of a set of randomly coupled differential equations [107], and not only about their asymptotic stability. Taken together, we conclude that the spectral theory presented in this paper can be used in various contexts.

The distribution $p_E\big(e\,\big|\,\{k^{\rm in}_j, k^{\rm out}_j\}_{j\in[1,n]}\big)$ is the uniform distribution over the set of all simple oriented graphs with the given degree sequence $\{k^{\rm in}_j, k^{\rm out}_j\}_{j\in[1,n]}$. Note that the distribution $p_E\big(e\,\big|\,\{k^{\rm in}_j, k^{\rm out}_j\}_{j\in[1,n]}\big)$ defines the configuration model considered in section 13.2 of Ref. [2] or in Refs. [67,68,108].

Algorithm
We detail the algorithm we use in this paper to sample graphs from the ensemble defined in A 1. We detail the algorithm for graphs with a prescribed degree distribution of the type
$$p_{K^{\rm in},K^{\rm out}}(k^{\rm in},k^{\rm out}) = p_{K^{\rm in}}(k^{\rm in})\Big[(1-q)\,\delta_{k^{\rm in},k^{\rm out}} + q\,p_{K^{\rm in}}(k^{\rm out})\Big].$$
This algorithm consists of the following steps:
1. We generate a sequence of n i.i.d. random variables $k^{\rm in}_j$ from the distribution $p_{K^{\rm in}}$;
2. We generate a sample of n i.i.d. Bernoulli random variables $x_j \in \{0,1\}$, which take the value $x_j = 1$ with probability q and $x_j = 0$ with probability $1-q$;
3. If $x_j = 0$, then we set $k^{\rm out}_j = k^{\rm in}_j$;
4. We generate a random permutation $\zeta$ on the set of indices $j \in [1,n]$ for which $x_j = 1$;
5. If $x_j = 1$, then we set $k^{\rm out}_j = k^{\rm in}_{\zeta(j)}$;
6. For each j, we attribute $k^{\rm in}_j$ insockets and $k^{\rm out}_j$ outsockets to the corresponding node;
7. We randomly connect pairs of insockets and outsockets. We start with the node with the highest total degree $k^{\rm in}_j + k^{\rm out}_j$: we select uniformly and randomly $k^{\rm in}_j$ outsockets and $k^{\rm out}_j$ insockets associated with the other nodes in the graph. Every time two sockets are connected, we create the corresponding edge. We do not allow for self-links, multiple edges, or non-oriented edges;
8. Sometimes step 7 fails because connecting two sockets would create either a self-link, a multiple edge, or a non-oriented edge. In this case, we restart step 7 all over again. We continue until the algorithm has found a proper set of links that defines a simple oriented random graph.
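The steps above can be sketched in a few lines of Python. This is a simplified illustration, not the paper's implementation: the indegrees are drawn from a Poisson distribution as an example, the helper name `sample_oriented_graph` is ours, and step 7 is simplified in that all sockets are matched in one uniformly random pass instead of proceeding node by node from the highest total degree:

```python
import numpy as np

def sample_oriented_graph(n, q, k_mean, rng, max_tries=2000):
    """Sample the edge set of a simple oriented graph (steps 1-8, simplified)."""
    k_in = rng.poisson(k_mean, size=n)           # step 1 (example: Poisson indegrees)
    x = rng.random(n) < q                        # step 2: Bernoulli variables x_j
    k_out = k_in.copy()                          # step 3: x_j = 0 copies the indegree
    idx = np.flatnonzero(x)
    k_out[idx] = k_in[rng.permutation(idx)]      # steps 4-5: permute indegrees
    heads = np.repeat(np.arange(n), k_in)        # step 6: insockets
    tails = np.repeat(np.arange(n), k_out)       # step 6: outsockets
    for _ in range(max_tries):                   # steps 7-8: match sockets, retry on failure
        rng.shuffle(tails)
        edges = set()
        for u, v in zip(tails, heads):           # outsocket of u -> insocket of v
            if u == v or (u, v) in edges or (v, u) in edges:
                break                            # self-link, multiple or non-oriented edge
            edges.add((u, v))
        else:
            return edges                         # all sockets matched: graph is simple
    raise RuntimeError("degree sequence too constrained for naive socket matching")
```

Note that the permutation in steps 4-5 leaves the total number of insockets equal to the total number of outsockets, so the matching in step 7 is always well defined.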
This algorithm works very well for the random graphs discussed in this paper, except for the power-law random graphs with a small exponent a, see Section V C. In this case, most of the generated degree sequences are not compatible with the constraint that the graph is simple and oriented. Generating graphs with a power-law degree distribution with a small exponent a requires more sophisticated algorithms, such as algorithms based on Monte Carlo simulations of a Markov chain whose stationary distribution is equal to the distribution $p_E(e)$ that defines the ensemble [109,110].
Appendix B: Percolation transition in the giant strongly connected component of a directed graph

We briefly discuss the percolation properties of the strongly connected component in directed random graphs [73]. The largest strongly connected component of a graph G is the largest subgraph of G that is strongly connected. A graph is strongly connected if for each pair of vertices in the graph, say i and j, the following two conditions are met: (a) there exists at least one path starting in i and ending in j that connects the two vertices, and (b) there exists at least one path starting in j and ending in i that connects the two vertices. We say that a sequence of random graphs has a giant strongly connected component if with probability one the largest strongly connected component is of size O(n).
The paper [73] derives a set of recursion relations for the size $s_n$ of the largest strongly connected component in random graphs with a prescribed joint degree distribution $p_{K^{\rm in},K^{\rm out}}(k^{\rm in},k^{\rm out})$, namely,
$$\lim_{n\to\infty}\frac{s_n}{n} = \sum_{k^{\rm in},k^{\rm out}} p_{K^{\rm in},K^{\rm out}}(k^{\rm in},k^{\rm out})\,\big(1-x^{k^{\rm in}}\big)\big(1-y^{k^{\rm out}}\big),$$
where x and y solve the equations
$$x = \sum_{k^{\rm in},k^{\rm out}} \frac{k^{\rm out}}{c}\, p_{K^{\rm in},K^{\rm out}}(k^{\rm in},k^{\rm out})\, x^{k^{\rm in}}, \qquad y = \sum_{k^{\rm in},k^{\rm out}} \frac{k^{\rm in}}{c}\, p_{K^{\rm in},K^{\rm out}}(k^{\rm in},k^{\rm out})\, y^{k^{\rm out}},$$
with $c$ the mean degree. The percolation transition happens when the trivial fixed point $x = y = 1$ becomes unstable, that is, when
$$\sum_{k^{\rm in},k^{\rm out}} k^{\rm in} k^{\rm out}\, p_{K^{\rm in},K^{\rm out}}(k^{\rm in},k^{\rm out}) = c.$$
Using the definitions (21) and (22), this condition can be written in terms of the mean degree and the degree correlation coefficient.

Appendix C: Recursion relations for the elements of the right eigenvectors of the adjacency matrix of a tree

In this appendix, we derive Eqs. (39)-(40), (76)-(77), and (87)-(90) for the components of a right eigenvector of the adjacency matrix of a tree-like graph. We first derive the general result (87)-(90) and then derive the specific cases (39)-(40) and (76)-(77) for oriented matrices.
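The percolation criterion $\mathbb{E}[K^{\rm in}K^{\rm out}] > c$ is easy to probe numerically. The sketch below is our own illustration, not taken from Ref. [73]: it uses directed Erdős-Rényi graphs, for which indegrees and outdegrees are independent Poisson variables with mean c (such graphs are not strictly oriented, but the criterion takes the same form), so a giant strongly connected component appears at $c^2 = c$, i.e. at $c = 1$:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def largest_scc_fraction(n, c, rng):
    """Fraction of nodes in the largest strongly connected component."""
    A = (rng.random((n, n)) < c / n).astype(int)   # directed Erdos-Renyi graph
    np.fill_diagonal(A, 0)                         # no self-links
    _, labels = connected_components(csr_matrix(A), connection="strong")
    return np.max(np.bincount(labels)) / n

rng = np.random.default_rng(2)
sub = largest_scc_fraction(2000, 0.5, rng)   # E[K_in K_out] = c^2 < c: no giant SCC
sup = largest_scc_fraction(2000, 2.0, rng)   # E[K_in K_out] = c^2 > c: giant SCC
assert sub < 0.05 and sup > 0.3
```

Below the transition the largest strongly connected component contains only a handful of nodes, whereas above it a finite fraction of the graph is strongly connected.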

General result
Let $A_n$ be a matrix of the form
$$A_n = D_n + J_n \circ C_n, \tag{C1}$$
where $D_n$ is a diagonal matrix with entries $D_{jj}$, $J_n$ is a matrix with real-valued entries $J_{jk}$ and with zero-valued diagonal entries, $C_n$ is the adjacency matrix of a tree, which may contain non-oriented edges, or of a large tree-like graph, and $\circ$ denotes the entrywise (Hadamard) product. Let $\lambda$ be an eigenvalue of $A_n$, and let R and L be the right and left eigenvectors associated with $\lambda$, respectively. We assume that n is large enough such that, if $\lambda$ is an eigenvalue of $A_n$, then $\lambda$ is also an eigenvalue of the principal submatrix $A^{(j)}_{n-1}$, which we obtain from $A_n$ by deleting the j-th row and column. In other words, the eigenvalue $\lambda$ is a stable eigenvalue.
Given the above assumptions, we derive an expression for the eigenvector components $R_j$ and $L_j$ in terms of the right and left eigenvector components $R^{(j)}_k$ and $L^{(j)}_k$ on the cavity graph, namely
$$R_j = -G_j(\lambda) \sum_{k\in\partial^{\rm in}_j} J_{jk}\, R^{(j)}_k, \tag{C2}$$
$$L_j = -\big[G_j(\lambda)\big]^{\ast} \sum_{k\in\partial^{\rm out}_j} J_{kj}\, L^{(j)}_k, \tag{C3}$$
for all $j \in [n]$, where $\partial^{\rm in}_j$ and $\partial^{\rm out}_j$ are the in-neighbourhood and the out-neighbourhood of the j-th node as defined in Eqs. (7) and (8), respectively. The asterisk $z^{\ast}$ denotes the complex conjugate of an arbitrary complex number z. The quantities $G_j(\lambda) = \lim_{\eta\to 0}[G_{A_n}(\lambda+\eta)]_{jj}$ are the diagonal elements of the resolvent matrix
$$G_{A_n}(\lambda) = (A_n - \lambda\, \mathbf{1}_n)^{-1}, \tag{C4}$$
with $\mathbf{1}_n$ the identity matrix of size n. The components $R^{(j)}_k$ and $L^{(j)}_k$ satisfy the recursion relations
$$R^{(j)}_k = -G^{(j)}_k(\lambda) \sum_{l\in\partial^{\rm in}_k\setminus\{j\}} J_{kl}\, R^{(k)}_l, \tag{C5}$$
$$L^{(j)}_k = -\big[G^{(j)}_k(\lambda)\big]^{\ast} \sum_{l\in\partial^{\rm out}_k\setminus\{j\}} J_{lk}\, L^{(k)}_l, \tag{C6}$$
for $k \in [n]$ and all $j \in \partial_k$, where $G^{(j)}_k$ are the diagonal elements of the resolvent matrix of $A^{(j)}_{n-1}$. The resolvent elements $G_j$ and $G^{(j)}_k$ obey the following set of recursion relations:
$$G_j(\lambda) = \Big[D_{jj} - \lambda - \sum_{k\in\partial_j} J_{jk} J_{kj}\, G^{(j)}_k(\lambda)\Big]^{-1}, \tag{C7}$$
$$G^{(j)}_k(\lambda) = \Big[D_{kk} - \lambda - \sum_{l\in\partial_k\setminus\{j\}} J_{kl} J_{lk}\, G^{(k)}_l(\lambda)\Big]^{-1}, \tag{C8}$$
for $k \in [n]$ and all $j \in \partial_k$. Note that for symmetric random matrices, Eqs. (C7-C8) are equivalent to the recursion relation for the resolvent derived in [57,86,87]. The relations (C2-C8) have been derived before in Ref. [60]. Here we derive Eqs. (C2-C8) using a different mathematical approach than the one presented in [60]. In particular, we do not map the problem onto a graphical model, but instead we make use of basic algebraic relations. Therefore, the approach discussed in this appendix is, in principle, more direct than the one presented in [60].

Eigenvectors and the resolvent
We assume that the right and left eigenvectors form a biorthonormal set, i.e., $L_\alpha \cdot R_\beta = \delta_{\alpha,\beta}$, with $\alpha, \beta \in [1,n]$; the product $x \cdot y$ denotes the inner product between the vectors x and y, i.e., $x \cdot y = x^{\dagger} y$.
The right and left eigenvectors provide the decomposition
$$G_{A_n}(\lambda) = \sum_{\beta=1}^{n} \frac{R_\beta L^{\dagger}_\beta}{\lambda_\beta - \lambda}.$$
The components $R_{\alpha,j}$ of $R_\alpha$ are thus given by
$$R_{\alpha,j} = \lim_{\eta\to 0} \eta\, \frac{e_j \cdot G_{A_n}(\lambda_\alpha - \eta)\, \mathbf{1}}{L_\alpha \cdot \mathbf{1}}, \tag{C11}$$
where $e_j \cdot G_{A_n}(\lambda_\alpha - \eta)\,\mathbf{1} = [G_{A_n}(\lambda_\alpha - \eta)\,\mathbf{1}]_j$, $\mathbf{1}$ is the column vector with all components equal to one, and $e_j$ is the column vector with all components equal to zero, except for the j-th component, which is equal to one. Analogously, we obtain that
$$L_{\alpha,j} = \lim_{\eta\to 0} \eta\, \frac{\big(\mathbf{1} \cdot G_{A_n}(\lambda_\alpha - \eta)\, e_j\big)^{\ast}}{(\mathbf{1} \cdot R_\alpha)^{\ast}}. \tag{C12}$$
Hence, in order to compute the right-hand side of Eqs. (C11) and (C12), we need an expression for the elements $G_j(\lambda_\alpha - \eta)$ of the resolvent matrix.

Schur formula
In order to compute the elements $G_j(\lambda)$ of the resolvent matrix, we use the Schur formula, which is a common tool in random matrix theory (see for instance section 2.4.3 in Ref. [18] and also Refs. [37,58,111]). Consider a square matrix M in the block form
$$M = \begin{pmatrix} a & b \\ c & d \end{pmatrix},$$
where a and d are square blocks. The matrix
$$s_a = d - c\, a^{-1} b$$
is the Schur complement of block a, and
$$s_d = a - b\, d^{-1} c$$
is the Schur complement of block d. If a and its Schur complement $s_a$ are invertible matrices, then the following block inversion formula holds:
$$M^{-1} = \begin{pmatrix} a^{-1} + a^{-1} b\, s_a^{-1} c\, a^{-1} & -a^{-1} b\, s_a^{-1} \\ -s_a^{-1} c\, a^{-1} & s_a^{-1} \end{pmatrix}, \tag{C16}$$
which we call the Schur formula.
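As a sanity check, the block inversion formula (C16) can be verified numerically on a random matrix; this is our own illustration, and the block size is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
k = 4
M = rng.normal(size=(2 * k, 2 * k))
a, b = M[:k, :k], M[:k, k:]
c, d = M[k:, :k], M[k:, k:]

ainv = np.linalg.inv(a)
sa_inv = np.linalg.inv(d - c @ ainv @ b)   # inverse of the Schur complement of a
Minv = np.linalg.inv(M)

# Compare all four blocks of M^{-1} with the Schur formula.
assert np.allclose(Minv[:k, :k], ainv + ainv @ b @ sa_inv @ c @ ainv)
assert np.allclose(Minv[:k, k:], -ainv @ b @ sa_inv)
assert np.allclose(Minv[k:, :k], -sa_inv @ c @ ainv)
assert np.allclose(Minv[k:, k:], sa_inv)
```

A generic random matrix has invertible blocks with probability one, so the check is well posed without further precautions.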

Recursion formula for the diagonal elements of the resolvent
We use the Schur formula to derive the recursion relations (C7) and (C8) for the diagonal elements of $G_{A_n}$. Before applying the Schur formula, we recall a property of the resolvent matrix $G_{A_n}(\lambda)$ that we use repeatedly in our derivations. For $|\lambda|$ large enough, we can expand $G_{A_n}(\lambda)$ as
$$G_{A_n}(\lambda) = -\sum_{m=0}^{\infty} \frac{A_n^m}{\lambda^{m+1}}.$$
Note that $[A^m]_{jk}$ counts the number of paths of length m that connect node j to node k. If $j \neq k$ are two disconnected nodes, in the sense that they belong to disjoint connected components of the graph represented by A, then $[A^m]_{jk} = 0$ for all $m \geq 1$, and thus $G_{jk}(\lambda) = 0$. We use this property below in order to derive Eqs. (C7) and (C8). We apply the Schur formula (C16) to derive relations for the diagonal elements $G_j(\lambda)$ of $G_{A_n}(\lambda)$. This approach is also explained in Ref. [58]. The Schur formula allows us to remove a single row and column from the original matrix, and thus to express $G_j$ in terms of the resolvent elements $G^{(j)}_{kl}$ of the principal submatrix $A^{(j)}_{n-1}$:
$$G_j(\lambda) = \Big[D_{jj} - \lambda - \sum_{k,l\in\partial_j} A_{jk}\, G^{(j)}_{kl}(\lambda)\, A_{lj}\Big]^{-1}. \tag{C19}$$
Notice that, for j = 1, the formula (C19) is almost identical to the Schur formula (C16). However, if $j \neq 1$, then we have first to perform a permutation that swaps the j-th row and column with the first row and column, subsequently apply the Schur formula (C16), and finally undo the permutation. There is no harm in performing a permutation, since it is an orthogonal transformation.
Let us now use the fact that $C_n$ is the adjacency matrix of a tree, and therefore $C^{(j)}_{n-1}$ is the adjacency matrix of a forest, which consists of $|\partial_j|$ disjoint trees. Consequently, all distinct pairs $k \neq l$ that belong to the set $\partial_j$ are part of disjoint trees of the graph $C^{(j)}_{n-1}$, and thus $G^{(j)}_{kl} = 0$ for all $k \neq l$ that belong to $\partial_j$. Using $G^{(j)}_{kl} = \delta_{kl}\, G^{(j)}_k$ in (C19), we obtain Eq. (C7).
When we repeat the above argument for the resolvent of the matrix $A^{(j)}_{n-1}$, we obtain
$$G^{(j)}_k(\lambda) = \Big[D_{kk} - \lambda - \sum_{l,m\in\partial_k\setminus\{j\}} A_{kl}\, G^{(k),(j)}_{lm}(\lambda)\, A_{mk}\Big]^{-1}, \tag{C20}$$
for all $j \in \partial_k$ and $k \in [n]$. The quantity $G^{(k),(j)}_{lm}$ is an element of the resolvent matrix of the principal submatrix $A^{(k),(j)}_{n-2}$ of $A_n$, obtained from $A_n$ by deleting both the k-th and the j-th rows and columns. For tree graphs, it holds that
$$G^{(k),(j)}_{lm}(\lambda) = \delta_{lm}\, G^{(k)}_l(\lambda), \tag{C21}$$
for all $k \in [n]$, $j \in \partial_k$, and $l, m \in \partial_k$ with $l, m \neq j$. Indeed, the nodes j and l belong then to distinct isolated trees in the forest represented by $A^{(k)}_{n-1}$. Using (C21) in (C20), we obtain Eq. (C8).
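Relation (C7) can be checked numerically on a small tree. The sketch below is our own construction, with arbitrary couplings and diagonal disorder; it compares the direct matrix inverse with the right-hand side of the recursion at node 0:

```python
import numpy as np

# Check of Eq. (C7) at node j = 0: the diagonal resolvent element equals
# [D_00 - lam - sum_k A_0k A_k0 G_k^(0)]^{-1}, where G^(0) is the resolvent
# of the matrix with row and column 0 removed.
rng = np.random.default_rng(4)
n = 6
edges = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5)]     # a tree on 6 nodes
A = np.diag(rng.normal(size=n))                      # diagonal disorder D
for u, v in edges:
    A[u, v], A[v, u] = rng.normal(), rng.normal()    # non-oriented couplings

lam = 0.3 + 0.7j                                     # a point off the spectrum
G = np.linalg.inv(A - lam * np.eye(n))               # full resolvent
G0 = np.linalg.inv(A[1:, 1:] - lam * np.eye(n - 1))  # resolvent with node 0 removed
cavity = {k: G0[k - 1, k - 1] for k in (1, 2)}       # neighbours of node 0
rhs = 1.0 / (A[0, 0] - lam - sum(A[0, k] * A[k, 0] * cavity[k] for k in (1, 2)))
assert np.isclose(G[0, 0], rhs)
```

The off-diagonal cavity elements between the neighbours 1 and 2 vanish here because removing node 0 disconnects them, which is exactly the tree property used in the derivation above.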

Recursion formula for the eigenvector elements
We follow an approach similar to the one in Appendix C 4, but here we apply the Schur formula to the off-diagonal elements $G_{jl}$ of the resolvent, viz.,
$$G_{jl}(\lambda) = -G_j(\lambda) \sum_{k\in\partial_j} A_{jk}\, G^{(j)}_{kl}(\lambda), \qquad l \neq j, \tag{C23}$$
where $G_j$ is given by Eq. (C19). Since the eigenvector recursion relations do not depend explicitly on the diagonal entries $A_{ii}$, we have set $A_{ii} = 0$ for simplicity. Therefore, we can write
$$\eta\, \big[G_{A_n}(\lambda_\alpha - \eta)\, \mathbf{1}\big]_j = \eta\, G_j(\lambda_\alpha - \eta) - G_j(\lambda_\alpha - \eta) \sum_{k\in\partial_j} A_{jk}\; \eta\, \big[G_{A^{(j)}_{n-1}}(\lambda_\alpha - \eta)\, \mathbf{1}\big]_k. \tag{C24}$$
The first term is of order $O(|\eta|)$, and in the second term we can identify
$$\lim_{\eta\to 0} \eta\, \big[G_{A^{(j)}_{n-1}}(\lambda_\alpha - \eta)\, \mathbf{1}\big]_k = R^{(j)}_{\alpha,k}\, \big(L^{(j)}_\alpha \cdot \mathbf{1}\big),$$
when we set $L_\alpha \cdot \mathbf{1} \approx L^{(j)}_\alpha \cdot \mathbf{1}$, which is valid for large enough n. Hence, we obtain
$$R_{\alpha,j} = -G_j(\lambda_\alpha) \sum_{k\in\partial_j} A_{jk}\, R^{(j)}_{\alpha,k}. \tag{C26}$$
We can repeat the above reasoning and apply the Schur formula to the resolvent $(A^{(j)}_{n-1} - \lambda\, \mathbf{1}_{n-1})^{-1}$. As a result, we get
$$R^{(j)}_{\alpha,k} = -G^{(j)}_k(\lambda_\alpha) \sum_{l\in\partial_k\setminus\{j\}} A_{kl}\, R^{(j),(k)}_{\alpha,l}. \tag{C27}$$
However, since $A_n$ is the adjacency matrix of a tree, $R^{(j),(k)}_{\alpha,l} = R^{(k)}_{\alpha,l}$, since $j, l \in \partial_k$ belong to disjoint trees of the forest represented by $A^{(k)}_{n-1}$. As a consequence, we obtain the expression
$$R^{(j)}_{\alpha,k} = -G^{(j)}_k(\lambda_\alpha) \sum_{l\in\partial_k\setminus\{j\}} A_{kl}\, R^{(k)}_{\alpha,l}. \tag{C28}$$
Since the relations (C26) and (C28) hold for any eigenvalue $\lambda_\alpha$ of $A_n$ such that Eq. (C10) is fulfilled, we simply write $\lambda_\alpha = \lambda$, $R^{(j)}_{\alpha,k} = R^{(j)}_k$, and $R_{\alpha,j} = R_j$, and we recover the main Eqs. (C2) and (C5). In an analogous way, one can derive Eqs. (C3) and (C6) for the left eigenvector components.

Adjacency matrices of oriented trees
We discuss the case of adjacency matrices that represent oriented tree-like graphs with arbitrary diagonal matrix elements, as defined by Eq. (C1). The eigenvalues of $A_n$ are then given by the diagonal elements of the matrix, and the resolvent reads
$$G_j(\lambda) = \frac{1}{D_{jj} - \lambda}.$$
As a consequence, the relations (C2), (C3), (C5), and (C6) simplify considerably:
$$R_j = \frac{1}{\lambda - D_{jj}} \sum_{k\in\partial^{\rm in}_j} J_{jk}\, R_k, \qquad L_j = \frac{1}{(\lambda - D_{jj})^{\ast}} \sum_{k\in\partial^{\rm out}_j} J_{kj}\, L_k,$$
and $R^{(i)}_k = R_k$ and $L^{(j)}_k = L_k$ for all $k \in [n]$, $j \in \partial^{\rm in}_k$, and $i \in \partial^{\rm out}_k$.
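These simplifications are easy to verify numerically. The sketch below is our own example with an arbitrary oriented tree and arbitrary diagonal disorder; it checks that the eigenvalues of $A_n$ coincide with the diagonal entries and that the diagonal resolvent elements reduce to $G_j(\lambda) = (D_{jj} - \lambda)^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
A = np.diag(rng.normal(size=n))                    # diagonal disorder D
for child, parent in [(1, 0), (2, 0), (3, 1), (4, 1), (5, 2)]:
    A[parent, child] = rng.normal()                # oriented edge: child -> parent

lam = 0.2 + 0.4j                                   # a point off the spectrum
G = np.linalg.inv(A - lam * np.eye(n))             # resolvent of the oriented tree

# Eigenvalues equal the diagonal entries, and G_j = 1 / (D_jj - lam).
assert np.allclose(np.sort(np.linalg.eigvals(A)), np.sort(np.diag(A)))
assert np.allclose(np.diag(G), 1.0 / (np.diag(A) - lam))
```

Both properties follow from the fact that the adjacency matrix of an oriented tree is triangular in a suitable node ordering.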