Linear stability analysis for large dynamical systems on directed random graphs

We present a linear stability analysis of stationary states (or fixed points) in large dynamical systems defined on random directed graphs with a prescribed distribution of indegrees and outdegrees. We obtain two remarkable results for such dynamical systems. First, infinitely large systems on directed graphs can be stable even when the degree distribution has unbounded support; this result is surprising since their counterparts on nondirected graphs are unstable when the system size is large enough. Second, we show that the phase transition between the stable and the unstable phase is universal in the sense that it depends only on a few parameters, such as the mean degree and a degree correlation coefficient. In addition, in the unstable regime we characterize the nature of the destabilizing mode, which also exhibits universal features. These results follow from an exact theory for the leading eigenvalue of infinitely large graphs that are locally tree-like and oriented, as well as for the right and left eigenvectors associated with the leading eigenvalue. We corroborate analytical results for infinitely large graphs with numerical experiments on random graphs of finite size. We discuss how the presented theory can be extended to graphs with diagonal disorder and to graphs that contain nondirected links. Finally, we discuss the influence of small cycles and how they can destabilize large dynamical systems when they induce strong enough feedback loops.


I. INTRODUCTION
Scientists use networks to depict the causal interactions between the constituents of large dynamical systems [1][2][3][4][5]. Currently, it is not well understood how the stability of a large system is affected by the topology of the underlying interaction network. Relating system stability to network topology is important to understand, among others, how systemic risk in financial markets is governed by the topology of the network of liabilities between financial institutions [6][7][8]; how the resilience of an ecosystem to external perturbations depends on the underlying foodweb of trophic interactions [9][10][11][12][13][14]; and how networks of social interactions determine the spreading of rumours [15][16][17]. As these examples illustrate, in order to reduce risk and instability in dynamical systems it is important to identify topological properties of networks that stabilize large systems.
In order to study the stability of large dynamical systems, we consider the linearized dynamics of a large, complex dynamical system in the vicinity of a stationary state or fixed point. We model this dynamics with a set of randomly and linearly coupled differential equations of the form

∂_t ⃗y(t) = A ⃗y(t), (1)

where t ≥ 0 is the time index, ⃗y(t) = (y_1(t), y_2(t), . . . , y_n(t)) ∈ R^n is a vector, and A is a random matrix that encodes the underlying interaction network between the degrees of freedom. We use the notation ⃗y(t) for column vectors and ⃗y†(t) for their transpose (or conjugate transpose). Models like Eq. (1) appear when linearizing a set of nonlinearly coupled differential equations in the vicinity of a fixed point [18][19][20], as occurs, for example, in the study of neural networks [21][22][23][24][25] and ecosystems [13,20,26,27]. The vector ⃗y then describes the deviation of the system from its fixed point, which is located at the origin, i.e., ⃗y = 0. We will use the generic model, given by Eq. (1), to study how network topology affects the stability of stationary states. Since we aim to develop a better understanding of how network topology affects system stability, we write Eq. (1) as

∂_t y_k(t) = −d y_k(t) + Σ_{j=1}^{n} C_kj J_kj y_j(t), k ∈ {1, 2, . . . , n}, (2)

where d > 0 represents the rate at which an isolated node relaxes to the stationary state, where C_kj ∈ {0, 1} are the entries of the adjacency matrix of a directed graph, and where J_kj ∈ R are the strengths of the couplings between two nodes k and j. Note that Eq. (2) and Eq. (1) are related by

A = −d 1_n + B, (3)

where

B_kj = C_kj J_kj, (4)

and 1_n is the identity matrix. The entries of the adjacency matrix C determine who interacts with whom, while the entries of the interaction matrix J denote the absolute strength of the interactions and whether these are inhibitory (J_kj < 0) or excitatory (J_kj > 0).
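As a concrete illustration, the relation between C, J, and A in Eq. (3) can be assembled in a few lines of numpy; the size n, the connection probability, and the Gaussian weights below are arbitrary choices for this sketch, not values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 1.0

# Random directed adjacency matrix C (entries in {0,1}, no self-loops)
# and i.i.d. Gaussian coupling strengths J (illustrative choices).
C = (rng.random((n, n)) < 0.4).astype(float)
np.fill_diagonal(C, 0.0)
J = rng.normal(0.0, 1.0, size=(n, n))

# A = -d * 1_n + B with B_kj = C_kj * J_kj, cf. Eq. (3).
A = -d * np.eye(n) + C * J

assert np.allclose(np.diag(A), -d)                  # diagonal fixed by d
assert np.allclose(A - np.diag(np.diag(A)), C * J)  # off-diagonal couplings
```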
In the absence of interactions between system constituents (J_jk = 0), the fixed point ⃗y = 0 is stable, as d > 0. However, if the constituents of the system interact strongly enough, then a small perturbation around the fixed point ⃗y = 0 will propagate through the network, grow in size, and destabilize the system. It is the underlying network topology, represented by the adjacency matrix C, and the strength of the interactions, given by J, that determine whether an initial perturbation will grow or fade away. For example, in an online social network, the topology of the network determines whether a rumour spreads throughout the whole system or only reaches a couple of users.
The stability of the fixed point ⃗y = 0 in the dynamics given by Eq. (1) is governed by the sign of the real part of the leading eigenvalue λ_1(A), which is the eigenvalue with the largest real part. As discussed in Appendix A, if Re[λ_1(A)] > 0, then the fixed point is unstable and lim_{t→∞} |⃗y(t)| = ∞. Conversely, if Re[λ_1(A)] < 0, then the fixed point is stable and lim_{t→∞} |⃗y(t)| = 0. In addition, the left eigenvector associated with the leading eigenvalue determines the nature of the destabilizing mode.
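The stability criterion can be checked numerically on a toy example; the 2 × 2 matrix below is an arbitrary stable choice, and the forward-Euler integrator is only a minimal sketch of the dynamics of Eq. (1).

```python
import numpy as np

# A stable example: both eigenvalues equal -1, so Re[lambda_1] < 0.
A = np.array([[-1.0, 2.0],
              [0.0, -1.0]])
lam = np.linalg.eigvals(A)
leading = lam[np.argmax(lam.real)]
assert leading.real < 0

# Forward-Euler integration of dy/dt = A y: |y(t)| -> 0 for a stable fixed point.
y = np.array([1.0, 1.0])
dt = 0.01
for _ in range(5000):
    y = y + dt * (A @ y)
assert np.linalg.norm(y) < 1e-6
```

Note that A above is non-normal, so the norm of ⃗y(t) grows transiently before decaying; only the long-time behavior is dictated by Re[λ_1].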
To model dynamical systems on large networks, we consider that C is the adjacency matrix of a random directed graph with a prescribed degree distribution p_{K^in,K^out}(k, ℓ) of indegrees K^in and outdegrees K^out. This is a paradigmatic model for networked systems, such as the World Wide Web [28,29], neural networks [30][31][32], foodwebs [10], and online social networks [33,34], and it is often called the configuration model [2,4,[35][36][37]] or the uniform model [38]. The percolation properties of random directed graphs are well understood, see Refs. [39][40][41], and recently the spectral properties of random directed graphs have also been studied thoroughly, see Refs. [42][43][44][45][46], but the properties of the leading eigenvalues of the adjacency matrices of random directed graphs have not been studied so far.
In this paper, we perform a linear stability analysis of fixed points in dynamical systems defined on random directed graphs. To this aim, we present a detailed analysis of the leading eigenvalue λ_1(A) of the adjacency matrices A of random directed graphs with a prescribed degree distribution and with randomly weighted links. First, building on Ref. [44], we derive exact analytical expressions for the typical value of λ_1 in the limit of infinitely large n. In addition, we derive in this limit exact expressions for the statistics of the entries of the right and left eigenvectors associated with λ_1. Second, we use these results to depict a phase diagram for the linear stability of fixed points in dynamical systems defined on large directed networks. Third, the theoretical results for infinitely large graphs are compared with numerical results for graphs of finite size, which include random graphs with power-law degree distributions.
Two implications of these results are surprising enough that they deserve further emphasis. First, we find that dynamical systems on infinitely large, random, and directed graphs can be stable, even when the degree distribution has unbounded support. This result is surprising because dynamical systems on random nondirected graphs with a degree distribution that has unbounded support are unstable if the system size is large enough. Indeed, the leading eigenvalue of a nondirected random graph scales as λ_1 ∼ √k_max [47][48][49], where k_max is the expected largest degree of the graph, and therefore the leading eigenvalue of a nondirected graph diverges for large n. In contrast, in this paper we obtain that the leading eigenvalue of a random directed graph with a prescribed degree distribution is in general finite for n → ∞, even when k_max diverges. Hence, models on random directed graphs are significantly more stable than their counterparts on random nondirected graphs.
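The contrast between the two cases can be made concrete with a star (hub) graph, for which the spectra are known exactly; the hub degree below plays the role of k_max.

```python
import numpy as np

k = 100  # hub degree, playing the role of k_max

# Nondirected star: hub 0 linked to k leaves; the leading eigenvalue is sqrt(k).
C_und = np.zeros((k + 1, k + 1))
C_und[0, 1:] = 1.0
C_und[1:, 0] = 1.0
lam_und = np.max(np.linalg.eigvals(C_und).real)
assert abs(lam_und - np.sqrt(k)) < 1e-8

# Directed star: all edges oriented from the hub to the leaves; the matrix is
# nilpotent, so every eigenvalue is zero no matter how large k becomes.
C_dir = np.zeros((k + 1, k + 1))
C_dir[0, 1:] = 1.0
lam_dir = np.max(np.abs(np.linalg.eigvals(C_dir)))
assert lam_dir < 1e-6
```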
Second, we obtain a universal phase diagram for the stability of networked systems on random directed graphs with a prescribed degree distribution. Put another way, we show that the leading eigenvalue of these random graphs only depends on a few system parameters, including the mean degree and a parameter that characterizes the correlations between indegrees and outdegrees.
Both the stability and universality of dynamical systems defined on random directed graphs are rooted in a common fact: for large enough n, the local neighborhood of a randomly selected node is with probability one a tree graph that contains only unidirectional links. We call this the locally tree-like and oriented property. Using this property, we derive a set of recursion relations for the components of the right and left eigenvectors associated with the leading eigenvalue. These recursion relations were first derived in Ref. [44] using the cavity method [42,45,[50][51][52][53]], a method borrowed from the statistical physics of spin glasses [54,55]. In the present paper, we present an alternative derivation of the recursion relations based on the Schur formula [56], which we believe is simpler to understand and thus more insightful.
The outline of the paper is the following. In Sec. II, we define the random matrices and spectral quantities we study in this paper. In Sec. III, we present an overview of the theoretical results derived in this paper. In Sec. IV, we apply these theoretical results to a linear stability analysis of stationary states in networked systems. In Sec. V, we compare theoretical results for infinitely large matrices with numerical data for matrices of finite size. In Sec. VI, we discuss extensions of the theory presented in Sec. III to the cases of adjacency matrices with diagonal disorder and adjacency matrices of random graphs that contain nondirected links. Lastly, in Sec. VII, we present a discussion of the main results. A detailed description of the mathematical derivations is presented in the appendices. In Appendix A, we show that a linear set of randomly coupled differential equations, of the form given by Eq. (1), is stable if and only if all the eigenvalues of A have negative real parts. Appendix B details the algorithm we use to generate graphs with a prescribed degree distribution, and in Appendix C, we discuss properties of oriented ring graphs. In Appendix D, we show that the algebraic multiplicity of the zero eigenvalue of a random directed graph is related to the size of its strongly connected component. Lastly, in Appendices E-I, we derive recursion relations for the entries of right and left eigenvectors of random and directed graphs, which are based on the Schur formula.

A. Notation
We use lower case symbols for deterministic variables, e.g., x and y. We write (column) vectors as ⃗x and ⃗y, while for adjoint row vectors we write ⃗x† and ⃗y†. The inner product is ⃗x • ⃗y = ⃗x† ⃗y = Σ_{k=1}^{n} x*_k y_k, where x*_k is the complex conjugate of x_k. Matrices are written in boldface, e.g., x and y. We write random variables in upper case, e.g., X and Y. The probability distribution of a random variable X is denoted by p_X(x). There are a few exceptions to the use of upper case letters to represent random quantities. For example, we use the notation λ_j(A) to denote the j-th eigenvalue of a random matrix A, and we write p_X(x; A) for the probability distribution of a random variable X that depends on the matrix A. We denote averages with respect to the distribution p_A(a) by ⟨•⟩. We denote the identity matrix by 1_n and we use the shorthand [n] = {1, 2, . . . , n}. We write ∫_R dx f(x) for an integral over the real line and ∫_C d²z f(z) = ∫ dx dy f(x + iy) for an integral over the complex plane. We denote the Dirac distribution over the real line by δ(x) and the Dirac distribution over the complex plane by δ(z) = δ(x)δ(y), where z = x + iy ∈ C.

II. SYSTEM SETUP AND DEFINITIONS
In this section, we define the random matrices and the spectral properties we study in this paper.
A. Adjacency matrices of random directed graphs with a prescribed degree distribution

We consider random matrices A, as defined by Eq. (3), where J is a square matrix of size n with real entries J_jk ∈ R that are i.i.d. random variables drawn from a distribution p_J, and where C is the adjacency matrix of a random and directed graph G of size n with a prescribed degree distribution p_{K^in,K^out}(k, ℓ) of indegrees K^in and outdegrees K^out [2,4,37]; note that we call the number of vertices of a graph its size, and not the number of links.
For a simple graph G the entries of the adjacency matrix satisfy C_jk ∈ {0, 1} and C_jj = 0. We use the convention that C_jk = 1 if the graph G has a directed edge from node j to node k. Therefore, the indegree K^in_j of the j-th node equals the number of nonzero elements in the j-th column of C, and the outdegree K^out_j of the j-th node equals the number of nonzero elements in the j-th row,

K^in_j = Σ_{k=1}^{n} C_kj and K^out_j = Σ_{k=1}^{n} C_jk.

The inneighborhood ∂^in_j and the outneighborhood ∂^out_j of the j-th node are defined by

∂^in_j = {k ∈ [n] : C_kj = 1}

and

∂^out_j = {k ∈ [n] : C_jk = 1},

respectively, and

∂_j = ∂^in_j ∪ ∂^out_j

is the neighborhood of node j.
We say that G is a random graph with a prescribed degree distribution p_{K^in,K^out}(k, ℓ) if the following properties hold: (i) the degrees (K^in_j, K^out_j) are i.i.d. random variables with a joint probability distribution p_{K^in,K^out}(k, ℓ) and with the additional constraint Σ_{j=1}^{n} K^in_j = Σ_{j=1}^{n} K^out_j; and (ii) given the degrees, the nodes are connected randomly, and hence the edges of G are generated by the configuration model [2,4,37]. In Appendix B, we describe in detail the algorithm we use to sample random graphs with a prescribed degree distribution.
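A minimal stub-matching sketch of the configuration model reads as follows; it is simplified with respect to the algorithm of Appendix B in that self-loops and multiedges are not removed.

```python
import numpy as np

def sample_directed_configuration_model(k_in, k_out, rng):
    """Pair in-stubs with out-stubs uniformly at random (multigraph version)."""
    assert sum(k_in) == sum(k_out)
    in_stubs = np.repeat(np.arange(len(k_in)), k_in)    # one entry per in-stub
    out_stubs = np.repeat(np.arange(len(k_out)), k_out)  # one entry per out-stub
    rng.shuffle(in_stubs)                                # random matching
    C = np.zeros((len(k_in), len(k_in)))
    for j, k in zip(out_stubs, in_stubs):
        C[j, k] += 1.0   # edge j -> k
    return C

rng = np.random.default_rng(1)
k_in = [1, 2, 0, 3]
k_out = [2, 1, 2, 1]
C = sample_directed_configuration_model(k_in, k_out, rng)
assert np.all(C.sum(axis=0) == k_in)   # indegrees: column sums of C
assert np.all(C.sum(axis=1) == k_out)  # outdegrees: row sums of C
```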
In the specific case when J_jk = 1 and d = 0, A is the adjacency matrix of a random directed graph. The variables J_jk are the weights associated with the links of the graph represented by the adjacency matrix C, and hence for J_jk ≠ 1 the matrix A is the adjacency matrix of a weighted graph. The constant parameter d affects the spectral properties of A in a trivial manner, but it plays an important role in the stability analysis of dynamical systems.

B. Ensemble parameters
The random matrix ensemble, defined by Eq. (3), depends on the following parameters: the distribution p_J of the weights J_ij, the joint distribution p_{K^in,K^out} of indegrees and outdegrees, the real number d, and the size n.
We often use the moments of p_J and p_{K^in,K^out} to specify the model of interest. The m-th moment of p_J is defined by

⟨J^m⟩ = ∫_R dJ p_J(J) J^m,

and the (m, o)-th moment of p_{K^in,K^out} is given by

⟨(K^in)^m (K^out)^o⟩ = Σ_{k,ℓ} k^m ℓ^o p_{K^in,K^out}(k, ℓ).

Among those, important parameters are the mean degree

c = ⟨K^in⟩ = ⟨K^out⟩ (12)

and the degree correlation coefficient

ρ = (⟨K^in K^out⟩ − c²)/c², (13)

so that c(ρ + 1) = ⟨K^in K^out⟩/c. The mean degree is equal to the average number of edges that enter or leave a uniformly and randomly selected vertex in the graph. The parameter c⟨J⟩ is the average interaction strength felt by a degree of freedom in the dynamical system governed by Eq. (2). The degree correlation coefficient ρ characterizes the correlations between indegrees and outdegrees of vertices in the graph. If ⟨K^in_j K^out_j⟩ = ⟨K^in_j⟩⟨K^out_j⟩, then ρ = 0, which means that indegrees and outdegrees are uncorrelated. If ρ > 0 (ρ < 0), then indegrees and outdegrees are positively (negatively) correlated.
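Both parameters are easily estimated from sampled degree sequences. The normalization of ρ used below, ρ = (⟨K^in K^out⟩ − c²)/c², is an assumption chosen so that c(ρ + 1) equals the branching ratio ⟨K^in K^out⟩/c; the maximally correlated Poisson degrees are an arbitrary test case.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200000

# Positively correlated degrees: identical indegree and outdegree per node.
k = rng.poisson(3.0, size=n)
k_in, k_out = k, k.copy()

c = 0.5 * (k_in.mean() + k_out.mean())
rho = (np.mean(k_in * k_out) - c**2) / c**2   # assumed normalization, see text

# For K^in = K^out ~ Poisson(3): <K^2> - c^2 = c, so rho ~ 1/c ~ 0.33.
assert abs(c - 3.0) < 0.05
assert 0.25 < rho < 0.45
```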

C. Topology of directed graphs
We discuss properties of the topology of random, directed graphs with a prescribed degree distribution that are relevant for understanding their spectra, namely, connected components, percolation transitions, the locally tree-like and oriented structure, and oriented rings.

Connected components of directed graphs
Connected components are subgraphs that characterize the topology of a directed graph. In particular, the connected components determine which nodes in the graph are affected by a local perturbation.
The topology of a directed graph can be depicted with a bow-tie diagram, see Fig. 1 and Refs. [28,40,41,57]. The bow-tie diagram depicts the following subgraphs of a directed graph: the largest strongly connected component (SCC), the incomponent (IN), and the outcomponent (OUT). Besides these three components, directed graphs also have a largest weakly connected component (WC) and isolated components (IC), also depicted in Fig. 1. Finally, directed graphs contain tendrils [40,41]. Since tendrils play a minor role in the spectral properties of directed graphs, we omit them in Fig. 1.
We present definitions of the abovementioned subgraphs. The SCC is the largest subgraph that is strongly connected. A subgraph is strongly connected if for each pair of vertices in the subgraph, say j and k, the following two conditions are met: (a) there exists at least one path starting in j and ending in k, and (b) there exists at least one path starting in k and ending in j. The IN consists of all nodes that can reach the strongly connected component, and the OUT consists of all nodes that can be reached from the strongly connected component (by following the edges of the directed graph). The WC is the largest connected component obtained by ignoring the directionality of edges. The tendrils consist of all vertices that belong to the weakly connected component, but do not belong to the incomponent and outcomponent. Finally, the IC are connected subgraphs that are disconnected from the largest weakly connected component.
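The decomposition can be computed for any finite graph with a standard strongly-connected-components algorithm; the sketch below uses Kosaraju's algorithm (a convenient textbook choice, not a method from this paper) on a toy bow-tie graph.

```python
from collections import defaultdict

def sccs(edges, nodes):
    """Kosaraju's algorithm: strongly connected components of a directed graph."""
    fwd, bwd = defaultdict(list), defaultdict(list)
    for j, k in edges:
        fwd[j].append(k)
        bwd[k].append(j)

    def dfs(u, adj, out):
        # Iterative depth-first search; nodes are appended to `out` in
        # post-order (i.e., when their exploration finishes).
        stack = [(u, iter(adj[u]))]
        seen.add(u)
        while stack:
            v, it = stack[-1]
            for w in it:
                if w not in seen:
                    seen.add(w)
                    stack.append((w, iter(adj[w])))
                    break
            else:
                stack.pop()
                out.append(v)

    order, seen = [], set()
    for u in nodes:
        if u not in seen:
            dfs(u, fwd, order)
    # Second pass on the reversed graph, in reverse finishing order.
    seen, comps = set(), []
    for u in reversed(order):
        if u not in seen:
            comp = []
            dfs(u, bwd, comp)
            comps.append(set(comp))
    return comps

# Toy bow-tie: 0 -> {1,2,3 cycle} -> 4; the cycle is the SCC,
# node 0 belongs to IN and node 4 belongs to OUT.
edges = [(0, 1), (1, 2), (2, 3), (3, 1), (3, 4)]
comps = sccs(edges, range(5))
scc = max(comps, key=len)
assert scc == {1, 2, 3}
```

IN and OUT then follow by checking reachability to and from the SCC, respectively.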

Size of the connected components of random directed graphs with a prescribed degree distribution
For random directed graphs with a prescribed degree distribution p_{K^in,K^out}(k, ℓ), the relative sizes of the connected components are deterministic in the limit of large n. We denote the limiting value of the relative size of the SCC by s_sc (i.e., the fraction of nodes that belong to the SCC), and analogously, we use s_in, s_out, s_wc, s_t and s_ic for the limiting values of the relative sizes of the incomponent, outcomponent, largest weakly connected component, tendrils, and isolated components, respectively.
We say that a random graph has a giant SCC when s sc > 0 and, analogously, we say that a random graph has a giant IN, OUT, or WC when, respectively, s in > 0, s out > 0, or s wc > 0.
For small enough values of c(ρ + 1), it holds that s_sc = 0 and s_wc = 0, whereas for large enough values of c(ρ + 1), it holds that s_sc > 0 and s_wc > 0. The percolation transitions associated with a giant SCC and a giant WC take place at the threshold values of c(ρ + 1) for which the quantities s_sc and s_wc vanish, respectively. Since by definition s_in ≥ s_sc and s_out ≥ s_sc, and s_in = s_out = 0 if s_sc = 0, the percolation transition associated with the IN and OUT is identical to the percolation transition associated with the SCC. Hence, in directed graphs there exist two percolation transitions, namely a transition associated with the SCC and one associated with the WC.
In Ref. [40], an exact set of equations has been derived for the relative sizes of the various connected components in directed graphs; these equations express s_wc, s_in, s_out, and s_sc in terms of two auxiliary quantities a and b that are the smallest nonnegative solutions to a pair of self-consistency equations involving the generating functions of p_{K^in,K^out}. The percolation transition of the SCC happens when s_sc turns positive. Using the definitions (12) and (13) for, respectively, the mean degree c and the degree correlation coefficient ρ, we obtain that at the critical connectivity

c(ρ + 1) = 1, (21)

a giant SCC emerges in a directed random graph with a prescribed degree distribution. Equation (21) implies that random graphs with positively correlated indegrees and outdegrees percolate at lower connectivities than random graphs with negatively correlated indegrees and outdegrees.

Oriented and locally tree-like structure
If the mean degree c is finite, then random graphs with a prescribed degree distribution are locally tree-like and oriented. This means that for large enough n, the finite neighborhood of a randomly selected node is with probability one an oriented tree [58]. We say that a graph is a tree if it is connected and does not contain a cycle, and we say that a graph is oriented if all its edges are unidirectional, i.e., C_ij C_ji = 0 for each pair (i, j). For a precise mathematical definition of locally tree-like graphs, we refer to Section 2.1 of Ref. [38].

Oriented rings
Since random directed graphs with a prescribed degree distribution are locally tree-like, one may think that cycles of finite length are not important to describe their spectral properties in the limit of large n. However, this is only partly true, since in the limit n → ∞ there nevertheless exists a finite number of cycles of finite length ℓ, and these cycles may affect the value of the leading eigenvalue.
We focus on subgraphs that are oriented rings, since only their contribution matters to the spectrum of A. An oriented ring of length ℓ is an ℓ-tuple of distinct nodes i_1, i_2, . . . , i_ℓ for which

C_{i_1 i_2} = C_{i_2 i_3} = · · · = C_{i_{ℓ−1} i_ℓ} = C_{i_ℓ i_1} = 1, (22)

while the corresponding reverse edges are absent. In the limit n → ∞, the average number of oriented rings of length ℓ in a random directed graph with a prescribed degree distribution is given by (see Appendix C)

⟨N_ℓ⟩ = [c(ρ + 1)]^ℓ / ℓ,

and the total number of oriented rings of finite length reads

⟨N⟩ = Σ_ℓ ⟨N_ℓ⟩.

Note that ⟨N⟩ diverges for c(ρ + 1) → 1.
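For the special case ℓ = 3, the number of oriented rings in a finite simple graph can be read off from the trace of C³, since without self-loops every closed walk of length three is a directed triangle, counted once per starting node:

```python
import numpy as np

# Oriented 3-ring plus an extra feed-forward edge: exactly one directed 3-cycle.
C = np.zeros((4, 4))
for j, k in [(0, 1), (1, 2), (2, 0), (0, 3)]:
    C[j, k] = 1.0

# tr(C^3) counts closed walks of length 3; each directed triangle contributes 3.
n_triangles = np.trace(np.linalg.matrix_power(C, 3)) / 3
assert n_triangles == 1.0
```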

Finite matrices
The eigenvalues {λ_α(A)}_{α∈[n]} are the complex roots of the algebraic equation [60]

det(A − λ 1_n) = 0.

We sort the eigenvalues in decreasing order of their real parts, so that

Re[λ_1] ≥ Re[λ_2] ≥ · · · ≥ Re[λ_n].

If an eigenvalue is degenerate, then it appears more than once in the sequence. We call λ_1 the leading eigenvalue of A and λ_2 the subleading eigenvalue.
A right eigenvector ⃗R(A) and a left eigenvector ⃗L(A) associated with an eigenvalue λ_α are nonzero vectors that fulfil

A ⃗R = λ_α ⃗R and ⃗L† A = λ_α ⃗L†.

We use the notation R_j and L_j, with j ∈ [n], for the components or entries of the right and left eigenvectors, respectively. The number m of linearly independent right eigenvectors (or left eigenvectors) is smaller than or equal to the size of the matrix and greater than or equal to the number of distinct eigenvalues of A. If m = n, then the matrix is diagonalizable.
Right and left eigenvectors of A can be chosen biorthonormal,

⃗L†_α ⃗R_β = δ_{αβ}, (28)

where α, β ∈ [m] is a label to identify the m linearly independent right (left) eigenvectors. Biorthonormality is not sufficient to uniquely characterize right and left eigenvectors since they can be rescaled as c_α ⃗R_α and c_α^{−1} ⃗L_α, with c_α ∈ C. In order to uniquely define the right and left eigenvectors, we take the convention that

Σ_{j=1}^{n} R_{α,j} ∈ R⁺, (29)

and we set

Σ_{j=1}^{n} |R_{α,j}|² = 1. (30)

The relation (29) specifies the argument of c_α and the relation (30) specifies its norm. When using the conventions (28)-(30), the norm Σ_{j=1}^{n} |L_{α,j}|² and the argument of Σ_{j=1}^{n} L_{α,j} are functions of the entries of A.
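For a finite diagonalizable matrix, a biorthonormal pair of eigenvector bases can be obtained directly from numpy, since the rows of the inverse of the right-eigenvector matrix are left eigenvectors; the weighted oriented ring below is an arbitrary test matrix with distinct eigenvalues.

```python
import numpy as np

A = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0],
              [4.0, 0.0, 0.0]])  # weighted oriented 3-ring: distinct eigenvalues

lam, R = np.linalg.eig(A)       # columns of R are right eigenvectors
L = np.linalg.inv(R)            # rows of L are the matching left eigenvectors

# Biorthonormality L_alpha^T R_beta = delta_{alpha,beta} holds by construction.
assert np.allclose(L @ R, np.eye(3))
# Each row of L is indeed a left eigenvector: l^T A = lambda l^T.
for a in range(3):
    assert np.allclose(L[a] @ A, lam[a] * L[a])
```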

Infinitely large matrices
In order to characterize properties of random matrices in the limit of n → ∞, we use sets and distributions. The spectrum σ(A) of A is the set of eigenvalues of A. For finite n, σ(A) is discrete. For large n, the closure of the spectrum σ(A) converges to the limit

lim_{n→∞} σ(A) = σ ∪ Γ, (32)

where σ is a deterministic set and Γ is a random set. The deterministic spectrum consists of a continuous part σ_c and a discrete part σ_d. The continuous part consists of a set σ_ac of nonzero Lebesgue measure, which we call the absolutely continuous part, and a set σ_sc of zero Lebesgue measure, which we call the singular continuous part. We will be interested in the boundary ∂σ_ac of the set σ_ac and use the notation λ_b for eigenvalues located at the boundary of σ_ac. The discrete part of the spectrum consists of deterministic outlier eigenvalues, which we denote by λ_isol. We say that λ_isol ∈ σ is an outlier eigenvalue (sometimes also called an isolated eigenvalue) if there exists an ϵ > 0 such that λ_isol is at a distance larger than ϵ from the rest of the spectrum. In the examples considered in this paper, there will be at most one deterministic outlier eigenvalue. Lastly, the limiting spectrum in Eq. (32) may contain a random set Γ that consists of stochastic (outlier) eigenvalues.
The spectral distribution denotes the relative number of eigenvalues that occupy a certain region of the complex plane, and we denote its asymptotic expression by

µ(λ) = lim_{n→∞} (1/n) Σ_{α=1}^{n} δ(λ − λ_α(A)).

The support of the distribution is the closure of the set {λ ∈ C : µ(λ) ≠ 0}. Since in general µ(λ_isol) = 0, the outliers do not belong to the support of µ, and therefore the support of µ is a subset of σ.
We are also interested in the statistics of the components of right and left eigenvectors. Let ⃗R (⃗L) be the right (left) eigenvector associated with an eigenvalue λ. We define the random variable R (L) as a uniformly and randomly sampled entry of the eigenvector. If R and L refer to an outlier, then we use the notation R_isol and L_isol; if R and L refer to an eigenvalue located at the boundary of σ_ac, then we use R_b and L_b.
The distributions of the random variables R and L are defined by

p_R(r|A) = (1/n) Σ_{j=1}^{n} δ(r − R_j)

and

p_L(l|A) = (1/n) Σ_{j=1}^{n} δ(l − L_j),

respectively, where δ(z) is the Dirac-delta distribution in the complex plane. In the limit n → ∞, the distributions p_R(r|A) and p_L(l|A) often converge to the deterministic limits

p_R(r) = lim_{n→∞} ⟨p_R(r|A)⟩ (41)

and

p_L(l) = lim_{n→∞} ⟨p_L(l|A)⟩. (42)

We denote the moments of the limiting distributions p_R(r) and p_L(l) by

⟨R^m⟩ = ∫_C d²r p_R(r) r^m and ⟨L^m⟩ = ∫_C d²l p_L(l) l^m,

where d²r = dRe(r)dIm(r) and d²l = dRe(l)dIm(l). We say that a spectral quantity of a random directed graph is universal if it converges for n → ∞ to a deterministic limit that only depends on the first few moments of the distributions p_J and p_{K^in,K^out}.

III. SPECTRAL PROPERTIES OF INFINITELY LARGE RANDOM AND DIRECTED GRAPHS
In this section, we present the main theoretical results, in the limit of large n, for the spectral properties of adjacency matrices of random directed graphs with a prescribed degree distribution [as defined in Eq. (3)].
The giant SCC plays an important role in the spectrum of random directed graphs. Let us therefore recollect that for directed random graphs with a prescribed degree distribution

s_sc > 0 if c(ρ + 1) > 1, (43)

and

s_sc = 0 if c(ρ + 1) < 1. (44)

This section is organized as follows. First, we discuss in Sec. III A how the spectral distribution µ(λ) depends on the size of the SCC. Second, we discuss in Sec. III B how the deterministic part σ of the spectrum is governed by the SCC. In particular, we show that if c(ρ + 1) > 1, then σ contains a continuous part σ_ac and (possibly) a deterministic outlier λ_isol, both determined by the SCC. On the other hand, if c(ρ + 1) < 1, then the spectrum is σ = {−d}. In Sec. III B we also discuss how the nondeterministic part Γ of the spectrum is determined by oriented ring graphs. Third, in Sec. III C, we present recursion relations for the distribution of the entries of right eigenvectors associated with deterministic outliers λ_isol or with eigenvalues λ_b located at the boundary of σ_ac. Subsequently, we use in Secs. III D and III E these recursive distributional equations to derive analytical results for the boundary of σ_ac and the deterministic outliers λ_isol, respectively. In Sec. III F, we present results for the leading eigenvalue λ_1. We obtain exact analytical expressions for the typical value of the leading eigenvalue λ_1 in the regime where c(ρ + 1) > 1, while for c(ρ + 1) < 1 we show that the leading eigenvalue is governed by oriented ring graphs. Lastly, in Sec. III G, we discuss the spectral gap, and in Sec. III H, we comment on the relation between the derived results and the Perron-Frobenius theorem [61].
We focus on right eigenvectors, since the left eigenvectors of A are simply the right eigenvectors of A^T. Therefore, results for left eigenvectors can be obtained from the expressions for right eigenvectors through the substitutions "R → L" and "in ↔ out".

A. Spectral distribution
We discuss how the spectral distribution µ(λ) of an adjacency matrix of a random directed graph depends on the size of its connected components. In Appendix D, we show that the spectral distribution µ(λ) takes the form

µ(λ) = (1 − s_sc) δ(λ + d) + s_sc μ(λ), (45)

where μ(λ) is a normalized distribution associated with the SCC and supported on σ_ac; see Fig. 9 of Ref. [45] for an example of μ(λ) in the case of directed Erdős-Rényi ensembles.
Eq. (45) implies that the algebraic multiplicity of the −d-eigenvalue is equal to n(1 − s_sc). The high degeneracy of the −d-eigenvalue follows from the fact that (i) random, directed graphs with a prescribed degree distribution are locally tree-like and oriented and (ii) an oriented tree graph has only zero eigenvalues; in the present case, where the diagonal elements are all set equal to −d, all eigenvalues of an oriented tree graph are equal to −d. Hence, a random directed graph develops eigenvalues that differ from −d through the presence of oriented rings, which are defined by Eq. (22) in Sec. II C 4.

B. Spectrum
The spectrum σ ∪ Γ of a random directed graph in the limit of infinitely large n is determined by three topological components, namely the SCC, the nodes that do not belong to the SCC, and the oriented rings of finite length.
If c(ρ + 1) > 1, then the deterministic part σ of the spectrum consists of a continuous set σ ac and (possibly) an outlier λ isol , both determined by the SCC.
In addition, due to the presence of cycles of finite length, random and directed graphs can contain stochastic outliers. Stochastic outliers appear in the spectrum due to the presence of oriented rings in the directed random graph. As shown in Appendix C, the eigenvalues of an oriented ring of length ℓ are located on a circle, centered at −d, of radius

γ = |J_1 J_2 · · · J_ℓ|^{1/ℓ},

where J_j are the random weights attributed to the ring graph. If c(ρ + 1) < 1, then these eigenvalues appear as outliers in the spectrum. On the other hand, if c(ρ + 1) > 1, then the eigenvalues of an oriented ring form stochastic outliers only when γ is large enough, so that they do not belong to σ_ac. As a consequence, unweighted graphs, i.e., with J_ij = 1, do not contain stochastic outliers when c(ρ + 1) > 1. However, if the graph has weighted links, then stochastic outlier eigenvalues exist, even though the probability to observe them is in general small.
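That the spectrum of a single weighted oriented ring (here with d = 0) lies on a circle whose radius is the geometric mean of the weight moduli is easy to verify numerically; the weights below are arbitrary.

```python
import numpy as np

# Weighted oriented ring of length 4 (d = 0): A[i, i+1 mod 4] = J_i.
weights = [1.0, 2.0, 4.0, 8.0]
ell = len(weights)
A = np.zeros((ell, ell))
for i, w in enumerate(weights):
    A[i, (i + 1) % ell] = w

lam = np.linalg.eigvals(A)
radius = np.abs(np.prod(weights)) ** (1.0 / ell)   # |J_1 ... J_ell|^(1/ell)
assert np.allclose(np.abs(lam), radius)            # all eigenvalues on one circle
```

Indeed, the characteristic polynomial is λ^ℓ = J_1 · · · J_ℓ, so the eigenvalues are the ℓ-th roots of the weight product.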

C. Recursive distributional equations for right eigenvectors
In Appendix E, we derive a set of recursive distributional equations for the asymptotic distributions p_R, as defined in Eq. (41), for the right eigenvectors associated with deterministic eigenvalue outliers λ_isol and with eigenvalues located at the boundary of the continuous part σ_ac. In particular, we show that the distribution p_R solves a recursive distributional equation, Eq. (48), in terms of an auxiliary distribution q_R that solves a second recursion relation, Eq. (49). For uncorrelated indegrees and outdegrees, it holds that p_R(r) = q_R(r), and we recover the results from Ref. [44].
The relations (48) and (49) admit, for any value of λ, the trivial solution p_R(r) = δ(r), which cannot be associated with a right eigenvector of the random matrix A. However, the relations (48) and (49) also admit normalizable solutions, for which there exists a positive number α > 0 so that the moment ⟨|R|^α⟩ is finite and nonzero. These normalizable solutions are associated with right eigenvectors of the random matrix A.
As a consequence, we can obtain explicit expressions for the outliers λ_isol and the eigenvalues λ_b ∈ ∂σ_ac by identifying the values of λ for which the relations (48) and (49) admit normalizable solutions. This is the program that we pursue in Appendix G, while we present the main results of those derivations in the next subsections.

D. Eigenvalues at the boundary of the continuous part of the spectrum
The spectrum σ contains a continuous part σ_ac if c(ρ + 1) > 1, as we have shown in Sec. III A. For values λ = λ_b ∈ ∂σ_ac located at the boundary of σ_ac, the relations (48) and (49) admit a normalizable solution. Using this criterion, we obtain in Appendix G that the boundary is the circle

|λ_b + d|² = c(ρ + 1)⟨J²⟩, (53)

centered at −d. The relations (48) and (49) also provide us with the statistics of the right eigenvectors ⃗R_b associated with eigenvalues λ_b. We distinguish between the cases where λ_b ∉ R and λ_b ∈ R. In the former case, the components R_b are complex-valued random variables, while if λ_b ∈ R, then the components are real-valued random variables; their moments are derived in Appendix G. In addition to these results, we show in Appendix H that the distribution p_{R_b}(r) contains a delta peak at the origin due to all nodes that do not belong to the giant incomponent, i.e.,

p_{R_b}(r) = (1 − s_in) δ(r) + s_in p̃_{R_b}(r), (56)

where s_in is the size of the giant incomponent given by Eq. (15), and p̃_{R_b}(r) is a normalized distribution.

E. Outlier eigenvalue
There exists a second type of normalizable solutions to Eqs. (48) and (49), which are associated with deterministic outlier eigenvalues λ_isol. If c(ρ+1) > 1 and ⟨J²⟩ < c(ρ+1)⟨J⟩², then there exists a deterministic eigenvalue outlier located at

λ_isol = c(ρ+1)⟨J⟩ − d.    (57)

Reference [62] observes that Eq. (57) describes well the largest eigenvalue of unweighted adjacency matrices of random graphs with a prescribed degree distribution. In Appendix G, we show that Eq. (57) is in fact an exact expression for the deterministic outlier.
The entries of the eigenvector ⃗R_isol are real, and the first moment of R_isol satisfies the explicit expression in Eq. (58), in which ρ_out,2 is a second degree-correlation coefficient. For uncorrelated indegrees and outdegrees it holds that ρ = 0 and ρ_out,2 = 0, and we recover the results in Ref. [44]. Analogous to Eq. (56), the distribution p_{R_isol} takes the form

p_{R_isol}(r) = (1 − s_in) δ(r) + s_in p̃_{R_isol}(r),    (60)

where s_in is the size of the giant incomponent (15) and p̃_{R_isol}(r) is a normalized distribution (see Appendix H).

FIG. 2.
Distribution of the leading eigenvalue. Sketch of the distribution p_{λ1} of the leading eigenvalue λ_1 of random matrices A, as defined in Sec. II, in the regime c(ρ+1) > 1.
The distribution consists of a delta distribution at the typical value λ* given by Eq. (61) and a continuous distribution p_cycle with a small total weight ν ≈ 0.

F. Leading eigenvalue
We discuss the implications of the results derived in Secs. III D and III E for the leading eigenvalue λ_1 of random graphs with a prescribed degree distribution p_{K_in,K_out}.

Distribution of λ1
The theory in Secs. III E and III D provides exact expressions for the boundary ∂σ_ac of the continuous part of the spectrum, which is given by the eigenvalues λ_b in Eq. (53), and for the deterministic eigenvalue outlier λ_isol, which is given by Eq. (57), in random directed graphs that are infinitely large. The question remains how the leading eigenvalue λ_1 is related to λ_b and λ_isol.
If we neglect the contributions from cycles of finite length ℓ, then the leading eigenvalue of an infinitely large random directed graph is given by

λ_1 = λ* = max{λ_isol, max Re[λ_b]},    (61)

where λ_isol and λ_b are given by Eqs. (57) and (53), respectively. Hence, if a random directed graph contains no cycles of small length ℓ in the limit n → ∞, then Eq. (61) is exact. However, as we have discussed in Sec. II C 4, random directed graphs with a prescribed degree distribution p_{K_in,K_out} typically contain a finite number of cycles of a given length ℓ, even in the limit n → ∞, and therefore we need to discuss how these cycles affect λ_1.
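The typical value λ* can be evaluated numerically once the parameters of the ensemble are fixed. The following sketch assumes the closed forms λ_isol = c(1+ρ)⟨J⟩ − d for the outlier and max Re[λ_b] = √(c(1+ρ)⟨J²⟩) − d for the boundary of the continuous part; these forms are assumptions on my part, chosen to be consistent with the dense limit quoted later in the text, and the exact displayed equations are Eqs. (57) and (53) of the paper.

```python
import numpy as np

def typical_leading_eigenvalue(c, rho, mean_J, mean_J2, d):
    """Typical leading eigenvalue lambda* in the spirit of Eq. (61),
    assuming lambda_isol = c(1+rho)<J> - d (outlier) and
    max Re[lambda_b] = sqrt(c(1+rho)<J^2>) - d (bulk boundary)."""
    if c * (1.0 + rho) <= 1.0:
        return -d                      # no giant SCC: lambda* = -d
    lam_isol = c * (1.0 + rho) * mean_J - d
    lam_b = np.sqrt(c * (1.0 + rho) * mean_J2) - d
    return max(lam_isol, lam_b)        # Eq. (61): larger of the two
```

When no outlier exists, c(1+ρ)⟨J⟩² < ⟨J²⟩, and the max automatically selects the bulk boundary, so a single expression covers both the gapped and gapless regimes.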
Cycles that are oriented rings may contribute stochastic outlier eigenvalues to the spectrum, see Sec. II C 4. As a consequence, λ_1 is not a self-averaging quantity but is instead a random variable with a distribution of nonzero variance. The distribution p_{λ1} takes the form

p_{λ1}(x) = (1 − ν) δ(x − λ*) + ν p_cycle(x),    (62)

where ν is the probability that the leading eigenvalue is a stochastic outlier contributed by an oriented ring, and p_cycle(x) is the distribution of those stochastic outliers that are leading eigenvalues. Note that the distribution p_cycle(x) is supported on the half line [λ*, ∞); see Fig. 2 for a sketch of p_{λ1}. Since for c(ρ+1) < 1 it holds that λ* = −d, oriented rings play an important role in p_{λ1}(x) when A does not have a giant SCC. On the other hand, if A has a giant SCC, i.e., c(ρ+1) > 1, then it is unlikely that the leading eigenvalue is a stochastic outlier. We show this explicitly in the next subsection for unweighted graphs, and subsequently we discuss the case of weighted graphs.

Unweighted graphs
We consider adjacency matrices A of unweighted graphs, such that J_ij = 1 for all values of i and j. In this case, we obtain an exact expression for p_{λ1}(x). Indeed, since the eigenvalues of oriented rings with J_ij = 1 are located on a circle of radius 1 centred around −d, see Eq. (47), the weight ν can be expressed in terms of p_+, the probability that the graph contains at least one oriented ring, given by Eq. (24). Using Eqs. (64)-(66) in Eq. (63), we obtain the explicit expression (67) for p_{λ1}. Hence, the leading eigenvalue of an unweighted random directed graph is deterministic and given by the value λ* if the graph contains a giant SCC. On the other hand, if there is no giant SCC, then with probability p_+ an oriented ring determines the leading eigenvalue.
From Eq. (67), we obtain the average leading eigenvalue, given by Eq. (68), and its variance, given by Eq. (69), where p_+ is given by Eq. (24). Note that var[λ_1] = 0 if A has a giant SCC, so the leading eigenvalue is self-averaging in this regime, while var[λ_1] > 0 if A does not have a giant SCC, in which case the leading eigenvalue is not self-averaging.
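The spectral mechanism behind these statistics can be checked directly on small matrices. The sketch below verifies that an oriented ring has its leading eigenvalue at 1 (on the unit circle centred at −d, here with d = 0) while an oriented tree has a nilpotent adjacency matrix with all eigenvalues exactly zero; the Bernoulli-type mean and variance at the end follow from this two-outcome picture and use a hypothetical ring probability p_plus, not a value from the paper.

```python
import numpy as np

# Oriented ring of length L: eigenvalues are the L-th roots of unity,
# so the leading eigenvalue (largest real part) is 1.
L = 5
ring = np.zeros((L, L))
for i in range(L):
    ring[i, (i + 1) % L] = 1.0           # directed edge i -> i+1 (mod L)
leading_ring = max(np.linalg.eigvals(ring), key=lambda z: z.real)

# Oriented tree (here: a directed path): the adjacency matrix is strictly
# upper triangular, hence nilpotent, and all eigenvalues vanish.
path = np.diag(np.ones(L - 1), k=1)
nilpotent = np.all(np.linalg.matrix_power(path, L) == 0)

# With d = 0 and no giant SCC, the leading eigenvalue is 1 with
# probability p_plus (a ring is present) and 0 otherwise, so
# <lambda_1> = p_plus and var[lambda_1] = p_plus (1 - p_plus).
p_plus = 0.2                             # hypothetical ring probability
mean_lam1 = p_plus
var_lam1 = p_plus * (1.0 - p_plus)
```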
In the next subsection, we discuss how these results extend to the case of weighted graphs, for which the J_ij are drawn from a nontrivial distribution p_J.

Weighted graphs
In the general case of weighted graphs, it is difficult to obtain exact expressions for ν and p_cycle(x). However, we can discuss the qualitative features of p_{λ1}(x) in the two regimes c(ρ+1) < 1 and c(ρ+1) > 1. If c(ρ+1) > 1, then ν ≈ 0, since it is unlikely that an oriented ring contributes an eigenvalue to the spectrum that is larger than λ*; this would require that γ, given by Eq. (47), is larger than λ*. Therefore, if the graph has a giant SCC, then the variance of λ_1 is small and the typical value of λ_1 is given by λ* in Eq. (61). As a consequence, if the graph has a giant SCC, then λ_1 is well approximated by λ*, since λ* is the typical value of λ_1.
On the other hand, when c(ρ+1) < 1, then λ* = −d, and the leading eigenvalue is with nonzero probability a stochastic outlier coming from an oriented ring. Hence, in the absence of a giant SCC, the variance of p_{λ1}(x) is large.

Right eigenvector associated with λ1
We derive exact expressions for the first moment ⟨R_1⟩ of the eigenvectors associated with the leading eigenvalue λ_1.
We first consider the case c(ρ+1) > 1. Assuming that the leading eigenvalue takes its typical value λ*, given by either the outlier λ_isol or the maximum value of Re[λ_b], see Eq. (71), we obtain the expression (72) for ⟨R_1⟩, where ⟨R_isol⟩²/⟨|R_isol|²⟩ is given by Eq. (58).
On the other hand, if c(ρ+1) < 1, then the right eigenvector of λ_1 is localized on a finite number of nodes and ⟨R_1⟩ = 0, see Eq. (73). Interestingly, we observe in Eq. (73) that ⟨R_1⟩ behaves as an order parameter of a phase transition between a ferromagnetic phase (⟨R_1⟩ > 0) and a spin-glass phase (⟨R_1⟩ = 0). A similar type of behaviour has been found in sparse symmetric random matrices [49,63-65]. The analogy between ⟨R_1⟩ and the order parameter of a ferromagnetic phase can be made explicit. Indeed, the leading right eigenvector ⃗R_1 is the stationary state of a spherical model defined on the graph represented by the adjacency matrix A, see Eqs. (45)-(52) in Ref. [45]. The spherical model at zero temperature exhibits either a ferromagnetic phase or a spin-glass phase, see Ref. [66], and ⟨R_1⟩ serves as the order parameter for this phase transition. Notice that the ⟨R_1⟩ = 0 regime does not correspond to a paramagnetic phase, since the spherical model is frozen into the configuration represented by the leading right eigenvector [66].

Limiting case of dense graphs
We discuss the limit of dense graphs by setting c = n and ρ = 0. Eq. (71) then reduces to

λ_1 = max{n⟨J⟩, √(n⟨J²⟩)} − d,

which is the well-known expression for the leading eigenvalue λ_1 of a random matrix with independent and identically distributed matrix elements drawn from a distribution p_J, see Refs. [67-72], as well as Refs. [13,20]. However, note that the formula (71) holds for graphs with c ∈ O_n(1), and therefore the correspondence holds only formally. Analogously, we obtain in this limit the corresponding dense-graph expression for ⟨R_1⟩.

G. Subleading eigenvalue and spectral gap
The spectral gap is the difference λ_1 − Re[λ_2] between the leading eigenvalue and the real part of the subleading eigenvalue. From the results in Secs. III E, III D and III F, we readily obtain an expression, Eq. (77), for the typical value of the spectral gap when c(ρ+1) > 1. The expected value of the entries of the right eigenvector associated with the subleading eigenvalue satisfies Eq. (78).

H. Perron-Frobenius theorem

Here we discuss how the theoretical results are related to the celebrated Perron-Frobenius theorem [61], which states that the leading eigenvalue λ_1 of a nonnegative matrix, and the components of its right (left) eigenvector, are nonnegative numbers. In other words, the Perron-Frobenius theorem implies that R_{1,j} ≥ 0 for all j = 1, 2, ..., n.

IV. STABILITY OF COMPLEX SYSTEMS ON RANDOM AND DIRECTED GRAPHS
We apply the results from the previous section to a linear stability analysis of dynamical systems defined on random directed graphs.
Let ⃗x†(t) = (x_1(t), ..., x_n(t)) be the state vector of a large dynamical system of interest, and let

∂_t ⃗x(t) = ⃗f(⃗x(t))    (79)

be a set of nonlinearly coupled differential equations that describe the dynamics of the system of interest. We consider a fixed point, or stationary state, ⃗x* and study the dynamics described by Eq. (79) in the vicinity of ⃗x*. A stationary state is a vector that satisfies ⃗f(⃗x*) = 0. Note that a nonlinear system may have several stationary states [73], but here we are only interested in the dynamics of ⃗x(t) in the vicinity of one given stationary state. According to the Hartman-Grobman theorem [18,19], the dynamics described by Eq. (79) is, in the vicinity of the fixed point ⃗x*, well approximated by the set of linearly coupled equations given by Eq. (1), with A the Jacobian of ⃗f evaluated at ⃗x* and ⃗y(t) = ⃗x(t) − ⃗x* the deviation vector.
The stability of the stationary state ⃗x* is determined by the sign of the real part of the leading eigenvalue λ_1(A). Indeed, if the matrix A is diagonalizable, then the dynamics of ⃗y†(t) is governed by the eigenvalues λ_j(A) and their associated right eigenvectors ⃗R_j(A) and left eigenvectors ⃗L_j(A) through the spectral decomposition in Eq. (82) [60]. If all eigenvalues have negative real parts, then lim_{t→∞} ⃗y†(t) = 0, which implies that the stationary state is stable. On the other hand, if there exists at least one eigenvalue with a positive real part, then the stationary state is unstable. With a bit more effort, one can show that the stability criterion based on the sign of the real part of the leading eigenvalue also holds for systems described by nondiagonalizable matrices, see Appendix A. From Eq. (82), we also observe that the right and left eigenvectors associated with the leading eigenvalue contain valuable information about the dynamics of a system in the vicinity of a fixed point. In particular, the mode that destabilizes the system takes the form of the left eigenvector ⃗L_1. For instance, if the eigenvector ⃗L_1 has a positive mean ⟨L_1⟩ > 0, then the instability is reminiscent of a ferromagnetic phase, whereas if ⟨L_1⟩ = 0, then the instability is reminiscent of a spin-glass phase [54,55,74].
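The linearization-and-eigenvalue procedure above can be illustrated on a small hypothetical system. The example below is not the paper's ensemble: it uses a toy nonlinear system dx_i/dt = −d x_i + Σ_j A_ij tanh(x_j) with fixed point x* = 0, whose Jacobian there is −dI + A, and checks that a perturbation of the linearized dynamics decays when Re[λ_1] < 0.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 50, 2.0

# Toy system dx_i/dt = -d x_i + sum_j A_ij tanh(x_j); x* = 0 is a fixed
# point and the Jacobian there is J = -d I + A (since tanh'(0) = 1).
A = rng.normal(0.0, 0.1, size=(n, n))
J = -d * np.eye(n) + A

lam1 = max(np.linalg.eigvals(J), key=lambda z: z.real)
stable = lam1.real < 0.0                 # stability criterion of the text

# Integrate the linearized dynamics y' = J y with forward Euler and
# verify that the perturbation decays when Re[lambda_1] < 0.
y = rng.normal(size=n)
norm0 = np.linalg.norm(y)
dt = 0.01
for _ in range(2000):                    # integrate up to t = 20
    y = y + dt * (J @ y)
decayed = np.linalg.norm(y) < 1e-3 * norm0
```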
We study here the stability of large systems coupled through random matrices A defined on random directed graphs with a prescribed degree distribution p_{K_in,K_out}, as defined in Sec. II. To this aim, we use the theory from Sec. III F for the leading eigenvalue λ_1 and for the associated values of ⟨L_1⟩ and ⟨R_1⟩ (which in this ensemble are equivalent).
A first interesting observation is that for random directed graphs λ_1 is finite, even in the limit n → ∞. This stands in contrast with the leading eigenvalue of nondirected random graphs [47,48], which diverges with increasing n. As a consequence, random directed graphs with a prescribed degree distribution are stable in the limit of large n, which provides an interesting take on the diversity-stability debate [9]. The remarkable stability of large dynamical systems defined on directed graphs follows from their locally tree-like and oriented nature. Since the local neighborhood of a randomly selected node is an oriented tree, there exist no feedback loops that can amplify the amplitude of local perturbations. On the other hand, in nondirected random graphs local perturbations are amplified through feedback loops provided by the nondirected links. As a consequence, dynamical systems on locally tree-like networks with unidirectional interactions are much more stable than dynamical systems defined on networks with bidirectional interactions.
It remains of interest to study how network architecture affects the stability of large dynamical systems defined on random directed graphs. Since λ_1 is a random variable in the limit of infinitely large n, we focus first on its typical value λ*, given by Eq. (71). Interestingly, for the interaction networks defined in Sec. III, the eigenvalue λ* is solely governed by three parameters that characterize the network architecture: the effective mean degree c(1+ρ), which characterizes the effective number of degrees of freedom each node in the network interacts with; the coefficient of variation v_J, which characterizes the fluctuations in the coupling strengths between the constituents of the system; and the effective interaction strength α, which quantifies the relative strength of the interactions with regard to the rate d of decay. Hence, the system stability, characterized by the typical value of the leading eigenvalue λ_1, depends only on these three parameters, and thus enjoys a high degree of universality.
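These three parameters can be estimated from a sampled degree sequence and coupling weights. In the sketch below, the form ρ = ⟨K_in K_out⟩/⟨K⟩² − 1 for the degree-correlation coefficient, v_J as the standard coefficient of variation of p_J, and α = ⟨J⟩/d are all assumptions on my part, chosen to be consistent with the parameter ranges and regime boundaries quoted elsewhere in the text.

```python
import numpy as np

def universality_parameters(k_in, k_out, J, d):
    """Estimate the three parameters governing lambda*.
    Assumed forms: rho = <K_in K_out>/<K>^2 - 1 (consistent with the
    ranges rho in [0, 1/c] and [0, (c+1)/c] quoted for the Poissonian
    and geometric ensembles), vJ = std(J)/|<J>|, alpha = <J>/d."""
    c = float(np.mean(k_in))                        # mean degree
    rho = float(np.mean(k_in * k_out)) / c**2 - 1.0 # degree correlation
    vJ = float(np.std(J)) / abs(float(np.mean(J)))  # coupling fluctuations
    alpha = float(np.mean(J)) / d                   # interaction vs decay
    return c * (1.0 + rho), vJ, alpha
```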
In order to better understand how the three parameters c(1+ρ), v_J, and α govern the stability of dynamical systems on random directed graphs, we present in Fig. 3 the phase diagram of the system in the (v_J, c(1+ρ)) plane, for fixed values of α ∈ [0,1] and c(1+ρ) > 1. We choose these parameter regimes because for α > 1 there exists no stable phase, and for c(1+ρ) < 1 the graph does not have a giant strongly connected component; in the latter regime, the system falls apart in the sense that it is a union of a large number of small isolated subsystems, and thus we are no longer considering the linear stability of a large system of interacting degrees of freedom.
The phase diagram shows the critical connectivity c* (black lines) that separates the stable phase (Re[λ*] < 0), for systems at low connectivity c(1+ρ), from the unstable phase (Re[λ*] > 0), for systems at high connectivity c(1+ρ). If α > 0, then the critical line is determined by Eq. (86). Since the critical connectivity is finite for all values of α and v_J, it follows that for large enough c(1+ρ) any dynamical system is unstable, which is consistent with the result in May's paper [20] that any large enough fully connected system is unstable. However, as we see from Eq. (86) and Fig. 3, the phase transition to the stable phase at low connectivities has three qualitatively different regimes, which we discuss in the following paragraphs.
The critical value v* separates the regime v_J > v*, which has no stable phase, from the regime v_J < v*, which has a stable phase at low enough connectivity c(ρ+1) > 1. Hence, for small enough fluctuations in the interaction strengths (v_J < v*), it is possible to stabilize the system by rewiring edges in the graph such that the correlation ρ between indegrees and outdegrees decreases. Stabilizing the system by rewiring edges is, however, not possible when v_J > v*.
Moreover, the regime v_J < v* consists of two distinct regimes: a gapped regime, which appears when the fluctuations in the interaction strengths are small (v_J² < 1/α − 1), and a gapless regime, which appears when the fluctuations in the interaction strengths are large (v_J² > 1/α − 1). In Fig. 3, these two regimes are separated by the red dotted line. In the gapped regime the leading eigenvalue is an outlier and the critical connectivity c* is independent of v_J². This implies that fluctuations do not affect the system stability when the leading eigenvalue is an outlier. On the other hand, in the gapless regime the leading eigenvalue is part of the boundary of the continuous spectrum and the critical connectivity c* decreases as 1/v_J². In this regime, fluctuations in the interaction strengths render the system less stable. The differences between the gapped and gapless regimes can be understood in terms of the nature of the destabilizing mode. In the gapped regime, the mode that destabilizes the system is ferromagnetic, i.e., ⟨L_1⟩ > 0, whereas in the gapless regime, the mode that destabilizes the system is spin-glass-like, i.e., ⟨L_1⟩ = 0. Hence, increasing the fluctuations v_J for fixed values of the mean strength α does not affect the ferromagnetic mode, which gives an intuitive understanding of why the location of the outlier is independent of v_J.
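The two branches of the critical line can be sketched as follows. The explicit expressions used here, c* = 1/α in the gapped regime and c* = 1/(α²(1+v_J²)) in the gapless regime, are assumptions reconstructed from the regime boundary v_J² = 1/α − 1 quoted above, at which the two branches indeed meet continuously; the paper's exact critical line is Eq. (86).

```python
def critical_connectivity(vJ, alpha):
    """Sketch of the critical connectivity c* of Fig. 3, with 0 < alpha < 1.
    Assumed branches: the outlier crosses zero at c(1+rho) = 1/alpha
    (gapped regime, independent of vJ), the bulk boundary crosses zero
    at c(1+rho) = 1/(alpha^2 (1 + vJ^2)) (gapless regime, ~ 1/vJ^2)."""
    if vJ ** 2 < 1.0 / alpha - 1.0:
        return 1.0 / alpha                       # gapped: vJ-independent
    return 1.0 / (alpha ** 2 * (1.0 + vJ ** 2))  # gapless: decays as 1/vJ^2
```

At the regime boundary v_J² = 1/α − 1, both expressions evaluate to 1/α, so the assumed critical line is continuous, in agreement with the continuous transition described in the text.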
We can quantify the overall stability of systems coupled through random matrices (3) in terms of a single parameter a_stab, defined as the area in Fig. 3 where the system is stable and c(1+ρ) > 1. The quantity a_stab is given by Eq. (87). The area a_stab is a monotonically decreasing function of α, which approaches a_stab → 0 as α → 1 and a_stab → ∞ as α → 0. Thus, increasing the average interaction strength between the elements of a networked system, in the sense that ⟨J⟩ approaches d, makes the system less stable.
In Fig. 4 we present the phase diagram for α < 0 or, equivalently, ⟨J⟩ < 0. Since in this case the outlier is negative, the critical connectivity is given by Eq. (88). Note that for small values of v_J² the system is more stable in the case of negative ⟨J⟩, since then there exists no outlier that renders the system less stable.
Finally, we discuss how the phase diagrams in Figs. 3 and 4 are modified by the presence of small cycles in the network. As illustrated in Fig. 2 and discussed in Sec. III F, there is a finite, albeit small, probability ν that the leading eigenvalue λ_1 is larger than λ*. This happens when a random directed graph contains a cycle that generates a strong enough feedback loop. As a consequence, one should interpret the phase diagrams of Figs. 3 and 4 as describing the typical behaviour of dynamical systems defined on random directed graphs in the limit n → ∞. There is, however, a small nonzero probability that a random directed graph contains a cycle that destabilizes the system through the feedback loop it generates.

V. NUMERICAL EXAMPLES ON MATRICES OF FINITE SIZE
In this section, we compare theoretical results for infinitely large matrices with direct diagonalization results on matrices of finite size n ∼ O(10³). Such numerical experiments reveal the magnitude of finite-size effects, which are important for applications because real-world systems are finite. Moreover, this comparison allows us to better understand the potential limitations of the theory.
Since a nonzero d results in a constant shift of all eigenvalues by −d, i.e., λ_j → λ_j − d, we set d = 0 in all examples. The numerical experiments are designed as follows. First, we use the algorithm presented in Appendix B to sample a matrix from a random-matrix ensemble of the type given by Eq. (3). Second, we use the subroutine gsl_eigen_nonsymmv from the GNU Scientific Library to compute the n eigenvalues of the sampled matrix and the entries of their right eigenvectors. Third, in order to test the theory in Sec. III, we compute for each matrix sample A the leading eigenvalue λ_1(A), the real part of the subleading eigenvalue λ_2(A), and the observable R_1(A) defined in Eq. (89), which quantifies the mean value of the components of the right eigenvector associated with λ_1(A). Before we compute R_1(A), we rotate all the elements R_{1,j}(A) by a constant phase e^{iθ}, such that the empirical mean Σ_{j=1}^n R_{1,j}(A) is a positive real number, in accordance with our conventions in Eq. (29). Lastly, we compute the mean values λ̄_1, λ̄_2, and R̄_1 of the sampled populations, together with the standard deviations of each quantity. Empirical mean values for, say, λ_1 are compared with either the theoretical ensemble averages ⟨λ_1⟩ or with the typical value of λ_1 provided by the deterministic outlier λ_isol or by the boundary |λ_b| of the continuous part of the spectrum. Note that we use the notation ⟨λ_1⟩ for theoretical ensemble averages, while λ̄_1 is used for empirical mean values over the sampled populations, which provide estimates of ⟨λ_1⟩.
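The third step, extracting the leading eigenvalue and the phase-rotated eigenvector observable, can be sketched with NumPy in place of the GSL routine. The rotation by a global phase so that the empirical mean becomes a positive real number follows the convention described above; the helper name and the use of numpy.linalg.eig are my own choices, not the paper's implementation.

```python
import numpy as np

def leading_eigen_data(A):
    """Return the leading eigenvalue lambda_1(A) and the observable
    R_1(A): the mean entry of the associated unit-norm right eigenvector
    after multiplying all entries by a global phase e^{i theta} chosen so
    that the empirical mean is a positive real number."""
    lam, V = np.linalg.eig(A)
    i = int(np.argmax(lam.real))         # eigenvalue with largest real part
    r = V[:, i]
    m = np.mean(r)
    if abs(m) > 0.0:
        r = r * (np.conj(m) / abs(m))    # rotate: mean(r) becomes |m| >= 0
    return lam[i], float(np.mean(r).real)
```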
The present section is organized into four subsections. In Sec. V A, we consider adjacency matrices of directed random graphs with negative degree correlations (ρ < 0) and unweighted links (J_ij = 1). For this ensemble, we have derived in Sec. III F 2 exact results for the statistics of the leading eigenvalue λ_1 in the limit n → ∞. Hence, we expect a good correspondence between theory and experiment in all parameter regimes. Deviations between theory and experiment will be due to finite-size effects and finite sampling statistics only.
In Sec. V B, we consider the adjacency matrices of directed random graphs with positive degree correlations (ρ > 0) and weighted links. For this ensemble, we have derived in Sec. III F 3 exact results for the typical value of λ_1 in the regime c(ρ+1) > 1. Hence, we expect in this regime a good correspondence between theory and experiment, and deviations between theory and experiment will be due to finite-size effects, finite sampling statistics, and deviations between the mean and typical value of λ_1.
In Sec. V C, we apply the theoretical results of Sec. III to adjacency matrices of random directed graphs with power-law degree distributions, which have diverging moments. Since the graphs are unweighted, the theory of Sec. III F 2 applies. However, since for power-law random graphs the tails of the degree distributions decay very slowly, we expect to observe deviations between theory and experiment, and in Sec. V C we test the limitations of the theory for power-law random graphs.
Lastly, in Sec. V D, we test the predictions given by Eqs. (56) and (60) for the number of zero-valued entries in the right eigenvector ⃗R_1.
A. Adjacency matrices of unweighted and directed random graphs with negative degree correlations

We consider the adjacency matrices of Poissonian random graphs (also called Erdős-Rényi random graphs) and geometric random graphs with a negative degree-correlation coefficient ρ ∈ [−1, 0] and with constant weights J_ij = 1. For Poissonian random graphs, the prescribed degree distribution is given by Eq. (90), where k, ℓ ∈ {0, 1, ..., n−1} and where p_p(k; c) is the truncated Poisson distribution with normalization constant N_p = Σ_{k=0}^{n−1} c^k/k!. For n → ∞, p_p(k; c) is the Poisson distribution with mean degree c and N_p = e^c. For geometric random graphs, the prescribed degree distribution is given in terms of the truncated geometric distribution p_g(k; c), where k, ℓ ∈ {0, 1, ..., n−1}, with normalization constant N_g. For n → ∞, p_g(k; c) is the geometric distribution with mean degree c and N_g = c + 1.
Throughout this subsection, we consider unweighted graphs for which J_jk = 1 for all j ≠ k, and thus ⟨J⟩ = ⟨J²⟩ = 1. In Fig. 5, we show how the degree-correlation coefficient ρ affects the spectral properties of adjacency matrices of directed random graphs with mean degree c = 2. We compare the theoretical results given by Eqs. (57), (53), (58), (68), (73) and (77) with direct diagonalization results.
In Panels (a) and (b) of Fig. 5, we provide a global picture of the spectra of adjacency matrices of Poissonian random graphs by comparing the spectra of matrices with ρ = 0 and ρ = −0.3. We observe how negative degree correlations contract the spectrum: for ρ = −0.3 the leading eigenvalue is smaller than for ρ = 0, and the spectrum concentrates around the origin as ρ becomes more negative.
In Panel (c) of Fig. 5, we present a more detailed analysis of the behaviour of the leading eigenvalue λ_1 and the subleading eigenvalue λ_2 as a function of ρ. As discussed in Sec. III, for c(ρ+1) > 1, the leading and subleading eigenvalues are self-averaging and given by λ_1 = λ_isol and Re[λ_2] = |λ_b|, respectively. These findings are well corroborated by the numerical results in Fig. 5(c). We observe that λ_1 = λ_2 at the critical percolation threshold ρ = −1 + 1/c = −0.5, as predicted by the theory. For c(ρ+1) < 1, there exists no giant strongly connected component, see Sec. II C 2, and therefore the leading eigenvalue is either 0 or 1, depending on whether the graph contains an oriented ring or not. In this regime, the mean value ⟨λ_1⟩ is given by Eq. (68) and its variance by Eq. (69), both of which are well corroborated by the numerical results in Fig. 5(c).
In Fig. 5(d), we present a systematic study of the first moment ⟨R_1⟩ of the eigenvector ⃗R_1 associated with the leading eigenvalue, which is an outlier for ρ ≥ −0.5. The theoretical result, Eq. (58), is well corroborated by direct diagonalization results for the empirical observable R_1, defined in Eq. (89). We observe a phase transition from a phase with ⟨R_1⟩ = 0, for ρ < −0.5, to a phase with ⟨R_1⟩ > 0, for ρ > −0.5. Note that ⟨R_1⟩ = 0 for ρ < −0.5 since in this regime there exists no giant SCC, and therefore the right eigenvector is localized on an isolated component with a finite number of nodes. Taken together, the results in Fig. 5 illustrate how the leading eigenvalue of the adjacency matrix of a random directed graph increases as a function of ρ. These results imply that one can significantly reduce λ_1 through a rewiring procedure that decreases correlations between indegrees and outdegrees. These results are in agreement with the phase diagram in Fig. 3, which shows that dynamical systems coupled through graphs with negative ρ are more stable than those coupled through graphs with positive ρ.

B. Adjacency matrices of weighted and directed random graphs with positive degree correlations
We analyze the spectral properties of the adjacency matrices of Poissonian and geometric random graphs with positive ρ and random weights.
The Poissonian ensemble with positive ρ has a prescribed degree distribution, expressed in terms of the truncated Poisson distribution p_p defined by Eq. (91), with ρ ∈ [0, 1/c]. The geometric ensemble with positive ρ has a prescribed degree distribution, expressed in terms of the truncated geometric distribution p_g defined by Eq. (93), with ρ ∈ [0, (c+1)/c].
The off-diagonal matrix entries J_jk are i.i.d. random variables drawn either from a Gaussian distribution or from a bimodal distribution with the parametrization x_0 = √(µ_0² + v²) and 2b = 1 + µ_0/x_0. In each case, the parameters µ_0 and v denote, respectively, the mean and the standard deviation of the distribution p_J(x).
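A sampler for these couplings can be sketched as follows. The two-point form p_J(x) = b δ(x − x_0) + (1 − b) δ(x + x_0) is an assumption on my part for the bimodal distribution; with the quoted parametrization it reproduces the stated mean µ_0 and standard deviation v exactly, since (2b − 1)x_0 = µ_0 and ⟨J²⟩ = x_0² = µ_0² + v².

```python
import numpy as np

def sample_bimodal(mu0, v, size, rng):
    """Sample couplings from the assumed two-point distribution
    p_J(x) = b delta(x - x0) + (1 - b) delta(x + x0), with
    x0 = sqrt(mu0^2 + v^2) and 2b = 1 + mu0/x0, which has mean mu0
    and standard deviation v."""
    x0 = np.sqrt(mu0 ** 2 + v ** 2)
    b = 0.5 * (1.0 + mu0 / x0)
    signs = np.where(rng.random(size) < b, 1.0, -1.0)  # +x0 w.p. b
    return x0 * signs
```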
In Fig. 6, we analyze how positive values of ρ affect the spectral properties of adjacency matrices of randomly weighted directed graphs. We compare the spectral properties for different values of ρ and fixed parameters c = 2, µ_0 = 1, and v = 1.2. We compare theoretical results from Sec. III (lines) with direct diagonalization results for matrices of size n = 4000 (markers).
In Panels (a) and (b) of Fig. 6, we provide a global picture of the spectra of Poissonian random graphs by comparing the spectrum of a graph without degree correlations (ρ = 0, Panel (a)) with the spectrum of a graph with positive degree correlations (ρ = 0.5, Panel (b)). In the latter case, the correlations are perfect in the sense that K_in,j = K_out,j for each node j. The direct diagonalization results corroborate well the formula, Eq. (53), for the boundary of the continuous part of the spectrum. We also observe that the leading eigenvalue λ_1(A) increases as a function of ρ, that λ_1(A) is located at the boundary ∂σ_ac for ρ = 0 [Panel (a)], and that λ_1(A) is an outlier for ρ = 0.5 [Panel (b)].
In Panels (c) and (d) of Fig. 6, we provide a more detailed view of the eigenvalues λ_1 and λ_2 as a function of ρ. We observe that both λ_1 and λ_2 are monotonically increasing functions of ρ, and that there is a continuous transition from a gapless phase for ρ < ⟨J²⟩/(c⟨J⟩²) − 1 ≈ 0.22 to a gapped phase for ρ > ⟨J²⟩/(c⟨J⟩²) − 1. We observe that the values of λ_1 and λ_2 are universal, in the sense that they depend on the distributions p_J and p_{K_in,K_out} only through the parameters c, ρ, ⟨J⟩ and ⟨J²⟩. Theoretical results are well corroborated by direct diagonalization results, although finite-size effects are more significant for the spectral gap.
Lastly, Panels (e) and (f) of Fig. 6 compare the theoretical result for ⟨R_1⟩ of Sec. III with the sampled average R̄_1 of the quantity R_1, as defined in Eq. (89). In the gapless phase, we have ⟨R_1⟩ = 0, while in the gapped phase we obtain ⟨R_1⟩ > 0, which is reminiscent of a continuous phase transition between a spin-glass phase and a ferromagnetic phase. We observe that in the gapped phase direct diagonalization results are in very good agreement with the theoretical expressions for infinitely large matrices, whereas in the gapless phase there are significant deviations between theory and direct diagonalization results. These deviations are due to finite-size effects, which are significant because of our convention to normalize the eigenvectors with Eq. (29). In spite of that, we observe that direct diagonalization results slowly converge to the theoretical values as the matrix size n increases.
Overall, we conclude that the theoretical results for the typical values of λ_1, λ_2, and ⟨R_1⟩, presented in Sec. III, describe well the numerical estimates of their ensemble averages. This is because in the regime c(ρ+1) > 1 it is unlikely that a stochastic outlier eigenvalue exists.
C. Adjacency matrices of random graphs with power-law degree distributions

In this subsection, we analyze the spectral properties of the adjacency matrices of random directed graphs with power-law degree distributions. Power-law random graphs are interesting from a practical point of view, since degree distributions of real-world systems often have tails that are fitted well by power-law distributions [10,75-77]. For example, the World Wide Web is a directed graph with a power-law degree distribution of the form p_{K_in,K_out}(k, ℓ) ∼ k^{−2.1} ℓ^{−2.7} [28]. Since power-law degree distributions have diverging moments, these ensembles exhibit strong finite-size effects and large sample-to-sample fluctuations, and it is thus interesting to test the possible limitations of the theory in Sec. III for power-law random graphs.
We consider two classes of power-law random graphs, namely, an ensemble without correlations between indegrees and outdegrees (ρ = 0), and an ensemble with perfect degree correlations, where K_in,j = K_out,j for all nodes j (ρ > 0). The ensemble without degree correlations has a prescribed degree distribution given by Eq. (99), with k, ℓ ∈ [n−1] and with normalization constant N_pow = Σ_{k=1}^{n−1} k^{−a}, while the ensemble with perfect degree correlations has a prescribed degree distribution given by Eq. (100), with k, ℓ ∈ [(n−1)/2] and with M_pow = Σ_k k^{−a} the normalization constant. The parameter a controls how fast the degree distribution decays for large degrees.
We discuss the values of the parameters c and ρ in the limit n → ∞. If a > 2, then the mean degree is given by c = ζ(a−1)/ζ(a), with ζ(x) the Riemann zeta function. Also, if a > 2, the ensemble of Eq. (99) has a finite degree-correlation coefficient, while if a > 3, the ensemble of Eq. (100) has a finite degree-correlation coefficient. Note that c(ρ+1) > 1, and therefore the power-law graphs we consider have a giant SCC. We consider unweighted power-law random graphs with J_jk = 1 for all j, k ∈ [n].
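These limiting parameter values can be evaluated numerically. The sketch below computes ζ by direct truncated summation; the expression for ρ of the perfectly correlated ensemble, ρ = ζ(a−2)ζ(a)/ζ(a−1)² − 1, is an assumption derived from ρ = ⟨K²⟩/⟨K⟩² − 1 with K_in = K_out, and it is finite for a > 3 and divergent as a → 3⁺, consistent with the behaviour described in the text.

```python
import numpy as np

def zeta(s, kmax=200000):
    """Riemann zeta zeta(s) via direct truncated summation,
    adequate here for s >= 2 (tail error ~ kmax^(1-s)/(s-1))."""
    k = np.arange(1, kmax + 1, dtype=float)
    return float(np.sum(k ** (-s)))

a = 3.0
c = zeta(a - 1.0) / zeta(a)     # mean degree <K> for p(k) ~ k^(-a), a > 2

# Assumed degree-correlation coefficient of the perfectly correlated
# ensemble (K_in = K_out): rho = zeta(a-2) zeta(a) / zeta(a-1)^2 - 1.
rho4 = zeta(2.0) * zeta(4.0) / zeta(3.0) ** 2 - 1.0   # value at a = 4
```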
We now resort to direct diagonalization in order to gain a better understanding of the statistics of the leading eigenvalue of power-law random graphs. In Panel (a) of Fig. 7, we plot the sample mean λ̄_1 of the leading eigenvalue λ_1(A) and the sample mean of the real part of the subleading eigenvalue λ_2(A) as a function of a in the ensemble defined by Eq. (99), for which ρ = 0. We observe that for a ≳ 3 the theoretical expressions, Eqs. (57) and (53), for λ_isol and |λ_b|, respectively, are in very good agreement with direct diagonalization results for the leading and subleading eigenvalues. In the regime a ≲ 3, we observe significant deviations between theory and numerical experiments. In Panel (b) of Fig. 7, we present analogous results for the ensemble defined by Eq. (100), for which ρ > 0. In this case, the theory works well when a ≳ 4, whereas for a ≲ 4 we observe significant deviations between theory and numerical experiments. This is because for a → 3⁺ the degree-correlation coefficient ρ diverges, and therefore the theoretical expressions for λ_isol and |λ_b| also diverge. Overall, these results show that Eqs. (57) and (53) work remarkably well for power-law random graphs.
Lastly, in Panels (c) and (d) of Fig. 7, we plot the empirical mean of R_1 as a function of a and compare results from the direct diagonalization of randomly sampled matrices with the theoretical expression for ⟨R_isol⟩ given by Eq. (58). We observe a reasonably good agreement between theoretical results and numerical experiments, considering that power-law random graphs exhibit significant finite-size effects and fluctuations. Interestingly, the normalized mean ⟨R_isol⟩/√⟨R²_isol⟩ vanishes at a = 3 and a = 4 for the ensembles with degree distributions (99) and (100), respectively. Since the Perron-Frobenius theorem applies to this ensemble, this is a transition from a delocalized phase (⟨R_isol⟩/√⟨R²_isol⟩ > 0) to a localized phase (⟨R_isol⟩/√⟨R²_isol⟩ = 0), as argued in Sec. III H. In other words, the leading eigenvector is localized when the exponent a that characterizes the decay of the power-law degree distribution is small enough.
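The direct-diagonalization workflow used in this section can be sketched in a few lines. The snippet below uses a directed Erdős-Rényi graph (Poissonian degrees, ρ = 0, J_jk = 1) rather than the power-law ensemble, since it admits the simple benchmarks λ_isol = c and |λ_b| = √c; the ensemble choice and all parameter values are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n, c = 1000, 5.0

# Directed Erdos-Renyi graph: each ordered pair (j, k), j != k,
# carries an edge independently with probability c/n.
A = (rng.random((n, n)) < c / n).astype(float)
np.fill_diagonal(A, 0.0)

# Direct diagonalization: sort eigenvalues by decreasing real part.
eigs = np.linalg.eigvals(A)
eigs = eigs[np.argsort(-eigs.real)]
lam1, lam2 = eigs[0], eigs[1]

# For this ensemble the theory predicts an outlier at lambda_isol = c
# and a bulk of radius sqrt(c), so the subleading eigenvalue sits near
# the bulk edge.
print(lam1.real)   # close to c = 5
print(abs(lam2))   # close to sqrt(c) ~ 2.24
```

The same loop over many realizations yields the sample means plotted in Fig. 7.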

D. Distribution pR for the entries of the right eigenvectors
So far, we have studied the mean value of the distribution p_R. In this section, we instead analyze properties of the full distribution p_R.
Equations (56) and (60) state that the distribution p_R contains a delta peak at the origin with weight 1 − s_in, where s_in is the relative size of the IN component. In other words, the number of nonzero entries in a right eigenvector is equal to the size of the IN component. Figure 8 tests this prediction for the adjacency matrix of a directed random graph with a Poissonian degree distribution given by Eq. (90) with c = 3.
In Panel (a) of Fig. 8, we compare theoretical predictions for p_R, obtained by solving the recursive distributional Eqs. (48) and (49) at λ = λ_isol through a population dynamics algorithm [44,54,78,79], with a histogram of the entries of the right eigenvector associated with the leading eigenvalue λ_1, obtained through direct diagonalization. We have set ρ = 0, and the couplings J_ij are drawn from a Gaussian distribution. In Fig. 8, we observe an excellent agreement between theory and numerical experiments, including a delta peak at the origin that is clearly discernible in both.
In order to quantify the weight of the delta peak at the origin, we plot in Panel (b) of Fig. 8 the fraction of entries R_i that are not equal to zero. We compare direct diagonalization results for right eigenvectors associated with the leading eigenvalue λ_1 with the theoretical expression for s_in obtained by solving Eqs. (15) and (17). We again find an excellent agreement between theory and numerical experiments, confirming that the number of nonzero elements in ⃗R equals the size s_in of the IN component.
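The population dynamics step can be sketched as follows. Since Eqs. (48) and (49) are not reproduced in this excerpt, the recursion below assumes the form R = (λ + d)^{−1} Σ_{k=1}^{K} J_k R_k with d = 0, with K drawn from the edge-biased indegree distribution (again Poisson(c) for a Poisson graph with ρ = 0), and with λ = λ_isol = c⟨J⟩; all of this is our assumed reading of the recursion, not the paper's verbatim equations, and the parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
c, mu, var = 3.0, 1.0, 0.2       # Poisson mean; Gaussian coupling mean and variance
lam = c * mu                     # assumed outlier location for rho = 0, d = 0
M = 20_000                       # population size

# Population of samples representing q_R; iterate the assumed recursion
# R <- (1/lam) * sum_{k=1}^{K} J_k R_k with K ~ Poisson(c).
pop = np.ones(M)
for _ in range(20):
    new = np.empty(M)
    K = rng.poisson(c, M)
    for i, k in enumerate(K):
        if k == 0:
            new[i] = 0.0         # childless branches contribute exact zeros
        else:
            J = rng.normal(mu, np.sqrt(var), k)
            new[i] = J @ pop[rng.integers(0, M, k)] / lam
    pop = new

mean_R = pop.mean()              # preserved at lam = lam_isol (stays near 1)
zero_frac = np.mean(pop == 0.0)  # weight of the delta peak at the origin
print(mean_R, zero_frac)
```

The fraction of exact zeros converges to the smallest solution of x = e^{−c(1−x)} (≈ 0.06 for c = 3), which mirrors the delta-peak weight 1 − s_in discussed above.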

VI. EXTENSIONS OF THE THEORY
Here we extend the theory from Sec. III C to the case of random matrices with diagonal disorder and graphs that contain nondirected links.

A. Random matrices with diagonal disorder
We consider random matrices of the form given by Eq. (104), where J and C are defined in exactly the same way as in Eq. (3), but where the uniform decay rate is replaced by a diagonal matrix D with entries [D]_jj = D_j that are i.i.d. random variables drawn from a probability distribution p_D(x) with x ∈ R⁺. Note that p_D has support on the positive real axis, since otherwise Re[λ_1] > 0 and the dynamical system described by A would not be stable. In the special case p_D(x) = δ(x − d) [Eq. (105)], we recover the model given by Eq. (3). The theory of Sec. III C applies to the model given by Eq. (104) after some minor modifications. As shown in Appendix I, for the present model the distribution p_R solves the recursion relation (106) and q_R solves the recursion relation (107). If D is deterministic, then Eq. (105) holds and we recover the recursion Eqs. (48) and (49).
In Appendix I, we derive the values of λ for which the recursion Eqs. (106) and (107) admit normalizable solutions. In this way, we obtain that the deterministic outliers of A solve Eq. (108), and that the eigenvalues λ_b ∈ C at the boundary of the continuous part of the spectrum solve Eq. (109). Using Eqs. (108) and (109), it is possible to derive phase diagrams for the stability of dynamical systems with disorder in the decay rates D_j, similar to those presented in Figs. 3 and 4. We leave this open for future studies.

B. Nondirected graphs with random couplings
We consider random matrices of the form given by Eq. (110), where C̃_n is the adjacency matrix of a nondirected random graph with a prescribed degree distribution p_deg(k), and where J̃_n is a random matrix with zero entries on the diagonal and with offdiagonal pairs (J̃_jk, J̃_kj) that are i.i.d. random variables with a distribution p_{J₁,J₂}(x, y). We assume that p_{J₁,J₂} satisfies the symmetry property given by Eq. (111). The random matrix model defined by Eq. (110) is locally tree-like, but it is in general not locally oriented. Nevertheless, locally oriented ensembles are recovered in the limiting case of Eq. (112), in which one of the two couplings in each pair (J̃_jk, J̃_kj) vanishes. In this case, the model given by Eq. (110) is the adjacency matrix of a directed random graph with the joint degree distribution given by Eq. (113), which is a special case of the model defined by Eq. (3).
On the other hand, if the pairs satisfy J̃_jk = J̃_kj [Eq. (114)], then Eq. (110) defines symmetric random matrices.
In Appendix I, we derive a set of recursion relations for p_R. We obtain that the distribution p_R is the marginal of the joint distribution p_{G,R}(g, r) that solves the recursion relation (116), while q_{G,R} solves Eq. (117). Note that in the special case of symmetric random matrices [i.e., when Eq. (114) holds], Eqs. (116) and (117) are equivalent to those derived in Refs. [49,63-65].
The outliers λ_isol and the boundary λ_b of the continuous part of the spectrum are found as the values of λ for which the relations (116) and (117) admit normalizable solutions. In the present case, we do not know how to derive compact analytical expressions for λ_isol and λ_b. However, Eqs. (116) and (117) can be solved numerically with a population dynamics algorithm, as described in Refs. [54,78,79], and consequently a stability phase diagram as in Figs. 3 and 4 can be derived. We leave such a study open for future work.
VII. DISCUSSION

In this paper, we have analysed the linear stability of large dynamical systems defined on random directed graphs with a prescribed degree distribution p_{K^in,K^out}, which serve as a model for, among others, the World Wide Web [28,29] and neural networks [30-32]. We have shown that dynamical systems defined on random directed graphs are more stable than systems defined on nondirected graphs. Indeed, we have shown that for random directed graphs the leading eigenvalue is with probability one finite in the limit of infinitely large n. This result brings an interesting perspective to the diversity-stability debate [9]. Dynamical systems defined on dense matrices or nondirected graphs are unstable when n is large enough: in the former case because λ_1 is of the order O(√n) [20], and in the latter because λ_1 is of the order O(√k_max) [47-49]. Hence, large and complex systems will in general be unstable [20,89]. However, if the system is defined on a random directed graph, then it can be infinitely large and stable, since λ_1 converges to a finite limit for large n. The stabilising nature of random directed graphs is a consequence of their locally tree-like and oriented structure, which implies that there exist no feedback loops that amplify local perturbations.
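The contrast between the O(√n) scaling for dense matrices and the bounded spectral radius of sparse directed graphs is easy to check numerically. The snippet below is a minimal sketch with illustrative parameters: zero-mean couplings J = ±1 are used so that no outlier is present and the radius tracks the bulk edge ≈ √c.

```python
import numpy as np

rng = np.random.default_rng(3)
c = 4.0
radii = {}

for n in (200, 800):
    # Dense Gaussian matrix: spectral radius grows like sqrt(n) (circular law).
    dense = rng.normal(0.0, 1.0, (n, n))
    # Sparse directed graph with zero-mean couplings J = +-1: the radius
    # stays close to sqrt(c), independently of n.
    C = rng.random((n, n)) < c / n
    np.fill_diagonal(C, False)
    sparse = C * rng.choice([-1.0, 1.0], (n, n))
    radii[n] = (np.abs(np.linalg.eigvals(dense)).max(),
                np.abs(np.linalg.eigvals(sparse)).max())

print(radii)  # dense radius roughly doubles from n=200 to n=800; sparse radius does not grow
```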
A second surprising result is that the stability of dynamical systems defined on directed random graphs exhibits a universal character, in the sense that it is governed by only three network parameters: the effective mean degree c(ρ + 1), the coefficient of variation v_J = √(⟨J²⟩ − ⟨J⟩²)/⟨J⟩, and the ratio α = ⟨J⟩/d between the mean interaction strength and the decay rate. This result follows from the analytical expression, given by Eq. (71), for the typical value of the leading eigenvalue of the adjacency matrix that encodes the network of interactions between the system constituents. From the analytical expression for the typical value of the leading eigenvalue, we obtain the universal phase diagrams of Figs. 3 and 4.
Analyzing these phase diagrams, we obtain the following conclusions on how network topology affects system stability. First, negative correlations between indegrees and outdegrees stabilize large dynamical systems, whereas the mean coupling strength α and the coupling fluctuations v_J render dynamical systems less stable. Second, when the fluctuations v_J of the coupling strengths are small enough, the stability is controlled by an outlier and is independent of v_J. On the other hand, when v_J is large enough, the leading eigenvalue is determined by the boundary of the continuous part of the spectrum, and the system stability decreases as a function of v_J. Moreover, in the first scenario the unstable mode is ferromagnetic (⟨R⟩ > 0), whereas in the second scenario it is spin-glass-like (⟨R⟩ = 0). Lastly, systems with coupling fluctuations v_J larger than the critical value v* = √((1 − α²)/α²) do not contain a stable phase, no matter how large the negative correlations between indegrees and outdegrees are.
The universal phase diagrams of Figs. 3 and 4 have been derived with a mathematical method, akin to the cavity method in statistical physics, which computes the typical value of the leading eigenvalue of random directed graphs that have a giant SCC in the limit n → ∞. The cavity method computes the typical value of λ_1 by neglecting contributions of cycles of finite length. However, if the graph contains disorder in the weights J_ij, then the leading eigenvalue is not a self-averaging quantity, and there exists a finite (albeit small) probability that the leading eigenvalue comes from a cycle of finite length that is part of the graph, as sketched in Fig. 2. Hence, short cycles can destabilize large dynamical systems defined on random directed graphs when they induce strong enough feedback loops.
The derived theoretical results for the spectra of large sparse non-Hermitian random matrices may also be useful for applications other than the linear stability analysis of large dynamical systems described by differential equations. For example, the theory is also useful to study the stability of dynamical systems in discrete time [90], which are relevant for the study of systemic risk in networks of banks connected through financial contracts [8,86]. For discrete-time systems, the stability is controlled by the spectral radius r(A) = max{|λ_1|, |λ_2|, ..., |λ_n|}. Another example of an application is the analysis of spectral algorithms that use the right or left eigenvector associated with the (sub)leading eigenvalue to obtain information about a system, e.g., spectral clustering algorithms [91,92], centrality measures based on eigenvectors [93-95], or algorithms for the low-rank matrix estimation problem [96,97]. Detectability thresholds of spectral algorithms often depend on the location of the leading and subleading eigenvalues [92,97-100]. A fourth example of an application is the analysis of stochastic processes with spectra of Laplacian or Markov matrices [101-105]: the stationary state of a Markov process is the right (or left) eigenvector associated with the leading eigenvalue of a Markov matrix [105], the relaxation time is provided by the spectral gap [106-108], and the cumulant generating function of a time-additive observable can be expressed in terms of the leading eigenvalues of a sequence of Markov matrices [109-113].
A fifth application is the study of non-Hermitian quantum mechanics on random graphs [87,114,115], which is currently an active research field. Lastly, we remark that the subleading eigenvalue, and its associated right (left) eigenvector, provide not only information about the asymptotic stability of large dynamical systems, but also about their response to random perturbations, as shown in Ref. [46]. Taken together, we conclude that the spectral theory presented in this paper can be used in various contexts.
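As a minimal illustration of the discrete-time criterion mentioned above, the following sketch rescales a random sparse directed matrix so that r(A) = 0.9 < 1 and verifies that iterates of x_{t+1} = A x_t decay; the matrix ensemble and all parameters are arbitrary choices made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
# Random sparse directed matrix, rescaled to spectral radius r(A) = 0.9.
A = (rng.random((n, n)) < 4.0 / n) * rng.normal(0.0, 1.0, (n, n))
A *= 0.9 / np.abs(np.linalg.eigvals(A)).max()

# Iterate the linear map x_{t+1} = A x_t: since r(A) < 1, the norm decays
# asymptotically like 0.9**t (possibly after a non-normal transient).
x = rng.normal(0.0, 1.0, n)
norm0 = np.linalg.norm(x)
for _ in range(120):
    x = A @ x
print(np.linalg.norm(x) / norm0)   # far below 1
```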
The theoretical results obtained in this paper are conjectures about the spectral properties of directed random matrices. Reference [97] provides a mathematical proof of Eqs. (68) and (77) for the leading and subleading eigenvalues in the special case of directed Erdős-Rényi graphs with J_ij = 1. To our knowledge, there exist no proofs of Eqs. (53) and (57) for the deterministic outlier eigenvalue λ_isol and the boundary of the continuous part of the spectrum λ_b for graph ensembles different from directed Erdős-Rényi graphs. Also, we are not aware of proofs of Eqs. (54), (55) and (58) for the mean value of the distribution of right-eigenvector elements, the recursion relations for p_R given by Eqs. (48) and (49), the algebraic multiplicity of the trivial −d-eigenvalue given by Eq. (45), and Eqs. (56) and (60) for the number of zero entries of right eigenvectors. The results in the present paper are thus interesting conjectures about the spectral properties of sparse non-Hermitian random matrices.
In the present paper, we have focused on systems that are locally tree-like and oriented.For future work, it would be interesting to understand how network topology affects the linear stability of non-oriented systems [116] and systems that contain small cycles or motifs [52,53,117,118].Based on the results in the present paper, we would expect that those systems are in general less stable than locally tree-like and oriented systems.
If A is a nondiagonalizable matrix of size n, then there exists a nonsingular matrix S such that A = S H S^{−1} [60], where H is a Jordan matrix. The Jordan matrix H is a block-diagonal matrix built from Jordan blocks, where a Jordan block of size n_α associated with an eigenvalue λ_α has the entries λ_α on its diagonal and the entries 1 on its superdiagonal.
The number of Jordan blocks associated with an eigenvalue λ equals the geometric multiplicity of the eigenvalue λ.Hence, the number of independent right eigenvectors of the matrix A is equal to the number m of Jordan blocks in the matrix H.
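A toy numerical check of this statement, with an assumed 4 × 4 Jordan matrix containing two blocks for the same eigenvalue: the geometric multiplicity (the dimension of the kernel of H − λ1) equals the number of blocks.

```python
import numpy as np

# H has two Jordan blocks for the eigenvalue 2: one of size 3 and one of
# size 1.  The number of independent eigenvectors equals the number of
# blocks, here 2, even though the algebraic multiplicity is 4.
H = np.array([[2., 1., 0., 0.],
              [0., 2., 1., 0.],
              [0., 0., 2., 0.],
              [0., 0., 0., 2.]])

# Geometric multiplicity: dim ker(H - 2 I) = n - rank(H - 2 I).
geo_mult = H.shape[0] - np.linalg.matrix_rank(H - 2 * np.eye(4))
print(geo_mult)   # 2
```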
The columns of S are the generalized right eigenvectors ⃗R_j of the matrix A. Since the matrix S is nonsingular, the generalized right eigenvectors form a set of n independent vectors that span C^n. Analogously, the rows of S^{−1} are the generalized left eigenvectors ⃗L†_j of A. Generalized left and right eigenvectors form a biorthonormal system, with the ℓ_α as defined in Eq. (A11). Since the generalized right and generalized left eigenvectors form a biorthonormal system, the matrix A can be expressed in the form of Eq. (A16).

Appendix B: Random directed graphs with a prescribed degree distribution

In the present paper, we consider random graphs with a given prescribed degree distribution p_{K^in,K^out}. In this ensemble, graphs are drawn with a probability determined by the probability distribution of the degree sequence {k^in_j, k^out_j}_{j∈[n]} and by the number n({k^in_j, k^out_j}_{j∈[n]}) of graphs with that degree sequence. The probability distribution of a degree sequence is proportional to the product Π_{j∈[n]} p_{K^in,K^out}(k^in_j, k^out_j), conditioned on Σ_j k^in_j = Σ_j k^out_j. This model is called the uniform model [38]. It is the configuration model [2,4,37] conditioned on the event that there are no self-links and no multiple edges. However, since in the configuration model self-links and multiple edges are rare, the results in this paper apply to both the configuration model and the uniform model (the local neighborhood of a randomly selected node is the same for both models in the limit n → ∞).

Algorithm
We detail the algorithm we use in this paper to sample graphs from the ensemble defined in Sec. B 1. We consider the specific case of a degree distribution in which a node's outdegree either equals its indegree or is an independent copy of it. The algorithm we have used to generate random graphs from this degree distribution consists of the following steps:
1. We generate a sequence of n i.i.d. variables k^in_j from the distribution p_{K^in};
2. We generate a sample of n i.i.d. Bernoulli random variables x_j ∈ {0, 1}, which take the value x_j = 1 with probability q and x_j = 0 with probability 1 − q;
3. If x_j = 0, then we set k^out_j = k^in_j;
4. We generate a random permutation ζ on the set of indices j ∈ [1, n] for which x_j = 1;
5. If x_j = 1, then we set k^out_j = k^in_{ζ(j)};
6. To each j we associate k^in_j insockets and k^out_j outsockets;
7. We randomly connect pairs of insockets with outsockets, starting with the node with the highest total degree k^in_j + k^out_j and connecting its sockets to k^in_j randomly selected outsockets and k^out_j randomly selected insockets. Two connected sockets create a directed edge;
8. We do not allow for self-links and we do not allow for multiple edges. Sometimes step seven fails because connecting two sockets would create either a self-link or a multiple edge; in this case, the algorithm restarts step seven;
9. We repeat step seven until the algorithm has found a proper set of edges that defines an oriented simple random graph.
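A simplified sketch of steps 1-9 in Python. The indegree distribution p_{K^in} is taken to be Poisson (an illustrative assumption), and step 7's degree-ordered pairing is replaced by a uniform shuffle of the sockets with a full restart on failure; this keeps the sketch short but is not the paper's exact pairing order.

```python
import numpy as np

def sample_degrees(n, c, q, rng):
    """Steps 1-5: i.i.d. Poisson(c) indegrees (assumed choice of p_K_in);
    a fraction q of the nodes draws its outdegree from a random
    permutation of the indegrees, the rest set k_out = k_in."""
    k_in = rng.poisson(c, n)
    k_out = k_in.copy()
    idx = np.flatnonzero(rng.random(n) < q)
    k_out[idx] = k_in[rng.permutation(idx)]
    return k_in, k_out

def match_sockets(k_in, k_out, rng, max_tries=5000):
    """Steps 6-9, simplified: pair shuffled in- and out-sockets, and fully
    restart whenever a self-link or multiple edge appears."""
    n = len(k_in)
    out_s = np.repeat(np.arange(n), k_out)
    in_s = np.repeat(np.arange(n), k_in)
    for _ in range(max_tries):
        rng.shuffle(out_s)
        rng.shuffle(in_s)
        edges = set(zip(out_s.tolist(), in_s.tolist()))
        # Accept only if all pairs are distinct (no multiple edges) and
        # no socket pair closes a self-link.
        if len(edges) == len(out_s) and all(u != v for u, v in edges):
            return edges
    raise RuntimeError("socket matching failed")

rng = np.random.default_rng(5)
k_in, k_out = sample_degrees(300, 2.0, 1.0, rng)
edges = match_sockets(k_in, k_out, rng)
```

Note that the permutation in step 5 preserves the multiset of degrees, so the total number of insockets always equals the total number of outsockets.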
This algorithm works very well for most of the degree distributions discussed in this paper, except for power-law random graphs with a small exponent a, see Section V C. This is because for power-law random graphs it can be difficult to avoid multiple edges or self-links. Generating graphs with a power-law degree distribution with a small exponent a requires more sophisticated algorithms, such as algorithms using Markov chains [119,120]. Alternatively, one could consider the configuration model instead of the uniform model and allow for self-links and multiple edges. One should however bear in mind that for power-law random graphs with a small exponent a the configuration model and the uniform model may no longer be equivalent, because finite-size effects will be significant.
Appendix C: Oriented rings in random and directed graphs

An oriented ring graph of size ℓ is a subgraph of ℓ nodes whose adjacency matrix has as its only nonzero entries the couplings J_i ∈ R on the directed cycle 1 → 2 → ⋯ → ℓ → 1.
Oriented ring graphs may contribute stochastic outliers to the spectra of random graphs with a prescribed degree distribution p K in ,K out .Here, we first derive explicit expressions for the eigenvalues of an isolated oriented ring graph, and then count the number of oriented ring graphs in random and directed graphs.

Eigenvalues of an oriented ring graph
The eigenvalues λ_j of an oriented ring graph are located on the circle centred at the origin with radius γ = |J_1 J_2 ⋯ J_ℓ|^{1/ℓ}, and are given by the ℓ-th roots of the product of the couplings, i.e., λ^ℓ = J_1 J_2 ⋯ J_ℓ. Notice that for simplicity we have used that A_ii = 0. If A_ii = −d, then the eigenvalues are located on the circle with radius γ centred at −d.
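This statement is easy to verify numerically; the ring length and the Gaussian weights below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(6)
l = 5
J = rng.normal(1.0, 0.5, l)           # weights J_1 ... J_l on the ring

# Oriented ring: the only nonzero entries are A[i, (i+1) mod l] = J_i.
A = np.zeros((l, l))
for i in range(l):
    A[i, (i + 1) % l] = J[i]

eigs = np.linalg.eigvals(A)
gamma = abs(np.prod(J)) ** (1.0 / l)  # predicted radius
print(np.abs(eigs))                   # all moduli equal gamma
```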

Number of oriented ring graphs in a random and directed graph
We count the average number ⟨N (ℓ)⟩ of oriented ring graphs of length ℓ located in a random and directed graph with a prescribed degree distribution.
Before considering the general case, we count the average number of oriented rings in the directed Erdős-Rényi ensemble [121]. The average number of oriented rings of length ℓ is the probability of drawing ℓ edges multiplied by the total number of ordered sequences of ℓ indices. In the limit of large n, this yields ⟨N(ℓ)⟩ = c^ℓ/ℓ. The expected number of cycles of finite length is given by the sum of ⟨N(ℓ)⟩ over ℓ. Consider now a random and directed graph with a prescribed degree distribution p_{K^in,K^out}. The distribution of outdegrees of a node reached by following a link is biased by the node's indegree, p̃(ℓ) = Σ_k (k/c) p_{K^in,K^out}(k, ℓ). Hence, the average number of oriented rings of length ℓ follows from the mean branching ratio, and in the limit of large n we obtain ⟨N(ℓ)⟩ = [c(ρ + 1)]^ℓ/ℓ. If ρ = 0, then Eq. (C9) is equivalent to Eq. (C5). The total expected number of cycles of finite length is given by ⟨N⟩ = Σ_ℓ ⟨N(ℓ)⟩. The distribution of N, the number of oriented cycles of finite length, is a Poisson distribution with mean ⟨N⟩.
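The large-n expression ⟨N(3)⟩ = c³/3 for the directed Erdős-Rényi ensemble can be checked by Monte Carlo; the parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n, c, samples = 200, 2.0, 150

# In a directed Erdos-Renyi graph without self-links, every closed walk of
# length 3 traverses a directed triangle, and each triangle is counted once
# per starting node, so N(3) = trace(A^3) / 3.
counts = []
for _ in range(samples):
    A = (rng.random((n, n)) < c / n).astype(float)
    np.fill_diagonal(A, 0.0)
    counts.append(np.trace(A @ A @ A) / 3.0)

print(np.mean(counts))   # close to c**3 / 3 = 8/3 for large n
```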
The probability p₊ to have at least one cycle of length larger than 2 is given by p₊ = 1 − e^{−⟨N⟩}. Note that p₊ → 0 for c(ρ + 1) → 0 and p₊ → 1 for c(ρ + 1) → 1. Hence, at the percolation transition of the SCC there exists with probability one at least one cycle of finite length.

We show that the spectral distribution µ(z) of the adjacency matrix A of random and directed graphs, as defined in Eq. (3), takes the form of Eq. (D1): a delta peak at −d with weight 1 − s_sc plus a contribution s_sc µ̃, where s_sc is the relative size of the giant strongly connected component, see Sec. II C 1, and where µ̃(λ) is the normalized spectral distribution associated with the giant strongly connected component. Since d only contributes a trivial shift λ_j → λ_j − d to all eigenvalues, we can set d = 0 without loss of generality.
In order to demonstrate Eq. (D1), we use the spectral theory for sparse non-Hermitian random matrices from Refs. [42,45]. We focus on the case for which J_jk = 1, but the derivation can readily be extended to the case J_jk ≠ 1. As shown in those references, the spectral distribution µ(z) of matrices of the form given by Eq. (3) can be expressed as in Eq. (D2), where ∂_{z*} = (∂_x + i∂_y)/2 and where g is a 2 × 2 square matrix with complex-valued entries. The distribution p_G solves the recursive distributional equation (D3), and the distributions q_out and q_in solve analogous recursive distributional equations.
Using the ansatz (D8)-(D9) in Eq. (D3), we obtain Eq. (D10), where s_in, s_out, s_wc and s_t denote the relative sizes of the giant incomponent, outcomponent, weakly connected component, and tendrils, respectively (see Sec. II C or Refs. [28,40,41]). The distributions p̃_in(x), p̃_out(x) and p̃(g) solve a set of recursive distributional equations that we have omitted because their precise form does not matter here. Eqs. (D2) and (D10), together with the formulae for the relative sizes of the components, imply the final result (D1), which we aimed to prove in this appendix.
Appendix E: Derivation of the recursive distributional equations for p_R in random and directed graphs with a prescribed degree distribution p_{K^in,K^out}

In this appendix, we derive the recursive distributional equations (48) and (49) for the distribution p_R of the entries of right eigenvectors in random and directed graphs with a prescribed degree distribution p_{K^in,K^out}. The relations (48) and (49) apply to the eigenvalues λ_b located at the boundary of the continuous part σ_ac of the spectrum and to deterministic eigenvalue outliers λ_isol.
The theory of Ref. [44] builds on two properties of random and directed graphs with a prescribed degree distribution, namely, that these random graphs are locally tree-like and oriented.In addition, it uses that eigenvalues λ b and λ isol are stable, i.e., insensitive to small matrix perturbations.
In a first subsection, we clarify what we mean by a matrix being locally tree-like and oriented, and in the second and third subsections we derive the recursive distributional Eqs.(48)(49).
1. Locally tree-like and oriented random matrices

We say that a nondirected graph is a tree if it is connected and does not contain cycles, see Ref. [125], and we say that a matrix is oriented if A_ij A_ji = 0 for all pairs i and j. In the following, we extend these global definitions to local definitions that apply to sequences of random matrices A_n, with n ∈ N.
First we define the concept of local tree-likeness.Let A n , with n ∈ N, be a sequence of random matrices and let C n be their associated adjacency matrices, i.e., C kj = 1 when A kj ̸ = 0 and C kj = 0 when A kj = 0. We also consider the associated symmetrized adjacency matrices Cn with entries Cjk = max {C jk , C kj }, which are the adjacency matrices of nondirected simple graphs.We define the nondirected ℓ-neighborhood of a node i as the subgraph of Cn formed by the nodes that are separated no more than a distance ℓ from i.We say that the sequence of random matrices A n is locally tree-like if, for each ℓ ∈ N, the nondirected ℓ-neighborhood of a uniformly and randomly selected node in Cn is in the limit n → ∞ with probability one a tree, see Ref. [56].
Second, we define local orientedness of a sequence of random matrices A n .We say that the sequence of random matrices A n is locally oriented if, for each ℓ ∈ N, the principal submatrix of A n formed by the nodes in the nondirected ℓ-neighborhood of a uniformly and randomly selected node i is in the limit n → ∞ with probability one oriented.
the continuous part of the spectrum σ_ac in locally tree-like random matrices in the limit of infinitely large n. In the special case of locally tree-like and oriented matrices, see Appendix E 1 for a definition, we show that Eq. (E3) holds.
In order to clearly show how the assumptions of local tree-likeness and local orientedness enter the theory, we first derive a set of recursion relations for the entries R_j of a general matrix A. As we will demonstrate, the relations for general matrices are not closed and are thus not useful. In order to close these equations, we make the assumption that A is locally tree-like. Lastly, we show how the recursion relations simplify when A is both locally tree-like and oriented.

General matrices
The derivations we present rely on a recursive application of the Schur formula to the resolvent of A.
The resolvent of A is defined by G_A(z) = (z 1_n − A)^{−1}, with 1_n the identity matrix of size n. In the limit n → ∞, the resolvent G_A(z) only exists for values z ∉ σ_ac. Let λ be a nondegenerate eigenvalue, and let ⃗R and ⃗L be the corresponding right and left eigenvectors. We assume that there exists a path in the complex plane that reaches λ and along which G_A(λ − η) exists. In addition, we assume that λ is a stable eigenvalue of A, i.e., we assume that if λ is an eigenvalue of A, then λ is also an eigenvalue of the principal submatrix A^(j), which we obtain from A by deleting the j-th row and column. Hence, λ is either a deterministic eigenvalue outlier or is located at the boundary of the continuous part of the spectrum σ_ac.
Since there exists a path that reaches λ and along which G_A(λ − η) exists, we can write the spectral representation of the resolvent given by Eq. (F2). Note that relation (F2) also holds when the matrix A is not diagonalizable, since we can decompose G_A(λ) in a biorthonormal system of generalized left and right eigenvectors, analogous to the decomposition of A in Eq. (A16). Eq. (F2) implies that the components R_j of ⃗R are given by Eq. (F3), where G_jℓ(λ − η) = [G_A(λ − η)]_jℓ, ⃗1 is the column vector with all components equal to one, and ⃗e_j is the column vector with all components equal to zero, except for the j-th component, which is equal to one.
To compute the elements G_jℓ(λ) of the resolvent matrix, we use the Schur formula, which is a common tool in random matrix theory (see for instance Section 2.4.3 in Ref. [126] and also Refs. [56,71,127]). Note that for symmetric random matrices, Eqs. (F31)-(F32) are equivalent to the recursion relations for the resolvent derived in Refs. [42,78,79]. The Eqs. (F19), (F24), (F31) and (F32) form a closed set of recursion relations. They can either be solved on a given graph instance, or we can solve these equations in a distributional sense for a random graph ensemble. In either case, we obtain information about the statistics of R_j.

Locally tree-like and oriented matrices
For random matrices that are locally tree-like and oriented, we can use in Eqs. (F31) and (F32) that J_jk J_kj = 0 (F33). As a consequence, we obtain the explicit expressions (F36) and (F37) for the eigenvector elements R_j and R^(j)_k. This concludes the derivation of Eq. (E3), which we set out to show.
Appendix G: Normalizable solutions to the recursive distributional Eqs. (48) and (49) for p_R

In this appendix, we derive analytical results for λ_b ∈ ∂σ_ac and λ_isol by identifying the values of λ for which Eqs. (48) and (49) admit a normalizable solution.
Since Eqs. (48) and (49) are linear distributional equations, we can derive a set of fixed-point equations for the lower-order moments of R and L. In order to distinguish averages with respect to p_R from those with respect to q_R, we introduce the notation ⟨f(R)⟩ = ∫ d²r p_R(r) f(r) (G1) and ⟨f(R)⟩_q = ∫ d²r q_R(r) f(r) (G2), where f is an arbitrary function. From Eq. (49), we obtain ⟨R⟩_q = [⟨K^in K^out⟩ ⟨J⟩ / (c(λ + d))] ⟨R⟩_q (G3) together with an analogous relation (G4) for the second moment, which is quadratic in ⟨J⟩ and ⟨R⟩_q, and from Eq. (48) we obtain the corresponding relations (G5)-(G8) for the moments of p_R. The Eqs. (G3)-(G8) admit three kinds of solutions. The first type of solution is obtained when ⟨R⟩_q ≠ 0.

First, we derive the recursion Eqs. (106) and (107) for the random matrix model of Eq. (104) with diagonal elements D_j drawn from a distribution p_D. Using Eqs. (F36) and (F37) for the eigenvector elements R_j and R^(j)_k, and the fact that for the locally tree-like random matrices defined in Eq. (104) all random variables on the right-hand side of Eqs. (F36) and (F37) are independent, we readily obtain the recursion Eqs. (106) and (107), with p_R and q_R as defined in Eqs. (E6) and (E7).
Second, we determine the values of λ for which the recursion Eqs. (106) and (107) admit normalizable solutions, which provide us with the deterministic outlier eigenvalues λ_isol and the eigenvalues λ_b at the boundary of the continuous part of the spectrum. To this aim, we use Eq. (106) to derive the set of self-consistent equations (I1) and (I2) in the lower-order moments of q_R. Solving Eq. (I1) for ⟨R⟩_q ≠ 0, we obtain Eq. (108) for the eigenvalue outliers λ = λ_isol of the random matrix ensemble. On the other hand, setting ⟨R⟩_q = 0 in Eq. (I2), we obtain Eq. (109) for the boundary λ_b.

FIG. 3. Universal phase diagram for the stability of dynamical systems on random directed graphs with positive ⟨J⟩. The black solid line and black dashed line separate the unstable phase at large effective connectivity c(ρ + 1) from the stable phase at small connectivity c(ρ + 1) for two given values of α = ⟨J⟩/d. The red dotted line separates the gapped phase at small v_J from a gapless phase at high v_J, which can also be considered a transition line from a ferromagnetic phase (gapped) to a spin-glass phase (gapless).

FIG. 4. Universal phase diagram for the stability of dynamical systems on random directed graphs with negative ⟨J⟩. Similar to Fig. 3, but now for negative α. In this case there is no gapped (or ferromagnetic) phase.

FIG. 5. Effect of negative ρ on the spectral properties of the adjacency matrices of random directed graphs. Spectral properties of the adjacency matrices of Poissonian [see Eq. (90)] or geometric [see Eq. (92)] random directed graphs with a mean degree c = 2 and negative ρ are presented. Direct diagonalization results for matrices of size n = 4000 (markers) are compared with the theoretical results for infinitely large matrices (lines) derived in Sec. III. Panels (a) and (b): eigenvalues λ_j(A) of the adjacency matrices of two Poissonian random graphs with ρ = 0 [Panel (a)] and ρ = −0.3 [Panel (b)] are plotted and compared with the theoretical boundary λ_b of the spectrum given by Eq. (53). Panel (c): mean values of the leading eigenvalue λ_1 and of the real part of the subleading eigenvalue Re[λ_2] are plotted as a function of ρ and compared with the theoretical results λ_isol = 2(ρ + 1) and |λ_b| = √(2(ρ + 1)) for ρ > −0.5, and ⟨λ_1⟩ = 1 − [1 − c(ρ + 1)]e^{c(ρ+1)} for ρ < −0.5. Panel (d): the mean value of the entries of the right eigenvector associated with the leading eigenvalue is plotted as a function of ρ and compared with the theoretical result for ⟨R_1⟩/√⟨|R_1|²⟩.

FIG. 6. Effect of positive ρ on the spectral properties of adjacency matrices of weighted, random, directed graphs. Spectral properties of the adjacency matrices of Poissonian [see Eq. (95)] or geometric [see Eq. (96)] random directed graphs with a mean degree c = 2 and positive ρ are presented. The off-diagonal weights are drawn from a Gaussian or a bimodal distribution with mean µ_0 = 1 and standard deviation v = 1.2 [see Eqs. (97) and (98)]. Direct diagonalization results for matrices of size n = 4000 (markers) are compared with theoretical results for n → ∞ (lines), presented in Sec. III. Panels (a) and (b): eigenvalues λ_j(A) of the adjacency matrices of two Poissonian random graphs with ρ = 0 [Panel (a)] and ρ = 0.5 [Panel (b)] are plotted and compared with the theoretical boundary λ_b of the spectrum given by Eq. (53). Panels (c)-(f): the sample means of the leading eigenvalue Re[λ_1], of the spectral gap Re[λ_1] − Re[λ_2], and of the first moment R_1 of the right eigenvector are plotted as a function of ρ and compared with the theoretical expressions derived in Sec. III. Sample means are over 1000 matrix realizations of size n = 4000 (except for the blue circles in Panel (e), which are for n = 1000). The error bars denote sample standard deviations.

FIG. 8. Properties of the distribution p_{R_1}. Results are for the adjacency matrices of random directed graphs with a Poisson degree distribution given by Eq. (90) with mean c = 3. Edges are weighted by random couplings J_ij drawn from a Gaussian distribution [see Eq. (97)] with mean µ_0 = 1 and variance v² = 0.2. Panel (a): theoretical results for p_R solving Eqs. (48)-(49) with λ = λ_isol (solid lines) are compared with a histogram of the entries of ⃗R_1 obtained from directly diagonalizing 2 × 10⁴ matrices of size n = 1000 (markers); the degree-correlation coefficient is ρ = 0. Panel (b): fraction of nonzero entries of the leading right eigenvector ⃗R_1 as a function of the degree-correlation coefficient ρ. Direct diagonalization results for matrices of size n = 100 and n = 1000 (markers are sample averages over 100 and 20 samples, respectively) are compared with theoretical results for s_in (solid line) obtained from solving Eqs. (15) and (17). In the numerical experiments, we have used the criterion |R_i|² < 10⁻²⁰ to identify a zero-valued entry.
Appendix D: The algebraic multiplicity of the −d-eigenvalue in random and directed graphs

Consider a block matrix with square diagonal blocks a and d and off-diagonal blocks b and c, where d is an (n − 1) × (n − 1) matrix. Then s_a := d − c a^{−1} b (F5) is the Schur complement of block a, and s_d := a − b d^{−1} c (F6) is the Schur complement of block d. If a and its Schur complement s_a are invertible matrices, then the following block inversion formula holds, which we call the Schur formula. We use the Schur formula to derive recursion relations for the elements of the resolvent G_A and eventually Eqs. (E4) and (E5). Applying the Schur formula to the off-diagonal elements G_jℓ of the resolvent, we obtain the relation G_jℓ = −G_jj Σ_{k=1; k≠j}^{n} A_jk [G^(j)(λ)]_kℓ, where G^(j)(λ) = (λ 1_{n−1} − A^(j))^{−1} (F10). Summing over the index ℓ, we obtain