Fast algorithm for topologically disordered lattices with constant coordination number

We present a stochastic algorithm for constructing a topologically disordered (i.e., non-regular) spatial lattice with nodes of constant coordination number, the CC lattice. The construction procedure dramatically improves on an earlier proposal [Phys. Rev. E 97, 022144 (2018)] with respect to both computational complexity and finite-size scaling properties, making the CC lattice an alternative to proximity graphs which, especially in higher dimensions, is significantly faster to build. Disordered, constant-coordination lattices also occur in nature, an important example being certain amorphous materials with a low concentration of coordination defects. As a concrete application, we characterize the criticality of the 3D Ising model on the CC lattice. We find that its phase transition belongs to the clean Ising universality class, establishing that the disorder present in the CC lattice is a non-relevant perturbation in the sense of renormalization group theory.


I. INTRODUCTION
Given a set of discrete points in space, if connections between these points are created according to a rule defined in terms of geometrical closeness, one obtains a so-called proximity graph. Such graphs possess only local connections and their typical shortest path length scales as ℓ̄ ∼ N^(1/d) on a d-dimensional set of N points, in contrast to small-world networks [1] and some scale-free networks [2], where shortcuts provided by long-range connections lead to scalings ℓ̄ ∼ ln N and ℓ̄ ∼ ln ln N, respectively.
The most prominent proximity graph in two dimensions is arguably the Delaunay triangulation (DT) [3]. Its construction rule specifies that the circumcircle of any DT triangle must be empty, i.e., it cannot contain any points of the set. Furthermore, in the DT, it is guaranteed that the distance along the edges between any two points is not larger than about 2.42 times their metric distance [4]. Similar, but more restrictive rules lead to the relative neighborhood graph (RNG) [5] and the Gabriel graph (GG) [6], which can be constructed from the DT by removing specific bonds [7]. Also the (first) nearest-neighbor graph turns out to be a subgraph of the DT. A further simple proximity construction is the random geometric graph (RGG) [8], where any two points whose distance falls below a certain threshold are linked.
Proximity graphs are useful in a wide range of applications, most notably mesh generation, surface modeling, pattern classification, ad-hoc networks, path planning and astrophysics [7][8][9][10][11][12][13][14]. Besides topics predominantly related to computer science, proximity graphs constructed from random point sets have also found application in statistical physics [15], with focus on the behavior of critical phenomena [16] for systems placed on such a graph rather than on a regular lattice. These irregular lattices are said to present topological disorder, in order to distinguish their disorder from that of, e.g., diluted regular lattice models (a research topic with an even longer history, see Ref. [17] and references therein). In the language of renormalization group theory [18] the nonregular structure of those lattices represents a source of disorder, and the central question is whether it is relevant, i.e., whether it changes the character of the phase transition of a given physical model.
Paradigmatic systems exhibiting continuous phase transitions include the equilibrium Ising model [19] and the nonequilibrium contact process (CP) [20], both already studied on two-dimensional Delaunay triangulations using large-scale numerical Monte Carlo simulations [21][22][23][24][25][26]. For the CP, it has been shown that the topological disorder of the DT presents no relevant perturbation, and its phase transition remains in the universality class of directed percolation. This result cannot be explained by Harris' seminal relevance criterion [27,28], and it motivated the introduction of a generalization [29] that attributes the non-relevance to strong spatial anti-correlations in the coordination numbers of the lattice nodes. Also for this new criterion, however, violations have been found [30], meaning that a complete criterion explaining the influence of topological disorder on continuous phase transitions still remains to be found.
Fluctuating local coordination numbers are typical for proximity graphs (DT, RNG, GG, RGG, k-nearest neighbor, . . . ). Therefore, a lattice which is still disordered in the topological sense, but where coordination number fluctuations are absent, can be an interesting tool for investigating the influence of disorder on phase transitions in physical systems. One such lattice is the Constant Coordination (CC) lattice, which we introduced with J. A. J. Richter in [31], and which is illustrated in Fig. 1. By construction, every site in the CC lattice is connected to exactly q other sites, without allowing for self- or double connections. In this way, any perturbation of the phase transition of a model can only originate from the implicit connectivity disorder, independently of coordination numbers. In this paper we refine the original construction rules [31] and present an improved local algorithm with significantly reduced computational complexity (from quadratic to linear in the number of points), which is straightforward to apply in dimensions larger than two and also eliminates a few shortcomings related to the applicability of finite-size scaling [32] simulations. Furthermore, we make use of the new algorithm to investigate the 3D Ising phase transition on the CC lattice, an undertaking unfeasible with the original algorithm.
The paper is organized as follows. In Sec. II we illustrate the basic idea of the construction procedure. In Sec. III we review the drawbacks of the originally proposed algorithm and show how they are resolved in the current approach. After a few remarks on the immediate generalization to higher dimensions in Sec. IV, we present and analyze, as an example of application, numerical Monte-Carlo simulations of the Ising model on a 3D CC lattice with coordination number q = 4 in Sec. V.
Finally, in Sec. VI we present our concluding remarks.

II. BASIC CONCEPT
As is typical for topologically disordered random lattices, our starting point is a cloud of N randomly distributed points in a toroidal domain of linear size L with Euclidean metric. In the first step of the lattice construction, bonds between random pairs of sites are gradually introduced until each site has exactly q neighbors. The key step after that is to subject the graph to a dynamical rewiring, by means of a simulated annealing (SA) procedure [33], in order to achieve locality, i.e., to keep connections effectively short ranged. Specifically, in the SA algorithm, two bonds ab and cd are taken at random and rewired to the new configuration ac and bd whenever this leads to a decrease of the sum of the bond lengths, i.e., whenever

Δℓ = (ℓ_ac + ℓ_bd) − (ℓ_ab + ℓ_cd) < 0,   (1)

where ℓ_ij denotes the length of the bond between sites i and j, and Δℓ defines the cost function. The simulated annealing temperature T has the effect of noise on the convergence to a state of low cost function: moves that increase the cost are accepted with probability exp(−Δℓ/T). The value of T is logarithmically decreased during the simulation, such that in the beginning, non-optimal rewiring updates are accepted with moderate probability, whereas in the final stages, this probability almost vanishes and only those moves are performed where condition (1) strictly applies. The first algorithm for the CC construction, put forward in the original proposal [31], presents two central drawbacks: first, it is computationally expensive; second, it requires considerable care and the introduction of an inconvenient extra parameter in order to avoid any dependence of its geometrical characteristics on the lattice size.
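As an illustration, a single rewiring attempt can be sketched in a few lines of Python. The function names are ours, and the pairing (a,c)/(b,d) of the proposed rewiring as well as the Metropolis acceptance rule are standard simulated-annealing choices, not necessarily the exact ones of Ref. [31]:

```python
import math
import random

def periodic_dist(p, q, L):
    """Euclidean distance on a torus of linear size L (minimum image)."""
    return math.sqrt(sum(min(abs(a - b), L - abs(a - b)) ** 2
                         for a, b in zip(p, q)))

def rewiring_attempt(pos, bonds, L, T, rng=random):
    """Pick two bonds (a,b) and (c,d) at random and propose rewiring them
    to (a,c) and (b,d).  Accept if the total bond length decreases, or,
    for T > 0, with Metropolis probability exp(-dl/T) otherwise."""
    (a, b), (c, d) = rng.sample(list(bonds), 2)
    # reject proposals that would create self- or double connections
    if len({a, b, c, d}) < 4 or (a, c) in bonds or (c, a) in bonds \
            or (b, d) in bonds or (d, b) in bonds:
        return False
    old = periodic_dist(pos[a], pos[b], L) + periodic_dist(pos[c], pos[d], L)
    new = periodic_dist(pos[a], pos[c], L) + periodic_dist(pos[b], pos[d], L)
    dl = new - old                      # the cost change of Eq. (1)
    if dl < 0 or (T > 0 and rng.random() < math.exp(-dl / T)):
        bonds.discard((a, b))
        bonds.discard((c, d))
        bonds.update({(a, c), (b, d)})
        return True
    return False
```

In a full SA run this attempt would be repeated many times while T is lowered on a logarithmic schedule; note that the rewiring conserves the degree of every site, which is what keeps the coordination number constant throughout.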
An improved algorithm for generating the CC lattice, which overcomes these drawbacks, can be obtained based on a simple key concept: instead of operating on the whole lattice, we perform the construction locally, in subgraphs delimited by grid cells of a small, fixed size ℓ. This brings the complexity from O(N²) to O(N) by keeping fixed the size of the set to which simulated annealing is applied, and its locality also guarantees that there are no lattice-size dependencies.

III. ALGORITHMIC DETAILS
In this section we elaborate on the Constant Coordination lattice algorithm sketched in Sec. II, giving special attention to its improvements over the original proposal [31], which presented important drawbacks: a. Computational cost Since every rewiring step takes, at random, two bonds of the full set, the time complexity scales as O(nN²), where N is the number of sites, q the coordination number and n the number of SA steps. Even though q and n are constant parameters, the O(N²) dependence alone renders the algorithm prohibitively expensive for large lattices. Besides, as the typical bond length becomes small with respect to the system size L, random rewirings grow increasingly unlikely to satisfy condition (1) and the majority of update attempts is rejected, resulting in a slow convergence which demands a very large number of steps n.
b. Initial configuration The need for a large number of rewiring steps can be mitigated by starting the SA procedure from a configuration that already has a certain degree of locality, instead of being fully random. Such an initial configuration can be obtained simply by randomly linking sites that still have fewer than q bonds to their nearest likewise under-connected neighbors, until each site has exactly q bonds. This requires the nearest-neighbor distances of the sites to be known, which can be computed in O(N ln N) using spatial tree techniques [34]. In practice, this initial step allows the parameter n to be reduced by about two orders of magnitude.
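One simple variant of such a locality-seeded start is sketched below; the names are ours, and for clarity we brute-force all pairwise distances instead of using the spatial trees that give the O(N ln N) scaling quoted above:

```python
import math
from itertools import combinations

def initial_configuration(pos, q, L):
    """Greedy local start for the SA stage: walk through site pairs in
    order of increasing (periodic) distance and link any pair whose two
    sites both still have fewer than q bonds."""
    n = len(pos)

    def d(i, j):
        # minimum-image Euclidean distance on the torus of size L
        return math.sqrt(sum(min(abs(a - b), L - abs(a - b)) ** 2
                             for a, b in zip(pos[i], pos[j])))

    pairs = sorted(combinations(range(n), 2), key=lambda p: d(*p))
    deg = [0] * n
    bonds = set()
    for i, j in pairs:
        if deg[i] < q and deg[j] < q:
            bonds.add((i, j))
            deg[i] += 1
            deg[j] += 1
    # NOTE: this greedy pass can leave a few sites under-connected for
    # unlucky point configurations -- the "pathological motifs" discussed
    # in the text -- which would then require repair or a restart.
    return bonds, deg
```

The returned degree list makes it easy to detect such a failed pass before handing the configuration to the rewiring stage.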
c. Pathological motifs However, the use of an optimized initial configuration also comes with a drawback of its own: the occasional failure to produce a so-called simple graph, due to unlucky configurations, or pathological motifs. One such configuration arises, e.g., when the last site in the initial construction loop is left to connect to itself (all the other sites already having q bonds). A number of examples of such motifs can be found in Appendix A. These occasional failures must be dealt with using either some involved iterative repair procedure or a complete restart.
d. Micro-scale equivalence A lattice with N sites and an arbitrary subgraph of N sites taken from a larger lattice should be indistinguishable with respect to their connectivity properties (such as average bond length, shortest path, clustering, etc.), up to boundary effects. This property is crucial for the application of finite-size scaling methods. While it is trivially fulfilled for any geometrically constructed lattice, like regular lattices, the DT, the RGG, etc., this is not the case for the original CC algorithm. There, even though the initial connection step links most of the sites locally, typically a few longer bonds spanning a significant part of the system cannot be avoided, thus introducing a dependence on the lattice size. Therefore, in order to achieve micro-scale equivalence, larger lattices must be subjected to longer SA procedures, i.e., instead of a constant parameter n, we have a function n = n(N), which is an undesirable additional parameter and a possible source of error.
In order to eliminate these deficiencies, we impose that all the steps of the lattice construction must be local, restricted to subgraphs delimited by grid cells of a small, fixed size instead of spanning the whole lattice. In the first step of the construction (Fig. 2a), the spatial domain of linear size L is subdivided into cells C_{1,i} of linear size ℓ and index i. Typically, we choose ℓ ≈ 8. In the next step (Fig. 2b), the sites in each cell are linked together, such that each of them has q₁ < q neighbors. Note that building these subgraphs is always possible as long as q₁ is even [35], a restriction imposed by the handshaking lemma [36]. Once this first layer of connections is in place, the lattice consists of (L/ℓ)² disconnected subgraphs, where q₁ = 2 in Fig. 2. Then, considering a second grid of cells C_{2,i} that is, for instance, diagonally displaced with respect to C_{1,i}, another set of bonds is added so that each site gets q₂ additional bonds (Fig. 2c). This staggering of layers results in a seamless, connected graph where each site has q = q₁ + q₂ neighbors. In the next step (Fig. 2d), bond lengths are reduced by a simulated annealing procedure as described in Sec. II, but now restricted to the cells C̃_{1,i}, C̃_{2,i}, . . . , where the tilde indicates that the cells are open, i.e., not only those bonds are considered which lie completely inside the cells, but also those crossing the boundaries. In contrast to the initial connection steps (b, c), the SA step may be repeated not only for diagonally staggered cells (Fig. 2e), but also for cells displaced horizontally or vertically by ℓ/2 (not shown in the figure). The rewiring is repeated until the desired degree of locality is reached, completing the construction of the CC lattice (Fig. 2f). Note that, in order to avoid any directional bias, we switch the order in which the cells are processed (bottom left to top right in Fig. 2) after each repetition. A detailed pseudocode for the construction procedure can be found in Fig. 5, and an animation of the whole lattice construction process is available in the Supplementary Material.
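The layered connection steps of Figs. 2a-2c can be sketched as follows. This is a minimal sketch with names of our own choosing: the SA rewiring stage (Figs. 2d-2f) is omitted, and corner cases that the full algorithm must handle, such as cells containing very few sites or accidental duplicate bonds between the two layers, are ignored:

```python
import random
from collections import defaultdict

def cell_index(p, ell, L, shift=0.0):
    """Index of the grid cell containing point p, for a grid displaced
    by `shift` along every axis (periodic domain of linear size L)."""
    m = int(round(L / ell))
    return tuple(int(((x + shift) % L) / ell) % m for x in p)

def connect_subgraph(sites, k, rng):
    """Give every site of one cell k bonds (k even) via cyclic
    connections on a shuffled ordering, in the spirit of the
    CONNECT_SUBGRAPH pseudocode of Fig. 5."""
    g = list(sites)
    rng.shuffle(g)
    n = len(g)
    bonds = set()
    for i in range(n):
        for j in range(1, k // 2 + 1):
            bonds.add(tuple(sorted((g[i], g[(i + j) % n]))))
    return bonds

def build_cc_lattice(pos, q1, q2, ell, L, seed=0):
    """Two connection layers: layer 1 on the plain grid, layer 2 on a
    diagonally displaced grid, giving each site q1 + q2 bonds."""
    rng = random.Random(seed)
    bonds = set()
    for shift, k in ((0.0, q1), (ell / 2.0, q2)):
        cells = defaultdict(list)
        for i, p in enumerate(pos):
            cells[cell_index(p, ell, L, shift)].append(i)
        for members in cells.values():
            bonds |= connect_subgraph(members, k, rng)
    return bonds
```

Because `cell_index` works on tuples of arbitrary length, the same sketch applies unchanged in three and more dimensions, which is the point made in Sec. IV.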
The drawbacks of the earlier algorithm, listed above, are eliminated in the present, improved version by restricting the most expensive construction step, namely the dynamical rewiring, to the small cells of linear size ℓ. This drastically reduces the complexity of the lattice construction from O(N²) to O(N), as shown in Fig. 3. The small size of the cells also eliminates the need for an optimized initial bond configuration, further saving computing effort, since a fully random initial linking is then sufficient. The issue of pathological motifs (Appendix A) is thereby eliminated as well, since it only arises in the generation of an optimized initial configuration, which is not a necessary step in the improved algorithm. Finally, the property of micro-scale equivalence is now fulfilled by construction, as long as the length ℓ is kept fixed for lattices of different size in a set of finite-size scaling simulations. This can be shown [31] by considering the distribution of bond lengths for lattices of different sizes: as the histogram of Fig. 4 shows, the bond length distributions for different values of N coincide within numerical precision. The concentration of lengths around small values seen in the figure also gives evidence of the high degree of locality of the lattice.
We remark that some care must be taken with the set of construction parameters q₁, q₂, . . . . The handshaking lemma states that any finite graph has an even number of odd-degree nodes. As a consequence, the algorithm cannot converge for cells which end up with an odd number of nodes if qᵢ is also odd. Clearly, this also places an important limitation on the algorithm: it should only be used for generating lattices with even coordination number q. Provided all qᵢ are even, the number of layers and the amount of displacement can be seen as tunable parameters. For constructing a lattice of constant coordination number q = 6, for instance, one could employ either q = q₁ + q₂ = 2 + 4 = 6, or three layers with q = q₁ + q₂ + q₃ = 2 + 2 + 2 = 6 and displacements ℓ/3 and 2ℓ/3. A two-layer setting q = q₁ + q₂ = 3 + 3 = 6 must be avoided due to the evenness requirement from the handshaking lemma. An overview of some of the possible configurations is given in Tab. I.

[Fig. 5: pseudocode of the procedures CONNECT_SUBGRAPH (cyclic connections within a cell) and REWIRING_ATTEMPT.]
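The evenness constraint can be made concrete by enumerating the admissible layer decompositions of a target coordination number. The following sketch uses our own naming; the actual Tab. I additionally lists displacement vectors:

```python
def layer_configs(q, max_layers=3):
    """Enumerate decompositions q = q1 + ... + qn with every qi even
    and positive, as required by the handshaking lemma (the order of
    the parts matters, since each layer gets its own displaced grid)."""
    if q == 0:
        return [()]
    if max_layers == 0 or q < 2:
        return []
    out = []
    for k in range(2, q + 1, 2):
        out += [(k,) + rest for rest in layer_configs(q - k, max_layers - 1)]
    return out
```

For q = 6 this yields (2, 2, 2), (2, 4), (4, 2) and (6,), while the forbidden (3, 3) never appears; note that a single-layer configuration such as (6,) would leave the cells mutually disconnected, so at least two layers are needed in practice.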

IV. HIGHER DIMENSIONS
Whereas the ( 2 ) scaling of the original algorithm is a significant limitation already in 2D, for higher dimensions it makes the construction of lattices of reasonable size prohibitively expensive. In this context, it is important to notice that the current algorithm is not only a substantial improvement over the original one, but also compares favorably with algorithms for usual proximity graphs, such as the Delaunay triangulation and its subgraphs, as well as nearest-neighbor graphs. For the latter, typical sequential algorithms are known to scale as ( ln ), through the use of spatial tree decomposition methods [37][38][39]. For the DT on uniformly distributed points, a ( ln ln ) scaling is possible [40] using sophisticated divide-and-conquer techniques. In dimensions larger than two, some of those algorithms are not trivially generalized and the scaling is not maintained, such as for the RNG, where one falls back to algebraic complexity in 3D [41]. In contrast, the CC algorithm is straightforwardly generalized to higher dimensions, as already described in Fig. 5, while maintaining the ( ) scaling behavior, as shown in Fig. 3. Essentially, the construction remains the same, with only some parameters such as the displacement vectors having to be adjusted (see Tab. I for examples). The hypercubic cells of the general case admit many more layer configurations and displacement vectors than the square cells of the 2D setting (see again Tab. I). One must only ensure that mixing in all directions takes place, such as can always be achieved by a single fully diagonal displacement, i.e., 1 = (0, 0, … , 0), 2 = ( ∕2, ∕2, … , ∕2). Hence, as in 2D, the smallest possible coordination number is = 4 for any dimension. 
Furthermore, the CC algorithm can not only be generalized to higher dimensions, but in principle also to spaces equipped with metrics other than the Euclidean one, with the single necessary change being the distance function in the algorithm's rewiring subroutine (Fig. 6). A generalization to curved manifolds should be possible as long as a proper spatial grid can be defined, such as for the hyperbolic plane ℍ², for which a number of regular tessellations can be constructed [42]. However, due to the inherent length scale of this space (the curvature radius), the cell size is determined by geometric constraints and cannot be freely chosen. Also, setting up periodic boundary conditions is non-trivial in hyperbolic spaces, although possible [43].

V. APPLICATION
Topologically disordered structures are instrumental in understanding the role of certain perturbations in the context of statistical physics, which motivated the construction of the CC lattice in the first place. Therefore, as a possible application, we perform large-scale Monte-Carlo simulations of the ferromagnetic Ising model on a 3D constant coordination lattice with coordination number q = 4 (CC4), using the MARQOV code framework [44].
The Ising model is defined by the Hamiltonian

H = −∑_⟨i,j⟩ J_ij s_i s_j − ∑_i h_i s_i,

where s_i = ±1 are discrete spins on the lattice, J_ij denotes the coupling between nearest neighbors ⟨i, j⟩ and h_i is the external field at site i. For equilibrium lattice models, quenched disorder can be introduced in a variety of ways. For fixed ferromagnetic coupling J_ij = J > 0 and a randomly distributed external field, the system is called the Random Field Ising Model (RFIM) and has been investigated thoroughly over the last decades [45]. In contrast, for vanishing external field but randomly distributed (anti)ferromagnetic bonds, the system shows the behavior of a spin glass [46, and references therein].
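Since the CC lattice is specified by an explicit bond list rather than by a regular geometry, the Hamiltonian is naturally evaluated over that list. A minimal sketch (our naming):

```python
def ising_energy(spins, bonds, J=1.0, h=0.0):
    """Energy H = -J * sum_<i,j> s_i s_j - h * sum_i s_i for +-1 spins
    on an arbitrary bond list, e.g. a CC lattice, with uniform coupling
    J and uniform field h."""
    e = -J * sum(spins[i] * spins[j] for i, j in bonds)
    e -= h * sum(spins)
    return e
```

For the fully aligned ground state this returns −J per bond, as it should.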
In the present study, quenched disorder is introduced as topological randomness encoded in the implicit connectivity of the CC lattice. We therefore fix all couplings to J_ij = J = 1 at vanishing external field, h_i = 0.
In order to investigate the Ising model in the vicinity of the critical point, we employ state-of-the-art importance-sampling Monte Carlo methods, using cluster as well as local update algorithms. In particular, we use the algorithm proposed by Wolff [47], which significantly reduces the critical slowing down near the critical point and is furthermore straightforwardly applicable to disordered lattices. Although the cluster updates preserve ergodicity, we add local Metropolis updates [48] in order to make sure that the short-wavelength modes are properly thermalized. After an initial thermalization period, magnetization and energy per site are measured after every few updates.
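The reason the Wolff algorithm carries over directly to disordered lattices is that it only needs an adjacency structure, not a regular geometry. A textbook sketch for a single cluster update on an arbitrary graph (names ours; MARQOV's actual implementation will differ):

```python
import math
import random

def wolff_update(spins, neighbors, beta, J=1.0, rng=random):
    """One Wolff cluster flip on an arbitrary adjacency structure:
    grow a cluster of aligned spins, adding each aligned neighbor with
    probability p = 1 - exp(-2*beta*J), then flip the whole cluster."""
    p_add = 1.0 - math.exp(-2.0 * beta * J)
    seed = rng.randrange(len(spins))
    s0 = spins[seed]
    cluster = {seed}
    stack = [seed]
    while stack:
        i = stack.pop()
        for j in neighbors[i]:
            if j not in cluster and spins[j] == s0 and rng.random() < p_add:
                cluster.add(j)
                stack.append(j)
    for i in cluster:
        spins[i] = -s0
    return len(cluster)
```

At low temperature (large beta) the cluster spans the whole aligned system, which is precisely what defeats critical slowing down near T_c.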
In the study of disordered systems, it is necessary to average physical observables over many different, independent disorder realizations of the system, also called replicas. The quenched averages over replicas are performed at the level of (extensive) observables, rather than at the level of the partition function [46]. Denoting quenched averages as [...] and thermal averages as ⟨...⟩, we use the following definitions of magnetization, energy and susceptibility:

m = [⟨|m|⟩],   e = [⟨e⟩],   χ = βN([⟨m²⟩] − [⟨|m|⟩]²).

Furthermore, the two-point finite-size correlation length is given by

ξ = (1 / (2 sin(π/L))) √([⟨m²⟩] / [⟨|Φ(k_min)|²⟩] − 1),

with the Fourier transform of the magnetization defined by

Φ(k) = (1/N) ∑_i s_i e^{i k·r_i},

where r_i denotes the spatial coordinate of site i and k_min = (2π/L, 0, 0) represents the smallest non-zero wave vector in the finite system. The quantity [⟨|Φ(k_min)|²⟩] is also measured during the Monte Carlo run. Finally, the fourth-order magnetic cumulant, or Binder ratio, is given by

U₄ = 1 − [⟨m⁴⟩] / (3[⟨m²⟩]²).

Essential for the analysis of the Ising model is a precise knowledge of the location of the critical point, which depends on the details of the lattice structure and is therefore not known in advance. It is, however, known that the curves of certain RG-invariant quantities, such as the Binder ratio U₄ or the correlation length ratio ξ/L, intersect close to the critical point for different lattice sizes L. More specifically, the intersection points T* for pairs (L, 2L) converge to the critical temperature according to T*(L, 2L) = T_c + aL^(−1/ν−ω), which allows one to obtain a very precise estimate of the critical temperature [49]. In the scope of this paper, however, we simply use the crossings without the infinite-volume extrapolation, which provides a sufficiently good estimate for the next steps of the analysis. From the U₄ curves we obtain our estimate of the critical temperature, Eq. (10), where the error is evaluated, quite conservatively, from the width of the intersection region. The estimate from the crossing points of ξ/L turns out to be considerably less precise, though fully compatible.
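Averaging at the level of observables can be sketched as follows; this is one common convention (per-replica thermal averages first, then a flat disorder average; conventions for the Binder ratio vary in the literature), with names of our own choosing:

```python
def quenched_observables(replica_series, beta, n_sites):
    """Quenched averages at the level of observables: thermal averages
    over each replica's time series of the magnetization per site m,
    then a flat average over disorder replicas.  Returns the averaged
    |m|, the susceptibility and the Binder ratio."""
    def mean(xs):
        return sum(xs) / len(xs)
    m_abs = mean([mean([abs(m) for m in s]) for s in replica_series])
    m2 = mean([mean([m * m for m in s]) for s in replica_series])
    m4 = mean([mean([m ** 4 for m in s]) for s in replica_series])
    chi = beta * n_sites * (m2 - m_abs ** 2)
    u4 = 1.0 - m4 / (3.0 * m2 ** 2)
    return m_abs, chi, u4
```

In the ordered limit (every sample fully magnetized) this yields χ = 0 and U₄ = 2/3, the expected low-temperature fixed-point value of the Binder ratio.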
For the fixed-point values of the phenomenological quantities, we obtain (ξ/L)* = 0.623(10), again without using infinite-volume extrapolations. These quantities are considered universal, at least in a limited sense, in that they depend weakly on certain geometrical characteristics of the system [50][51][52]. Taking as reference the most precise estimates available for the 3D Ising model on a cubic lattice, (ξ/L)* = 0.6431(1) [49] and U₄* = 0.46548(5) [53], we see that our estimates present only small deviations, giving a first indication that the Ising model on a 3D CC4 lattice stays in the universality class of the clean model.
In the next step of the analysis we simulate the Ising model on lattices of linear size L = 16, 24, 32, 48, 64, 96, and 128 for several temperatures in the vicinity of the critical point. A measurement is taken after every Elementary Monte Carlo Step (EMCS), which consists of one Metropolis sweep and a number of Wolff cluster updates chosen such that the fraction of flipped sites is approximately independent of the lattice size. Each disorder realization is initially prepared in a cold configuration, and 500 EMCS are used for proper equilibration. Then another 1000 EMCS are performed, with a measurement being taken after each one of them. For the smaller lattices we use up to 10⁴ disorder replicas for the averages of Eq. (4); for the two largest lattices, L = 96 and L = 128, we use at least 1500 replicas for every temperature. The simulations took about 10⁴ CPU days on an Intel Xeon E5-2697 v3 processor, where the time for constructing the lattices was below 1% of the total time.
Finite-size scaling theory predicts that the susceptibility, the fourth-order magnetic cumulant and the correlation length scale according to

χ = L^{γ/ν} X_χ(x),   U₄ = X_U(x),   ξ/L = X_ξ(x),

where γ and ν are critical exponents and X_χ, X_U, and X_ξ denote universal scaling functions, with the argument given by

x = (T − T_c) L^{1/ν}.

These equations describe the scaling behavior to first order.
Corrections of higher order are expected to become irrelevant for large system sizes. We compute the scaling collapse plots, assuming the best known values of the critical exponents of the clean model [53] and the value of T_c of Eq. (10). In the upper panel of Fig. 7 we plot the scaling collapse for the susceptibility, and in its lower panel we plot both phenomenological couplings against the scaling variable. The nearly flawless collapse displayed by the curves, for lattice sizes L ≥ 32 in the upper panel and L ≥ 16 in the lower panel, provides compelling evidence that the Ising model on a three-dimensional CC4 lattice belongs to the universality class of the clean 3D Ising model.
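The collapse itself amounts to a simple change of variables on each raw data point, which can be sketched as (names ours):

```python
def collapse_point(T, L, chi, Tc, nu, gamma):
    """Map a raw data point (T, L, chi) onto the first-order
    finite-size-scaling form: x = (T - Tc) * L**(1/nu) and
    y = chi / L**(gamma/nu).  Near criticality, points from all
    lattice sizes L should then fall on a single curve X_chi(x)."""
    return (T - Tc) * L ** (1.0 / nu), chi / L ** (gamma / nu)
```

Plotting y against x for all (T, L) pairs, with the exponents fixed to the clean 3D Ising values, is exactly the construction behind Fig. 7.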

VI. CONCLUSION
We present an algorithm for the construction of a topologically disordered lattice with nodes of constant coordination number. Boasting a computational complexity that scales linearly with the number of points, it is significantly faster than other proximity graph constructions. The algorithm performs only local operations, dividing the spatial domain into cells whose linear size is small compared to the lattice dimensions. This guarantees that bond lengths are bounded and allows a straightforward generalization to any number of spatial dimensions and, in principle, to different metrics and topologies. By efficiently constructing disordered graphs of fixed coordination number, the CC lattice could find application in the modeling of amorphous materials such as low-temperature amorphous silicon in two and three dimensions [54,55], especially given that the most well-known lattice with constant coordination number, the Voronoi construction, is not considered a satisfactory model [56]. As a prototypical application we perform numerical Monte Carlo simulations of the 3D equilibrium Ising model on a realization of the lattice with four neighbors and find the character of the second-order phase transition to be unchanged with respect to the clean 3D Ising model. Hence, quenched disorder is revealed to be a non-relevant perturbation in this case, providing another puzzle piece on the road towards a general disorder relevance criterion [30].

APPENDIX A: PATHOLOGICAL MOTIFS

. . . dangling bond each. As they, however, are already connected, this would result in a double connection, which is also illegal. In the improved CC algorithm, presented in this paper, these issues do not arise in the first place, as fully random initial connections are sufficient, as pointed out in Sec. III.

[Figure caption: Dashed lines symbolize open connectors (which form a bond when two are closed/connected) at the end of the initial construction step, whereas the peripheral blue dots represent sites that are already fully connected.]