Memory Formation in Adaptive Networks

The continuous adaptation of networks like our vasculature ensures optimal network performance when challenged with changing loads. Here, we show that adaptation dynamics allow a network to memorize the position of an applied load within its network morphology. We identify that the irreversible dynamics of vanishing network links encode memory. Our analytical theory successfully predicts the role of all system parameters during memory formation, including parameter values which prevent memory formation. We thus provide analytical insight into the theory of memory formation in disordered systems.

DOI: 10.1103/PhysRevLett.129.028101

Network architecture determines network performance. Strengthening and weakening links in a network over time is key to maintaining optimal performance under changing loads, for stability in mechanical networks [1-3] as well as for transport efficiency in traffic [4,5] or vasculature [6-15]. Understanding the physical principles of how an adaptive network's past governs its current state is essential in a world where even social and economic networks are currently facing massive adaptation. For the prototype of adaptive networks, living flow networks, data show that changes in loads drive the permanent adaptation of network architecture [16-20]. Do adaptive networks memorize information about past loads while continuously striving for their optimal state? Memory in disordered, passive systems, like granular media [21-23] or non-Brownian suspensions [24-26], as well as in neural networks [27,28], is encoded in persistent configurations of the microstructure of the system [29]. During a training period, irreversible dynamics lead to specific microstates; the system memorizes the past direction or amplitude of the training load. Do the active dynamics of the continuous optimization of adaptive networks allow for irreversibility to encode information about the past?
Here, we show that adaptive networks retain information on the position of an applied load in their architecture. Despite the presence of fluctuating loads, the applied load's position is retrieved upon reapplication. Specifically, we find that links with vanishing conductivity are responsible for the irreversibility of optimization dynamics allowing for memory encoding. We analytically show that irreversibility is a direct consequence of the adaptation dynamics, providing deep insight into the physical role of all systems' parameters on memory. Strikingly, our analytical calculations predict that the cost function can limit memory formation, which we confirm in our simulation. Our Letter thus not only discovers that adaptive networks are able to store memories of previous loads but provides an analytical tractable theory of memory formation in disordered systems.
We follow the standard model for adaptive networks most often used in the context of flow networks [15,30,46-48,55,56]. The network consists of $N$ nodes that are connected by links, whose flow rates $Q_{ij}$ are linearly dependent on their conductances $C_{ij}$ for fixed potential differences. At every time step $t$, the flow in the network is driven by loads $q_i(t)$ applied at each node $i$, where only one node has a negative load, $q_1(t) = -\sum_{i>1} q_i(t)$, i.e., it acts as the outlet, while all other nodes have $q_i(t) \geq 0$ [15,47]. Conservation of flow at every node, known as Kirchhoff's law, uniquely determines the individual flow rates $Q_{ij}(t)$ from the entire network's conductances $C_{ij}(t)$ and the loads $q_i(t)$ at every node.
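The flow computation underlying the model can be sketched as a weighted-Laplacian solve. This is a minimal illustration, not the paper's code; the function names and the pseudo-inverse gauge choice are ours:

```python
import numpy as np

def solve_flows(edges, C, q):
    """Solve Kirchhoff's law on a conductance network.

    edges: list of (i, j) node pairs; C: conductance per edge;
    q: net load per node (must sum to zero).
    Returns the flow Q_ij = C_ij * (p_i - p_j) on every edge.
    """
    n = len(q)
    L = np.zeros((n, n))               # weighted graph Laplacian
    for (i, j), c in zip(edges, C):
        L[i, i] += c
        L[j, j] += c
        L[i, j] -= c
        L[j, i] -= c
    p = np.linalg.pinv(L) @ q          # node potentials (pseudo-inverse fixes the gauge)
    return np.array([c * (p[i] - p[j]) for (i, j), c in zip(edges, C)])

# A symmetric triangle: node 0 is the outlet, nodes 1 and 2 inject unit loads.
edges = [(0, 1), (0, 2), (1, 2)]
C = np.array([1.0, 1.0, 1.0])
q = np.array([-2.0, 1.0, 1.0])
Q = solve_flows(edges, C, q)           # by symmetry, no flow on edge (1, 2)
```

By symmetry the two injecting nodes sit at equal potential, so the link between them carries no flow, while each boundary link carries unit flow into the outlet.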
The adaptation rule first introduced by Murray [6] minimizes the power loss $E = \sum_{\langle ij\rangle} Q_{ij}(t)^2 C_{ij}(t)^{-1}$ under the constraint of a fixed building cost $\sum_{\langle ij\rangle} C_{ij}(t)^{\gamma} = K^{\gamma}$. Here, $K$ quantifies the overall constraint, and the exponent $\gamma$ determines how link conductances contribute to the cost; see the Supplemental Material [57]. For example, resistor networks or porous media typically exhibit $\gamma = 1$, while flow networks with Hagen-Poiseuille flow have $\gamma = 1/2$ or $\gamma = 1/4$ when the overall tube volume or the surface area is fixed, respectively. Iterative adaptation of $C_{ij}$ with discrete time steps $\delta t$ locally solves the optimization problem [46,57]. To account for fluctuating loads $q_i(t)$, we additionally average over a period $T$, implying the update rule [15,47]

$$C_{ij}(t+\delta t) = K\,\frac{\langle Q_{ij}(t)^2\rangle_T^{1/(\gamma+1)}}{A(t)^{1/\gamma}}, \qquad (1)$$

where $A(t) = \sum_{\langle ij\rangle} \langle Q_{ij}(t)^2\rangle_T^{\gamma/(\gamma+1)}$ is a normalization factor. Taken together, this model defines how the conductances adapt for a given time series of loads $q_i(t)$ [57].
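A minimal sketch of one adaptation step follows, assuming the update rule takes the normalized form quoted in the text (the exponents then follow from requiring the building cost $\sum C^\gamma = K^\gamma$ to stay fixed):

```python
import numpy as np

def adapt(Q2_avg, gamma, K):
    """One adaptation step: C_ij is set proportional to <Q_ij^2>^(1/(gamma+1))
    and rescaled so the building cost sum(C^gamma) = K^gamma stays fixed."""
    A = np.sum(Q2_avg ** (gamma / (gamma + 1)))   # normalization factor A(t)
    return K * Q2_avg ** (1.0 / (gamma + 1)) / A ** (1.0 / gamma)

Q2 = np.array([4.0, 1.0, 0.25])       # time-averaged squared flows <Q^2>_T per link
C_new = adapt(Q2, gamma=0.5, K=1.0)
cost = np.sum(C_new ** 0.5)           # building cost; stays at K^gamma = 1
```

Note that links carrying more flow receive higher conductance, while the normalization keeps the total cost exactly constant at every step.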
Memory is the storage of information in a noisy environment [29], so that previously written information can be retrieved at a later time. To probe for memory in adaptive networks, we consider a disk-shaped geometry with its primary outlet $i = 1$ at the center; see Fig. 1(a). We model the fluctuating environment by stochastically switching background loads on and off with equal probability, which describes the open-close switches ubiquitous in biological flow networks [9,47]. The mean and the standard deviation of the fluctuations are parametrized by the average background load $q^{(0)}$ on every node. Note that our results are robust and also hold when we consider a different noise distribution or a continuous optimization algorithm; see the Supplemental Material [57].
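The fluctuating background loads can be sketched as follows. The specific "open" value of $2q^{(0)}$ is our assumption, chosen so that equal-probability on/off switching yields the stated mean $q^{(0)}$ per node:

```python
import numpy as np

def background_loads(n_nodes, q0, rng):
    """Draw fluctuating background loads: each non-outlet node is open
    (load 2*q0) or closed (load 0) with equal probability, giving mean q0.
    The outlet (node 0) balances the total inflow."""
    open_state = rng.random(n_nodes - 1) < 0.5
    q = np.where(open_state, 2.0 * q0, 0.0)
    return np.concatenate(([-q.sum()], q))

rng = np.random.default_rng(0)
q = background_loads(100, q0=1.0, rng=rng)   # q sums to zero by construction
```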
To test for memory formation, we follow the protocol used in disordered systems [58,59], where a writing stimulus is applied and the information about the stimulus is subsequently retrieved by applying the full possible range of stimuli. In our case, we apply an additional load $q_{\rm add}$ at the boundary of the network at a particular angle $\theta_1$ over a duration $t_{\rm train}$. This stimulus imprints a treelike structure on the network morphology; see Fig. 1(a). However, the system quickly returns to a seemingly isotropic morphology when the additional load is removed; see Fig. 1(b). To test whether this morphology still carries information about the writing stimulus, we apply, after a waiting period $t_{\rm wait}$, a probing stimulus at various angles $\theta_2$ and measure the total power loss $E$. Figure 1(c) shows that the power loss is minimal for precisely the angle at which the writing stimulus was applied, indicating that this configuration is more optimized due to memory of the stimulus. In contrast, the power loss is independent of the angle in an untrained network to which the writing stimulus was never applied. This demonstrates that adaptive networks can retain memory despite lacking an obvious visual imprint.
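The write-then-read protocol can be summarized schematically. Here `train_step` and `probe_loss` are hypothetical stand-ins for a full network simulation, used only to show the structure of the protocol:

```python
def memory_protocol(train_step, probe_loss, theta1, thetas, t_train, t_wait):
    """Write-then-read protocol: train with the stimulus at angle theta1,
    wait under background loads only, then probe the power loss at every
    candidate angle theta2."""
    for _ in range(t_train):
        train_step(theta1)          # adapt with the writing stimulus applied
    for _ in range(t_wait):
        train_step(None)            # adapt under background fluctuations only
    return {theta2: probe_loss(theta2) for theta2 in thetas}

# Toy stand-ins: this probe loss is minimal at the written angle 0.5.
losses = memory_protocol(
    train_step=lambda theta: None,
    probe_loss=lambda theta2: abs(theta2 - 0.5),
    theta1=0.5, thetas=[0.0, 0.25, 0.5, 0.75], t_train=3, t_wait=2)
```

In a real run, `probe_loss` would apply the probing stimulus, let the flows relax, and return the measured power loss $E$.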
To unveil the mechanism of this memory, we quantify the memory read-out signal $S$ as the relative change in power loss [Eq. (2)]. Strikingly, we find that data for different $t_{\rm train}$ and $t_{\rm wait}$ collapse onto a straight line of the form

$$S = M\left(1 - e^{-t_{\rm train}/\tau_{\rm mem}}\right) + C\,e^{-t_{\rm wait}/\tau_{\rm cor}}; \qquad (3)$$

see Fig. 1(d). [Fig. 1(b)-(d): memory is probed by applying the probing stimulus at angle $\theta_2$ (empty blue); power loss $E$ over 200 independent simulations versus $\theta_2 - \theta_1$ for the trained (red) and untrained (blue) data sets; the read-out signal $S$ collapses when plotted using Eq. (3) for all $t_{\rm train}$.] This functional form was motivated by an individual analysis of the dependencies [57]. The structure of the two terms suggests that the signal consists of persistent memory, $M(1 - e^{-t_{\rm train}/\tau_{\rm mem}})$, as well as correlations that decay over time, $C e^{-t_{\rm wait}/\tau_{\rm cor}}$. Note that the correlations start at the maximal value $C$ and decay with a timescale $\tau_{\rm cor}$ during the waiting period; see the Supplemental Material [57]. Conversely, memory builds up during the training period with a timescale $\tau_{\rm mem}$, saturates at the value $M$, and is retained indefinitely.
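Given measured read-out signals, the parameters of the inferred functional form can be extracted with a standard least-squares fit. The synthetic data and parameter values below are for illustration only, not the paper's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def signal(X, M, tau_mem, Ccor, tau_cor):
    """Read-out signal: persistent memory that builds up over t_train plus
    correlations that decay over t_wait."""
    t_train, t_wait = X
    return M * (1 - np.exp(-t_train / tau_mem)) + Ccor * np.exp(-t_wait / tau_cor)

# Synthetic measurements with known parameters (illustration only).
rng = np.random.default_rng(1)
t_train = rng.uniform(1, 50, 400)
t_wait = rng.uniform(1, 50, 400)
true_params = (0.8, 10.0, 0.5, 5.0)
S = signal((t_train, t_wait), *true_params) + 0.001 * rng.normal(size=400)

# Recover M, tau_mem, C, tau_cor from the noisy samples.
popt, _ = curve_fit(signal, (t_train, t_wait), S, p0=(1.0, 5.0, 1.0, 10.0))
```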
To understand how adaptive networks can encode memory, we next quantify how the links' conductances $C_{ij}$ evolve in time. Figure 2(a) shows that after the initial training period low conductance links tend to shrink, while high conductance links tend to stay the same. In fact, we observe that the weakest links eventually reach the minimal conductance value allowed in the simulation [see Figs. 2(b) and 2(c); details in the Supplemental Material [57]] and can never grow back under the adaptation dynamics given by Eq. (1), despite the background load fluctuations. We show in the Appendix that azimuthally oriented links decay fastest in the vicinity of the stimulus. Consequently, the orientations of irreversibly shrinking links retain memory of the spatial stimulus, comparable to memory formation in disordered systems [29]. Figure 2(a) suggests a simple functional form for the dynamics of the network: conductances $C$ above a threshold value $C_{\rm th}$ fluctuate minimally to maintain the fixed building cost [57], while those below shrink with a power-law behavior,

$$\frac{C(t+\delta t)}{C(t)} \propto C(t)^{\beta}. \qquad (4)$$

Fitting the data shown in Fig. 2(a), we find $\langle C(t+\delta t)/C(t)\rangle = 1 \pm 0.03$ for large conductances and $\beta = 0.31 \pm 0.07$ for small conductances in the case without stimulus (colored dots). Remarkably, we find a very similar exponent ($\beta = 0.31 \pm 0.07$) when a stimulus is present (gray dots), although the threshold value $C_{\rm th}$ is clearly lower. This suggests that the exponent $\beta$ is constant and characterizes the adaptation dynamics of the network, while $C_{\rm th}$ depends on the stimulus strength. These observations point to the dynamics of small conductance links as key for memory formation in adaptive networks. The irreversible dynamics break ergodicity [57], implying that not all configurations can be explored in the long time limit and memory persists.
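The exponent $\beta$ can be estimated from per-link growth factors as a log-log slope. The synthetic growth factors below merely mimic the shape of Fig. 2(a); the threshold value is made up for illustration:

```python
import numpy as np

# Synthetic per-link growth factors: links below a threshold shrink as
# (C/C_th)^beta, links above stay unchanged (illustrative values only).
beta_true, C_th = 1.0 / 3.0, 1e-2
C = np.logspace(-6, 0, 200)
ratio = np.where(C < C_th, (C / C_th) ** beta_true, 1.0)

# Estimate beta as the log-log slope of the shrinking branch.
mask = C < C_th
beta_fit = np.polyfit(np.log(C[mask]), np.log(ratio[mask]), 1)[0]
```

The recovered slope matches the analytic prediction of $1/3$ for $\gamma = 1/2$, consistent with the fitted $\beta = 0.31 \pm 0.07$.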
[Fig. 2: only the threshold conductance $C_{\rm th}$ is stimulus specific; compare gray ($q_{\rm add} = 40000\,q^{(0)}$) and color ($q_{\rm add} = 0$). (b),(c) In a network adapted for $t_{\rm train}$ and iterated for longer ($4t_{\rm train}$), links with conductance smaller than $C_{\rm th}$ disappear. Parameters: $\gamma = 1/2$, $q^{(0)} = 1$, $N = 526$, $T = 30\,\delta t$.]

To show that the links' dynamics observed in the numerical simulations are universal, we next consider the dynamics of the simplest adaptive networks analytically. For simplicity, we focus here on constraints with $\gamma = 1/2$, but the general case is discussed in the Supplemental Material [57]. We start by considering the simplest network consisting of three nodes in a triangular arrangement; see Fig. 3(a). For given loads $q_2$ and $q_3$, the optimal network has a V-shaped morphology [46] with a negligible conductance between nodes 2 and 3. We then perturb the system around the optimal state by altering the load at node 2 to $q_2 + \delta q$ and examine the adaptation of all conductances under the dynamics given by Eq. (1). We derive that the high conductance links barely change [57],

$$\frac{C_{12}(t+\delta t)}{C_{12}(t)} = \frac{C_{13}(t+\delta t)}{C_{13}(t)} \approx 1. \qquad (5)$$

Conversely, the small conductance changes as

$$\frac{C_{23}(t+\delta t)}{C_{23}(t)} \approx \left(\frac{C_{23}(t)}{C_{\rm th}}\right)^{1/3}, \qquad (6)$$

where the threshold $C_{\rm th}$ is proportional to the constraint $K$ and otherwise only depends on the loads [57]. This analytical result qualitatively agrees with the numerical results presented in Fig. 2(a). In particular, we predict an exponent of $1/3$ for the evolution of small conductances. To show that the analytical result is universal and to study the parameter dependence of the threshold value $C_{\rm th}$, we next extend the analytical treatment to larger networks; see the Supplemental Material [57]. Here, we build more complex trees by adding additional layers. Since the dynamics of $C_{23}$ are governed by the load difference between node 2 and node 3, we first focus on fully asymmetric trees, where the load difference is maximized by funneling all additional loads through node 3; see Fig. 3(b).
For simplicity, we consider a scenario described by an additional load $q_{\rm add}$ applied at the last layer, while the fluctuations are represented by their average value $q^{(0)}$ at each node and a load perturbation $\delta q$ at node 2. This implies $q_2(t) = q^{(0)} + \delta q$ and $q_3(t) = (N-2)q^{(0)} + q_{\rm add}$. Focusing on the adaptation dynamics of the small conductance $C_{23}$, we again find the power law with exponent $1/3$, with the associated threshold value $C_{\rm th}$ given by Eq. (7) in the limit $q_2 \ll q_3$; see the Supplemental Material [57]. This expression demonstrates how the additional load $q_{\rm add}$ and the load perturbation $\delta q$ compete with the average of the background load fluctuations quantified by $q^{(0)}$: Larger perturbations, a stronger stimulus, and a larger system size $N$ result in a smaller threshold, slowing down the decay of weak links. Conversely, a larger average background load increases the threshold, allowing for a fast decay of weak links. We find very similar results for fully symmetric trees, suggesting that all treelike networks exhibit this behavior; see the Supplemental Material [57]. Despite the simplicity of the considered networks, our analytical results agree with the numerical data shown in Fig. 2(a). In particular, they confirm that high conductance links are invariant, while links with a conductance below the threshold $C_{\rm th}$ shrink with a $1/3$-power law. Moreover, Eq. (7) predicts how the model parameters affect the dynamics of links leading to memory formation in adaptive networks. The analytical result suggests that links of weak conductance have universal ensemble dynamics governed only by the threshold $C_{\rm th}$ given by Eq. (7). To test this prediction, we quantified $C_{\rm th}$ by fitting the dynamics of the conductances as a function of $N$ for various $q^{(0)}$ and $q_{\rm add}$. Figure 4 confirms that the scaling predicted by Eq. (7) agrees with numerical simulations despite the simulation's more complex network morphology; see also the Supplemental Material [57].
We further confirm that $C_{\rm th}$ is independent of the background load fluctuations in the absence of a stimulus $q_{\rm add}$ (see the Supplemental Material [57]), and that the memory effect is independent of the load fluctuations when $q_{\rm add}/q^{(0)}$ is kept constant [57], as predicted by Eq. (7).
We have shown that adaptive networks, which minimize power loss under the constraint of constant building cost, exhibit memory. However, so far we have focused on the particular constraint parameter $\gamma = 1/2$, which is known to result in treelike optimal morphologies [30,46,48,55] or hierarchical morphologies with loops [15,47,60], ignoring that other constraints are also possible and often lead to quite different optimal solutions [15,30,46,47,50,60]. How does memory formation change if we consider a general constraint parameter $\gamma$? Our detailed calculations (see the Supplemental Material [57]) reveal that the dynamics of high conductance links are independent of $\gamma$. Conversely, weak links follow the $\gamma$-dependent power law

$$\frac{C(t+\delta t)}{C(t)} \propto C(t)^{\frac{1-\gamma}{1+\gamma}}, \qquad (8)$$

which reveals that weak links shrink faster for smaller $\gamma$. This equation indicates that memory exists for $\gamma < 1$ and that the precise value of $\gamma$ hardly affects the dynamics. Conversely, Eq. (8) predicts that weak links grow for $\gamma > 1$. Consequently, links never disappear, loops form [60], and memory formation should be impossible in this case. In fact, we expect that these systems are ergodic (see the Supplemental Material [57]), so that transient changes are erased in the long term.
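Under our reading of the $\gamma$-dependent power law in Eq. (8), the weak-link exponent $(1-\gamma)/(1+\gamma)$ changes sign at $\gamma = 1$, separating shrinking from regrowing weak links:

```python
def weak_link_exponent(gamma):
    """Exponent of C(t) in the weak-link update ratio, as read off the
    power law quoted in the text: positive values make small conductances
    shrink further, negative values make them grow back."""
    return (1.0 - gamma) / (1.0 + gamma)

exponents = {g: weak_link_exponent(g) for g in (0.25, 0.5, 1.0, 2.0)}
```

For $\gamma = 1/2$ this reproduces the $1/3$ exponent derived above; for $\gamma > 1$ the exponent is negative, so weak links regrow and memory cannot form.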
In this regime, the dynamics of the links hardly depend on their conductances $C$, implying that weak links typically do not vanish. These qualitatively different dynamics result in loopy networks, in contrast to the treelike networks that are observed for $\gamma = 1/2$ [47]. Numerical simulations also show that the networks do not retain any memory of the direction of the stimulus; see the inset of Fig. 5 and the Supplemental Material [57]. Taken together, the analytical results and the numerical simulations indicate that memory formation relies on vanishing weak links and is only possible for $\gamma < 1$.
We have shown that adaptive networks can retain memory of a stimulus despite background load fluctuations. Applied loads lead to irreversible changes in the network's microstructure by eroding weak links that cannot be revived. Our analytical calculations and numerical simulations consistently describe a power law for the decay rate of low conductances, functionally determined by the network's building cost. The irreversibility of the dynamics arises from the trade-off between the building cost constraint and minimizing power loss. A high local load increases conductances locally for efficient flow while also eroding weak, unimportant links due to the constraint of a fixed building cost, thereby imprinting memory. Yet, if the cost to build high conductance links is too high ($\gamma \geq 1$), networks adapt to a low-hierarchy, loopy architecture, which erases memories of loads over time. Future work needs to show whether memory is also erased in adaptive networks on growing tissue, which achieve the global optimum [61], or in adaptive networks with the special ability to create new links [62].
Unraveling how adaptive networks can encode memories changes our physical understanding of these active systems. In particular, it provides a conceptual change in how we may look at and control adaptive networks when designing smart mechanical materials or treating the plethora of malfunctions of our very own vasculature.

Appendix: Spatial signature of memory.—To unveil the spatial signature of memory, we analyze in detail the location of the shrinking weak links, which contain the memory. Our analytical calculations [57] indicate that links on a direct path from inlets to the outlet shrink more easily if they are far away from the stimulus. Conversely, links perpendicular to such direct paths decay quickly if they are close to the stimulus. We thus expect that azimuthally oriented links decay quickly close to the stimulus in our disk-shaped networks. To quantify this, we measure the fraction of minimal conductance links with radial positions between $R - \Delta r$ and $R$, where $R$ is the radius of the network and $\Delta r$ the width of the annulus. With a stimulus, the network has a significantly higher fraction of such minimal links where the stimulus was applied; see Fig. 7(d). For further detail, we also measure the orientation of a minimal link $ij$ (see Fig. 6) as the angle $\phi_{ij} \in [0, \pi/2]$ between its orientation vector $\vec X^0_{ij}$ and its location vector $\vec X^l_{ij}$,

$$\phi_{ij} = \arccos\!\left(\frac{|\vec X^0_{ij} \cdot \vec X^l_{ij}|}{|\vec X^0_{ij}|\,|\vec X^l_{ij}|}\right).$$

Consequently, $\phi_{ij} = 0$ corresponds to radially oriented links, while $\phi_{ij} = \pi/2$ indicates azimuthally oriented links.
We quantify the angle averaged over small regions of space in networks evolved without [Fig. 7(e)] and with a stimulus. [Fig. 6: measure of vanishing link orientation for the spatial signature of memory. Example network highlighting vanishing links in light blue; the other links' widths are scaled by their conductance, and the outlet node is depicted in red. The enlargement indicates the link orientation angle $\phi_{ij}$ between the orientation vector $\vec X^0_{ij}$ of link $ij$ and the location vector $\vec X^l_{ij}$ of the link's center with respect to the outlet.] While both plots reveal the sixfold symmetry of the underlying irregular network, there are also significant differences: The average orientation $\langle\phi\rangle$ is slightly higher in the wedge defined by the stimulus, indicating that azimuthally oriented links are more likely to decay. Conversely, $\langle\phi\rangle$ is slightly reduced at the boundary of this region, in agreement with our analytical calculations [57]. Taken together, our analysis shows that the decay of azimuthally oriented links in the vicinity of the stimulus memorizes its location.
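The orientation measure of the Appendix can be computed as follows. Folding antiparallel orientations via the absolute value of the dot product is our assumption, consistent with the stated range $\phi_{ij} \in [0, \pi/2]$:

```python
import numpy as np

def link_angle(x_orient, x_loc):
    """Angle phi in [0, pi/2] between a link's orientation vector and its
    location vector (link center relative to the outlet). Taking the
    absolute dot product folds antiparallel orientations onto [0, pi/2]."""
    cosang = abs(np.dot(x_orient, x_loc)) / (
        np.linalg.norm(x_orient) * np.linalg.norm(x_loc))
    return float(np.arccos(np.clip(cosang, 0.0, 1.0)))

# A link aligned with its location vector is radial (phi = 0); a link
# perpendicular to it is azimuthal (phi = pi/2).
radial = link_angle(np.array([1.0, 0.0]), np.array([2.0, 0.0]))
azimuthal = link_angle(np.array([0.0, 1.0]), np.array([2.0, 0.0]))
```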