Robust retrieval of dynamic sequences through interaction modulation

Many biological systems dynamically rearrange their components through a sequence of configurations in order to perform their functions. Such dynamic processes have been studied using network models that sequentially retrieve a set of stored patterns. Previous models of sequential retrieval belong to a general class in which the components of the system are controlled by feedback inputs ("input modulation"). In contrast, we introduce a new class of models in which the feedback modifies the interactions among the components ("interaction modulation"). We show that interaction modulation models are not only capable of retrieving dynamic sequences, but they do so more robustly than input modulation models. In particular, we find that modulation of symmetric interactions allows retrieval of patterns with different activity levels and has a much larger dynamic capacity. Our results suggest that interaction modulation may be a common principle underlying biological systems that show complex collective dynamics.


I. INTRODUCTION
Biological systems are often made of many components that dynamically arrange themselves into specific configurations. In some cases, this dynamics follows a particular sequence of configurations which allows the system to perform some function, as illustrated in Fig. 1. For example, neurons rearrange their synaptic activity in order to generate a sequence of activity patterns [1,2]. Multi-protein assemblies dynamically rearrange their protein composition several times in a precise order as they perform a function, such as the spliceosome processing pre-mRNAs [3]. Bacterial communities on marine particles undergo successions where the species structure changes in reproducible patterns to degrade organic matter [4]. Thus, sequential transitions of multi-component systems through well-defined configurations are a general phenomenon in biology.
These specific configurations can be considered metastable states of the dynamics. The ability of the system to transition from one such configuration to the next means that the system can alter the stability of the configurations. This can be achieved by controlling specific components, e.g., by changing the abundance of certain proteins in the case of assembly dynamics, or by modulating the neuron firing activity in neural systems. Such a scenario corresponds to a common approach in control theory, i.e., modulating inputs on a subset of variables to influence the full system [5]. In biology, however, a different approach may be considered, namely to modify the interactions among the components. Unlike physical systems where the interactions between elementary particles are determined by fundamental forces, biological components are complex objects, and the effective interaction strengths between them can be modified by third parties. For example, the affinity between a pair of proteins can be regulated by allosteric modulation through a third protein [6]. Similarly, the synapses between a pair of neurons can be modulated by a third neuron via heterosynaptic plasticity [7]. Can such interaction modulation be used to control the dynamics of complex systems? What are the differences in performance between interaction and input modulation?
In this work we address these questions using a framework inspired by the Hopfield neural network [8]. The Hopfield network was originally developed as an abstract model of associative memory capable of storing and retrieving particular network configurations. This paradigm has been extended to model biological systems ranging from metabolic networks [9] to protein assemblies [10] and even ecosystems [11]. Furthermore, sequential transitions among the stored configurations have long been considered [12-15], aimed at describing phenomena such as central pattern generation [16], counting [17] and, more recently, free association [18], memory recall [19], and assembly dynamics [20]. Here, we model the dynamics of retrieval by introducing a small set of feedback units, which control the sequential transitions. When formulated this way, previous models are shown to fall into a class of models based on input modulation. We propose a new class of models that rely on interaction modulation to trigger autonomous transitions between configurations. Remarkably, we find that modulation of symmetric interactions allows sequential retrieval of configurations that have different activity levels, which cannot be reliably done by models that use input modulation. Furthermore, this model can retrieve much longer sequences than other models. Our results suggest that interaction modulation may be biologically favored over input modulation due to its robustness and large dynamic capacity.

FIG. 1. Sequential dynamics in complex systems. A: Sequential transition along a sequence of configurations is a common phenomenon in complex systems, such as the assembly of multi-protein complexes and the synaptic activity of neurons. Such dynamics may be modeled by a network where the units s_i store multiple patterns ξ^µ. B: Sequential dynamics can be viewed as unfolding across a rugged, changing landscape. C: Transitions between configurations are represented by time series of order parameters, which measure how close the system is to each configuration. The peak of each colored curve represents the retrieval of a particular configuration.

II. PATTERN STORAGE AND RETRIEVAL
Our model is based on the classic Hopfield network, which can store and retrieve a given set of network configurations, or "patterns". The network is composed of N units, whose activities are denoted by {s_i}_{i=1}^N and take continuous values, 0 ≤ s_i ≤ 1. The interactions among the units are characterized by a matrix W_ij, which contributes to the input of each unit:

    h_i = Σ_j W_ij s_j + V_i.    (1)

Here V_i is an external input to each unit that does not depend on the current state s_i of the network. The dynamics of the system is governed by

    ds_i/dt = −s_i + F(h_i),    (2)

where F(·) is an activation function centered at zero, such as the sigmoid F(x) = 1/(1 + e^{−βx}).
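As a minimal illustration, the relaxation dynamics of Eqs. (1-2) can be integrated with a forward Euler scheme. The function and variable names below are ours, and the parameter values are chosen only for illustration.

```python
import numpy as np

def F(x, beta=10.0):
    """Sigmoidal activation F(x) = 1 / (1 + exp(-beta * x))."""
    return 1.0 / (1.0 + np.exp(-beta * x))

def step(s, W, V, dt=0.1):
    """One Euler step of ds_i/dt = -s_i + F(sum_j W_ij s_j + V_i)."""
    h = W @ s + V                  # total input to each unit, Eq. (1)
    return s + dt * (-s + F(h))

# With no interactions (W = 0), each unit simply relaxes to F(V_i).
s = np.zeros(3)
W = np.zeros((3, 3))
V = np.array([1.0, -1.0, 0.0])
for _ in range(500):               # total time t = 50, enough to converge
    s = step(s, W, V)
# s is now close to [F(1), F(-1), F(0)] = [~1, ~0, 0.5]
```

The fixed points of Eq. (2) satisfy s_i = F(h_i), which is what the decoupled example above converges to.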
The original task of a Hopfield network is to store and retrieve p patterns, denoted by {ξ^µ}_{µ=1}^p, each of which is a vector with binary elements, ξ^µ_i ∈ {0, 1}. The patterns can represent structured content, such as the pixels of black-and-white images, but for simplicity we take them to be random with an average activity a. That is, each pattern has a fraction a of the units set to 1 and the rest set to 0. To store those patterns, the following symmetric interactions are introduced [8],

    J_ij = (1 / (a(1−a)N)) Σ_µ (ξ^µ_i − a)(ξ^µ_j − a).    (3)

For W_ij = J_ij and V_i = 0, the patterns are fixed points of the dynamics in Eq. (2). That is, the system will retrieve a pattern provided that it is close to it initially, where the proximity to patterns is measured by the p overlap variables,

    m_µ = (1 / (a(1−a)N)) Σ_i (ξ^µ_i − a)(s_i − a).    (4)

Thus, for s_i = ξ^1_i, we have m_1 = 1 and m_{µ≠1} ≈ 0, because random patterns are approximately orthogonal for large N. Eqs. (1-3) define a dynamical system capable of storing and retrieving each individual pattern.

FIG. 2. Input versus interaction modulation. A Hopfield network with units s_i (spherical nodes) is controlled by a set of feedback units c_µ (square nodes). These feedback units are updated on a slow timescale τ (blue lines), and modulate either the input field V_i of the main units or their interactions W_ij (orange lines).
We are interested in networks that can autonomously retrieve a sequence of patterns, one after another. Already in Hopfield's original paper [8], it was suggested that sequential retrieval could be achieved by introducing asymmetric interactions of the form

    J̃_ij = (1 / (a(1−a)N)) Σ_µ (ξ^{µ+1}_i − a)(ξ^µ_j − a).    (5)

These asymmetric interactions provide a directional bias from every pattern ξ^µ towards the next pattern ξ^{µ+1}. The interaction matrix is then generalized to W_ij = J_ij + λ J̃_ij, where the parameter λ represents the strength of the directional bias. The rationale behind this construction is that, after a pattern is retrieved due to the symmetric J_ij term, the asymmetric J̃_ij term will destabilize it and push the system toward the next pattern in the sequence, i.e., ξ^µ → ξ^{µ+1}. However, this simple model cannot produce sequences reliably [8]. Instead, the network exhibits no sequential dynamics for λ below a certain value, and chaotic dynamics otherwise [21]. The reason is that the J_ij and J̃_ij terms act on the same timescale, so either the stabilizing term dominates and leads to no transitions, or the destabilizing term dominates and leads to chaotic dynamics. Therefore, robust sequential dynamics requires a separation of timescales between fast stabilization and slow destabilization. In this way, the system first relaxes to a pattern ξ^µ, which is slowly destabilized, then goes to the next pattern ξ^{µ+1}, and so on.
The required separation of timescales can be achieved through feedback modulation. To this end, we introduce a set of feedback units represented by the variables {c_µ}_{µ=1}^p, which obey the dynamics

    τ dc_µ/dt = −c_µ + m_µ.    (6)

These feedback units will be used to destabilize the retrieved pattern on a timescale τ ≫ 1, by either modifying the inputs to the network, i.e., V_i(c_µ), or the interactions, W_ij(c_µ), as schematically depicted in Fig. 2. We will elaborate on these two approaches below, with four representative models summarized in Box 1.
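The slow feedback can be sketched as a low-pass filter of the overlaps; the linear form τ dc_µ/dt = −c_µ + m_µ is a minimal assumption on our part, and the names are ours.

```python
import numpy as np

def feedback_step(c, m, tau=10.0, dt=0.1):
    """Euler step of tau * dc_mu/dt = -c_mu + m_mu (a minimal form)."""
    return c + (dt / tau) * (-c + m)

# While pattern 1 stays retrieved (overlaps fixed at m = [0, 1, 0]),
# the feedback unit c_1 slowly rises toward 1 on the timescale tau.
c = np.zeros(3)
m = np.array([0.0, 1.0, 0.0])
halfway_time = None
for k in range(3000):              # total time t = 300 >> tau
    c = feedback_step(c, m)
    if halfway_time is None and c[1] > 0.5:
        halfway_time = k * 0.1     # roughly tau * ln(2) ~ 7
```

Because c_µ lags m_µ by the time τ, a retrieved pattern persists for a while before the feedback acting through V_i(c_µ) or W_ij(c_µ) destabilizes it.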
Box 1 - Models of sequential retrieval. Sequential retrieval can be achieved via feedback control that modulates either the input to the units or the interactions between the units. Two classical models of sequential retrieval via input modulation (HU and SK) and two new models via interaction modulation (MSI and MAI) are summarized in the table below, and elaborated on in Section III A.

[Table: for each model (HU, SK, MSI, MAI), the form of the interactions W_ij and the input V_i, and which of the two the feedback units c_µ modulate.]

The figures below provide a heuristic depiction of the dynamics for each model. The "energy landscape" represents the patterns stored in the network. The network state s_i currently occupies the pattern ξ^µ and will transition to ξ^{µ+1}. The black arrows represent updates to the feedback units c_µ on a timescale τ, and the orange lines represent the effect of feedback. The red arrows represent the directional bias that guides the transitions, and the dashed gray lines represent the effect of input fields (constant when not depicted). The input modulation models work by raising the inhibitory threshold (HU) or tilting the energy landscape (SK), whereas the interaction modulation models work by enforcing the directional bias (MAI) or deforming the energy landscape (MSI) to modify the stability of the minima.

III. MODELS OF SEQUENTIAL RETRIEVAL

A. Sequential retrieval via input modulation
We first reformulate some well-studied models of sequential retrieval using our framework. For instance, the model due to Horn & Usher [13] can be re-expressed by setting

    W_ij = J_ij + λ J̃_ij,    V_i = −θ Σ_µ c_µ ξ^µ_i.    (7)

The latter is usually referred to as an adaptive threshold, with θ as the strength of inhibition. In this model, neurons that are active in a retrieved pattern will experience an inhibitory input after a time ∼ τ, which tends to turn them off and thus destabilize the current pattern. This model uses separate terms to serve different purposes: the symmetric interactions J_ij stabilize each pattern, the external field V_i slowly destabilizes the retrieved pattern through the feedback coupling, and the asymmetric interactions J̃_ij bias the system towards the subsequent pattern. Fig. 3A shows an example of sequential retrieval using this model.
Similarly, we can reformulate the model due to Sompolinsky & Kanter [12] by identifying

    W_ij = J_ij,    V_i = λ Σ_µ c_µ ξ^{µ+1}_i − θ.    (8)

In this model, after the network retrieves a pattern, the feedback units c_µ slowly activate to drive the system towards the subsequent pattern. This is enough to destabilize the current pattern if λ is sufficiently large (see Section IV for feasible parameter regions). An example of this dynamics is shown in Fig. 3B.

B. Sequential retrieval via interaction modulation
Our reformulation of both models above makes it clear that in these models the feedback units c_µ modulate the input V_i to achieve sequential retrieval. We now present new models of sequential retrieval in which the feedback units c_µ modulate the interactions W_ij. Two types of such modulation are possible, which act either on the symmetric interactions J_ij or on the asymmetric interactions J̃_ij.
First, consider a model for the modulation of the symmetric interactions (MSI), which can be described by

    W_ij = (1 / (a(1−a)N)) Σ_µ c_µ (ξ^{µ+1}_i − a)(ξ^{µ+1}_j − a) + λ J̃_ij,    V_i = −θ.    (9)

When the network has retrieved a pattern ξ^ν for a period of time, all c_µ will decay to zero except c_ν. As a consequence, all terms in J_ij will be turned off except for µ = ν. Therefore, only one pattern, ξ^{ν+1}, will be stable, which is the one that the network will retrieve subsequently. In other words, the symmetric interactions store and retrieve one pattern at a time. Asymmetric interactions J̃_ij are needed to provide directional bias towards subsequent patterns, even though the strength λ can be small (see Section IV).
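A clamped-feedback sketch of MSI: holding c_0 = 1 makes the symmetric term store only pattern 1, and a weak asymmetric bias is enough to carry the network there from pattern 0. The weight forms and parameter values are our reconstruction, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
N, p, a = 500, 4, 0.3
lam, theta = 0.2, 0.02                      # note: a weak bias suffices
xi = (rng.random((p, N)) < a).astype(float)
norm = 1.0 / (a * (1.0 - a) * N)
xs = np.roll(xi, -1, axis=0)                # xs[mu] = xi[mu + 1] (cyclic)
Jt = norm * (xs - a).T @ (xi - a)           # asymmetric bias mu -> mu+1

def F(x, beta=10.0):
    return 1.0 / (1.0 + np.exp(-beta * x))

c = np.zeros(p)
c[0] = 1.0                                  # clamp: pattern 0 was retrieved
# Modulated symmetric term: only the mu = 0 term survives, storing xi[1].
Jc = norm * ((c[:, None] * (xs - a)).T @ (xs - a))
W = Jc + lam * Jt

s = xi[0].copy()
for _ in range(600):
    s = s + 0.1 * (-s + F(W @ s - theta))

m = norm * (xi - a) @ (s - a)               # pattern 1 is now retrieved
```

Because the symmetric term holds only one pattern at a time, the network never has to escape a landscape in which the other patterns compete as attractors.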
Alternatively, the feedback units can be used to modulate the asymmetric interactions (MAI). Consider a model with

    W_ij = J_ij + (λ / (a(1−a)N)) Σ_µ c_µ (ξ^{µ+1}_i − a)(ξ^µ_j − a),    V_i = −θ.    (10)

As before, when the network has retrieved a pattern ξ^ν for some time, c_ν reaches a large value while all other c_µ decay to zero. In this model, however, only one directional bias is active, corresponding to the transition ξ^ν → ξ^{ν+1}. For a sufficiently large λ, this term will destabilize the current pattern and push the system towards ξ^{ν+1}. Compared to the MSI model above, here all patterns are stored in the symmetric interactions, but only one transition is enabled at a time. In Fig. 3C and D we show two examples of sequential retrieval using the MSI and MAI models, respectively. As one can see, the dynamic trajectories are very similar to those of the HU and SK models, which operate via input modulation. We therefore conclude that interaction modulation is an equally feasible way of retrieving dynamic sequences.
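The same clamped-feedback probe can be applied to MAI: with all patterns stored symmetrically and only the 0 → 1 bias switched on, the network is carried from pattern 0 to pattern 1. Again, the weight forms and parameter values are our reconstruction, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
N, p, a = 500, 4, 0.3
lam, theta = 0.6, 0.05                     # MAI needs a larger bias than MSI
xi = (rng.random((p, N)) < a).astype(float)
norm = 1.0 / (a * (1.0 - a) * N)
J = norm * (xi - a).T @ (xi - a)           # all patterns remain stored
xs = np.roll(xi, -1, axis=0)               # xs[mu] = xi[mu + 1] (cyclic)

def F(x, beta=10.0):
    return 1.0 / (1.0 + np.exp(-beta * x))

c = np.zeros(p)
c[0] = 1.0                                 # clamp: only the 0 -> 1 bias is on
Jt_c = norm * ((c[:, None] * (xs - a)).T @ (xi - a))
W = J + lam * Jt_c

s = xi[0].copy()
for _ in range(600):
    s = s + 0.1 * (-s + F(W @ s - theta))

m = norm * (xi - a) @ (s - a)              # pattern 1 is now retrieved
```

A convenient feature of this construction is that the active bias term is proportional to the overlap with ξ^0, so the push switches itself off once the network has left the old pattern.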

IV. THE PHASE SPACE OF SEQUENTIAL DYNAMICS
Within our general framework, all models of sequential retrieval are characterized by the same two parameters: the magnitude of the bias λ and the threshold θ. This allows us to compare interaction and input modulation by systematically examining the (λ, θ) parameter space and identifying the regions in which sequential retrieval occurs. To proceed, we numerically solved the dynamical system and used a custom-made score for the dynamics to quantify the accuracy of sequential retrieval, see Appendix B for details.
Fig. 4 presents (λ, θ) plots for different levels of activity a in each of the four models. Shaded regions correspond to parameter combinations that produce sequential dynamics, and regions within the red contour correspond to high accuracy (above 0.9). It can be seen that both input and interaction modulation models have compact regions of parameter space that allow sequential dynamics. However, the HU model is the least robust compared to the others, as small parameter changes lead to dysfunctional behavior, such as dynamics with very high frequencies or pattern-dependent amplitudes and frequencies (see Appendix B).
To better visualize the dependence of model performance on the activity level a, we overlaid the regions of accurate retrieval (red contours in Fig. 4) from different a values, as shown in the last column of Fig. 4. The HU model has very small retrieval regions for any a. For SK and MAI the retrieval region drifts from a positive value of θ towards 0 as a increases. Notably, for MSI there is a compact region where the retrieval regions for different values of a overlap. This suggests that MSI is able to retrieve sequences among patterns with varying activities, which we explore below.

V. VARIABILITY IN PATTERN ACTIVITY
So far we have focused on sequential retrieval of patterns with the same activity. To study how well the models can retrieve patterns with different activities, we consider patterns ξ^µ which each have a particular activity level a_µ. We chose five patterns with a_µ equally spaced within the range 0.3 ± 0.2r, where the "unevenness" parameter r is varied between 0 and 1. Because the patterns have different activities, the order of these patterns in the sequence can affect retrieval. Therefore, we computed the mean accuracy of retrieval over all possible permutations of a given set of patterns (the scores are first binarized according to a cutoff and then averaged). Fig. 5A shows the average accuracy as we vary the unevenness. The ratio of the gray area to the retrieval region for uniform patterns (red contour) represents the ability of each model to retrieve uneven patterns. It can be seen that the MSI model is robust to unevenness in pattern activity, as expected from our observation of Fig. 4, as well as to the ordering of the patterns. We quantified this robustness in Fig. 5B, where the area of the retrieval region is plotted against the unevenness. While this measurement of robustness decays rapidly for the other models, it is much more stable for MSI. Our results are not qualitatively affected by altering the accuracy cutoff, as shown in Fig. 5C.
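The uneven-activity ensemble used in this section is straightforward to set up explicitly; the function name and defaults below are ours.

```python
import numpy as np
from itertools import permutations

def activity_levels(r, n=5, base=0.3, spread=0.2):
    """n activity levels a_mu equally spaced within base +/- spread * r."""
    return np.linspace(base - spread * r, base + spread * r, n)

levels = activity_levels(r=1.0)        # -> [0.1, 0.2, 0.3, 0.4, 0.5]
orders = list(permutations(range(5)))  # all 120 orderings of the sequence
```

Averaging the binarized score over all orderings in `orders` then gives the permutation-averaged accuracy reported in Fig. 5.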

VI. DYNAMIC STORAGE CAPACITY
So far we have studied sequential retrieval of a small number of patterns. We now study how these models differ in their dynamic capacity, i.e., the ability to retrieve increasingly long sequences of patterns. We quantify the dynamic capacity α_D as the length of the longest sequence of patterns that can be reliably retrieved by a network of size N. The patterns are randomly generated with the same activity level (a = 0.3), and we average the accuracy over 100 sequence realizations.
Fig. 6A shows the accuracy as a function of the number of patterns for the four models in Box 1. As the length of the sequence increases, the retrieval accuracy decreases. However, substantial differences exist in the behavior of these models. For instance, the HU model shows no apparent improvement in its capacity to store longer sequences as the network size increases. In contrast, the MSI model remains highly accurate for a large number of patterns. To corroborate this observation, we plot the dynamic storage capacity against the network size N, as shown in Fig. 6B. While the range of N is not sufficient to constitute a comprehensive scaling analysis, this figure shows that the MSI model vastly outperforms the other models in terms of dynamic storage capacity.

VII. DISCUSSION
In this work we have used the Hopfield network as a common setting in which to formulate multiple models capable of sequentially retrieving stored patterns. In our framework, the transition between subsequent patterns is controlled by a set of feedback units, where the feedback is coupled either directly to the main units in the case of input modulation, or to the interactions between these units in the case of interaction modulation.
While all models that we studied are capable of sequential retrieval, we showed that an interaction modulation model (MSI) outperforms all others. The MSI model is robust to variations in the activity level of the patterns and the ordering of the sequence, and is capable of retrieving much longer sequences than the others. The unique features of the MSI model are likely due to the dynamics of the network happening across a less rugged energy landscape (Box 1). In associative memory networks, spurious minima arise as linear combinations of stored patterns, which result in a rugged landscape where the dynamics may become trapped. The MSI model, however, stores only a single pattern in J_ij at any moment during sequential retrieval. We therefore expect the dynamics to unfold on a smoother landscape, resulting in more accurate retrieval, as observed.
Many models of sequential retrieval, such as those presented in Section III A, originate from the study of associative memory and the dynamics of real neurons. For example, in the SK model the role of feedback is played by slow asymmetric interactions [12]. Later models, such as HU, introduced such slow feedback as neural fatigue, modeled as a neuron-specific threshold that increases when the neuron remains active [13]. Another model produces so-called latching dynamics by combining a slow time-dependent threshold with neurons that have many internal states [18].
Coupling feedback to interactions has been studied in various forms within the context of neural networks. In [14], the authors are motivated by a type of allostery among neurons: regulation of the efficacy of a given synapse by the activity of another synapse. Their model allows pairwise interactions to be modified by other neurons in the network, and bears resemblance to our MAI model. This phenomenon, by which a synapse that is not currently active can be strengthened or weakened by the firing of a third modulatory neuron, is referred to as heterosynaptic plasticity [7,22-24]. It has inspired network models that can learn sequences through synaptic competition [25]. Other biological phenomena, such as neural fatigue [26], have also inspired models that modulate neural interactions rather than individual thresholds. For example, in [27] a slow feedback is introduced to depress the synapses between neurons. This model can be reformulated in our framework of interaction modulation similarly to the MSI model, as presented in Appendix A 1. In [28], synaptic depression is used to control transitions between patterns by externally modulating a global inhibition of all interactions among neurons, which allows for transitions between correlated patterns. Instead of controlling such inhibition externally, we can modify their model to couple the inhibition to a slow feedback and reformulate it within our framework, as presented in Appendix A 2.
Separate from a biological setting, the mechanism that we introduce as feedback appears widely within control theory, where it is typically studied in the context of control through linear input modulation [5,29]. However, interaction modulation is a less studied form of controlling network dynamics [30], especially when the interactions are structured like in the Hopfield network. Our results suggest that interaction modulation may be a new paradigm for controlling nonlinear dynamics.

VIII. CONCLUSION
In biology, interactions among the components of a system are often effective, coarse-grained descriptions of complicated microscopic mechanisms. The strength of such effective interactions can be tuned by modifying the underlying mechanisms. Modification of interaction strengths is quite common among biological systems, such as allosteric regulation of protein interactions [31] and trait-mediated modification of species interactions [32,33]. Yet, the benefits of being able to modify interactions are under-explored theoretically.
Here we have demonstrated that interaction modulation can be an effective way of controlling the stability of a system's configurations and the direction of its dynamics, which may be important for biological functions and evolution.
In complex systems, the interactions among many constituent units give rise to various collective behaviors, such as coherent motion [34-36] or collaborative functions [37]. The sequential transition of the system between multiple metastable configurations that we modeled here is one type of collective behavior, and we have shown that interaction modulation is a robust way of controlling such behavior. Our study may inspire future work exploring the role of interaction modulation in other situations.

Appendix B: Quantifying the accuracy of sequential retrieval

Each pattern is retrieved and remains stable for a continuous interval of time, which we call an instance of retrieval. We identify such intervals as blocks of time when G(m_µ) ≈ 1, which corresponds to m_µ > θ. The instantaneous score of pattern µ is defined as

    S_µ(t) = G(m_µ) / (ε + Σ_ν G(m_ν)),    (B1)

where G is a sigmoidal gate,

    G(m) = 1 / (1 + e^{−β(m−θ)}).    (B2)

Our construction of G(m_µ) attenuates m_µ below some threshold θ towards 0 and amplifies m_µ above θ towards 1, so instances of retrieval correspond to a single high S_µ when the patterns are orthogonal. The parameters were chosen as β = 10 and θ = 1 − a for all analyses, and ε was chosen to be 10^{−5} to make the instantaneous score well defined even when m_µ = 0 for all µ. The score of an instance of retrieval amounts to the time average of the instantaneous scores over the interval,

    S = (1 / (t_2 − t_1)) ∫_{t_1}^{t_2} S_µ(t) dt,    (B3)

where t_1 and t_2 are the bounds of the interval. The time series of network dynamics is typically composed of many retrieval instances, so we define the overall accuracy of sequential retrieval as the average score over the retrieval instances in the time series. In all cases, the network is simulated with τ = 10 and ∆t = 0.1 for 6000 time steps. We compute the accuracy only for the latter half of each time series to avoid the transient dynamics at the beginning.
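The score ingredients can be sketched as follows; the exact normalization of the instantaneous score is our reconstruction of the appendix's description.

```python
import numpy as np

def G(m, beta=10.0, theta=0.7):
    """Soft gate: ~0 below theta, ~1 above theta (theta = 1 - a, a = 0.3)."""
    return 1.0 / (1.0 + np.exp(-beta * (np.asarray(m, float) - theta)))

def instantaneous_scores(m, eps=1e-5):
    """Per-pattern score: near 1 for a single cleanly retrieved pattern,
    and kept well defined by eps even when all overlaps vanish."""
    g = G(m)
    return g / (eps + g.sum())

clean = instantaneous_scores([0.95, 0.02, 0.01, 0.03])  # one pattern high
mixed = instantaneous_scores([0.95, 0.90, 0.01, 0.03])  # two patterns high
```

A clean retrieval instance scores near 1, while a mixed state, in which two patterns are simultaneously above threshold, is penalized to roughly 1/2.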
In the phase space analysis of Section IV we choose a cutoff between high-accuracy and low-accuracy regions of retrieval, indicated by the red contours in Fig. 4. To determine an appropriate accuracy cutoff, we examined the distribution of scores over parameter space (λ, θ, a) for each model, as shown in Fig. 7. The right-most peak (corresponding to high-accuracy retrieval) is separated from the remaining peaks by a cutoff accuracy of 0.9.

FIG. 4 .
FIG. 4. Parameter space of sequential retrieval. The performance of each model for retrieving a cyclic sequence of four orthogonal patterns is evaluated at different combinations of the bias λ and threshold θ. These parameters were numerically swept with increments ∆λ = ∆θ = 0.025. The grayscale color represents the accuracy score (see Appendix B); the red contours represent regions of high accuracy (> 0.9). The first five columns correspond to different values of the pattern activity a, whereas the last column shows the overlay of the high-accuracy regions for these different activity levels. The green dots in the a = 0.3 column correspond to the time series shown in Fig. 3.

FIG. 5 .
FIG. 5. Retrieval for variable activities and sequence compositions. A: Phase diagrams are computed for each model at different unevenness of activity levels (r values as marked in panel B). The parameter ranges are the same as for Fig. 4. The retrieval accuracy at each point (θ, λ) is first binarized according to a cutoff of 0.8, and then averaged over all permutations. This average accuracy is colored using a grayscale, and the red contours represent the shaded area for r = 0. B: The ratio of the gray area to the red-contoured region is calculated for each unevenness value r. The curves show the mean of the area ratios over sequence permutations (for an accuracy cutoff of 0.8), and the color shades show the standard deviations. C: The area ratio for unevenness r = 1 is calculated as the accuracy cutoff is varied. The dashed line corresponds to the cutoff of 0.8 used in panels A and B, above which all models fail except for MSI.

FIG. 6 .
FIG. 6. Dynamic capacity of sequential retrieval. A: For each network size N, the average accuracy of retrieval is plotted against the number of patterns p. The accuracy is calculated for a sequence of p randomly generated patterns with a = 0.3, averaged over 100 realizations. A sigmoidal curve is fitted for each N, and the dynamic storage capacity αD is defined as the value of p where the average accuracy falls below 0.7, indicated by the marker. B: The estimated values of αD are plotted against the size of the network N, showing different scaling for each model. The parameters used to simulate each model are the same as for Fig. 3 (see caption).

FIG. 7 .
FIG. 7. Accuracy distributions. The distributions of accuracies over the parameter phase space in Fig. 4 are calculated and marginalized over a for (A-B) input modulation and (C-D) interaction modulation models. The rightmost peak, corresponding to accurate retrieval, is separated by an accuracy cutoff of 0.9 (only a small peak is present in HU). For clarity, scores of zero are omitted.