Transient Chaos in BERT

Language is an outcome of our complex and dynamic human interactions, and natural language processing (NLP) techniques are hence built on human linguistic activities. Bidirectional Encoder Representations from Transformers (BERT) has recently gained popularity by establishing state-of-the-art scores in several NLP benchmarks. A Lite BERT (ALBERT) is, as its name suggests, a lightweight version of BERT, in which the number of parameters is reduced by repeatedly applying the same neural network, the Transformer's encoder layer. By pre-training its parameters on a massive amount of natural language data, ALBERT can convert input sentences into versatile high-dimensional vectors potentially capable of solving multiple NLP tasks. In that sense, ALBERT can be regarded as a well-designed high-dimensional dynamical system whose operator is the Transformer's encoder, and essential structures of human language are thus expected to be encapsulated in its dynamics. In this study, we investigated the embedded properties of ALBERT to reveal how NLP tasks are effectively solved by exploiting its dynamics. We thereby aimed to explore the nature of human language from the dynamical expressions of the NLP model. Our short-term analysis clarified that the pre-trained model stably yields trajectories with higher dimensionality, which would enhance the expressive capacity required for NLP tasks. Our long-term analysis revealed that ALBERT intrinsically exhibits transient chaos, a typical nonlinear phenomenon showing chaotic dynamics only in its transient, and that the pre-trained ALBERT model tends to produce chaotic trajectories for a significantly longer period than a randomly-initialized one. Our results imply that local chaoticity contributes to improving NLP performance, uncovering a novel aspect of the role of chaotic dynamics in human language behaviors.


I. INTRODUCTION
Many machine learning models have recently been proposed for natural language processing (NLP) tasks by incorporating deep neural network techniques. Among them, Bidirectional Encoder Representations from Transformers (BERT) [1], developed by Google, has especially gained popularity, owing to its scalability and efficiency, by establishing state-of-the-art results on eleven NLP benchmarks, such as the Stanford Question Answering Dataset (SQuAD) 1.1 [2] and the general language understanding evaluation (GLUE) benchmark tasks [3]. BERT is a type of feedforward deep neural network architecture composed of multiple Transformer encoder layers [4], by which input sentences are encoded as favorable high-dimensional state vectors potentially capable of solving NLP tasks. Using self-supervised learning to train network parameters with a large amount of corpus data, BERT is optimized to yield essential representations suitable for multiple NLP tasks. The masked language modeling (MLM) task is especially used for the pre-training process; its objective is to predict the original vocabulary ID of a masked word based only on its context. A huge collection of novels (BookCorpus) and English Wikipedia articles is used as the pre-training dataset, by which the structure of natural language is expected to be encapsulated in the model. After the pre-training, an additional simple decoder model, such as logistic regression, is tuned for a specific language task while the pre-trained BERT parameters are fine-tuned or often kept fixed. This fine-tuning process is completed with a smaller number of training epochs than learning from scratch, and thus is computationally inexpensive compared with pre-training. Despite the simplicity of its training scheme, BERT has outperformed various task-specific architectures on a wide range of NLP tasks [1].
A Lite BERT (ALBERT) [5] is also a powerful neural architecture for NLP tasks, characterized as a lightweight version of the BERT model. Initially, two types of models, BERT-base and BERT-large, were proposed based on network size, but both are extremely large, with more than 100 million parameters. ALBERT was proposed to reduce the BERT parameter size and enhance memory efficiency. ALBERT incorporates the following two techniques to reduce the number of parameters: (i) factorized embedding parametrization (the embedding matrix is split into input-level embeddings with relatively low dimensionality and hidden-layer embeddings with higher dimensionality) and (ii) cross-layer parameter sharing (ALBERT shares parameters across layers to improve parameter efficiency). Even after drastically reducing redundancy, it was reported that ALBERT outperformed the original BERT in typical benchmark tasks [5].
A. ALBERT as "the reservoir"

The cross-layer sharing technique provides ALBERT with a structure that reuses the same feedforward neural network. In other words, ALBERT can be regarded as a type of recurrent neural network (RNN) whose internal state encodes the generic NLP representation at a specific time step after the input sentence is given. Also, as with the fine-tuning processes of BERT, additional models (linear models in most cases) are trained for each NLP task while the Transformer's encoder parameters are often kept fixed. This fine-tuning style recalls the reservoir computing (RC) framework [6,7], in which the internal weights of an RNN (a.k.a. the reservoir) are fixed and only the readout's parameters (often a simple linear regression model) are modified for a specific task [8,9] (Fig. 1A). Moreover, unlike conventional RC frameworks using random networks, the network parameters are fully tuned to solve the MLM and sentence-order prediction tasks before the readout training phase. In that sense, the Transformer's encoder can be interpreted as a well-designed reservoir specific to NLP, namely "the reservoir," where the essential properties required in NLP tasks are represented in the form of a dynamical system. Therefore, we expect that fundamental structures underlying natural language can be anatomized by analyzing the dynamics of the pre-trained ALBERT model. Several works have attempted to analyze the structure of natural language using dynamical systems theory [10][11][12] and chaotic dynamics [13][14][15][16]. Our study aims to advance these dynamical systems approaches to understanding natural language by using an up-to-date machine learning model.
In this study, based on the "ALBERT as the reservoir" perspective, we analyze the dynamics of the pre-trained ALBERT and clarify the embedded properties realizing its NLP capability. In particular, we compare the pre-trained model with a randomly-initialized one and conduct the analysis focusing on two different timescales: (i) short-term and (ii) long-term. In the short-term analysis, we analyze short-term trajectories where the input effects are likely present, by sampling transients within 500 time steps, that is, the outputs of 500 layers. Here, we show that pre-training ALBERT significantly increases the trajectory's dimensionality while maintaining its coherency, by measuring three indices: synchronization offset, the local Lyapunov exponent (LLE), and effective dimension. To discuss the contribution of the short-term properties to the representative capability for NLP tasks, we also measure the performance in the short-term range with the following three tasks: the MLM task, the semantic textual similarity benchmark (STS-B) [17], and a handwriting task (see the Appendix for the detailed setups). These layerwise analyses are similar to those in [18], which evaluates BERT performance, but our study inspects the properties over a wider time range. In parallel, we investigate the system's global properties in the long-term analysis. We show that pre-training increases the chaoticity, that is, the pre-trained trajectory is more likely to magnify small differences in input sentences, as measured by the LLE, and thereby contributes to amplifying the trajectory's dimensionality.
These properties are essential to retain and emphasize differences in input sentences; i.e., non-chaotic systems cannot properly represent such differences, since the information vanishes as trajectories converge to a fixed point or limit cycle after a certain time step. Additionally, we demonstrate that ALBERT intrinsically shows transient chaos [19], a typical nonlinear phenomenon showing chaotic dynamics only in its transient, and that the length of the chaotic trajectory in transient chaos becomes significantly longer owing to pre-training.
The significance of this study is as follows: we show that pre-training increases the discriminative capability of transients by increasing the chaoticity, which plays an important role in achieving the generic NLP ability of ALBERT by magnifying and retaining small differences in input sentences, disclosing a novel aspect of information processing in high-dimensional chaotic trajectories. Finally, we discuss the potential impact of this study and future directions for studying the dynamical relationship between low-level structures (such as letters, words, phrases, and grammar) and high-level structures (such as logical flow and meaning).

B. Generalization of ALBERT Model
ALBERT is a deep neural network whose architecture is formulated by the following equations:

x_0 = u(s),  x_{l+1} = f(x_l)  (l = 0, 1, ..., L − 1),

where u represents the embedding layer and f stands for the Transformer's encoder (Fig. 1B). The embedding layer u maps an input sentence s with N_w tokens onto a high-dimensional vector x_0 ∈ R^{N_w × N_h}. Here, N_h is a constant integer called the hidden dimension (N_h = 768 in ALBERT-base, N_h = 1024 in ALBERT-large) and represents the dimension of the token vectors. The i-th token is transformed into the i-th row vector x_0^i ∈ R^{N_h} (i = 1, ..., N_w). As shown in Fig. 1, the Transformer's encoder f : R^{N_w × N_h} → R^{N_w × N_h} also outputs a vector with the same shape as the input vector; that is, x_L can also be split into N_w vectors x_L^i, where x_L and x_L^i are the state vector and token vectors, respectively.
The Transformer's encoder f is itself a feedforward neural network, which includes the attention mechanism [4] described by the following equation:

f(x) = g(x + A(x)),

where g : R^{N_w × N_h} → R^{N_w × N_h} applies a position-wise feedforward network (with residual connection and layer normalization) to each token vector, and A : R^{N_w × N_h} → R^{N_w × N_h} is a feedforward neural network called multi-head self-attention. In the multi-head self-attention, N_a (:= N_h / 64) square matrices called attention probabilities (∈ R^{N_w × N_w}) are calculated, quantifying the strength of the relationships among the N_w tokens. The multi-head self-attention A integrates all attention probability matrices to produce an attention matrix A(x) ∈ R^{N_w × N_h}. During the fine-tuning process, an additional model, usually logistic regression, is trained for a specific task based on the output of the L-th layer x_L. In particular, L = 12 and L = 24 are used in the ALBERT-base and ALBERT-large models, respectively, according to the pre-training condition.
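The multi-head self-attention described above can be sketched in a minimal form. The toy NumPy implementation below assumes one projection matrix per head for queries, keys, and values, and omits the output projection, biases, residual connections, and layer normalization of the real encoder; all names and shapes are illustrative, not ALBERT's actual parameters:

```python
import numpy as np

def multi_head_self_attention(x, W_q, W_k, W_v, n_heads):
    """Toy multi-head self-attention A: R^{N_w x N_h} -> R^{N_w x N_h}.

    W_q, W_k, W_v are per-head lists of (N_h, d) projection matrices,
    where d = N_h // n_heads (64 in ALBERT). Output projection omitted.
    """
    n_w, n_h = x.shape
    d = n_h // n_heads
    heads = []
    for h in range(n_heads):
        q, k, v = x @ W_q[h], x @ W_k[h], x @ W_v[h]   # (N_w, d) each
        scores = q @ k.T / np.sqrt(d)                   # (N_w, N_w)
        # attention probability matrix: each row sums to 1 over the N_w tokens
        p = np.exp(scores - scores.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        heads.append(p @ v)                             # (N_w, d)
    return np.concatenate(heads, axis=1)                # (N_w, N_h)

rng = np.random.default_rng(0)
n_w, n_h, n_a = 8, 128, 2
x = rng.standard_normal((n_w, n_h))
Ws = [[rng.standard_normal((n_h, n_h // n_a)) * 0.05 for _ in range(n_a)]
      for _ in range(3)]
out = multi_head_self_attention(x, *Ws, n_heads=n_a)
print(out.shape)  # (8, 128)
```

Note how the output keeps the input shape (N_w, N_h), which is what allows the encoder to be applied repeatedly.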
This multi-layer feedforward neural network can be described in the form of the following time evolution equation of a discrete dynamical system:

x_{t+1} = f(x_t).

In other words, the embedding layer u can be reinterpreted as a mapping that generates the initial value of the dynamical system from the input sentence, and the output of the t-th layer, x_t, as the dynamical state at time t. In this study, we analyzed the dynamics of the state vectors and their token vectors to clarify the relationship between the dynamical properties and the NLP capability.
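This iterated-map viewpoint can be sketched as follows, with a hypothetical toy map standing in for the actual Transformer's encoder f (the real f is far richer; only the recursion structure matters here):

```python
import numpy as np

def encoder_step(x, W):
    """Stand-in for the Transformer's encoder f (hypothetical toy map).
    The real f combines self-attention, a position-wise MLP, residual
    connections, and layer normalization; a single nonlinear map suffices
    to illustrate the iterated-map viewpoint."""
    return np.tanh(x @ W)

def run_trajectory(x0, W, T):
    """Iterate the discrete dynamical system, collecting the state at
    each 'layer' t. Weight sharing across layers means the same W (i.e.,
    the same f) is applied at every step."""
    traj = [x0]
    for _ in range(T):
        traj.append(encoder_step(traj[-1], W))
    return np.stack(traj)                    # shape (T+1, N_w, N_h)

rng = np.random.default_rng(1)
x0 = rng.standard_normal((4, 16))            # u(s): embedded 4-token sentence
W = rng.standard_normal((16, 16)) * 0.3
traj = run_trajectory(x0, W, T=24)
print(traj.shape)  # (25, 4, 16)
```

Running the map past the pre-trained depth (T larger than L = 12 or 24) is exactly how the short- and long-term analyses below sample transients.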

II. RESULTS
Based on the formulation of the dynamical system in Section I B, we investigated the ALBERT dynamics to anatomize the system properties contributing to the generic NLP capability. As mentioned earlier, to comprehensively understand the mechanism, we analyzed the ALBERT dynamics from two different viewpoints: short-term and long-term analyses. We used a publicly-available pre-trained ALBERT model [5], referred to as the pre-trained network, and compared it to a randomly-initialized network, referred to as an initial network. We prepared five initial networks by randomly sampling parameters in the same manner as [5]: each weight parameter was independently sampled from a truncated normal distribution N(0, σ²) with σ = 0.02, the sampled values were clipped to the range [−2σ, 2σ], and bias parameters were initially set to 0. Below, statistics for the initial network conditions were calculated using the five networks generated through this procedure.
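The initialization procedure can be reproduced in a few lines of NumPy. Whether out-of-range samples are hard-clipped or redrawn is ambiguous from the description; this sketch redraws them, the usual truncated-normal convention:

```python
import numpy as np

def truncated_normal(shape, sigma=0.02, rng=None):
    """Sample weights from N(0, sigma^2), redrawing any value outside
    [-2*sigma, 2*sigma]. (Hard clipping to the interval would also match
    the text; redrawing is the standard 'truncated normal' convention.)"""
    rng = rng or np.random.default_rng()
    w = rng.normal(0.0, sigma, size=shape)
    out_of_range = np.abs(w) > 2 * sigma
    while out_of_range.any():
        w[out_of_range] = rng.normal(0.0, sigma, size=out_of_range.sum())
        out_of_range = np.abs(w) > 2 * sigma
    return w

w = truncated_normal((768, 768), sigma=0.02, rng=np.random.default_rng(2))
print(np.abs(w).max() <= 0.04)  # True
```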

A. Short-term Analysis
We began by investigating the trajectories in a relatively short-term range within 500 time steps and their contribution to the NLP capability. Specifically, we first focused on the transients of the high-dimensional token vectors. Fig. 2A shows the dynamics of the first element of each token vector and the degree of variation among them. The beginning part of the English Wikipedia article "BERT (language model)" was used to prepare the 32-token input sentence.
To evaluate the degree of variation among token vectors, we introduced the following index D(t):

D(t) = (1/N_w) Σ_{i=1}^{N_w} ‖x_t^i − x̄_t‖,  x̄_t = (1/N_w) Σ_{i=1}^{N_w} x_t^i.

[Caption of Fig. 3, panels B and C: (B) The performance is assessed by the Pearson score between the outputs and targets, which can take negative values. In addition to the initial and pre-trained networks, the performance of a fine-tuned network, where the internal parameters of f were tuned, is shown (dashed lines). (C) Handwriting task. [Left] Analysis of the handwriting task's error. Normalized mean square errors (NMSEs) between target and output are displayed in the colormap. We changed the offset time t_0 and the length of the target trajectory ∆T, where each linear regression model was tuned using the system's trajectory x_t (t ∈ [t_0, t_0 + ∆T)). [Right] Demonstrations of the handwriting task. We trained a linear regression model with two output nodes to write the letters "U" and "S" (black dashed lines) for unrelated and similar sentences, respectively, from the state trajectory x_t (t ∈ [7, 35), shaded in pink). The corresponding state vector dynamics are also shown in the middle. The five best outputs with the smallest errors are displayed in each figure of output dynamics.]
Although more complicated patterns were observed in the pre-trained network than in the initial one, all token vectors synchronized (D(t) → 0) after a certain period in both conditions, a phenomenon we named token-vector synchronization. Token-vector synchronization occurred in all cases that we investigated, and all values in the attention probability matrices became identical after token-vector synchronization began. These observations suggest that the Transformer's encoder and its attention mechanism have the potential to cause a synchronization phenomenon in which all token vectors take the same value. Note that each token vector element had a different convergence value and synchronization onset time, and these changed according to the input sequence.
As shown in Fig. 2A, the period until the onset of token-vector synchronization became significantly longer in the pre-trained network than in the initial one. To investigate the contribution of pre-training to the synchronization timing, we measured the synchronization offset with 1,000 sentences randomly sampled from English Wikipedia articles. The synchronization offset was defined as the time step at which D(t) falls below the threshold 10^−5. Fig. 2B shows the synchronization offset over token size N_w, suggesting that pre-training uniformly extends the synchronization offset regardless of the number of tokens. This result implies that pre-training amplifies the expressive diversity of the token vectors, holding the information about the input sentence for longer time steps.
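As a sketch, the synchronization offset can be computed from a trajectory once D(t) is defined; here we assume a root-mean-square deviation form for D(t) (the paper gives the exact formula), since any index that vanishes under token-vector synchronization behaves equivalently for this threshold test:

```python
import numpy as np

def token_variation(x):
    """Assumed variation index D(t): RMS distance of the N_w token
    vectors from their mean. D(t) -> 0 exactly when all token vectors
    take the same value (token-vector synchronization)."""
    return np.sqrt(((x - x.mean(axis=0)) ** 2).mean())

def synchronization_offset(traj, threshold=1e-5):
    """First time step t at which D(t) falls below the threshold;
    returns None if the tokens never synchronize within the trajectory."""
    for t, x in enumerate(traj):
        if token_variation(x) < threshold:
            return t
    return None

# Toy trajectory whose 4 token vectors converge geometrically to a
# common value, mimicking synchronization:
rng = np.random.default_rng(3)
base = rng.standard_normal(16)
traj = [base + (0.5 ** t) * rng.standard_normal((4, 16)) for t in range(60)]
print(synchronization_offset(traj))
```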
To further scrutinize and quantify the transient structure, we introduced two measures: the LLE and the effective dimension [21]. The LLE assesses the degree of expansion of the trajectory at each time step t. In particular, the sign of the LLE works as an indicator discriminating between chaotic and non-chaotic trajectories. The LLE was calculated in an iterative manner based on the numerical algorithm for the maximum Lyapunov exponent [22] (see the Appendix for the detailed procedure used to obtain the LLE). The top panel of Fig. 2C displays the evolution of the averaged LLE λ(t) under the conditions k = 1.0, τ = 10 over 1,000 inputs randomly sampled from English Wikipedia articles, showing that λ(t) takes a relatively small positive value in the range t < 100 in the pre-trained network, while λ(t) is uniformly small in the initial one. This result suggests that pre-training made the internal dynamics chaotic but embedded a relatively stable structure, especially in the short-term range after inputs were given.
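The iterative LLE estimation can be illustrated with a generic Benettin-style procedure: evolve a perturbed companion trajectory, record the log expansion rate, and renormalize the perturbation every τ steps. This is a sketch of the standard algorithm, not the paper's exact setup, and the demo map is a stand-in for the ALBERT dynamics:

```python
import numpy as np

def local_lyapunov(f, x0, t_max, eps=1e-8, tau=10, rng=None):
    """Benettin-style local Lyapunov exponent estimates along a
    trajectory of the map f. Every tau steps, the separation between the
    reference and perturbed trajectories is measured, its mean log
    expansion per step is recorded, and the perturbation is renormalized
    back to size eps."""
    rng = rng or np.random.default_rng()
    x = np.array(x0, dtype=float)
    d0 = rng.standard_normal(x.shape)
    y = x + eps * d0 / np.linalg.norm(d0)
    lles = []
    for t in range(1, t_max + 1):
        x, y = f(x), f(y)
        if t % tau == 0:
            d = np.linalg.norm(y - x)
            lles.append(np.log(d / eps) / tau)   # mean expansion per step
            y = x + eps * (y - x) / d            # renormalize perturbation
    return np.array(lles)

# Example: the fully chaotic logistic map expands perturbations (LLE > 0).
f = lambda x: 4.0 * x * (1.0 - x)
lles = local_lyapunov(f, np.array([0.2]), t_max=500, tau=10,
                      rng=np.random.default_rng(4))
print(lles.mean() > 0)  # True
```

A positive mean is the signature of chaos; a trajectory settling onto a fixed point or limit cycle would drive the estimates below zero.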
The effective dimension evaluates the trajectory's dimensionality, that is, the effective number of principal components explaining the state cloud (the set of state vectors). The effective dimension is calculated from the eigenvalues s_i(t) of the covariance matrix of the state vectors as

N_eff(t) = exp( − Σ_i s̃_i(t) ln s̃_i(t) ),

where s̃_i(t) represents the eigenvalues normalized so that their sum becomes 1. In this study, the covariance matrix was calculated from 1,000 32-token sentences randomly sampled from English Wikipedia articles. A larger N_eff(t) indicates that the state cloud is distributed in a higher-dimensional space and thus that the model encodes input sentences in a less redundant manner. The bottom panel of Fig. 2C displays the evolution of N_eff(t), showing that the pre-trained network consistently has a higher effective dimension than the initial one. This steady increase of N_eff(t) suggests that, as a result of pre-training, the input sentences were represented with a wider range of parameters in the state vector, improving the separability of input sentences. A sharp decrease in N_eff(t) was also observed in the pre-trained network, from t = 50 to t = 150, which is consistent with the results in Fig. 2B; that is, the separability is reduced as a result of token-vector synchronization. So far, we have investigated the dynamics in terms of three measures, synchronization offset, LLE, and effective dimension, to analyze the transient structure embedded by pre-training in the short-term range. These three indices indicate that pre-training embeds relatively stable transients with higher dimensionality for a longer period than in the initial network, which would enable the state vector to reflect differences in input sentences. (Note that whether this locally stable structure appears depends on the type of task. For example, some token vectors generated from the STS-B dataset yielded chaotic transients with properties opposite to those from the MLM dataset.
See the Appendix for the detailed reports).
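The effective dimension can be sketched as follows, assuming the common entropy-based definition over normalized covariance eigenvalues (the paper follows Ref. [21]; other conventions, such as the participation ratio, exist):

```python
import numpy as np

def effective_dimension(states):
    """Effective dimension from the covariance spectrum (assumed
    definition): normalize the eigenvalues s_i so they sum to 1, then
    take the exponential of their Shannon entropy. Equals the ambient
    dimension for an isotropic cloud, and approaches 1 when variance
    concentrates in a single direction."""
    c = np.cov(states, rowvar=False)
    s = np.clip(np.linalg.eigvalsh(c), 0.0, None)
    s_tilde = s / s.sum()
    s_tilde = s_tilde[s_tilde > 0]           # drop zero modes (log(0))
    return np.exp(-(s_tilde * np.log(s_tilde)).sum())

rng = np.random.default_rng(5)
# Isotropic cloud: every direction contributes, so N_eff approaches the
# ambient dimension; flattening the cloud concentrates the variance in
# a few directions and lowers N_eff.
iso = rng.standard_normal((5000, 10))
flat = iso.copy()
flat[:, 2:] *= 0.01
print(effective_dimension(iso) > effective_dimension(flat))  # True
```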
In parallel, to illuminate the contribution of these transient properties to the NLP capability, we prepared three NLP tasks and measured the performances. First, we measured the performance on the MLM task (see the Appendix for the detailed setups) used in ALBERT pre-training. We prepared the publicly-available decoder model of logistic regression tuned in the pre-training procedure. Originally, the decoder model was optimized to predict masked vocabulary IDs using x_24 for the ALBERT-large model. Here, we measured the scores at each time step to investigate how the essential information required in the task was encoded and stored in the transients. We obtained the performance under fixed and fine-tuned conditions, where the decoder's parameters were kept unchanged or re-trained for each time step, respectively. Fig. 3A displays the performance on the MLM task, showing that pre-training certainly improved the score at t = 24 with both the fixed and fine-tuned decoder models. Notably, the pre-trained accuracy kept relatively high values in a broad range around the original layer size, t = 24, suggesting that the fundamental structures required to solve the MLM task were constantly held over a certain time range after pre-training, and were not limited to t = 24. (Since we used a partial dataset of the Wikipedia sentences used in pre-training, the fine-tuned model slightly outperforms the fixed one, even at t = 24, by over-fitting to the dataset. See the Appendix for the detailed setup of the MLM task.)
Next, we evaluated the task performance of the state vector x_t with STS-B. STS-B is a task in GLUE where the model is tuned to quantify the semantic similarity between two sentences separated by the special token "[SEP]" on a scale of 0.0 to 5.0. Performance on STS-B was measured using the Pearson score against human annotation data; a better model yields a score closer to 1.0. In addition to the initial and pre-trained networks with fixed parameters of the Transformer's encoder f, we examined the performance of a fine-tuned network where the parameters of f were also fine-tuned for the task, verifying the validity of the ALBERT-large model (the learning rates used during training are shown in the Appendix). Fig. 3B shows the Pearson score for each x_t, indicating that the scores with the pre-trained network were consistently higher than those with the initial network and gradually decreased with larger t. The score did not get significantly higher at t = 24, implying that the number of layers does not necessarily need to be designated in the fine-tuning process. Similarly, in the fine-tuned network, a plateau in the score evolution was observed in the range t < 60, suggesting that performance was independent of the predetermined number of layers. Note that the performance oscillation in the range t > 60 would be caused by the instability of training for larger architectures, known as the gradient explosion problem [23], meaning that t = 60 would be the minimum time step at which the gradient values could surpass the preferred range. These evaluations of STS-B performance imply that the transient of the pre-trained ALBERT model possesses generic NLP capability in a certain time range and loses its discriminative capability as time evolves.
The evaluations of the above two tasks show that the pre-trained ALBERT model properly encodes input sentences into generic state vectors useful for solving NLP tasks within a certain time range. Finally, we prepared a handwriting task to demonstrate that the transient dynamics of ALBERT itself, not the state vector at a certain time step, have a rich expressiveness and can even be exploited as a direct computational resource from which to draw required cursive letters. The handwriting task used the same dataset as STS-B. Unlike the STS-B task, the decoder model (a.k.a. readout) was tuned to output the dynamics of pen directions from the state vector dynamics x_t in the time range t ∈ [t_0, t_0 + ∆T) so as to "write" the letters "U" and "S" separately according to the semantic similarities of the input sentences (see the Appendix for the detailed setups). Here, we extracted unrelated and similar sentences, whose scores were below 1.0 and over 4.0, respectively, from the original STS-B dataset to create the training and evaluation data. We first evaluated the expressive performance of the transients by measuring the normalized mean square errors (NMSEs) between the output dynamics and the desired ones for each time range. The colormap in the left part of Fig. 3C indicates that there was a local minimum and that the performance got worse as the offset time t_0 and target length ∆T became larger, which is consistent with the STS-B performance shown in Fig. 3B, where the performance peaked at t = 16 and got worse with larger t. The right part of Fig. 3C demonstrates the five best output dynamics for each semantic type of input sentence and the corresponding state vector dynamics over the most favorable range, scoring the minimum NMSE (t_0 = 7, ∆T = 28).
Note that the same readout was reused to separately "write" these letters, suggesting that the high-dimensional transients had sufficient expressiveness to design desired trajectories according to the similarity of input sentences in a certain time range.
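A readout of this kind can be sketched as a ridge-regularized linear regression shared across all sentences, trained on a window of the state trajectory; the data below are synthetic stand-ins for the ALBERT trajectories and pen-direction targets, and all sizes are illustrative:

```python
import numpy as np

def train_readout(trajs, targets, t0, dT, ridge=1e-6):
    """Fit one linear readout mapping state vectors x_t (t in
    [t0, t0+dT)) to 2-D pen directions, shared across all sentences.
    trajs: (n_sent, T, N) trajectories; targets: list of (dT, 2) arrays."""
    X = np.concatenate([tr[t0:t0 + dT] for tr in trajs])     # (n*dT, N)
    Y = np.concatenate(targets)                               # (n*dT, 2)
    # Ridge-regularized normal equations
    W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)
    return W

def nmse(y, y_hat):
    """Normalized mean square error between target and readout output."""
    return ((y - y_hat) ** 2).mean() / y.var()

rng = np.random.default_rng(6)
trajs = rng.standard_normal((20, 40, 64))                # 20 toy sentences
W_true = rng.standard_normal((64, 2)) * 0.1
targets = [tr[7:35] @ W_true for tr in trajs]            # linearly decodable
W = train_readout(trajs, targets, t0=7, dT=28)
err = nmse(targets[0], trajs[0][7:35] @ W)
print(err < 1e-3)  # True
```

Because the targets here are linearly decodable by construction, the fit is near-perfect; with real trajectories, the NMSE measures how much of the desired pen dynamics the transient actually carries.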
To summarize, we analyzed the short-term dynamics of the ALBERT-large model and revealed that pre-training allows the ALBERT architecture to yield locally stable transients with higher dimensionality, especially in the range t < 100. The evaluations of the NLP tasks clarified that the transients possess expressive capabilities reflecting the semantic differences of input sentences in a short-term range, suggesting that pre-training enhances the expressive capability for NLP tasks by increasing the transients' dimensionality while moderately maintaining their stability.

B. Long-term Analysis
Next, we investigated long-term trajectories to focus on the global properties of ALBERT-large as a dynamical system. We especially evaluated the system's chaoticity and investigated the effects of pre-training on the global properties. First, Fig. 4A demonstrates the original and perturbed trajectories of the pre-trained network with the same 32-token sentence used in Fig. 2A. An exponential expansion of the difference between the original and perturbed trajectories was observed, indicating that the pre-trained ALBERT-large trajectory would be chaotic.
To examine the system's chaoticity more carefully, we sampled extremely long-term trajectories and calculated the LLE λ(t) again, which offers a measure of local chaoticity at time step t (we used k = 1.0, τ = 50, T = 1.0 × 10^6 in this analysis). Fig. 4B shows dynamics over 600,000 time steps with the same 32-token input sentence, exhibiting an abrupt and unexpected transition from a chaotic trajectory to a periodic orbit. Notably, the LLE value suddenly fell below 0 after the transition, indicating that the chaotic trajectory transitioned to a non-chaotic one. This characteristic phenomenon, showing chaotic dynamics only in its transient, is known as transient chaos [19] and is often observed in high-dimensional dynamical systems. In addition, all dynamics, as far as we observed, transitioned to the same periodic global attractor, implying that the pre-trained network was globally non-chaotic and possessed a single global periodic attractor basin. We also analyzed the obtained trajectory by principal component analysis in Fig. 4C, suggesting that the trajectory of transient chaos had a certain structure and was distributed in a high-dimensional state space.
To evaluate the effect of pre-training on the transient chaos length, we measured the distribution of the transient chaos length using 1,000 32-token sentences randomly sampled from English Wikipedia articles. Fig. 4D shows the distribution of the transient chaos length, indicating that the system was more likely to produce chaotic trajectories for a longer period after pre-training. The initial network failed to produce chaotic trajectories with 38.5% of tested sentences, while all samples yielded transient chaos in the pre-trained network. The average LLE value in chaotic trajectories was close to zero in the initial network (6.41 × 10 −3 ), yet positive in the pre-trained network (9.54 × 10 −2 ). These results suggest that pre-training increases local chaoticity and extends the length of chaotic trajectories.
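Measuring the transient chaos length can be sketched as follows, operationalized here (as an assumption; the paper's exact criterion may differ) as the last time at which the LLE is still positive before the trajectory settles onto the periodic attractor:

```python
import numpy as np

def transient_chaos_length(lle_series, times):
    """Length of the chaotic transient: the last time at which the
    local Lyapunov exponent is still positive before it settles below
    zero. Returns 0 if the trajectory is never chaotic."""
    positive = np.flatnonzero(np.asarray(lle_series) > 0)
    return times[positive[-1]] if positive.size else 0

# Toy LLE series sampled every 50 steps: chaotic (positive LLE) up to
# step 30,000, then periodic (negative LLE) afterwards.
times = np.arange(50, 50_001, 50)
lle = np.where(times < 30_000, 0.1, -0.05)
print(transient_chaos_length(lle, times))  # 29950
```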

III. DISCUSSION
In this study, based on the formulation of ALBERT as a discrete-time dynamical system, we analyzed the trajectory properties using short-term and long-term analyses. In the short-term analysis, we first demonstrated that the token vectors began to synchronize after a certain period, and suggested that this was caused by the attention mechanism of the Transformer's encoder. Recent studies have shown that graph neural networks [24], including ALBERT's attention mechanism, can cause over-smoothing and eliminate differences among nodes in deep architectures [25,26], which is consistent with our empirical results exhibiting token-vector synchronization. We also measured three indices to quantify the transient properties and found that:
• pre-training significantly extended the synchronization offset, and
• local chaoticity was low while the effective dimension was significantly high, especially when t was small, for sentences used in pre-training (English Wikipedia sentences).
These results suggest that the pre-trained ALBERT generates a relatively stable trajectory with higher dimensionality in a certain short-term range. Moreover, the gradual changes in the benchmark scores around L = 24 indicate that the NLP functionality is not implemented by the designated composite function f^L; instead, it is formed by a single mapping f, and the essential structures required in NLP tasks are maintained in its transients for a certain period. This property allows the transients to design output dynamics according to the meaning of input sentences, as demonstrated in the handwriting task. Ref. [27] also reported that optimal performance was not always obtained at the predetermined number of layers (e.g., t = 24 for the ALBERT-large model), which is also compatible with our results.
Since the NLP performances were especially high when using short-term transients of t < 50, it is assumed that the discrimination ability of ALBERT for NLP tasks is enhanced by diverse trajectory patterns induced by pre-training.
Next, the long-term analysis showed that transient chaos was generated for a longer period through pre-training. It has been reported that nonlinear dynamical systems become universally unstable beyond a certain dimensionality [28]. In the ALBERT architecture, the nonlinearity of the system is provided by the softmax function in the multi-head self-attention layer, layer normalization, and the GELU activation. Therefore, it is quite possible that chaotic trajectories emerge in the ALBERT architecture. In addition, the LLE and transient chaos length values revealed that pre-training not only increases the local chaoticity but also produces chaotic trajectories for a longer period. Since the effective dimension generally increases in the chaotic regime [21], we consider that the local chaoticity induced by pre-training contributes to increasing the trajectory's dimensionality, enabling the system to amplify differences between input sentences and consequently enhance separability for NLP tasks.
Furthermore, the coexistence of long-term chaotic dynamics and locally stable trajectories would be a significant property when utilizing chaos in information processing. For example, an RNN framework called innate training [29] revealed a novel aspect of chaotic trajectories in information processing. Similar to ALBERT's self-supervised algorithm, the internal weights of a chaotic RNN are adjusted by a semi-supervised learning scheme to reproduce the chaotic dynamics once generated by the network itself. As a result, the trained RNN can reproduce versatile spatiotemporal chaotic dynamics according to the input, which can be used for forming and controlling various transient patterns [29][30][31]. In that sense, it can be said that ALBERT exploits chaotic trajectories in information processing with a mechanism analogous to innate training.
We formulated ALBERT as a dynamical system and investigated its dynamical properties. This generalization greatly expands the applications of the ALBERT architecture. For example, ALBERT can be applied or extended to time-series processing tasks. As demonstrated in the handwriting task, motor command generation and natural language recognition can be naturally implemented in a single dynamical system, exhibiting the possible use of ALBERT in real-world environments. Also, the existence of chaoticity in ALBERT offers a new guideline for the application of nonlinear chaotic dynamical systems. Conventionally, researchers have mainly focused on ways to suppress and control a system's chaoticity [32][33][34]. Our results may imply positive contributions of chaotic dynamics to human linguistic activities by shedding light on the application of chaotic dynamics to information processing.
To investigate the general properties of ALBERT, we randomly chose sentences from English Wikipedia articles without any selection rules, and we did not directly manipulate the content of the input sequences. Nevertheless, ALBERT can properly recognize differences in the meanings of input sentences, as shown by several benchmark tasks. In other words, ALBERT appears to adjust its degree of chaoticity according to the meaning of the sentence. This discriminating capability raises an important question: which of ALBERT's properties enable the system to reflect differences in sentence meaning in its dynamics? For example, in our daily lives, we can distinguish between sentences whose meanings are reversed by flipping a single word, which resembles chaotic state sensitivity. Conversely, we can comprehend that completely different texts, for example with different word orderings, can convey the same meaning. Hence, the relationship between sentences and their meanings should be nonlinear, and we expect that ALBERT might express this nonlinear correspondence by exploiting its embedded chaoticity. If this is the case, we can quantitatively evaluate the relationship between sentence meaning and dynamical properties by directly manipulating the input sentences given to ALBERT, which would yield significant implications for the relationship between symbolic structure and sentence meaning in human language processing. These points will be explored in our future work.

A. Detailed setups of the MLM training

MLM is a pre-training task of BERT and ALBERT, in which the network parameters are trained to predict the original vocabulary ID of a masked word based only on its context. Originally, 58,476,370 sentences were prepared for pre-training by extracting the beginning parts of English Wikipedia articles. In this study, we extracted 10,000 sentences from this dataset and used 9,500 sentences for training and 500 sentences for evaluation.
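The masking step of the MLM objective can be sketched as follows. The 15% masking rate and the 80/10/10 replacement split follow the original BERT recipe rather than anything stated here, and the toy vocabulary and function names are illustrative:

```python
import random

MASK = "[MASK]"
VOCAB = ["the", "cat", "sat", "on", "mat"]   # toy vocabulary for illustration

def mask_tokens(tokens, rate=0.15, seed=0):
    """BERT-style masking: select ~rate of positions as prediction
    targets; of those, 80% become [MASK], 10% a random token, and 10%
    are left unchanged. Returns (corrupted tokens, {index: original})."""
    rng = random.Random(seed)
    out, targets = list(tokens), {}
    for i, tok in enumerate(tokens):
        if rng.random() < rate:
            targets[i] = tok                  # label = original token
            r = rng.random()
            if r < 0.8:
                out[i] = MASK
            elif r < 0.9:
                out[i] = rng.choice(VOCAB)    # random replacement
            # else: keep the original token unchanged
    return out, targets

corrupted, targets = mask_tokens(["the", "cat", "sat", "on", "the", "mat"])
```

The model is then trained to recover `targets[i]` from the corrupted sequence alone, which is what forces the representations to encode context.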

B. Detailed setups of the STS-B training
STS-B is one of the GLUE tasks, which evaluates performance in quantifying the semantic similarity, on a scale of 0.0 to 5.0, between two sentences separated by the special token "[SEP]". To obtain the results displayed in Fig. 3C, we set the learning rates to 10^-3 and 10^-6 for the fixed and fine-tuned setups, respectively, and tuned the parameters for 10 epochs.
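The fixed setup, in which only a simple readout is trained on frozen sentence vectors, can be sketched with random features standing in for ALBERT outputs. Only the learning rate 10^-3 and the 10 epochs come from the text; the synthetic targets and plain SGD loop are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, dim = 200, 32
# Random features standing in for frozen ALBERT sentence vectors
X = rng.standard_normal((n_samples, dim))
w_true = rng.standard_normal(dim) / np.sqrt(dim)
y = X @ w_true + 2.5            # synthetic similarity scores near mid-range of [0, 5]

w, b = np.zeros(dim), 0.0
lr = 1e-3                        # learning rate of the fixed setup
mse0 = np.mean((X @ w + b - y) ** 2)
for epoch in range(10):          # 10 training epochs, as in the text
    for i in range(n_samples):   # plain SGD on the squared error
        err = X[i] @ w + b - y[i]
        w -= lr * err * X[i]
        b -= lr * err
mse = np.mean((X @ w + b - y) ** 2)
```

Because the encoder stays frozen, only this small linear model is updated, which is why the fixed setup is far cheaper than full fine-tuning.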

C. Algorithm for local Lyapunov exponent
The local Lyapunov exponent (LLE) λ(t) for a trajectory x_t with an initial value x_0 is calculated with an iterative algorithm (Algorithm A1).
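Such an iterative LLE estimate can be sketched in Python with Benettin-style renormalization (here with τ = 1), demonstrated on the logistic map as a stand-in for ALBERT's dynamics; the function name and defaults are illustrative, not the paper's exact implementation:

```python
import numpy as np

def local_lyapunov(step, x0, d0=1e-8, n_steps=200):
    """Iterative LLE: evolve a reference trajectory x and a perturbed
    copy y, log the one-step expansion rate of their separation, then
    renormalize the perturbation back to size d0."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    y = x + d0
    lles = []
    for _ in range(n_steps):
        x, y = step(x), step(y)
        delta = y - x                      # recalculated perturbation
        dist = np.linalg.norm(delta)
        lles.append(np.log(dist / d0))     # local exponent at this step
        y = x + delta * (d0 / dist)        # rescale back to size d0
    return np.array(lles)

# Logistic map at r = 4, a textbook chaotic system whose Lyapunov
# exponent is known analytically to be ln 2 ≈ 0.693; the mean of the
# local exponents should land near that value.
lles = local_lyapunov(lambda x: 4.0 * x * (1.0 - x), x0=0.2)
```

The renormalization step is what keeps the perturbation infinitesimal, so the per-step log expansion remains a valid local exponent even on a long chaotic transient.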

D. Trajectory analysis for STS-B dataset
To understand the relationship between sentence meaning and the dynamical properties, we also measured the LLE and effective dimension for the STS-B dataset. The LLE analysis shown in the left part of Fig. A1 indicates that the STS-B transients are more likely to become chaotic in the short-term range. Also, the evaluation of the effective dimension displayed in the right part of Fig. A1 clarifies that the dimensionality of the transient becomes lower in the short-term range. These properties of the STS-B dataset are opposite to those of the Wikipedia dataset, indicating that the locally stable structure obtained with the pre-training dataset is not necessarily observed and that this outcome depends on the type of task and the corresponding input structure.

E. Detailed setups of the handwriting task
The handwriting task evaluates the expressive capability of a transient in a certain time range, where an external linear regression model is tuned to write the letters "U" and "S" separately according to the semantic similarity of sentence pairs from the STS-B dataset. The base paths for "U" and "S" were prepared from the Arial font, and the target dynamics of the pen directions were calculated by interpolating the paths with ∆T + 1 points. The readout models were tuned to output the dynamics of the pen direction v_t = [∆x_t, ∆y_t]^T ∈ R^2, and the accumulated trajectories p_t = Σ_{k=t_0}^{t} v_k are shown in Fig. 3C. The normalized mean square error (NMSE) between the target d(t) and the output y(t) was calculated as

NMSE = ⟨‖d(t) − y(t)‖²⟩ / ⟨‖d(t)‖²⟩,

where the bracket represents the average over the evaluation data.

Algorithm A1 Local Lyapunov Exponent (LLE)
1: x_0 = u(s)                        ▷ Calculate x_0 for input sentence s
2: y_0 ← x_0 + δ_0                   ▷ Add an initial perturbation δ_0
3: for each step t do
4:    evolve x_t and y_t for τ steps
5:    λ(t) ← (1/τ) ln(‖y_{t+τ} − x_{t+τ}‖ / ‖δ_0‖)
6:    δ ← y_{t+τ} − x_{t+τ}          ▷ Recalculating perturbation
7:    y_{t+τ} ← x_{t+τ} + (‖δ_0‖/‖δ‖) δ   ▷ Renormalize to size ‖δ_0‖
8: end for
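The NMSE evaluation and the accumulation of pen velocities into a drawn path can be sketched as follows; the toy velocity sequence is illustrative:

```python
import numpy as np

def nmse(d, y):
    """NMSE = <||d(t) - y(t)||^2> / <||d(t)||^2>, where the bracket
    is an average over the evaluation data (time steps here)."""
    d, y = np.asarray(d, float), np.asarray(y, float)
    return np.mean(np.sum((d - y) ** 2, axis=-1)) / np.mean(np.sum(d ** 2, axis=-1))

def accumulate(v, p0=(0.0, 0.0)):
    """Integrate pen velocities v_t = [dx_t, dy_t] into pen positions
    p_t = p_0 + sum_{k<=t} v_k, i.e. the drawn trajectory."""
    return np.asarray(p0, float) + np.cumsum(np.asarray(v, float), axis=0)

v = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])   # toy pen strokes
path = accumulate(v)
```

A perfect readout gives NMSE = 0, while a readout that outputs zeros gives NMSE = 1, so values well below 1 indicate that the transient carries the target pattern.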