Information Swimmer: A Novel Mechanism of Self-propulsion

We study an information-based mechanism of self-propulsion in a noisy environment. An information swimmer maintains directional motion by periodically measuring its velocity and adjusting its friction coefficient accordingly. Assuming that the measurement and adjustment are reversible and hence cause no energy dissipation, an information swimmer may move without external energy input. There is however no violation of the second law of thermodynamics, because the information entropy stored in the memory of the swimmer increases monotonically. By optimizing its control parameters, the swimmer can achieve a steady velocity comparable to the root-mean-square velocity of an analogous Brownian particle. We also define a swimming efficiency in terms of the information entropy production rate, and find that in equilibrium media with white noises, information swimmers are generally less efficient than Brownian particles driven by constant forces. For colored noises with long correlation times, the frequency of measurement can be greatly reduced without affecting the efficiency of information swimmers.

While self-propulsion of bacteria is typically achieved via actuation of cellular appendages such as flagella, synthetic self-propellers often move via surface effects [13,15] or phoretic effects [3], i.e., interaction with gradients of physical quantities. Another interesting self-propulsion mechanism is the Brownian motor [11,12], which relies on a delicate interplay between noise and a periodic potential.
Here we explore a novel mechanism of self-propulsion that uses information instead of energy. We imagine a swimmer that periodically measures its velocity relative to its environment and adjusts its friction coefficient accordingly. As a consequence, it is able to maintain a steady motion along the chosen direction, with an average velocity comparable to the root-mean-square velocity of Brownian motion. We shall call such a system an information swimmer, in echo of the information engine, which uses information to extract energy from a single heat bath.
Perpetual motion with no energy dissipation may widely be perceived as violating the second law of thermodynamics. The essence of the second law is however not about energy, but about entropy. In the presence of information-acquiring devices, entropy increase is not necessarily accompanied by energy dissipation. The relation between entropy and information is an intellectually profound question with a long and interesting history [16]. Through the works of Maxwell [17], Szilard [18], Landauer [19], Penrose [20], and Bennett [21,22], and many more recent studies [16,23,24], it has become clear that in the presence of an information-acquiring agent, the total entropy can be written as S_tot = S(Sys|Info) + H(Info), where H(Info) is the information entropy, and S(Sys|Info) is the thermodynamic entropy conditioned on the information acquired. Thermodynamic entropy and information entropy can be transformed into each other, much like energy and mass. The total entropy is however dictated by the second law to be non-decreasing.
There has recently been a large body of research on the design and application of "information heat engines", which use information to extract mechanical work, or to push particles to higher free-energy states [25][26][27][28][29][30][31][32][33]. The information swimmer is an information engine that serves the distinct purpose of maintaining directional transport in a noisy environment. Of special interest is a design called the "information ratchet" [34,35], in which one measures the position of an object and adjusts the confining potential accordingly, leading to directional transport with no apparent energy dissipation. Unlike the information ratchet, the mechanism we study in this work does not require a periodic confining potential.
Information swimmers may be realized using colloidal particles, polymeric materials, or biological molecules. An information swimmer consists of at least three components: a sensor (measuring velocity), a memory (storing information), and a switch (controlling the friction coefficient). Accordingly, the working cycle of the swimmer consists of three basic operations: measurement, information storage, and tuning of the friction coefficient. While in reality these operations are always dissipative, there is no lower bound on dissipation imposed by any fundamental law of physics. The observation that measurement can in principle be made reversible, and hence cause no energy dissipation, was first made by Bennett [21,22,36], and played an essential role in the proper resolution of the paradox raised by Maxwell's demon. For a review, see reference [16]. Likewise, the operation of information storage (which can be understood as a special form of computation) can also be made reversible, and hence causes no energy dissipation. (Also pointed out by Bennett is that information erasure is always irreversible and hence causes energy dissipation. We shall therefore assume that the memory space is sufficiently large so that there is no need for information erasure.) The friction coefficient may be tuned by changing the swimmer's volume, shape, or surface structure, which can be realized using structural phase transitions of polymeric materials. A fancier way is to deform the particle using microscopic molecular motors [37][38][39][40][41][42][43] or nanorobots [44][45][46]. Again, there is no lower bound on the energy cost/dissipation of these processes, and hence we will assume them to be reversible. With all these operations reversible, an information swimmer can maintain directional motion without energy dissipation. This does not, however, violate the second law, since the information entropy stored in the swimmer's memory increases steadily.
Swimming on information may have advantages over existing mechanisms of transport in microscopic noisy environments. It does not need an externally imposed potential, which is required by information ratchets and Brownian motors, or proximity to an interface, which is required by phoretic swimmers. It may also have higher biocompatibility, since it causes much less (in principle zero) energy dissipation.
It is interesting to note that life has used information to control transport since long before humans understood information. For example, in chemotaxis [47], bacteria tune their motion using information (together with energy) about the gradient of an external chemical stimulus (either attractant or repellent). In swarming, birds and insects adjust their flight according to their distances to neighbors [48,49]. In marine navigation, sailors control the directions of rudders and sails [50] to make a boat turn (tacking) and move (zigzagging) along an arbitrary direction relative to the wind. Sailing is in fact an almost ideal realization of self-propulsion using information only, because the energy cost of turning sails and rudders is negligible compared with that needed to drive the boat. Further studies may reveal many other information-feedback mechanisms for motion in biological and technological settings.

Model and Simulation Methods
To reduce complexity, we consider the one-dimensional case. The physics of higher-dimensional cases is essentially the same.
The swimmer has a baseline friction coefficient γ. After every time interval τ_m, the swimmer measures its velocity v and compares it with a threshold velocity v_0. The friction coefficient is then set to γ if v > v_0, and to α²γ if v < v_0. The results of the measurements are recorded in its internal memory space.
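As a minimal illustration, the measure-and-switch rule can be written as a single function (a sketch; the function name and default parameter values are ours, not from the text):

```python
def friction(v, v0=0.0, gamma=1.0, alpha2=10.0):
    """Feedback rule: keep the baseline friction gamma when the measured
    velocity is favorable (v > v0); switch to the larger value
    alpha2 * gamma when it is unfavorable (v < v0)."""
    return gamma if v > v0 else alpha2 * gamma
```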
The dynamics of the swimmer can be modeled using piecewise linear Langevin dynamics.
Assuming that the noise acting on the swimmer is Gaussian and white, during the time interval nτ_m < t < (n + 1)τ_m the velocity of the swimmer obeys the equation

m dv/dt = −γ_n v + √(2 γ_n T) ζ(t),    (2)

where γ_n = γ if the measured velocity v(nτ_m) > v_0 and γ_n = α²γ if v(nτ_m) < v_0, and ζ(t) is the normalized Gaussian white noise with statistical properties

⟨ζ(t)⟩ = 0,    ⟨ζ(t) ζ(t′)⟩ = δ(t − t′).    (3)

The coefficient of the noise term in Eq. (2) is chosen such that the Einstein relation, i.e., the second fluctuation-dissipation relation, is satisfied separately for each of the two friction coefficients, corresponding to v(t) < v_0 and v(t) > v_0. This relation reflects the equilibrium nature of the ambient fluid, and remains valid independent of the swimmer velocity [59]. If we set α = 1, Eq. (2) describes a normal Brownian particle, whose velocity distribution converges to a Maxwell distribution with average kinetic energy T/2, as required by equilibrium statistical mechanics.
Further defining two dimensionless control parameters ṽ_0 ≡ v_0/v_T and τ̃_m ≡ τ_m/τ, where v_T = √(T/m) is the thermal velocity and τ is the intrinsic relaxation time scale, Eqs. (2) and (3) take a dimensionless form. We discretize the dimensionless Langevin equation with a standard numerical scheme and integrate it forward in time. At long times, the swimmer reaches a steady moving state. By a simple dimensional argument, we expect the average velocity to scale with the thermal velocity v_T, provided we choose τ̃_m smaller than unity and α² larger than unity. For a swimmer with a micron radius moving in a fluid with viscosity comparable to that of water, we estimate the time scale τ ∼ 8 µs and the steady velocity v ∼ 100 µm/s.
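The dimensionless dynamics can be sketched with a simple Euler-Maruyama integration (a sketch under our own discretization choices; in these units m = γ = T = 1, so v_T = τ = 1, and the friction is switched only at the measurement times):

```python
import numpy as np

def simulate(alpha2=10.0, tau_m=0.01, v0=0.0, dt=1e-3, n_steps=500_000, seed=0):
    """Euler-Maruyama integration of the piecewise Langevin dynamics in
    dimensionless units (m = gamma = T = 1). The friction coefficient is
    re-evaluated only at the measurement times n * tau_m, mimicking the
    periodic measure-and-switch protocol. Returns the time-averaged velocity."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(n_steps)
    steps_per_meas = max(1, round(tau_m / dt))
    v, vsum = 0.0, 0.0
    for i in range(n_steps):
        if i % steps_per_meas == 0:            # periodic measurement
            g = 1.0 if v > v0 else alpha2      # switch friction coefficient
            amp = (2.0 * g * dt) ** 0.5        # Einstein relation per branch
        v += -g * v * dt + amp * noise[i]
        vsum += v
    return vsum / n_steps
```

For α² = 10, τ̃_m = 0.01 and ṽ_0 = 0, this yields a positive mean velocity of order v_T; with α² = 1 the feedback is inert and the ordinary Brownian result ⟨v⟩ = 0 is recovered.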

Velocity distribution
We consider the case α² = 10, which means that the friction coefficient becomes ten times larger when the velocity is unfavorable. First we set the threshold velocity ṽ_0 = 0, and plot the velocity distributions for two different periods of measurement, τ̃_m = 0.01 and 1. As shown in Fig. 1(a), for τ̃_m = 0.01 (one measurement every τ/100), the velocity distribution has an abrupt change of slope near v = 0 and a high peak to the right. The probability density is severely suppressed for v < 0. The overall shape is drastically different from the equilibrium Gaussian distribution. The average velocity of the swimmer is approximately 0.7 v_T, as one can see from Fig. 1(b). For τ̃_m = 1 (one measurement every τ), the velocity distribution has a much more regular shape, even though the difference from the equilibrium distribution is clearly noticeable. The average velocity is approximately 0.25 v_T, as one can see from Fig. 1(b). Next, we fix τ̃_m = 0.01 and vary the threshold velocity. As one can see in Fig. 1(a), there is always an abrupt change of slope in the vicinity of ṽ_0. For ṽ_0 = 1, the velocity distribution p(ṽ) exhibits a two-peak structure, with a low and wide peak to the left of ṽ_0 and a high and narrow peak to the right; the average velocity of the swimmer is approximately 0.8 v_T. These results unequivocally demonstrate the feasibility of information swimming as a viable mechanism of self-propulsion.
In Fig. 1(b) we show how the average velocity ⟨ṽ⟩ varies as a function of the threshold velocity ṽ_0 for τ̃_m = 1, 0.1, 0.01, and α² = 10, 2, respectively. In all cases we see that ⟨ṽ⟩ vanishes as ṽ_0 → ±∞. This is of course expected, since in these limits the measurement almost always returns the same result, and the friction coefficient remains invariant. The peaks of the curves in Fig. 1(b) correspond to the maximal average velocity achieved by tuning ṽ_0. The location of the peak moves to the right as α increases or τ̃_m decreases.
The optimal threshold velocity is, however, never far from zero. The height of the peak increases as α increases or τ̃_m decreases. Note that the maximal velocity is generally less than, but of the same order as, the thermal velocity.
Entropic Efficiency of Swimming

Similar to Maxwell's demon, an information swimmer records its measurement results, and as a consequence the information entropy of its memory increases steadily during the motion. Hence, even though the entropy of the ambient fluid remains constant, the total entropy increases, in accordance with the second law. Because information can be stored and transferred at arbitrarily low temperature, the increase of information entropy need not be accompanied by energy dissipation [19].
We can quantify the rate of information entropy increase in the swimmer's memory. Let s_n be the result of the measurement at time nτ_m, which takes the value 0 if v > v_0 and 1 if v < v_0. The sequence s_1, s_2, . . . , s_n, . . . forms a discrete Markov chain. The results of consecutive measurements are generally correlated, however, and hence the sequence can be compressed before storage.
According to information theory [51,52], the minimal number of bits needed to store each measurement result (averaged over a long sequence of measurements) is the entropy rate of the Markov chain, which is defined as

I = − Σ_{i,j} π_i P_{ij} log₂ P_{ij},

where P_{ij} is the probability of a transition from state i to state j, and π_i is the stationary probability of state i. The entropy production rate is then Σ = I/τ_m, where τ_m is the period of measurement.
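The entropy rate can be estimated directly from a recorded measurement sequence by counting transitions (a sketch; the estimator and function name are ours):

```python
import numpy as np

def entropy_rate(s):
    """Estimate the entropy rate (bits per symbol) of a binary sequence
    modeled as a first-order Markov chain, I = -sum_ij pi_i P_ij log2 P_ij,
    with P_ij estimated from transition counts and pi_i from visit counts."""
    s = list(s)
    counts = np.zeros((2, 2))
    for a, b in zip(s[:-1], s[1:]):
        counts[a, b] += 1
    total = counts.sum()
    I = 0.0
    for i in range(2):
        row = counts[i].sum()
        if row == 0:
            continue                      # state i never visited
        pi_i = row / total                # stationary weight of state i
        for j in range(2):
            if counts[i, j] > 0:
                P_ij = counts[i, j] / row
                I -= pi_i * P_ij * np.log2(P_ij)
    return I
```

For uncorrelated fair bits the estimate approaches 1 bit per measurement, while a perfectly predictable sequence needs 0 bits, illustrating the compressibility of correlated measurement records.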
Consider now a force-driven Brownian particle, whose dynamics is Eq. (2) with α² = 1, augmented by an external force F: m dv/dt = F − γv + √(2γT) ζ(t). The work done by the external force is constantly dissipated into the ambient fluid as heat. The average velocity is ⟨v⟩ = F/γ, and the entropy production rate is Σ = F⟨v⟩/T = ⟨v⟩²γ/T.
The same quantity for the information swimmer is the information entropy production rate Σ_IS = I/τ_m introduced above. The ratio of the entropy production rate of the force-driven particle to that of an information swimmer with the same average velocity,

η_IS = (⟨v⟩²γ/T)/Σ_IS = γ⟨v⟩²τ_m/(T I),

characterizes the entropic efficiency of the information swimmer relative to a force-driven Brownian particle with the same mass and baseline friction coefficient, at the same temperature. We compute this ratio for different values of the control parameters. As shown in SI Sec. II, the optimal choice of threshold velocity ṽ_0 is always very close to zero. Thus we fix ṽ_0 = 0 to simplify the computation. In Fig. 1(c), we plot η_IS as a function of τ̃_m for α² = 2, 5, 10, 100, respectively. It is seen there that the optimal τ_m is always a fraction of τ, and decreases monotonically as α increases. The maximal efficiency increases monotonically with α, but remains substantially lower than unity.
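Taking the efficiency to be the ratio of the force-driven particle's entropy production rate to the swimmer's information entropy production rate at equal average velocity (our reading of the comparison), it can be evaluated from a measured mean velocity and bit rate. A sketch in dimensionless units (γ = T = 1); the ln 2 conversion from bits to nats is an assumption about units on our part:

```python
import math

def eta_IS(v_mean, I_bits, tau_m):
    """Entropic efficiency of the information swimmer relative to a
    force-driven Brownian particle with the same average velocity, in
    dimensionless units (gamma = T = 1). I_bits is the entropy rate in
    bits per measurement; the factor ln 2 converts it to nats (assumed)."""
    return (v_mean ** 2) / (I_bits * math.log(2) / tau_m)
```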
Information swimming on colored noise

There are many systems in which fluctuations exhibit long time correlations. For example, the time correlations of velocity in fluids are characterized by long tails that decay algebraically [53]. Active fluids [54,55] and turbulent fluids [56] exhibit long-range correlations both in time and in space. These correlations can be used to reduce the frequency of measurements for information swimmers.
Here we study an information swimmer driven by colored noises that are themselves in thermal equilibrium.
As illustrated in Fig. 2(a), we consider a system consisting of a box (with center-of-mass coordinate x_1 and negligible mass) and a particle (with coordinate x_2 and mass m) that are connected by a spring with constant k. The particle is confined inside the box and therefore does not couple to the noise or friction directly. The dynamics of the two-body system is described by a set of coupled linear Langevin equations:

γ dx_1/dt = −k(x_1 − x_2) + √(2γT) ζ(t),
m d²x_2/dt² = −k(x_2 − x_1),    (10)

where ζ(t) is a Gaussian white noise satisfying Eq. (3). Integrating out x_1, we find that the dynamics of x_2 satisfies a generalized Langevin equation with an effective Ornstein-Uhlenbeck noise:

m dv_2/dt = −∫_{−∞}^{t} dt′ Γ(t − t′) v_2(t′) + ξ(t),    (11a)
Γ(t) = k e^{−t/τ_c},    (11b)
⟨ξ(t) ξ(t′)⟩ = T Γ(|t − t′|) = T k e^{−|t−t′|/τ_c},    (11c)

where τ_c = γ/k is the noise correlation time, which can be tuned continuously by tuning the spring constant k. In the limit k → ∞, τ_c → 0, and Eq. (11) reduces to the white-noise model. Details of the derivation are given in SI Sec. III. Equation (11c) is the second fluctuation-dissipation theorem, which relates the variance of the colored noise to the friction kernel [57]. It is a consequence of the time-reversal symmetry of the original model, Eq. (10).
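The structure of the effective noise can be checked numerically: with the particle held fixed, the box coordinate x_1 is an Ornstein-Uhlenbeck process, and the spring force ξ = k x_1 acting on the particle should have equal-time variance ⟨ξ²⟩ = Tk, consistent with the second fluctuation-dissipation relation Eq. (11c). A sketch (units γ = T = 1, so τ_c = 1/k; discretization choices are ours):

```python
import numpy as np

# Box alone (particle held at x2 = 0):
#   gamma * dx1/dt = -k * x1 + sqrt(2 gamma T) zeta(t)
# The spring force xi = k * x1 is the colored noise felt by the particle;
# its equal-time variance should approach <xi^2> = T * k.
gamma, T, k = 1.0, 1.0, 4.0
dt, n = 1e-3, 400_000
rng = np.random.default_rng(2)
kicks = np.sqrt(2.0 * gamma * T * dt) / gamma * rng.standard_normal(n)
x, acc = 0.0, 0.0
for i in range(n):
    x += -(k / gamma) * x * dt + kicks[i]   # Euler step of the OU process
    acc += (k * x) ** 2                     # accumulate xi^2
print(acc / n)   # close to T * k = 4.0
```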
Another possible realization of the dynamics (10) is illustrated in Fig. 2(b), where two particles connected by a spring move near the interface of two fluids. The first particle moves in a fluid with high viscosity in the over-damped regime, so that its mass can be ignored; the second particle moves in a fluid with low viscosity, so that both friction and noise can be ignored for it.
We can now introduce a measurement-feedback mechanism into the model Eqs. (10), so that it becomes an information swimmer. The system measures the velocity v_2 every τ_m, and tunes the friction coefficient to γ if v_2 > v_0 and to α²γ if v_2 < v_0. The coupled Langevin equations (10) are simulated using the same method as above. The dimensionless variables are defined as in Eq. (4). In Fig. 3, we show the average velocity ⟨ṽ⟩ and the entropic efficiency η_IS as functions of the period of measurement τ̃_m for various values of k. It is seen that as long as k̃ is finite, both quantities oscillate as functions of τ̃_m. These oscillations can be used to design information swimmers that optimize velocity or efficiency. Furthermore, both quantities appear to converge to zero as τ̃_m → 0, indicating that the measurement-feedback mechanism becomes ineffective at high measurement frequencies, for reasons we do not yet understand. Finally, as the correlation time of the noise becomes longer (with decreasing k̃), the peaks of the curves move systematically to the right along the τ̃_m axis. We note that while the maximal velocity decreases steadily as τ_c increases, the change in the maximal efficiency is insignificant. The general conclusion is therefore that the correlations of the noise can be used to reduce the frequency of measurement without changing the swimming efficiency. This may be very useful for the design of information swimmers in non-equilibrium environments, such as turbulent fluids [56] or active fluids [54,55].
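The full coupled protocol can be sketched as follows (our own discretization and default parameters, in units m = γ = T = 1, so τ_c = 1/k). As a sanity check, switching the feedback off (α² = 1) must recover equipartition for the particle, ⟨v_2²⟩ = T/m:

```python
import numpy as np

def simulate_coupled(alpha2=10.0, tau_m=0.5, v0=0.0, k=2.0,
                     dt=1e-3, n_steps=500_000, seed=3):
    """Measure-and-switch protocol applied to the coupled model Eqs. (10),
    in units m = gamma = T = 1. The friction acting on the box is switched
    between gamma and alpha2*gamma according to the particle velocity v2
    measured every tau_m. Returns (<v2>, <v2^2>)."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(n_steps)
    steps = max(1, round(tau_m / dt))
    x1 = x2 = v2 = 0.0
    s1 = s2 = 0.0
    for i in range(n_steps):
        if i % steps == 0:                      # periodic measurement of v2
            g = 1.0 if v2 > v0 else alpha2
            amp = (2.0 * g * dt) ** 0.5 / g     # noise amplitude on the box
        # over-damped box: g * dx1/dt = -k*(x1 - x2) + sqrt(2 g T) zeta(t)
        x1 += k * (x2 - x1) * dt / g + amp * noise[i]
        # inertial particle: m * dv2/dt = -k*(x2 - x1)
        v2 += -k * (x2 - x1) * dt
        x2 += v2 * dt
        s1 += v2
        s2 += v2 * v2
    return s1 / n_steps, s2 / n_steps
```

With the feedback on, the mean velocity stays bounded by the thermal scale, in line with the results described above.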
X.X. acknowledges support from NSFC via grant #11674217, as well as additional support from a Shanghai Talent Program. This research is also supported by the Shanghai Municipal Science and Technology Major Project (Grant No. 2019SHZDZX01).