Measuring the thermodynamic cost of timekeeping

A.N. Pearson,∗ Y. Guryanova,∗ P. Erker,∗ E.A. Laird, G.A.D. Briggs, M. Huber,† and N. Ares‡
Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH, United Kingdom
Institute for Quantum Optics and Quantum Information (IQOQI), Austrian Academy of Sciences, A-1090 Vienna, Austria
Department of Physics, Lancaster University, Lancaster, LA1 4YB, United Kingdom
(Dated: June 17, 2020)


I. INTRODUCTION
By modern standards, the accuracy with which we can keep time is truly astonishing; nowadays the best atomic clocks keep time to an accuracy of approximately one second in every one hundred million years [1]. This is more accurate than any physical constant we have ever measured (for example, the electron g-factor is known to 12 digits [2]), and better than the roughly 16 significant digits carried by 64-bit floating-point arithmetic [3]. Atomic clocks run by the rules of quantum mechanics, targeting a specific hyperfine transition in an atom's energy spectrum; yet despite the great progress in keeping time, surprisingly little is known about the relation between quantum clocks and thermodynamics. Famously invariant under time-reversal, the equations of quantum mechanics provide little explanation for the passage of time, whereas the theory of thermodynamics, although elucidating little more on the same front, does at least leave some entropic signatures [4][5][6][7]. One of the milestones at the intersection of the two fields is to derive a quantitative relation between the second law of thermodynamics and the flow of time. Investigations in this direction are also a vital component in our understanding of quantum thermodynamics, a field focused on the investigation, analysis and design of machines on the quantum scale, to which clocks are no exception [8].
Alongside philosophical and conceptual curiosities, clocks constitute an intrinsic component in the operation of numerous systems, from the clocks used to time the gates on a desktop CPU to the clocks necessary for determining your GPS coordinates. In the quantum regime, as opposed to the classical case, the thermodynamic cost associated with the precise control of a system is comparable to the energy scale of the system itself [9][10][11]. For example, the cycles of a quantum Otto engine need to be controlled by a microscopic autonomous clock [12,13]; a device that produces a stream of ticks without any timing input or external control. The energetic cost of running this clock is comparable to the energetic output of the engine and thus can no longer be neglected. These clocks have been studied rigorously from the perspective of open quantum systems, where it has been shown that their performance with respect to the resources they consume is subject to particular relations, as well as trade-offs [4]. One of the challenges in deriving such relations from microscopic thermodynamic principles is that reasonably large systems are required for irreversible dynamics to emerge [14,15]. In developing relations and trade-off models, we are forced to make assumptions about the underlying parameters and system dynamics [8,16]. At the other end of the scale, in the classical domain, it is difficult to keep track of thermodynamic costs because the systems become large and complex.
In this article we experimentally explore the thermodynamic costs of timekeeping by directly measuring both the accuracy and the entropy generation associated with a simple nano-electromechanical clock. This system allows us to investigate the relation between the resources supplied to the clock, in the form of work and heat, and the corresponding accuracy. The clock consumes these resources and produces entropy (Fig. 1(a)). Its useful output is a train of ticks which can be counted by a register. Previous theoretical work, based on particular models of classical [17] and quantum [4] clocks, has predicted that within those models there is a fundamental price to timekeeping: the more regular and frequent the ticks, the greater the rate at which the clock must create entropy. Here we experimentally and theoretically study a new kind of classical clock which realises this thermodynamic process. The clock is based on a simple optomechanical model (Fig. 1(b)), in which the Brownian motion of a mechanical resonator is monitored using an electronic cavity interferometer. Each mechanical oscillation identified by the interferometer corresponds to one tick. The clock is driven by the work performed to illuminate the cavity and by the heat transferred from the hot resonator to the cold measurement electronics.

[Fig. 1. (b) A mass is suspended from a spring and heat from the environment excites the mass's motion at frequency f_0. These vibrations are probed by a signal of power P_cav. This system (the clockwork) generates a periodic signal, which is registered to identify the clock's ticks. (c) A schematic of our electromechanical system acting as a clock. A nm-thick membrane is driven by a white-noise signal of power P_WN. The membrane's vibrations are probed by an RF cavity driven with a signal of power P_cav. The cavity output signal, and thus the clock's ticks, are registered by an oscilloscope.]
While the accuracy can be improved by increasing either the mechanical amplitude or the electrical illumination power, in both cases this leads to greater heat dissipation and therefore increased entropy, as explained in Section II and Appendix B. This model is realised as shown in Fig. 1(c). The mechanical resonator is a high-quality silicon nitride membrane vibrating in its fundamental flexural mode. To excite quasi-Brownian motion, the membrane is driven by a white-noise electrical signal, which acts as an effective thermal bath that raises the mechanical mode temperature [18]. To monitor the membrane's displacement, it is capacitively coupled to a radio-frequency (RF) cavity operated in an optomechanical readout circuit [19][20][21][22][23][24]. The voltage output of this circuit is proportional to the instantaneous displacement. This output is recorded using an oscilloscope which acts as the clock register. Each completed oscillation, identified by an upward zero-crossing of the voltage record, represents one tick of the clock.
We used our setup to test the relation between the resources used to power the clock and its accuracy. The accuracy was determined by an algorithm which marked the instant at which a tick (a particular behavioural signature of the membrane's motion) occurred. We then examined the accuracy of the optomechanical system for a range of white-noise driving powers and compared it to the prediction of a classical clock model. In order to make this comparison, we associated the system's resources with the clock's total entropy production. Our results confirm a clear proportionality between the driving power (the resource) and the periodicity of the cavity output signal (the accuracy), which is the trademark response predicted by both a quantum and a classical clock model. This finding suggests that fundamental relations for the thermodynamics of timekeeping can be observed in a broad class of operating regimes, pointing towards their universality. In this way, our results support the idea that entropy dissipation is not just a prerequisite for measuring time's passage, but that the entropy dissipated by any clock is quantitatively related to the fundamental limit on that clock's performance.

II. THEORY: THE THERMODYNAMIC COST OF TIMEKEEPING
We define the accuracy of a clock as [4]

N = (t_tick / ∆t_tick)^2,    (1)

where t_tick is the mean interval between successive ticks and ∆t_tick is the standard deviation of this interval. Equivalently, N^-1 is the Allan variance [25] when the observation period is equal to t_tick. This is a more severe measure of accuracy than the Allan variance over a much larger number of ticks. If Markovian stationarity is assumed, i.e. if successive tick intervals are uncorrelated, N is also the number of ticks before the expected accumulated timekeeping error is equal to one tick interval.
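As a concrete illustration of this definition, the accuracy N can be computed directly from a list of tick times; the following sketch (our own, not the authors' analysis code, with synthetic ticks carrying 1% timing jitter) applies Eq. (1):

```python
import numpy as np

def clock_accuracy(tick_times):
    """Accuracy N = (mean tick interval / std of tick interval)^2, as in Eq. (1)."""
    intervals = np.diff(tick_times)      # successive tick intervals
    t_tick = intervals.mean()            # mean interval
    dt_tick = intervals.std(ddof=1)      # standard deviation of the interval
    return (t_tick / dt_tick) ** 2

# Synthetic example: unit-period ticks with 1% jitter should give N near 1e4
rng = np.random.default_rng(0)
ticks = np.cumsum(1.0 + 0.01 * rng.standard_normal(10_000))
```

With these numbers, `clock_accuracy(ticks)` recovers N ≈ (1/0.01)^2 = 10^4 up to sampling error, showing that N is large precisely when the tick intervals are narrowly distributed.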
Our objective is to test the measured value of N , derived by analysing a series of ticks generated by the experiment, against the prediction of models in which the accuracy of the clock appears as a function of the resources used to drive it. This line of inquiry is inspired by [4], in which the rate of entropy production and accuracy of an autonomous quantum clock are found to be linearly related (assuming weak coupling), i.e.
N = ∆S_tick / (2 k_B),    (2)

where k_B is Boltzmann's constant and ∆S_tick is the entropy generated per tick. This entropy arises due to power being dissipated by the clock, from which we understand that greater power dissipation corresponds to greater accuracy.
In a similar spirit, we have analysed a classical model of the optomechanical experiment of Fig. 1(c). In this experiment, the accuracy is ultimately limited by the difficulty of precisely identifying zero-crossings in the presence of thermal noise. Intriguingly, this classical experiment, despite representing a completely different physical system from the quantum clock of [4], obeys a similar relationship between accuracy and entropy. As shown in Appendix B, the maximum accuracy that this classical clock can achieve is

N_C = 8π^2 (T_c / T_N) (∆S_tick / k_B),    (3)

where T_N is the noise temperature of the measurement electronics and T_c is the temperature of the environment, assumed to be colder than the mechanical effective temperature, which in our experiment is controlled by P_WN. Whereas N is the accuracy calculated from a sequence of ticks experimentally realised by the clock, N_C is a statistical prediction based on the thermodynamic properties of the setup. In order to compare the values of N obtained from the experiment with the prediction of the model, we must identify the source of entropy ∆S_tick in our system. There are various types of entropy emerging from the setup; here, we focus on the entropy in the cavity output signal, as it is directly observable in our temporal traces. Additional entropy contributions are of course produced in the instruments used to control the system (from the tone that drives the readout cavity to the oscilloscope that measures the cavity output signal). We do not focus on this type of entropy, as it depends on the specific implementation and is not present in autonomous devices. Finally, there is the entropy production that comes from the white-noise driving. This is the fundamental entropy dissipated per natural temporal event (tick) in our experiment.
Here it is important to note that not all of the power injected into the system will be converted into a useful drive signal, just as not all the energy from a hot bath can be converted into work in a heat engine; some will be dissipated in the environment at the expense of entropy production elsewhere. This does not impact our results as long as the power of the white-noise signal used to drive the clock is high enough to make the ticks identifiable above the thermal background. Thus, we estimate the relevant entropy ∆S_tick from the spectral density of the cavity output signal by computing the area of the spectral-density peak located at the membrane's resonance frequency.

III. EXPERIMENTAL SETUP
The vibrating membrane is measured using the setup shown in Fig. 2(a). The membrane, which consists of 50 nm thick SiN metallized with Al, is suspended over two Cr/Au electrodes patterned on a silicon chip, forming a capacitor. A dc voltage V_dc = 15 V is applied to electrode 1, with electrode 2 grounded. Electrode 1 is connected to an RF cavity, which is realised with an inductor and capacitors (Fig. 2(a)). As the membrane vibrates, the capacitance C_C between the membrane and the electrodes changes. Driving the RF cavity with a resonant tone, we can probe the membrane's motion by monitoring the cavity's output signal [24]. The cavity is driven by injecting an RF signal at port 1 via a directional coupler. A signal to excite the membrane's motion is incorporated in the circuit via port 3.

[Fig. 2. (a) A metalized silicon nitride membrane is suspended over two metal electrodes, forming a capacitor C_C. One of the electrodes is connected to an RF tank circuit which acts as a readout cavity. Electrode 2 is grounded. The tank circuit is formed from a 223 nH inductor L and two 10 pF capacitors C_D and C_M. Parasitic capacitances contribute to C_M, and parasitic losses in the circuit are parameterized by an effective resistance R. The cavity can be probed by injecting an RF signal at port 1 via a directional coupler. The output signal is measured at port 2 using a vector network analyser or a spectrum analyser. The membrane's motion can be excited by injecting a signal at port 3. Bias resistors allow a dc voltage V_dc to be applied to electrode 1. Red (blue) arrows indicate resources (waste) for our system. (b) |S_21| as a function of probe frequency f_P. (c) One of the mechanical sidebands observed in the spectrum of the cavity output signal when an excitation tone at frequency f_E is injected at port 3 and swept in frequency whilst the cavity is driven at its resonant frequency via port 1. The sideband power grows when f_E coincides with the resonance frequency of the membrane f_0.]
The experiment is carried out at room temperature at approximately 5 × 10 −6 mbar.
To determine the cavity's resonant frequency, we measure the scattering parameter |S_21|, which is proportional to the reflection from the cavity, as we sweep the frequency of a probe tone f_P. The cavity resonance is evident as a minimum in |S_21| (Fig. 2(b)). To identify the mechanical resonance, we perform two-tone spectroscopy. While driving the cavity at its resonance frequency (i.e. with f_P = 210.3 MHz) through port 1, we apply another tone of frequency f_E through port 3 in order to excite the membrane. The power spectrum of the reflected signal is shown in Fig. 2(c) as a function of f_E. The mechanical response is evident as a strong increase in the sideband power at f_P ± f_E when f_E matches the mechanical frequency f_0 ≈ 74.5 kHz [24].
In order to use the membrane as a thermal clock, we drive the membrane's fundamental mode stochastically by applying a white-noise signal of power P_WN and bandwidth 500 kHz through port 3. This white-noise signal is the clock's heating resource. To register the ticks, we must also illuminate the cavity; the resource for this is a resonant drive tone injected through port 1 with power P_cav. We measure the displacement of the membrane in real time by demodulating the cavity output signal V(t). The demodulated signal is measured with an oscilloscope. We show V(t) after demodulation and amplification for two different values of P_WN in Fig. 2(d). From these time traces, the ticks of the clock can be identified, and an accuracy can be computed for different values of P_WN.
Studying clock performance in an absolute sense is not strictly possible in our system, since this would require us to synchronise multiple clocks (e.g. via the alternating-ticks game [5,26]). We have therefore chosen a reference clock that is orders of magnitude faster than the system under investigation, in order to resolve the temporal dynamics. In our case, the membrane's frequency is in the kHz regime, while our reference clock, the clock of the oscilloscope, operates at a frequency several orders of magnitude higher. Our system constitutes a quasi-autonomous clock: given only the readout drive tone, it converts the power of the white noise driving the membrane's motion into the observable ticks of a clock.

IV. RESULTS
Ticks are generated from time records of the demodulated voltage signal as shown in Fig. 2. Each tick corresponds to an upward zero-crossing of this signal. In principle, these zero-crossings could be identified in nearly real time using a threshold detector with an appropriate input filter. In practice, we acquired the entire voltage record and identified ticks in post-processing, in order to be able to study the effects of different filter and threshold settings.
At each setting of P cav and P WN , a record of raw data with a duration of 1 s was stored. In order to suppress noise, each record was then digitally filtered using a band-pass filter of 75 kHz bandwidth centred at f 0 . This bandwidth, which is nearly equal to f 0 , is sharp enough to remove much of the electronic noise, and thus avoids triggering false upward zero-crossings, but has a fast enough ringdown to ensure that successive ticks are nearly independent. In a real-time clock, it could be implemented using an analogue filter. To extract N for each record, the upward zero-crossings were identified in order to generate a sequence of tick intervals, and the resulting standard deviation ∆t tick was substituted into Eq. (1).
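The band-pass-and-crossing pipeline described above can be sketched as follows. This is a minimal stand-in rather than the authors' analysis code: the synthetic record, the reduced sampling rate, and the second-order Butterworth filter are our assumptions, while f_0 = 74.5 kHz and the 75 kHz bandwidth are taken from the text:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

F0 = 74.5e3   # membrane resonance frequency from the text (Hz)

def extract_ticks(v, fs, f0=F0, bw=75e3):
    """Band-pass the voltage record around f0, then mark upward zero-crossings."""
    sos = butter(2, [f0 - bw / 2, f0 + bw / 2], btype="band", fs=fs, output="sos")
    vf = sosfiltfilt(sos, v)                    # zero-phase band-pass filter
    up = (vf[:-1] < 0) & (vf[1:] >= 0)          # sign changes from - to +
    return np.nonzero(up)[0] / fs               # tick times in seconds

# Synthetic stand-in for the demodulated record: a tone at f0 plus broadband noise
rng = np.random.default_rng(1)
fs = 2e6                                        # reduced sampling rate for the sketch
t = np.arange(int(0.01 * fs)) / fs              # 10 ms record
v = np.sin(2 * np.pi * F0 * t) + 0.3 * rng.standard_normal(t.size)
ticks = extract_ticks(v, fs)
```

For a 10 ms record, one expects roughly f_0 × 10 ms ≈ 745 ticks, with the filter suppressing the broadband noise that would otherwise trigger false crossings; feeding `ticks` into Eq. (1) then yields an accuracy for that record.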
The results of this analysis are shown in Fig. 3(a) as a function of P_cav and P_WN. For small values of P_WN, we see that N increases approximately linearly with P_WN. This can be understood intuitively: a stronger drive makes the mechanical oscillations easier to distinguish from the noise. As P_WN increases further, the linear relationship breaks down and the accuracy shows signs of saturating. This is to be expected, due to noise in the circuit leaking from the heating tone and to the membrane's motion entering the non-linear regime, effects which prevent a continued increase of N.
As P_cav increases, the linear increase of N as a function of P_WN shows a larger gradient. This is because an increased P_cav enhances readout. Above P_cav = 14 dBm, however, the demodulated V(t) shows significant fluctuations, leading to the saturation of N at smaller values of P_WN (see Appendix C). The time traces corresponding to P_cav < 8 dBm are too noisy for ticks to be identified (see Appendix D). The oscilloscope's sampling rate was 40 MSa/s, giving a resolution of 25 ns for the acquired time traces. Given the frequency of the membrane, this resolution sets an upper limit on the measurable accuracy of N ≈ 290,000; however, as seen from Fig. 3(a), the experimental values of N are less than a hundredth of this limit.
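One plausible reading of this resolution limit is Eq. (1) with the tick-timing error set equal to one 25 ns sample; the numbers (sampling rate and f_0 from the text; the formula choice is our assumption) reproduce the quoted ceiling:

```python
f0 = 74.5e3                  # membrane frequency (Hz), from the text
dt = 1 / 40e6                # 25 ns resolution at 40 MSa/s
t_tick = 1 / f0              # mean tick interval, about 13.4 us
n_max = (t_tick / dt) ** 2   # Eq. (1) with timing error of one sample
```

This gives n_max ≈ 2.9 × 10^5, consistent with the limit of roughly 290,000 stated above.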
To test the predictions of the classical clock model, we now compare the measured N with the predicted accuracy N C according to Eq. (3). The relevant entropy arises from the electrical power dissipated in the amplifier circuit by the optomechanical sidebands that contain the displacement information. As shown in Appendix B, the ratio ∆S tick /T N can be calculated from the same demodulated voltage record used to identify ticks. To do this, each record is first numerically transformed to generate a power spectrum. The entropy ∆S tick is then calculated from the integrated power within a 10 kHz window centered on the signal frequency f 0 ; the noise temperature T N is calculated from the average spectral density well away from this frequency (see Eq. (B43)). The physical temperature of the measurement circuit is taken as T c = 300 K.
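The spectral bookkeeping described above can be illustrated with a short sketch (again our own stand-in, not the authors' code; the 10 kHz window and the use of the off-resonant floor are from the text, while the normalisation conventions and the median floor estimator are our assumptions):

```python
import numpy as np

def signal_and_noise(v, fs, f0, window=10e3):
    """Integrate the spectral peak in a window around f0 and estimate
    the noise-floor density away from the peak."""
    spec = np.abs(np.fft.rfft(v)) ** 2 / (fs * v.size)   # one-sided spectral density
    freqs = np.fft.rfftfreq(v.size, 1 / fs)
    df = freqs[1] - freqs[0]
    in_win = np.abs(freqs - f0) < window / 2
    p_signal = 2 * spec[in_win].sum() * df               # integrated peak power
    noise_floor = 2 * np.median(spec[~in_win])           # density off resonance
    return p_signal, noise_floor

# Sanity check: a pure tone of amplitude 1 at f0 carries power 1/2
fs, f0 = 1e6, 74.5e3
t = np.arange(200_000) / fs          # record length chosen so f0 lies on an FFT bin
p, floor = signal_and_noise(np.sin(2 * np.pi * f0 * t), fs, f0)
```

In the experiment, the integrated peak power plays the role of the dissipated signal power entering ∆S_tick, and the off-resonant floor fixes the noise temperature T_N, per Appendix B.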
We have compared the values obtained for N_C with the accuracy N computed as in Eq. (1) (Fig. 3(b)). Our results confirm that increasing accuracy requires increasing ∆S_tick, and show the linear relation predicted by Eq. (3). However, the constant of proportionality, for all heating and illumination powers shown here, is approximately ten times smaller than predicted. Since Eq. (3) represents an upper bound on the clock's efficiency, this discrepancy is not inconsistent with the theory. It probably indicates that identifying the zero-crossings, which does not use all the information in the voltage record, is not an optimal procedure for identifying ticks.

V. DISCUSSION
Our experiment is simple enough that the thermodynamic resources used can be accounted for, as in Ref. [27], while at the same time the system is too complex to be modelled by a simple open-quantum-systems approach.
The results in Fig. 3 showcase an important relation between accuracy and entropy production that should be present in the most fundamental clocks [4], in both a quantum and a classical model. The accuracy only sets a lower bound on the entropy creation, so it is entirely possible for a system to dissipate more entropy at higher drive powers without providing more accurate ticks. The fact that we nonetheless see such a consistent linear relation between accuracy and entropy production over a considerable range of cavity and white-noise drives indicates that our clock's performance is close to optimal and that we are correctly identifying the relevant entropy contributions.
Our clock provides a steady stream of ticks that are identified from cumulative events; it would defeat the purpose of a clock if only a finished sequence of events could be used retroactively to identify ticks. That would instead correspond to the concept of a stopwatch, which upon interrogation gives a good estimate of how much time has elapsed between initialisation and interrogation, but does not provide a continuous temporal reference frame. Although the system is not fully autonomous, because a cavity drive is necessary for readout, it presents an ideal testbed for generating a stable time-ordered signal by exploiting thermal nonequilibrium. In fact, any system that acts as a register is expected to consume work, as it inevitably has to perform measurements of irreversible events [28].
Any thermally irreversible process could be used as a clock [7], e.g. simply by observing the progress of equilibration as a proxy for time. We propose that an operational definition of a good clock is a system that reduces the linear slope of the accuracy-dissipation relation and keeps the relation linear to as high an accuracy as possible. This is consistent with another recent finding, Ref. [29], which shows that clockwork complexity can be used to decrease that linear slope and to increase the saturation point beyond which extra dissipation no longer corresponds to better clock quality.
We should also note an interesting relation to the phenomenon of stochastic resonance [30], where noise can push a signal beyond a detection threshold and in this way increase the signal quality. Superficially our experiment presents a similar scenario, since we inject noise to create a periodic signal in time. The main difference, however, is that we do not add noise to an output signal subject to a fixed read-out limitation, but rather the opposite: we feed noise directly into the physical system producing the output signal and modify the read-out mechanism to optimally reveal it against a noisy background. Nonetheless, it will be interesting to see whether these techniques can be fruitfully adapted to our setup.
The observed relationship between drive power and accuracy (Fig. 3) is in qualitative agreement with the relation stemming from the oversimplified model in Ref. [4], and with the prediction of our classical model. Our results also corroborate the notion that the quality of the arrow of time is indeed limited by the entropy dissipated by a clock. As described in Ref. [4], the linear relation between accuracy and entropy production tends to break down at some point. We have observed this effect in our experiment, most likely due to the membrane's motion entering the non-linear regime at high drive powers or due to other non-linearities playing a more significant role in the circuit. Below that threshold, our observed relationship between drive power and accuracy points towards a universal relation, in both quantum and classical regimes, between entropy production and clock accuracy.

VI. CONCLUSION AND OUTLOOK
In this work, we have demonstrated a thermomechanical clock which allowed us to reveal a universal relation in the thermodynamics of timekeeping. We first showed that the heating resource introduced to drive the clockwork of our optomechanical setup enhances the accuracy of the clock signal. Modelling our system classically, we then found that the linear relationship between clock accuracy and entropy production, originally derived in an idealised quantum setting, also holds in the classical regime. The universality of this relation provides a clear link between the entropy dissipated by the clock and the quality of the arrow of time.
As an exciting avenue for future investigation, one can imagine interpreting the system as a heat engine, instead of a clock. Since the oscillations of the membrane can induce a current, they are able to produce work, thus mimicking a heat engine that converts unstructured noise into regular beats. For a system of this scale, work fluctuations become crucial, in contrast to a classical macroscopic engine, for which the power delivered in each stroke is approximately the same. This opens up the opportunity of studying work fluctuation relations as well as deriving rates for heat to work conversion. Finally, it would be interesting to see if the noise (heat) driving the membrane could be harnessed from the environment, rather than being input from a characterised source. In this way one would be able to say that the system is truly performing as a useful engine.

ACKNOWLEDGMENTS
We acknowledge useful discussions with G. Milburn, M. Lock, and J. Parrondo, and F. Vigneau's contribution to the experiment. This work was supported by the Royal Society, EPSRC Platform Grant (EP/R029229/1), the ERC (818751) and FQXi Grant (FQXi-IAF19-01). This publication was also made possible through support from Templeton World Charity Foundation and John Templeton Foundation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the Templeton Foundations. MH acknowledges funding from the Austrian Science Fund (FWF) through the START project Y879-N27, FQXi Grant number FQXi-IAF19-03-S2 and the ESQ Discovery Grant Emergent time -operationalism, quantum clocks and thermodynamics of the Austrian Academy of Sciences (ÖAW). P.E. and Y.G. acknowledge funding from the Austrian Science Fund (FWF) through the Zukunftskolleg ZK03.
[Fig. 4. (a) The setup. The Johnson noise of resistor R_0, in equilibrium with a hot bath at temperature T_h, is filtered to pass frequency f_0 with bandwidth f_0/Q_f. The resulting signal, whose power is P_0, is passed to a matched resistor and amplifier at temperature T_c. (b) From the noisy voltage record (points) seen by the amplifier, we can generate clock ticks by estimating the zero crossing of each cycle using a sinusoidal fit (lines). Here ∆t marks the sampling interval, t_tick = 1/f_0 is the average tick interval, ±t_r is the fit range, and ∆t_tick is the fit uncertainty.]

Appendix A: Electromechanical system
The silicon nitride membrane is 50 nm thick and has an area of 1.5 mm × 1.5 mm. 90% of the area of the membrane is metalized with 20 nm of Al. We suspend this membrane over two Cr/Au electrodes patterned on a silicon chip. The capacitor formed between the electrodes and the metalized membrane, which depends on the membrane's displacement, leads to coupling between the cavity and the mechanical motion. The RF circuit is modelled and characterised in Ref. [24]. The entire setup forms a three-terminal circuit with input ports 1 and 3 and output port 2. We used a vector network analyser to measure the scattering parameter (Fig. 2(b)), a spectrum analyser to measure power spectra (Fig. 2(c)), and an oscilloscope to measure the displacement as a function of time (Fig. 2(d)).

Appendix B: Entropy-accuracy relation for a thermomechanical clock
This Appendix derives the entropy-accuracy relation, Eq. (3), which is tested in the main text. We do this by considering two classical clock models. The first is a very simple clock that uses the filtered Johnson noise of a hot resistor. The second is the optomechanical clock, an elaboration of the Johnson-noise clock, which is realised in our experimental setup. As shown below, both designs obey the same relation, which in turn resembles previously derived relations for classical [17] and quantum [4] clocks.
In both models, the clock must derive ticks from a periodic but noisy voltage record. We ask the question: how precisely can any clock identify a tick instant from a segment of this record? From the perspective of the clock, this is clearly a problem of phase estimation. From the n-th segment of the record, an error δφ_n in estimating the phase leads to an error δt_n = t_tick δφ_n/2π in identifying the corresponding tick instant t_n. Thus from Eq. (1), the clock accuracy in any classical model is related to the phase error by

N = (2π)^2 / ⟨(δφ_n)^2⟩,    (B1)

since the tick uncertainty ∆t_tick is by definition the root-mean-square value of δt_n. Furthermore, we require that successive ticks be statistically independent, which means that every tick must be derived from a non-overlapping segment of the record. In what follows, we construct models for δφ_n in two physical scenarios and thus estimate the accuracy of those clock models.

Measuring time from filtered Johnson noise

Figure 4 shows a design for a thermodynamical clock based on Johnson noise. The clock contains two heat baths at temperatures T_h and T_c. Inside the hot bath, at temperature T_h, is a resistor R_0, which is connected via a matched transmission line to an ideal voltage amplifier located in the cold bath at temperature T_c. To ensure an impedance match and thus prevent reflections from the end of the transmission line, an equal resistor R_0 is connected to the amplifier input. A reflective band-pass filter is placed in the transmission line, centred at frequency f_0 and with quality factor Q_f, so that it passes frequencies in a bandwidth of f_0/Q_f near the centre frequency. The combined Johnson noise of the two resistors leads to an incoherent voltage oscillation at the cold amplifier input, whose peak amplitude V_S satisfies

⟨V_S^2⟩ = 2 R_0 P_0,    (B2)

where ⟨·⟩ denotes an average over many oscillations and P_0 is the filtered noise power delivered to the cold resistor. Each oscillation cycle corresponds to one tick of the clock. Demarcating each cycle accurately requires a large oscillation amplitude, meaning that a larger power is dissipated in the cold resistor; this is the thermodynamic price that we aim to quantify. The amplifier measures the input voltage V(t) as a function of time t (Fig. 4(b)). To generate a timing signal, the clock's task is to identify ticks from particular instants of the record, for example those instants at which upward crossings of the t-axis occur. This is the phase-estimation problem described above. The reason that a perfect estimate is impossible even in principle is that the record is contaminated by voltage noise, including the broadband Johnson noise of the cold resistor.
How should the clock best perform a phase estimate, given a segment from the noisy voltage record? The answer is to perform a maximum-likelihood estimation. If the noise is uncorrelated and has a Gaussian distribution, as expected for broadband Johnson noise, this means a least-squares fit to the data [31]. No implementation of the clock can perform better than this.
To this end, we imagine that we have obtained some experimental data; we discretise the time interval in the record into pieces around the expected tick locations (the upward crossings), and fit one curve for each tick of the clock, such that for n ticks we fit n curves. For a particular tick we imagine fitting the function

V(t|φ_n) = V_0 sin(2π f t + φ_n),    (B3)

where V_0 is the oscillation amplitude, f is the frequency, φ_n is the phase, and where we have chosen to fit the n-th tick over the interval 2t_r (see Fig. 4(b)). The parameters V_0 and f can be estimated over several recent oscillation cycles because they are slowly varying properties and are therefore not determined by the noise over a single cycle. The only parameter to fit is thus the phase φ_n, which motivates the notation V(t|φ_n) as per the prescription in [31]. For a particular data set D, the optimal value of the parameter for the n-th tick, denoted φ*_n, is the one that minimises the χ^2 function, defined as

χ^2(φ_n) = Σ_i [V_i − V(t_i|φ_n)]^2 / σ_i^2,    (B4)

where i labels the data points and ranges over the total number of data points, and σ_i is the vertical standard deviation of each point. The uncertainty is then determined by ∆χ^2 = 1 and the curvature parameter α, and follows the expression

∆φ := ⟨(δφ_n)^2⟩^{1/2} = (∆χ^2 / α)^{1/2}.    (B5)
The curvature parameter is calculated from the fitted function and the experimental points i as

    α = Σ_i (1/σ_i²) [∂V(t_i|φ_n)/∂φ_n]².    (B6)

A final value for Eq. (B5) would be obtained by evaluating α at the fitted parameter φ*_n which minimises Eq. (B4), and choosing ∆χ² such that it corresponds to the desired confidence interval. Since we are in the business of constructing a model for the accuracy (i.e. we are not analysing the fit of a particular data set), we must make a statement that is reasonable for all data sets {D} that may emerge from this setup. To do this, we must make a few additional assumptions. First, we are interested in a situation where the oscillation frequency is sharply defined, i.e. Q_f ≫ V_0²/σ_i², which means that within a single cycle σ_i is dominated by the broadband noise at the amplifier input and therefore takes a constant value σ for all data points. Next, we imagine that the n ticks are fitted by choosing n windows (or regions) of length 2t_r, where t_r = 1/2f_0, and the χ² minimisation gives us the value of the crossing φ*_n for each tick. To calculate α in any such region, we idealise Eq. (B6) by imagining a continuum of data points, and thus convert the sum to an integral normalised by ∆t, the sampling interval. This gives

    α = (1/σ²∆t) ∫_{−t_r}^{t_r} [∂V(t|φ_n)/∂φ_n]² dt = (V_0²/σ²∆t) ∫_{−t_r}^{t_r} cos²(2πf_0 t + φ_n) dt = V_0²/(2f_0 σ² ∆t),    (B7)

where the last step assumes t_r has been chosen at the optimal value of 1/2f_0, and without loss of generality the zero of t has been chosen at the centre of the fit interval.
Notice that choosing to fit the function in windows of width 2t_r = 1/f_0 has resulted in an expression for α that is independent of the fitted parameter φ_n. Indeed, the integral in Eq. (B7) is independent of φ_n for all integration regions of width 2t_r = 1/f_0, regardless of where they are centred. Thus, knowledge of the membrane frequency f_0 implies that the standard error in the fitted parameter φ_n is related only to the physical parameters set for the experiment. Also note that on converting the sum to an integral, we expect the two expressions to be approximately equal. Observe that the right-Riemann sum α∆t would overestimate the integral of any monotonically increasing function in the interval, while underestimating it for a monotonically decreasing function. If the parameter is fitted such that it falls roughly within the centre of the window each time (i.e. we place the window roughly where we expect the crossing), the effects of over- and underestimating the symmetric function under the integral roughly balance out.
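Both properties, the phase-independence of α over a full-period window and its continuum value V_0²/(2f_0σ²∆t), can be verified numerically. A minimal sketch with illustrative values of V_0, f_0, σ and ∆t:

```python
import numpy as np

V0, f0, sigma, dt = 1.0, 1.0, 0.2, 1e-4   # illustrative amplitude, frequency, noise, sampling interval
t = np.arange(-0.5, 0.5, dt)              # window of width 2*t_r = 1/f0

def curvature(phi):
    # Discrete curvature sum: alpha = sum_i (1/sigma^2) * (dV/dphi)^2,
    # with dV/dphi = V0*cos(2*pi*f0*t + phi) for the sinusoidal model.
    return np.sum((V0*np.cos(2*np.pi*f0*t + phi))**2) / sigma**2

alpha_model = V0**2 / (2*f0*sigma**2*dt)  # continuum (integral) result
print(curvature(0.0)/alpha_model, curvature(1.2)/alpha_model)   # both near 1
```

The ratio stays at unity for any phase, confirming that the fit uncertainty is set only by V_0, f_0, σ and ∆t.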
To obtain the standard deviation ∆φ_n, we take ∆χ² = 1 in Eq. (B5), giving

    δφ_n = α^(−1/2) = σ (2f_0 ∆t)^(1/2) / V_0.    (B11)

With this, the accuracy N = (2π/δφ_n)² in the Johnson-noise model is

    N = 2π² V_0² / (f_0 σ² ∆t).    (B13)

The per-point standard deviation σ depends on the measurement bandwidth of the amplifier and on the system noise.
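A Monte-Carlo sketch (illustrative parameters) confirms that the scatter of the least-squares phase estimate matches δφ_n = σ(2f_0∆t)^(1/2)/V_0, the form implied by Eqs. (B5) and (B7):

```python
import numpy as np

rng = np.random.default_rng(0)
V0, f0, sigma, dt = 1.0, 1.0, 0.5, 1e-3   # illustrative values
t = np.arange(-0.5, 0.5, dt)              # one fit window of width 1/f0
sin_t, cos_t = np.sin(2*np.pi*f0*t), np.cos(2*np.pi*f0*t)

phis = []
for _ in range(2000):
    # Synthetic record with true phase zero, plus white Gaussian noise
    V = V0*sin_t + rng.normal(0.0, sigma, t.size)
    # Closed-form least-squares phase estimate from quadrature projections
    phis.append(np.arctan2(np.dot(V, cos_t), np.dot(V, sin_t)))

dphi_mc = np.std(phis)                    # observed scatter of the estimate
dphi_model = sigma*np.sqrt(2*f0*dt)/V0    # predicted standard deviation
print(dphi_mc, dphi_model)
```

The agreement reflects that the least-squares estimator saturates the Gaussian Cramér-Rao bound set by the curvature α.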
In the best case, it will be set by the Johnson noise of the cold resistor [32], giving

    σ² = 4 k_B T_c R B,

where B is the measurement bandwidth (defined using the single-sided frequency convention), and the factor 4 appears because the bandpass filter presents an open load except near resonance. In order that successive points are independent but no data is lost, the sampling interval should be related to the bandwidth by B = 1/2∆t. Thus

    σ² ∆t = 2 k_B T_c R.    (B15)

Substituting into Eq. (B11) gives for the phase uncertainty in the interval which we chose to fit

    (δφ_n)² = 4 f_0 k_B T_c R / V_0².

Over many oscillations, V_0 fluctuates, but its root-mean-square value is V_S, given by Eq. (B2). Substituting this and (B15) into Eq. (B13) gives us a model for the accuracy of the clock

    N = π² V_S² / (f_0 k_B T_c R).    (B18)

FIG. 5. The thermomechanical clock. (a) The setup. An optomechanical circuit consists of an LC tank circuit whose capacitance, and therefore frequency, is modulated by a thermomechanical resonator at temperature T_h. To use the vibrations in a clock, the tank circuit is illuminated by a carrier tone V_in at frequency f_c, giving rise to a reflected signal V_out which is passed to a cold matched resistor and amplifier. The effect of the vibrations is to modulate the phase of V_out. (b) Sketch of the resulting voltage record at the amplifier input (points), with fits (lines) from which each tick is extracted. The modulation envelope is indicated by the shaded background. Inset: power spectrum at the amplifier input, showing a uniform noise background, a central delta-function peak from the carrier, and two thermomechanical sidebands. In the experiment, a demodulation circuit was applied after the amplifier (as in Fig. 2(d)) because it makes ticks practically easier to identify in the record. However, the demodulator cannot improve the clock accuracy because it cannot increase the timing information present in the signal V(t); in fact a detailed calculation would show that the accuracy is unchanged. For simplicity the demodulator is therefore omitted from this model.

The clock creates entropy because the power carried by the electrical oscillation is converted to heat in the cold resistor. The entropy creation rate can be written

    Ṡ = P (1/T_c − 1/T_h),    (B19)

since the net power transferred is

    P = V_S² / (2R).

Combining this expression with Eq. (B18) gives the accuracy in terms of the entropy created:

    N = 2π² ∆S_tick / [k_B (1 − T_c/T_h)] ≈ 2π² ∆S_tick / k_B,    (B21)

where ∆S_tick ≡ Ṡ/f_0 is the entropy generated per tick and the approximation holds for T_h ≫ T_c. This best-case scenario (i.e. smallest σ) provides an upper bound for the best achievable accuracy of an experiment of this type. Thus, we can expect this model to overestimate the accuracy compared to that coming from a live experiment. Similar expressions to Eq. (B21) hold for a classical clock defined by transitions on a network [17] and for an autonomous quantum clock [4]; however, in both these cases the factor 2π² is replaced by 1/2.
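The chain of substitutions above can be checked directly; the sketch below uses purely illustrative values and takes the entropy rate in the T_h ≫ T_c limit, so the two routes to the accuracy coincide:

```python
import math

kB = 1.380649e-23   # Boltzmann constant (J/K)
# Illustrative rms amplitude, resistance, cold temperature, tick frequency
VS, R, Tc, f0 = 1e-6, 50.0, 0.04, 1.0e6

P = VS**2/(2*R)     # power dissipated in the cold resistor
S_dot = P/Tc        # entropy creation rate in the T_h >> T_c limit
S_tick = S_dot/f0   # entropy generated per tick

N_circuit = math.pi**2 * VS**2/(f0*kB*Tc*R)   # accuracy from circuit parameters
N_entropy = 2*math.pi**2 * S_tick/kB          # accuracy from entropy per tick
print(N_circuit, N_entropy)                   # identical by construction
```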

Measuring time from an optomechanical signal
In this section we build a classical model that predicts the accuracy, which we call N_C, for a scheme closer to our experimental setup. Figure 5 shows the optomechanical setup, which serves as the clock of our experiment. The clock works by illuminating a tank circuit containing a vibrating membrane with an RF tone of power P_c (Fig. 5(a)). The thermal motion of the membrane modulates the phase of the reflected signal, and from this signal the ticks are derived. This is the principle of the clock realised in our experiment. The advantage of this clock over the version of Fig. 4 is that the reflected signal can be increased by increasing P_c as well as by heating the membrane more strongly. As this section will show, this clock obeys a similar entropy-accuracy relation to Eq. (B21). The voltage incident on the tank circuit is

    V_in(t) = V_c cos(2πf_c t),    (B22)

where V_c = √(2R_0 P_c) and f_c are respectively the amplitude and frequency of the illumination signal, and the characteristic impedance of the transmission line is assumed equal to R_0. The reflected amplitude is therefore

    V_out(t) = Γ V_c cos(2πf_c t + β x(t)),    (B23)

where Γ is the cavity reflection coefficient, β is the mechanical coupling strength, and x(t) is the instantaneous membrane displacement. The phase reference plane is assumed to be chosen so that the phase is zero at the membrane's equilibrium position. The membrane vibrates with a mechanical temperature T_h. If its quality factor is high, the mechanical amplitude x_0 and phase φ are approximately constant over one oscillation cycle, meaning that the displacement is

    x(t) = x_0 sin(2πf_0 t + φ).    (B24)

In this experiment, the electromechanical coupling is weak, meaning that βx_0 ≪ 1. This means that we can substitute Eq. (B24) into Eq. (B23) and expand to lowest order in βx_0, giving

    V_out(t) ≈ Γ V_c [cos(2πf_c t) − βx_0 sin(2πf_0 t + φ) sin(2πf_c t)].    (B25)

In other words, the reflected signal is modulated at frequency f_0 with phase φ, as sketched in Fig. 5(b). Each full cycle of the modulation is one period of the clock.
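A short numerical sketch (parameters purely illustrative) makes the sideband structure explicit: expanding the phase modulation to first order in βx_0 leaves the carrier at f_c plus two sidebands at f_c ± f_0, as in the inset of Fig. 5(b).

```python
import numpy as np

fs, T = 4096, 1.0            # sampling rate (Hz) and record length (s)
fc, f0 = 200.0, 8.0          # illustrative carrier and mechanical frequencies
beta_x0, phi = 0.05, 0.7     # small modulation index beta*x0, mechanical phase
t = np.arange(0, T, 1/fs)

# First-order expansion of Gamma*Vc*cos(2*pi*fc*t + beta*x(t)), with Gamma*Vc = 1
V = np.cos(2*np.pi*fc*t) - beta_x0*np.sin(2*np.pi*f0*t + phi)*np.sin(2*np.pi*fc*t)

spec = np.abs(np.fft.rfft(V))/len(t)
peaks = np.flatnonzero(spec > 1e-6)
print(peaks)   # frequency bins carrying power: fc - f0, fc, fc + f0
```

The product sin(2πf_0t + φ) sin(2πf_ct) splits into two tones at f_c ± f_0, which is why the spectrum contains exactly three lines at this order.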
To generate ticks, the clock must identify a particular point of the modulation cycle, which implies it must precisely identify φ. As in Section B 1, we want to know how accurately this can be done in principle. Again, we imagine we have obtained a set of experimental data and wish to know how accurately the n-th tick can be identified. We proceed by fitting the function

    V(t|φ_n) = A_0 cos(2πf_c t) + A_1 sin(2πf_0 t + φ_n) sin(2πf_c t)    (B26)

in windows of width 1/f_0 around the expected tick locations. The parameters A_0, A_1, f_c, and f_0 can be extracted over several recent cycles, and are thus known values. Therefore, just as in Section B 1, we are performing a one-parameter fit.
We imagine that for some dataset we minimise Eq. (B4) for the function in Eq. (B26), which gives us the optimal parameter φ*_n. We now want to know: what is the error in this fit, given the optomechanical setup we have described? We follow the recipe given in the previous section and proceed to calculate the curvature parameter of our model,

    α = (1/σ²∆t) ∫_{−t_r}^{t_r} A_1² cos²(2πf_0 t + φ_n) sin²(2πf_c t) dt
      = (A_1²/4σ²∆t) ∫_{−t_r}^{t_r} [1 + cos(4πf_0 t + 2φ_n) − cos(4πf_c t) − cos(4πf_0 t + 2φ_n) cos(4πf_c t)] dt,

where t_r = 1/2f_0 is the fit range. The term at frequency 2f_0 integrates to zero exactly over the window, and since the fit window extends over many cycles of the carrier tone, i.e. t_r ≫ 1/f_c, the last three oscillatory terms make a negligible contribution to the integral, leaving

    α = A_1² t_r/(2σ²∆t) = A_1²/(4f_0 σ² ∆t).

Since the tank circuit presents an open electrical impedance except at its resonance frequency, the Johnson noise again obeys Eq. (B15), leading to

    (δφ_n)² = α⁻¹ = 8 f_0 k_B T_c R_0 / A_1²,

which implies that δφ_n = √(8f_0 k_B T_c R_0)/A_1 and

    N_C = (2π/δφ_n)² = π² A_1² / (2f_0 k_B T_c R_0).

To connect this to thermodynamic quantities in the experiment, we recognise that A_1 is related to the combined power P_SB in the two sidebands by

    P_SB = A_1² / (4R_0),

so that

    N_C = 2π² P_SB / (f_0 k_B T_c).    (B34)

Entropy is created because the reflection from the tank circuit containing the hot resonator leads to irreversible heating in the cold resistor. Equation (B25) and the inset of Fig. 5(b) show that there are potentially two contributions to the heat: the reflected carrier, which is a coherent monochromatic tone at frequency f_c; and the two incoherent sidebands centred at f_c ± f_0. However, the carrier contains no information about x(t). In principle (although this was not implemented in our experiment) a narrowband filter could be used to direct this portion of the spectrum back towards the tank circuit without affecting the accuracy of the clock. Thus the reflected carrier does not contribute to the fundamental entropy cost of the clock. Instead, the unavoidable entropy increase is determined by the two sidebands, which dissipate heat P_SB in the cold resistor. The entropy creation rate is

    Ṡ = P_SB / T_c.

In contrast to Eq. (B19), there is no decrease of entropy in the hot element, because illuminating the membrane at the cavity frequency does not cool it. Thus we can re-express Eq.
(B34) in terms of the entropy generated per tick, leading to

    N_C = 2π² ∆S_tick / k_B,    (B37)

where ∆S_tick ≡ Ṡ/f_0 = P_SB/(T_c f_0). Equation (B37) is the fundamental entropy-accuracy relation for the optomechanical clock. There is one more adjustment which must be made to compare Eqs. (B37-B38) to experiment. The derivation above assumed that the amplifier noise is much less than the Johnson noise of the cold resistor. Although this is perfectly possible, it is also common (and is the case in our experiment) that other noise sources contribute, leading to a decrease in accuracy that reflects technical imperfections in the voltage measurement rather than any fundamental bound. To account for this possibility, Eq. (B37) should be generalized to

    N_C = 2π² (T_c/T_N) ∆S_tick / k_B,    (B39)

where T_N is the effective temperature, including the Johnson noise of the cold resistor, determined by the noise in the record.
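The algebra connecting the fitted sideband amplitude, the sideband power, and the entropy per tick can be checked directly. The sketch below uses purely illustrative values, starting from the phase uncertainty δφ_n = √(8f_0 k_B T_c R_0)/A_1 quoted in the text and assuming the sideband power takes the form P_SB = A_1²/(4R_0) with Ṡ = P_SB/T_c:

```python
import math

kB = 1.380649e-23   # Boltzmann constant (J/K)
# Illustrative sideband amplitude, line impedance, cold temperature, tick frequency
A1, R0, Tc, f0 = 2e-7, 50.0, 0.04, 1.0e6

dphi = math.sqrt(8*f0*kB*Tc*R0)/A1    # phase uncertainty per tick
N_fit = (2*math.pi/dphi)**2           # accuracy from the phase fit

P_SB = A1**2/(4*R0)                   # assumed combined power in the two sidebands
S_tick = P_SB/(Tc*f0)                 # entropy per tick, from S_dot = P_SB/Tc
N_entropy = 2*math.pi**2 * S_tick/kB  # entropy-accuracy relation
print(N_fit, N_entropy)               # identical by construction
```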
To evaluate Eq. (B39) from the experiment, we express its components in terms of the output signal's power spectrum S_VV, which is proportional to the modulus squared of the Fourier transform of the record V(t). In this language, the effective temperature is given by

    k_B T_N = S_VV^(N) / (4R_0),

where S_VV^(N) is the single-sided average spectral density of the noise in the Fourier-transformed signal, i.e. the average background level of the power spectrum. In terms of the power spectrum, the heat P_SB in the cold resistor is given by integrating the excess spectral density (i.e. the signal) above the noise background, the integral running over both sidebands:

    P_SB = (1/R_0) ∫_SB S_VV^(S)(f) df.

Thus the classical model predicts the accuracy from the experimental data to be

    N_C = [8π²/(f_0 S_VV^(N))] ∫_SB S_VV^(S)(f) df
        = 8π² ⟨V_S²⟩ / (f_0 S_VV^(N)),    (B43)

where S_VV^(S) is the excess (signal) spectral density above the noise background, ⟨V_S²⟩ is the mean-square signal voltage in the record, and the second line follows from Parseval's theorem. In practice, our analysis applies Eq. (B43) to the record of the demodulated voltage as in Fig. 2(d). Since demodulation does not change the signal-to-noise ratio, Eq. (B43) remains valid, with the integral now taken over the single signal peak.
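This spectral recipe can be exercised on synthetic data (all parameters illustrative): a sinusoidal "signal" plus white noise is Fourier transformed, the average background level plays the role of S_VV^(N), and the excess spectral density in the signal bins integrates, by Parseval's theorem, to the mean-square signal voltage.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, N = 4096, 4096                        # sampling rate (Hz) and record length
t = np.arange(N)/fs
Vs_amp, f_sig, sigma = 0.1, 200.0, 0.05   # illustrative signal amplitude/frequency and noise

V = Vs_amp*np.sin(2*np.pi*f_sig*t) + rng.normal(0.0, sigma, N)

# Single-sided periodogram, normalised so that sum(S)*df = mean(V**2)
df = fs/N
S = 2*np.abs(np.fft.rfft(V))**2/(N*fs)
S[0] /= 2
S[-1] /= 2

freqs = np.fft.rfftfreq(N, 1/fs)
sig_bins = np.abs(freqs - f_sig) < 2.0            # bins containing the signal peak

noise_floor = np.mean(S[~sig_bins])               # background level (role of S_VV^(N))
P_excess = np.sum(S[sig_bins] - noise_floor)*df   # integral of the excess spectral density
print(P_excess, Vs_amp**2/2)   # Parseval: excess integrates to the mean-square signal
```

In the experiment the same two quantities, background level and excess integral, are read off the measured power spectrum and combined via Eq. (B43).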