Deep-Learning-Based Radio-Frequency Side-Channel Attack on Quantum Key Distribution

Quantum key distribution (QKD) protocols are proven secure based on fundamental physical laws; however, the proofs consider a well-defined setting and encoding of the sent quantum signals only. Side channels, where the encoded quantum state is correlated with properties of other degrees of freedom of the quantum channel, allow an eavesdropper to obtain information unnoticeably, as demonstrated in a number of hacking attacks on the quantum channel. Yet classical radiation emitted by the devices may also be correlated, leaking information on the potential key, especially when combined with novel data analysis methods. Here we demonstrate a side-channel attack using a deep convolutional neural network to analyze the recorded classical, radio-frequency electromagnetic emissions. Even at a distance of a few centimeters from the electronics of a QKD sender employing frequently used electronic components, we are able to recover virtually all information about the secret key. Yet, as shown here, countermeasures can enable a significant reduction of both the emissions and the amount of secret key information leaked to the attacker. Our analysis methods are independent of the actual device and thus provide a starting point for assessing the presence of classical side channels in QKD devices.


I. INTRODUCTION
Quantum Key Distribution (QKD) [1][2][3][4][5][6] is one of the most mature quantum technologies. It allows two authenticated parties to use a quantum channel to exchange a cryptographic secret, which they can later use for symmetric cryptography. Using fundamental physical principles, QKD allows one to quantify the amount of information leaked to an eavesdropper and to subsequently eliminate it entirely using appropriate postprocessing. QKD is used both for short-distance and long-distance communication via free-space [7][8][9][10][11] and fiber-based [12][13][14][15] links, with first or planned implementations in large networks [16,17]. With a plethora of testbeds and implementations in multinational industry-oriented consortia, QKD has reached commercial end-user availability.
Yet, despite its conceptual elegance, the practical security hinges upon the quality of the implementation, in particular, strict adherence to the theoretical model used to prove security. The quantum states sent over the quantum channel have to be prepared precisely within the requirements of the QKD protocol. Any correlation of the encoded states with any other degree of freedom, or with classical properties of the devices used, potentially opens side channels [5,6,18,19]. These allow an eavesdropper to infer the key by measurements unnoticeable to the users. We refer to side channels exploited by interacting with the quantum channel as quantum side channels and to all other side channels as classical side channels.
Electronic devices continually emit electromagnetic radiation and are in turn influenced by it. Thus, the operation of security-critical devices may be influenced by active attacks [20] rendering them insecure, as demonstrated on quantum random number generators [21]. However, if emissions from a device are correlated with sensitive information processed by it, a critical side channel opens up that allows much simpler passive attacks. These do not require any manipulation of device components and are practically impossible to detect. Investigations of information leakage from conventional communication systems via electromagnetic radiation go back to at least the 1940s, later under the US military codename TEMPEST [22]. TEMPEST attacks now refer to eavesdropping via electromagnetic or acoustic side channels and are widely considered in security specifications and during certification of security-critical systems. Technologies such as software-defined radio (SDR), specialized probes [23] and particularly deep learning [24,25] make the exploitation of vulnerabilities much easier and more effective.
In reaction to quantum hacking attacks on QKD devices, countermeasures have been developed to protect and secure against side-channel attacks on the quantum channel [6]. Yet, as standard electronic components, especially logic units such as FPGAs, ASICs or CPUs, emit electromagnetic radiation, they also open a new, classical side channel for attacks on potentially every QKD system.
Here, we demonstrate a deep-learning-based side-channel attack on a QKD device using radio-frequency (rf) emissions at frequencies up to a few GHz. Our setup does not require expensive specialized equipment and works with few computational resources. In some scenarios, our attack is able to recover virtually all information about the secret key. In contrast to a recent attack on QKD single-photon detector electronics [26], our attack targets the control electronics in general. We demonstrate its power and the security threat it poses on a QKD sender module. We analyze how the information leakage depends on the distance to the device, as well as to which extent it can be mitigated with improved design and shielding. The QKD sender electronics inspected here is home-built, but, since it is made from conventional electronic components also used in other QKD systems, it is representative of other, including commercial, systems. In addition, our data evaluation may also be applied to attacks via other weak points, e.g., power consumption [27] or acoustic side channels [28]. This clearly demonstrates that a detailed examination of classical side channels is important for future QKD devices and networks. Emission security should be considered from the early design stages [29] until the deployment of QKD devices.
II. SETUP AND MEASUREMENTS

A. Experimental setup

The sender module is a home-built BB84 polarization-encoding, decoy-capable QKD device building upon the device presented in [9]. It features a field-programmable gate array (FPGA), which controls four distinct vertical-cavity surface-emitting laser (VCSEL) drivers [30][31][32]. The drivers are connected to four VCSELs emitting short light pulses with a wavelength around 850 nm, which are subsequently polarized by differently rotated polarization filters. This way, the sender device can emit optical pulses with any of the four polarization directions (horizontal/H, vertical/V, diagonal/P, anti-diagonal/M). For the measurements presented here, the module is sending random streams of symbols (H, V, P, M) at a symbol rate of f_clk = 100 MHz.
For our attack, we record electromagnetic near-field emissions from the printed circuit board (PCB) of the QKD sender using a magnetic near-field probe, an rf amplifier and an oscilloscope [30] with a bandwidth of 8 GHz, see Fig. 1. Given the symbol rate of 100 MHz, we sample the emissions signal at f_samp = 10 GSa/s to obtain 100 voltage samples per symbol. The oscilloscope memory sets the length of the total time trace to N_meas = 2 MSa, corresponding to a measurement duration of t_seq = 200 µs. We hence have sequences of length N_seq = t_seq × f_clk = 20 000 symbols sent by the sender. Besides near-field emissions, we also record far-field emissions using a commercially available directed wideband log-periodic dipole antenna.
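The acquisition parameters quoted above are mutually consistent, as the following short sketch checks (values from the text; variable names are our own):

```python
# Acquisition parameters of the near-field measurement (values from the text).
F_CLK = 100e6          # symbol (clock) rate: 100 MHz
F_SAMP = 10e9          # oscilloscope sampling rate: 10 GSa/s
N_MEAS = 2_000_000     # oscilloscope memory: 2 MSa per time trace

samples_per_symbol = int(F_SAMP / F_CLK)   # voltage samples per symbol
t_seq = N_MEAS / F_SAMP                    # total trace duration in seconds
n_seq = int(t_seq * F_CLK)                 # symbols covered by one trace

print(samples_per_symbol)  # 100
print(t_seq)               # 0.0002, i.e. 200 µs
print(n_seq)               # 20000
```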

B. Near-field spectrum
The spectrum of the recorded emissions contains a significant signal at the clock frequency f_clk, see Fig. 2. Due to the measurement taking place in a noisy office environment, emissions in communication bands (e.g. Wi-Fi, UMTS) contribute to the measured signal. Since the deep-learning methods used in our attack deal well with such noisy signals, we make no attempt to remove the background noise and use no manual filtering beyond what naturally occurs in the measurement devices (probe, amplifier, and oscilloscope). Our methodology thus has the advantage that it can cope with standard environments typical, e.g., of server rooms. This makes the attack scenario more realistic and enables device evaluation without the need for specialized shielded facilities.
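As an aside, the spectral estimate used for Fig. 2 (Bartlett's method, segment length 100 000 samples) is equivalent to Welch's method with a rectangular window and non-overlapping segments. A minimal sketch on a synthetic clock-like tone, assuming SciPy is available (the tone and noise level are stand-ins for the measured trace):

```python
import numpy as np
from scipy.signal import welch

fs = 10e9                                   # 10 GSa/s sampling rate
rng = np.random.default_rng(0)
t = np.arange(1_000_000) / fs
# Synthetic stand-in: a 100 MHz "clock" tone buried in white noise.
x = np.sin(2 * np.pi * 100e6 * t) + 0.1 * rng.normal(size=t.size)

# Bartlett's method: rectangular window, no overlap between segments.
f, psd = welch(x, fs=fs, nperseg=100_000, noverlap=0, window="boxcar")
peak = f[np.argmax(psd)]                    # strongest spectral line
print(peak)                                 # 100000000.0 (the clock frequency)
```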

C. Single and averaged time traces
Let us first examine the data in the time domain. For that, we measure a time trace using the near-field probe while the device is repeating a fixed pseudo-random sequence of 20 000 symbols. Time synchronization between symbols in the key and the measured time trace is achieved using the phase of the clock signal, which is digitally extracted from the emissions, as well as a separately recorded trigger signal signifying the time of the first symbol in the key. We verified that the measurement of the trigger signal does not influence the performance of our attack, see App. A.
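The clock-phase extraction can be illustrated by projecting the trace onto a complex exponential at f_clk, which amounts to evaluating a single DFT bin. This is a hedged sketch on a synthetic tone, not the exact extraction used in our pipeline:

```python
import numpy as np

def clock_phase(trace, f_clk=100e6, f_samp=10e9):
    """Phase of the f_clk component of `trace` (single DFT-bin projection)."""
    n = np.arange(trace.size)
    ref = np.exp(-2j * np.pi * f_clk * n / f_samp)
    return np.angle(np.sum(trace * ref))

# Synthetic check: a pure clock tone with a known phase offset of 0.7 rad.
f_samp, f_clk, phi = 10e9, 100e6, 0.7
n = np.arange(100_000)
trace = np.cos(2 * np.pi * f_clk * n / f_samp + phi)
print(round(clock_phase(trace, f_clk, f_samp), 3))  # 0.7
```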
The near-field probe signal is split into snippets with a length corresponding to a few symbols. Since the electronic processes that produce the symbol, especially in the FPGA, take several clock cycles, we thereby make sure to capture all relevant information. This yields a set of snippets of the time trace together with the respective sub-sequence of the key. Here, for illustration, we choose a snippet length of seven symbols.
To get more insight, consider, e.g., the sub-sequences matching the pattern "??VXV??", where "?" can be any symbol. We group them according to the center symbol "X" and show their respective snippets in Fig. 3. Precise features of these individual time traces are difficult to identify, and finding the symbol sequence corresponding to a given time trace by the naked eye seems hard. However, a first view of the raw data reveals common features and suggests that changes and specific patterns in the measured magnetic field amplitudes correspond to switching between different adjacent symbols, rather than just the symbols themselves. When the signals corresponding to the same sub-sequence are averaged (Fig. 3, bottom), the four different averages differ significantly more around the varying center symbol than at the outermost symbols, where they roughly reproduce the clock signal.
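A minimal sketch of this snippet extraction and pattern-based averaging, on a toy trace where each symbol simply emits a constant level (the levels and the shorter three-symbol pattern "V?V" are illustrative, not measured values):

```python
import numpy as np

SAMPLES_PER_SYMBOL = 100  # at 10 GSa/s and a 100 MHz symbol rate
LEVEL = {"H": 0.0, "V": 1.0, "P": 2.0, "M": 3.0}  # toy emission levels

def snippets(trace, key, length=3):
    """Yield (key excerpt, trace snippet) pairs centred on each key position."""
    half = length // 2
    w = length * SAMPLES_PER_SYMBOL
    for i in range(half, len(key) - half):
        start = (i - half) * SAMPLES_PER_SYMBOL
        yield key[i - half:i + half + 1], trace[start:start + w]

def average_by_center(trace, key, pattern="V?V"):
    """Average all snippets whose excerpt matches `pattern` ('?' = wildcard),
    grouped by the center symbol."""
    groups = {}
    for sub, snip in snippets(trace, key, len(pattern)):
        if all(p == "?" or p == s for p, s in zip(pattern, sub)):
            groups.setdefault(sub[len(sub) // 2], []).append(snip)
    return {sym: np.mean(np.stack(s), axis=0) for sym, s in groups.items()}

key = "HVHVHVVVHVPVHVMVH"   # toy key containing VHV, VVV, VPV and VMV
trace = np.repeat([LEVEL[s] for s in key], SAMPLES_PER_SYMBOL)
avgs = average_by_center(trace, key)
print(sorted(avgs))  # ['H', 'M', 'P', 'V']
```

As in Fig. 3, the averages differ around the center symbol, while the outer regions (here both 'V') coincide.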

III. MACHINE LEARNING-BASED ATTACK
Conventional methods to extract confidential information from such emission data require both specialized knowledge of signal processing and detailed models of the emissions, limiting the relevance of the resulting attacks to certain domains of application and specific types of devices or electronic components. In contrast, when using machine learning (ML) techniques [33][34][35], there is no need to understand how the emissions arise. Rather, an effective statistical model of the phenomena is created from recorded data by a training procedure, making the approach more general and adaptable. Since we are able to collect training data in a known, controlled environment, we apply supervised learning, as opposed to, e.g., unsupervised learning [36] or reinforcement learning [37], which may be promising for other types of attacks.
The attacker's task is mapping a one-dimensional time series (the recorded emissions) to a sequence of symbols (the raw key). To solve it, general sequence-to-sequence methods could be used, such as transformer neural networks [38]. However, the task can be simplified further by assuming that there are no significant long-time correlations between electromagnetic emissions and the symbol sequence, i.e., that the emissions at a given time only depend on the symbols currently being processed but not on all symbols processed at earlier times. Note that the presence of such correlations in the behaviour of the electronics could indicate serious security problems of the QKD device [39,40].
With this assumption, it suffices to be able to map a short snippet of the time trace to the symbol sent during that time. Applying the mapping individually to each snippet then yields the entire key. The attack thus becomes a classification task, i.e., mapping a one-dimensional fixed-length time series to one of four classes (H, V, P, M). To solve it, we design and train a convolutional neural network. A snippet length of 500 samples (corresponding to five symbols) as input to the neural network proved sufficient for our attack. When predicting the center symbol, this length ensures that all relevant information is contained in the snippet while lowering demands on the precision needed in synchronizing the time trace with the symbol sequence. The attack also works with shorter snippet lengths but performs slightly worse.
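The shape of this classification step can be sketched as follows: a 500-sample snippet passes through a one-dimensional convolution, a nonlinearity, global max pooling, and a linear read-out to four logits. The layer sizes and the random weights below are purely illustrative; the actual architecture and trained weights are described in App. D.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid 1-D cross-correlation: x (length,), kernels (n_filters, width)."""
    windows = np.lib.stride_tricks.sliding_window_view(x, kernels.shape[1])
    return windows @ kernels.T            # (length - width + 1, n_filters)

def predict(snippet, kernels, w_out):
    feat = np.maximum(conv1d(snippet, kernels), 0)  # convolution + ReLU
    pooled = feat.max(axis=0)                       # global max pooling
    return pooled @ w_out                           # logits for H, V, P, M

kernels = rng.normal(size=(8, 15))    # 8 illustrative filters of width 15
w_out = rng.normal(size=(8, 4))       # linear read-out to 4 classes
logits = predict(rng.normal(size=500), kernels, w_out)
print(logits.shape)  # (4,)
```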

A. Attacker model
Our experimental design and data evaluation are motivated by so-called profiled attacks [41]. For such attacks, the attacker prepares for the actual attack while having full access to a copy of the victim's device. This assumption is in accordance with Kerckhoffs's principle [42] that the security of a system should not depend on the secrecy of its design, and is thus appropriate for QKD devices.
A profiled attack consists of two phases. First, in the so-called profiling/training phase, the attacker uses the copy of the target device and records data corresponding to known symbol sequences chosen at will. The data is used to create a model which captures the correlations between secret information and side channels.
In our case, this so-called training dataset consists of a sequence of key symbols, say, y_1^train, y_2^train, ..., together with the correspondingly recorded emissions x_1^train, x_2^train, .... In the second, so-called attack/test phase of the profiled attack, the attacker performs a measurement upon the victim's device during its normal operation, i.e., where the attacker has no control over or access to any information except the recorded emissions. The attacker records a test dataset comprising recorded emissions only, say, x_1^test, x_2^test, ..., x_{N_test}^test, and obtains an estimate of the key as the predictions of the previously trained model.
To evaluate the success of the attack, we make use of our access to the true sequence of sent symbols and define the test accuracy as N_correct/N_test, where N_correct is the number of symbols correctly predicted by the neural network. Since the four unique symbols are equally likely to occur in the key, random guessing gives a prediction accuracy of 25%. We consider an attack successful (in extracting above-zero information about the key) if the test accuracy exceeds random guessing by more than three standard deviations of the binomial distribution. For a success probability of 25% and 20 000 trials, this implies accuracies should be above 25.92%.
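The 25.92% threshold follows directly from the standard deviation of the binomial accuracy estimate:

```python
from math import sqrt

# Random guessing succeeds with p = 1/4 per symbol; over n = 20 000 symbols
# the accuracy estimate has standard deviation sqrt(p(1-p)/n).
p, n = 0.25, 20_000
sigma = sqrt(p * (1 - p) / n)
threshold = p + 3 * sigma
print(f"{threshold:.4f}")  # 0.2592, i.e. 25.92 %
```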
We use the prediction accuracy because it is intuitive and allows us to evaluate the attack on the sender module alone without having to discuss sifting or basis choice. A high prediction accuracy implies a successful attack. However, prediction accuracy does not directly correspond to the amount of secret information gained by the attacker. Even a small but above-random prediction accuracy may still allow a critical attack (see App. B). How the raw key symbol prediction accuracy relates to information leakage is further discussed in App. C.
Note that for a QKD device, normal operation implies that the secret key is used only once. This means that the attacker has access to only a single time trace of the emissions to extract information about the key. While this makes the attacker's task more difficult, we adhere to this restriction, resulting in a single-trace attack.

B. Neural network architecture and training
Our neural network architecture consists of fully connected layers, one-dimensional convolutions, max pooling, and batch normalization to process traces of the emissions in the time domain. To make better use of the data, we employ data augmentation. For more details on the architecture of the network and the data augmentation, see App. D.
Training state-of-the-art neural networks can require very large datasets and is typically performed on graphics processing units (GPUs) or tensor processing units (TPUs) due to the large computational resources required. Since our model is rather small and operates on one-dimensional data, training on a standard laptop with GPU support only takes a few minutes. As moving the probe to a new location and performing the measurement takes only a few seconds, our method allows us to identify vulnerable components almost in real time.

IV. RESULTS

A. Near-field measurements
Figure 5. a) Test accuracy of our neural network when trained and tested at the respective positions and b) RMS amplitude of the recorded emissions. Note that both power and accuracy also depend on the angular orientation of the near-field probe, i.e., its rotation about the axis perpendicular to the board, which has been kept fixed. A green star indicates the location used for the distance measurement (Fig. 4).
We perform the measurement procedure described in Sec. II A for various locations of the magnetic near-field probe and collect independent datasets for each location as described in Sec. III A. Two raw keys (each of length 20 000) are created by a pseudo-random number generator with different seeds, such that all four symbols are equally likely. One of those is used for the training measurements, the other one for the test measurements. This is crucial to avoid overfitting [43] and misinterpretation of results.
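Generating such balanced, independently seeded keys can be sketched as follows (a minimal illustration; the actual generator is part of the measurement pipeline [49]):

```python
import numpy as np

SYMBOLS = np.array(list("HVPM"))

def random_key(length, seed):
    """Pseudo-random key with all four symbols equally likely."""
    return SYMBOLS[np.random.default_rng(seed).integers(0, 4, size=length)]

# Different seeds give independent keys for training and testing.
train_key = random_key(20_000, seed=1)
test_key = random_key(20_000, seed=2)
```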
Although a single time trace, i.e., the recorded emissions while sending 20 000 symbols, is sufficient to demonstrate information leakage, we combine several measurements to increase the amount of training data and thus improve attack performance. Trading off longer measurement time for attack performance (see App. D), each training dataset contains snippets obtained from seven combined time traces (equivalent to a symbol sequence of 140 000 symbols), unless indicated otherwise. To monitor how much the results depend on unrelated classical communication and background fluctuations in the noisy office environment, we record each test dataset three times. The test datasets are not combined but evaluated separately and independently, thus meeting the requirements for a single-trace attack.
As the first step, we investigate which components and areas of the circuit board contribute to rf emissions or leak information about the key. We measure at locations given by a 2-D grid spaced at 10 mm in x and y directions while keeping the magnetic near-field probe at a fixed distance of 10 mm from the board. As shown in Fig. 5a, especially measurements close to the FPGA allow us to retrieve the key with high accuracy. Other components, such as the voltage regulator, also produce significant rf emissions, which, however, are not correlated with the symbols and effectively reduce the attacker's signal-to-noise ratio, leading to lower test accuracy at those locations. On the other hand, we obtain high test accuracies also in regions with small amplitudes of recorded emissions (Fig. 5b). This shows that the test accuracy cannot be inferred from the amplitude of the recorded emissions.
As the second step, we investigate how the distance of the probe from the circuit board affects the accuracy of our attack. We select a location (see Fig. 5) close to the FPGA, which promises a successful attack. Positioning the probe above this location at various distances from the board, we observe a decrease of the test accuracy with increasing distance, as shown in Fig. 4. Yet, the accuracy remains above the random-guessing value up to distances of 8 cm.

B. Far-field measurements
At distances larger than a few centimeters, the near-field probe is no longer effective. To investigate whether emissions can still be detected at much larger distances, we use a log-periodic dipole antenna [30]. Due to high background noise in the environment and non-ideal antenna characteristics, we are not able to extract key symbols using the neural network. Thus, we pursue the more modest goal of investigating whether any emissions at all are present and whether they contain non-zero information about the operation of the QKD device. To demonstrate non-zero information, it is sufficient to use the emissions to reliably and consistently distinguish two modes of operation of the device. To show this, we record about 500 emission spectra of our unshielded sender device at a distance of about 2.5 m for two different modes of operation of the QKD sender: sending a random key ("key"), or being turned on but idle ("no key"). To exclude the influence of background variations in time, the two modes are alternated many times during data collection within the datasets. By studying the spectra of a training dataset (396 spectra), we manually identify a spectral region with a peak which seems highly correlated with the mode of operation (see Fig. 6). Using the selected frequency interval around 1.7 GHz, different machine-learning approaches (support-vector classification, k-nearest-neighbors classification, linear discriminant analysis) can clearly distinguish the two modes of operation (test accuracy of 100% for a test dataset of 94 spectra).
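A hedged sketch of this two-mode discrimination on synthetic spectra, assuming scikit-learn is available (the "emission peak" injected into a few bins is a stand-in for the feature near 1.7 GHz; the real spectra are available at [50]):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def spectrum(sending_key):
    """Synthetic spectrum over the selected frequency interval."""
    psd = rng.normal(1.0, 0.1, size=50)   # noisy background band
    if sending_key:
        psd[20:25] += 0.5                 # extra device emission when sending
    return psd

# Alternate the two modes, as in the measurement, then train and test.
X = np.stack([spectrum(i % 2 == 0) for i in range(396)])
y = np.array([i % 2 == 0 for i in range(396)])
clf = SVC().fit(X, y)

X_test = np.stack([spectrum(i % 2 == 0) for i in range(94)])
y_test = np.array([i % 2 == 0 for i in range(94)])
acc = clf.score(X_test, y_test)
print(acc)  # perfect separation on this well-separated synthetic data
```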
Although we are not able to reconstruct the key with this equipment and analysis, this result indicates the possibility of information leakage also over larger distances [44].

V. COUNTERMEASURES
There are numerous design and shielding techniques for reducing emissions and preventing information leakage via rf emissions [29,45,46]. The following countermeasures significantly reduced the emissions from a revised version of our electronics, thus making the attack much less effective (Fig. 7): the attack is no longer successful for distances larger than 5 cm (compared to 9 cm for the former revision, as shown in Fig. 4).
An FPGA in a ball-grid-array footprint has been chosen, with proper care taken in differential signal routing, grounding, and the placement of decoupling capacitors. Critical signals have been routed in layers shielded by a ground and a supply-voltage plane. An optimization of the FPGA design has not been done, but could further lower the emissions.
With the addition of metallic shielding with a thickness of a few millimeters, our attack could no longer perform better than random guessing. However, note that for a QKD device, an optical channel design with a large puncture of the shielding could significantly reduce the shielding effectiveness [47]. We were able to detect a small amount of emissions in front of a hole in the shielding (about 2 × 2 cm²), which allowed us to predict key symbols with a test accuracy of more than 27% (highly significant for a key length of 20 000 symbols). Also, metallic shielding does not help against low-frequency magnetic fields [48].
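The claimed significance of a 27% accuracy over 20 000 symbols can be checked directly against the random-guessing baseline:

```python
from math import sqrt

# z-score of a 27 % accuracy relative to 25 % random guessing over n symbols.
p, n = 0.25, 20_000
sigma = sqrt(p * (1 - p) / n)
z = (0.27 - p) / sigma
print(f"{z:.1f} sigma")  # about 6.5 sigma above random guessing
```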

VI. CONCLUSION AND OUTLOOK
We have demonstrated that an eavesdropping attack analyzing the electromagnetic emissions from a QKD sender using machine learning can retrieve all information about the key. Although we focused our analysis on rf emissions, our methodology and machine-learning techniques can also be used for studying information leakage via other potential side channels. As shown, countermeasures can reduce the success rate of the attack; yet, they may be difficult to implement, especially if standard electronic components turn out to be the strongest source of information leakage. Even small changes in device design or operation environment can have a large effect. Since countermeasures are much easier to plan and implement in the early design stages of devices, preliminary testing of emissions can be very valuable.
We want to emphasize the need to test QKD devices and examine information leakage not only via attacks on the quantum channel but also via classical side channels, e.g., electromagnetic emissions, acoustic vibrations, classical message timing and power consumption. The methodology introduced here may serve as a starting point for pre-compliance testing and for the preparation for security certification.

VII. ACKNOWLEDGMENTS
We are grateful to Wenjamin Rosenfeld and Margarida Pereira for helpful discussions. This work was supported by the DFG under Germany's Excellence Strategy EXC-2111 390814868, the European project Open-QKD, the BMBF projects DE-QOR and QUBE-II, and the QuNET+ projects SKALE and MiQuE. AB acknowledges support by Elitenetzwerk Bayern in the PhD program ExQM. The funders played no role in study design, data collection, analysis and interpretation of data, or the writing of this manuscript.

VIII. DATA AVAILABILITY
The source code used for data evaluation is available at [49] (MIT license). It includes the measurement pipeline (remote controlling the oscilloscope), the neural network, the hyperparameter optimization routine, the training pipeline and the graphing of results. These materials can serve as a starting point for pre-compliance evaluation of other QKD devices.
The measured data as recorded by the oscilloscope is available at [50]. The provided data and software allow one to reproduce all reported results and may enable further work towards enhanced attacks via, e.g., improvements of the neural network.

Here, we want to relate the raw key prediction accuracy to the sifted key prediction accuracy. A standard assumption is that the attacker obtains access to the basis choices during the post-processing phase. In this case, there are two ways to evaluate the fraction of correctly recovered bits in the sifted key.
First, the predicted symbols of the neural network can be represented as a confusion matrix, which shows which symbols are more easily distinguishable than others, see Tab. I for an example. By associating bit values to the symbols, the prediction accuracy for sifted key bits can be computed. A bit prediction is still correct when the network confuses two symbols that represent the same bit value in the sifted key, which leads to higher accuracies for bit prediction than for symbol prediction. Assuming that the symbols H and P represent the bit value 0 while V and M represent 1, the bit prediction accuracy is 89.0% for the example in Tab. I. However, since the optics design is largely independent of the electronics design, one can also remap which laser driver line corresponds to which symbol. If the laser drivers originally used for the symbols H and V are rewired to represent bit value 0, and P and M to represent 1, the bit prediction accuracy is 87.1%. If H and M represent 0 and V and P represent 1, the bit prediction accuracy is 92.5%.
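The bit-accuracy computation under different driver/symbol mappings can be sketched with a hypothetical confusion matrix (the numbers below are illustrative and are not those of Tab. I):

```python
import numpy as np

SYMS = "HVPM"
# Hypothetical confusion matrix: rows are true symbols, columns are predicted
# symbols, in the order H, V, P, M.
cm = np.array([[800,  50, 100,  50],
               [ 40, 820,  60,  80],
               [ 90,  50, 810,  50],
               [ 60,  70,  40, 830]])

def bit_accuracy(cm, zeros):
    """Fraction of predictions landing in the correct bit class, where the
    two symbols in `zeros` encode bit 0 and the remaining two encode bit 1."""
    bit = np.array([0 if s in zeros else 1 for s in SYMS])
    same = bit[:, None] == bit[None, :]   # true/predicted carry the same bit
    return cm[same].sum() / cm.sum()

for zeros in ("HP", "HV", "HM"):
    print(zeros, round(bit_accuracy(cm, zeros), 3))
```

On this hypothetical matrix, the symbol accuracy is 81.5%, while the three mappings yield bit accuracies of 90.0%, 86.0%, and 87.0%, illustrating that the bit accuracy always exceeds the symbol accuracy and depends on the chosen mapping.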
A second way to evaluate the sifted key bit prediction accuracy is to train a neural network for binary classification. In our case, the results agree very closely with those obtained from the confusion matrix approach. The confusion matrix approach not only gives the attacker information about Alice's basis choices, useful for additional attacks using optical measurements, it also does not require retraining for a new driver/symbol mapping, as opposed to the binary classification network.

Figure 1
Figure 1. Sender (Alice) and receiver (Bob) devices, comprising electronics and optics, are connected via a quantum channel (QC) and a classical channel (CC). The eavesdropper (Eve) has access to both channels. Eve measures Alice's emissions using a near-field probe for magnetic fields or a log-periodic antenna (not shown) for far-field measurements, whose radio-frequency (RF) signal is amplified, captured by an oscilloscope and evaluated on a PC [30].

Figure 2
Figure 2. Near-field spectra (power spectral density) of the emissions during a key transmission (green) and during a null measurement (brown) where the QKD sender is not powered. The regular spikes on the fine grid in the upper plot are harmonics of the 100 MHz clock frequency. Frequencies higher than about 5 GHz are suppressed due to the limited bandwidths of probe, amplifier, and oscilloscope. The spectra are obtained by Bartlett's method (segment length 100 000 samples) and averaged over 30 independent measurements. At some frequencies the background exceeds the signal due to the noisy office environment changing in time.

Figure 3
Figure 3. Top: For each of the three-symbol key excerpts (VHV, VVV, VPV or VMV), non-overlapping snippets of the recorded time trace are shown (for clarity, only seven per excerpt). The range between sample index 200 and 500 corresponds to the three symbols in the key excerpt. The regions before and after correspond to random symbols which happened to be adjacent to the selected occurrences of the excerpts in the key. For reference, the 100 MHz clock signal is shown, as obtained digitally from the probe signal using a band-pass filter. Bottom: Averages of all matching snippets (about 300 each) for each symbol combination across one measurement. In the regions where random symbols contribute to the average (roughly sample ranges 0-200 and 500-700), the differences cancel and the result is close to the clock signal.

Figure 4 .
Figure 4. Varying distance from the circuit board at a location above the FPGA, which promises high accuracy as indicated in Fig. 5a. The test accuracy is shown as the average (green dotted line) of three independent attacks (green dots) at each distance. For short distances, the test accuracy is remarkably high (about 99%). The baseline is 25%, corresponding to randomly guessing one of the four symbols. The red area indicates three standard deviations around random guessing, assuming 20 000 trials of a 25% Bernoulli distribution. The RMS amplitude of the recorded emissions is shown for reference.

Figure 6 .
Figure 6. Measurements with an antenna at a distance of about 2.5 m. The spectra of 30 measurement runs when Alice is sending a random key (brown) and not sending a key (green) are clearly distinguishable, as shown in the selected signal range around 1.7 GHz (inset). This holds despite various strong noise contributions from Wi-Fi, GPRS, UMTS, Bluetooth, etc.

Figure 7 .
Figure 7. Test accuracy (a) and amplitude of the probe signal (b) of our revised electronics. Both accuracy and amplitude are significantly reduced compared to the original electronics, as shown in Fig. 5. The strongest emissions are observed from the voltage regulator (bottom right on the board), which, however, do not carry information about the key. Nevertheless, despite the much lower amplitude of emissions from the FPGA, it leaks a significant amount of information about the key.

Table I .
Confusion matrix of symbol predictions (test dataset) measured on the electronics without countermeasures. The data refers to the measurement shown in Fig. 4 at a distance of 1 cm. The test accuracy of symbol prediction is 84.3%.