Performance of the Large Hadron Collider cleaning system during the squeeze: simulations and measurements

The Large Hadron Collider (LHC) at CERN is a 7 TeV proton synchrotron, with a design stored energy of 362 MJ per beam. The high-luminosity upgrade (HL-LHC) will increase this to 675 MJ per beam. In order to protect the superconducting magnets and other sensitive equipment from quenches and damage due to beam loss, a multi-level collimation system is needed. Detailed simulations are required to understand where particles scattered by the collimators are lost around the ring in a range of machine configurations. Merlin++ is a simulation framework that has been extended to include detailed scattering physics, in order to predict local particle loss rates around the LHC ring. We compare Merlin++ simulations of losses during the squeeze (the dynamic reduction of the β-function at the interaction points before the beams are put into collision) with loss maps recorded during beam squeezes for Run 1 and 2 configurations. The squeeze is particularly important as both collimator positions and quadrupole magnet currents are changed. We can then predict, using Merlin++, the expected losses for the HL-LHC to ensure adequate protection of the machine.


I. INTRODUCTION
The LHC collimation system [1,2] is designed to protect the ring from normal beam losses caused by diffusion and scattering, as well as from abnormal fast losses. To achieve this it uses a multi-stage system for betatron cleaning, installed in Insertion Region 7 (IR7), and a similar but reduced system for momentum cleaning, installed in IR3. The primary collimators (TCP) are made of carbon-fiber composite (CFC) and sit closest to the beam, to intercept the beam halo. The secondary collimators (TCS), also made of CFC, and the tungsten shower absorbers (TCLA) absorb the deflected protons and secondary particles. There are also tungsten tertiary collimators (TCT) that provide extra protection for the experiments, as well as TCLs to absorb collision debris. About 150 m upstream of each experiment, a pair of one horizontal (TCTPH) and one vertical (TCTPV) TCT is installed. The LHC's multi-stage collimation system performed reliably during Run 1 (2010-2012), with beam energies of 3.5 TeV and 4 TeV, and the start of Run 2 (2015-2018) at 6.5 TeV. However, the future physics program at 7 TeV and the higher intensities of the HL-LHC [3,4] present new challenges, so it is important that the performance is well understood and can be accurately simulated.
In this article we present predictions for HL-LHC losses during luminosity levelling, as well as measurements of the performance of the LHC collimation system during Run 1 and 2. Previous studies of LHC collimation efficiency have been successfully performed using the SixTrack code [5-13], while in this article we instead use the code Merlin++, described in section II. In section III the Beam Loss Monitors (BLMs) [14,15], which are used to validate the simulation code, are described. This system consists of about 4000 ionisation chambers distributed around the ring to monitor losses at critical elements.
For this work we consider the slow losses that occur during normal operation of the LHC. Particles in the core of the beam can be excited to higher amplitudes by a number of effects, drifting out to form the beam halo. When a particle's amplitude is large enough, it is intercepted by the collimators.
Before bringing the LHC beams into collision, they must first be ramped from injection energy (450 GeV) to full energy (6.5 TeV in Run 2) and the β* (the β-function at the experiment interaction points (IPs)) reduced. This latter part of the operational cycle is called the squeeze. In Run 1 these actions were performed separately; however, during Run 2 a combined ramp and squeeze sequence was introduced to reduce the cycle duration. Figure 1 shows the beam modes for a typical Run 1 production fill, from injection of the physics beam through to stable beams for physics production. In Fig. 1 the squeeze begins at around 3500 seconds, with the β* at IP 1/5 being reduced from 11 m to 0.6 m.
The squeeze is a critical time for the collimation system as there are dynamic changes to the machine optical configuration and collimator positions while the stored energy in the beam is at its maximum. It also provides a good opportunity to validate simulation against measurements in a range of configurations, allowing investigation into any differences found.
Sections IV and V compare the simulations to data for Run 1 and 2 respectively. This gives us confidence in Merlin++'s particle tracking and scattering models, allowing us to use it for making predictions for future configurations. In section VI we evaluate the loads on the collimators in the HL-LHC, for the most pessimistic loss scenario allowed at full energy. This corresponds to a 0.2 h beam lifetime over 10 s [16,17], giving a total loss power of about 1 MW.

II. CALCULATION OF BEAM LOSSES
To calculate the losses in an accelerator we must model the trajectories of particles in the magnetic lattice and also the passage and scattering of particles in the materials that make up the collimators.
Merlin++ [18], previously known as MERLIN, is a modular object-oriented accelerator simulation framework, featuring 6D thick lens tracking. Initially developed for the International Linear Collider's beam delivery systems [19], it has since been extended to support synchrotrons. It is written in C++ and can be easily extended by the user, for example to add new physics models. It has multiprocessor support using the MPI protocol for communication; however for non-collective effects as used in collimation studies, it is more convenient to run multiple independent processes and sum the results.
The user-created program calls the Merlin++ library in order to define a beam line, add appropriate physics processes, create a beam and then initiate tracking. In these studies the MADInterface module was used to read in a lattice description from a MADX [20] optics calculation. Machine apertures and collimator gaps were similarly defined in separate files.
The scattering physics used to model the passage of protons through material has recently been upgraded [21]. It contains advanced models of the following processes: multiple Coulomb scattering; ionisation based on Landau theory; Rutherford scattering; a new elastic scattering model; and a new single-diffraction dissociation model. This model has been fitted to elastic and diffractive scattering data from a large number of previous experiments [22]. For performance reasons, only the leading proton from each interaction is modelled, equivalent to assuming that the secondaries deposit their energy close to the interaction.
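When a proton traverses collimator material, the interaction channel is chosen with probability proportional to its cross section. A minimal sketch of such a sampler is shown below; the channel names follow the list above, but the cross-section values are placeholders for illustration, not the fitted numbers used in Merlin++.

```python
import random

# Placeholder cross sections (arbitrary units) for the interaction
# channels named in the text; NOT Merlin++'s fitted values.
cross_sections = {
    "elastic": 7.0,
    "single_diffractive": 3.2,
    "rutherford": 1.1,
    "inelastic": 30.0,
}

def sample_process(xsec, rng=random.random):
    """Pick an interaction channel with probability proportional to
    its cross section (inverse-CDF sampling over the channel list)."""
    total = sum(xsec.values())
    u = rng() * total
    acc = 0.0
    for name, sigma in xsec.items():
        acc += sigma
        if u <= acc:
            return name
    return name  # guard against floating-point rounding at u ~ total
```

Over many samples, each channel is selected at a rate proportional to its cross section, so the inelastic channel dominates with the placeholder values above.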
To simulate loss maps we use an approach similar to earlier LHC studies [8,9]. We track 10^8 protons for 200 turns. The initial particle bunch is generated at the face of the primary collimator in the excitation plane. The bunch is a ring in phase space in the excitation plane that overlaps the collimator jaw edge by 1 µm (the impact parameter), and Gaussian in the opposite plane. It is pre-filtered so that every proton interacts with the primary collimator on the first turn. This saves significant computer resources compared to modelling the diffusion of particles from the core of the bunch. As the beam is tracked, it is compared to the machine aperture at each element. If a proton hits the aperture in a collimator then it is scattered according to the process cross sections. If the proton hits the aperture of another lattice element, it is considered lost at that location. The loss map records the location of every proton loss with a resolution of 10 cm.
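The halo-bunch generation described above can be sketched as follows. This is an illustrative model, not the Merlin++ bunch constructor: the Twiss β at the collimator, the units, and the batch size are assumed values.

```python
import numpy as np

def halo_bunch(n, jaw, impact=1e-6, beta=150.0, seed=1):
    """Generate a starting bunch at the primary collimator face:
    a ring of fixed amplitude in (x, x') phase space that overlaps
    the jaw edge by the impact parameter, pre-filtered so that every
    particle touches the jaw on the first turn, and Gaussian in the
    opposite plane. Coordinates in metres; beta is an assumed Twiss
    beta at the collimator."""
    rng = np.random.default_rng(seed)
    a = jaw + impact                      # ring amplitude just past the jaw edge
    xs, xps = [], []
    while len(xs) < n:
        phi = rng.uniform(0.0, 2.0 * np.pi, 10000)
        x = a * np.cos(phi)
        xp = -(a / beta) * np.sin(phi)    # angle on the same invariant ellipse
        keep = np.abs(x) >= jaw           # pre-filter: must intercept the jaw
        xs.extend(x[keep])
        xps.extend(xp[keep])
    y = rng.normal(0.0, 1.0, n)           # Gaussian in the opposite plane (sigma units)
    return np.array(xs[:n]), np.array(xps[:n]), y
```

The pre-filter discards the phases of the ring that do not reach the jaw, so no tracking time is spent on protons that would circulate without scattering on turn one.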
The primary collimators are set to be the tightest aperture restriction in the machine, so that they are the first material that a proton with sufficient amplitude will hit. The protons that scatter without being absorbed go on to hit other elements in the ring. If all protons hitting the primary collimator were absorbed by the collimation system, we would consider it to have 100 % cleaning efficiency. To produce a loss map, which shows the distribution of losses around the ring, we measure the cleaning inefficiency. The simulated local cleaning inefficiency, η_loc, is given by the ratio of the number of particles lost in a given section, N_loc, (either an element or a bin along the s coordinate) to the number of particles lost in the primary collimators, N_tot, normalised to the length of the section, ∆s, i.e.
η_loc = N_loc / (N_tot ∆s).
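The binned form of this quantity, with the 10 cm loss-map resolution mentioned above, can be sketched as follows (an illustrative implementation, not Merlin++'s own loss-map code):

```python
import math

def local_inefficiency(loss_s, n_tot, s_start, s_end, bin_width=0.1):
    """Local cleaning inefficiency eta_loc = N_loc / (N_tot * delta_s)
    per 10 cm bin along s. loss_s: s coordinates (metres) of lost
    protons; n_tot: number of protons lost in the primary collimators."""
    nbins = math.ceil((s_end - s_start) / bin_width)
    counts = [0] * nbins
    for s in loss_s:
        if s_start <= s < s_end:
            counts[int((s - s_start) // bin_width)] += 1
    return [c / (n_tot * bin_width) for c in counts]
```

For example, two losses in the first 10 cm bin out of 100 protons absorbed at the TCPs give η_loc = 2 / (100 × 0.1 m) = 0.2 m⁻¹ for that bin.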
III. LOSS MEASUREMENTS
The LHC BLM system uses ionisation chamber charged particle counters to measure the radiation levels around the LHC ring [14,15]. They are used during operation to trigger a beam dump if loss thresholds are exceeded. They also provide continuous measurements of normal beam loss around the ring during the LHC operations and are used to record beam loss during the validation campaigns of the collimation system, when artificial losses are induced with safe low intensity beams to assess the system response.
To generate a loss map one of the beams is excited in a given plane using the transverse dampers (ADTs) and the losses are recorded [23]. This allows a clean loss map for an individual beam and plane to be made. Measured LHC loss maps have been studied previously in [9,13].
During 2012 several loss maps were recorded at 4 TeV in the flat top and fully squeezed optics configurations.
No deliberate loss maps were made with the intermediate squeeze optics at 4 TeV, but as the BLM signals are recorded continuously it is possible to look at the natural losses during the squeeze. In 2016 at 6.5 TeV, loss maps were generated at the intermediate squeeze points as well as the end points.
These loss maps are crucial to validate simulations, to ensure a good understanding of the collimation system, and hence to demonstrate the performance of the HL-LHC layout. In the following sections we show the validation for Run 1 and 2 of the LHC.

IV. SQUEEZE LOSSES DURING RUN 1

A. Machine configuration in 2012

Table I shows the optics settings for the squeeze during 2012. As well as the optics configuration changing during the squeeze, the TCTs are also in motion. The collimators are set in units of beam sigma, so as the beam envelope at a collimator's location changes, the collimator jaws are adjusted. In addition, the TCTs at the experiments are only brought into their tightest position during the squeeze: as the normalised triplet aperture reduces, their jaw gap in sigma units is decreased, as shown in Table II. We therefore concentrate this study on the losses at the TCTs.

B. Simulated squeeze loss maps

Loss maps were simulated in Merlin++ at 8 points within the squeeze, covering β* at IP1 and IP5 from 11 m to 0.6 m. A bunch of 10^8 protons was tracked for 200 turns. This is sufficient to give good statistics for all relevant collimator and ring losses, even for particles that survive for multiple turns after their first scatter. Figure 3 shows examples of the loss maps at 3 of the optical configurations. The highest losses occur on the TCPs in IR7, at around 20000 m from IR1, as expected, followed by lower losses along the cleaning hierarchy. Significant losses are also observed at the momentum cleaning collimators in IR3, at around 7000 m. The losses at the TCTs in front of the experiments in IR1/2/5/8 do not appear until the later stages of the squeeze. The main cold losses occur in the IR7 dispersion suppressor, which is the bottleneck in terms of local cleaning inefficiency.

C. Measured squeeze losses in 2012
As there were no dedicated loss maps made during 2012 at the intermediate squeeze optics settings, we compare to the BLM measurements made during normal LHC operation. These have a number of disadvantages compared to the dedicated loss maps. They have lower signal levels, and so a lower signal-to-noise ratio, which makes it hard to get clean data in lower-loss regions. They also contain an unquantified mix of losses from both beams and both planes, making it impossible to completely separate the different sources of loss as can be done with the dedicated loss maps. The logging database is used to identify typical data-taking fills, where the squeeze was successful and the beam reached the 'stable beams' mode. The BLM signals and optical parameters as functions of time can then be retrieved for those fills. For each BLM a background level is calculated by averaging the lowest 5 readings during the squeeze. This value is then used as an estimate of the uncertainty of the BLM signal.
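The background estimate just described is a one-line calculation; a sketch is given below (the choice of 5 readings is from the text, the function name is ours):

```python
def blm_background(readings, n_lowest=5):
    """Estimate a BLM's background level as the mean of its n lowest
    readings during the squeeze; used as the uncertainty estimate on
    that BLM's signal."""
    lows = sorted(readings)[:n_lowest]
    return sum(lows) / len(lows)
```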
During a fill the rate of loss varies considerably. For much of the time the losses at the TCTs are below the noise thresholds of the BLMs. This can cause spurious values for local inefficiency. It was found that there could be large swings in the total loss rate around the fixed points of the squeeze so BLM data taken at those points are particularly unreliable.
In order to get a good measure of the inefficiency we identified points in time where the total losses were high enough that the TCT BLMs were above noise. A peak-finding algorithm was used to find the highest values of the TCT losses within each fill. These points were retained if at the same time stamp there was also a high BLM value at the TCP. Figure 4 shows how, for four fills, time stamps with simultaneous peaks are selected. These points were then ranked by the product of the TCT and TCP values, and the highest were kept.
The BLM signals during regular operation contain a mix of both horizontal and vertical losses. These can be partially separated by looking at the ratios between the vertical, horizontal, and skew TCPs, labelled D6L7, C6L7 and B6L7 respectively due to their positions in the lattice. Vertical excitations will hit D6L7, leaving a BLM signal there, but the shower will also leave a signal at C6L7 and B6L7. Horizontal excitations will pass through D6L7 and hit C6L7 first, with the shower peaking in B6L7. We find that requiring the D6L7 to C6L7 ratio to be less than 0.1 and the C6L7 to B6L7 ratio to be less than 0.5 selects cases that are dominated by horizontal losses. This results in 13 data points that pass the filters, covering a range of β* values from 2 m to 0.6 m.
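The selection described in the two paragraphs above can be sketched as a single filter over per-timestamp BLM signals. The cut values are those from the text; the simple local-maximum peak finder and the function name are illustrative simplifications of the actual analysis.

```python
def horizontal_loss_times(tct, tcp_d, tcp_c, tcp_b, noise, r_vh=0.1, r_hs=0.5):
    """Select timestamps where a TCT BLM peak coincides with a TCP
    signal, then keep those dominated by horizontal losses via the
    D6L7/C6L7 < 0.1 and C6L7/B6L7 < 0.5 ratio cuts. Inputs are
    per-timestamp BLM signal lists; returns indices ranked by the
    product of the TCT and TCP signals, strongest first."""
    picked = []
    for i in range(1, len(tct) - 1):
        is_peak = tct[i] > tct[i - 1] and tct[i] > tct[i + 1] and tct[i] > noise
        if not (is_peak and tcp_c[i] > noise):
            continue
        # ratio cuts written multiplicatively to avoid division by zero
        if tcp_d[i] < r_vh * tcp_c[i] and tcp_c[i] < r_hs * tcp_b[i]:
            picked.append(i)
    return sorted(picked, key=lambda i: tct[i] * tcp_c[i], reverse=True)
```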
More sophisticated machine learning algorithms exist to categorise losses [24], however these were not used as they have not been trained on 2012 data.
For the points that pass both filters we calculate the inefficiency at the TCTs and take the β * value that corresponds to the timestamp.

D. Comparison between simulation and measurements
First we compare the full loss maps taken at the end of the squeeze, with IR1/5 β* of 60 cm. The BLM loss maps are from fill 2788 on the 1st of July 2012. Figure 5 shows losses around the full ring for horizontal Beam 1 excitation. Merlin++ reproduces the significant loss peaks in the collimation regions and other IRs. Figure 6 shows the loss map zoomed in to the IR7 collimation region. The hierarchy of losses from the TCPs through to the TCSs and TCLAs can be seen in both simulation and data. The BLM signal outside the collimators is higher than in simulation, especially in the warm losses represented by the red bars, as full showers of secondary particles are not simulated in Merlin++. The noise level of the BLMs can be seen in the measurements at around 10^-6; these are not real losses, and they set the precision of the measurement. While Merlin++ counts the particle losses on the beam pipe, the BLMs record the flux of the radiation shower outside the accelerator's physical components. The showers are several metres long, so proton losses at one element will also cause a signal in the BLMs at downstream elements. The materials of the magnets and surrounding equipment absorb some of the energy of the shower. For a full quantitative comparison to the BLM signals, energy deposition studies could be performed as in [25].
We can now compare the TCT inefficiency predicted by the Merlin++ simulation with the measurements. In both cases we are interested in the normalised local cleaning inefficiency at the TCTs, i.e. the ratio of the individual TCT losses to the total TCP losses. This partially normalises out the conversion of proton losses to BLM signal values. However, we must also make a normalisation to take into account the response of the BLM to the local proton loss and the cross talk due to secondary particles from one element reaching the BLM of another. To do this we normalise to the inefficiency at the fully squeezed configuration. Figure 7 shows the Merlin++ simulation compared to the data points extracted from the BLM data. The BLM error bars are based on the background level found by averaging the 5 lowest readings within the time window. While the trend is compatible, it is clear that the BLM data are too limited to draw conclusions. The signal-to-noise ratio in the BLM data is too low for β* above 2 m to retrieve any data points, and gives a significant data spread above 1 m. It is clear that dedicated loss maps are needed to make a better comparison.

V. SQUEEZE LOSSES DURING RUN 2

A. Machine configuration in 2016

Run 2 of the LHC began in 2015 and incorporated changes to the machine configuration, most notably an increase in beam energy from 4 to 6.5 TeV. In order to reduce the time from injection to collision, a combined ramp and squeeze program was used starting from 2016, such that the initial squeeze, down to a β* of 3 m at IR1 and IR5, happens simultaneously with the energy ramp. The squeeze beam mode therefore covers just the final squeeze from 3 m to 0.4 m. Table III shows the optical parameters for the IPs used during 2016 data taking, and Table IV shows the collimation settings used. As before, for IR2 and 8 the external crossing angle is given.

B. Measured squeeze loss maps

During the 2016 beam commissioning a number of loss maps were made during the squeeze. This gives a better signal-to-noise ratio and allows separation of the losses from each beam and plane.
The maps used in this article were taken on the 20th of April 2016, during fill 4832. A beam of low-intensity pilot bunches was injected and ramped to 6.5 TeV. During the squeeze, the ADTs for each combination of horizontal and vertical, and Beam 1 and 2, were fired in turn to excite one of the bunches in that plane, as shown in Fig. 8, and the BLM signal was recorded [26]. For each BLM we calculate a background level by averaging the signal during a 10 s window near the start of the squeeze, where losses are low. This fixed value per BLM is used as an estimate of the uncertainty of that BLM's signal during excitation. Note that the background measurement is usually taken closer to the loss map excitation; however, in this case, where loss maps are made in rapid succession, this is not possible. There are a number of additional uncontrolled parameters that can contribute to errors, such as orbit shifts and changes in the squeeze rate.

C. Comparison between simulation and measurements
First we compare a full loss map from fill 4832, taken close to when the β* at IR1/5 crossed 50 cm. Figures 9 and 10 show full-ring and IR7 loss maps comparing BLM data and Merlin++ simulation. As with the 4 TeV comparisons, we see that Merlin++ reproduces well the main loss locations around the ring and the collimation hierarchy in IR7. We can now compare the normalised cleaning inefficiency as a function of β* between BLM data and Merlin++. Figures 11 and 12 show the losses on the IR1 TCTs during the squeeze due to horizontal and vertical excitation of the beam. Horizontally we see excellent agreement between data and simulation, with steep increases in TCT losses as the beam is squeezed to a β* of 0.4 m. For vertical excitation we again see good overall agreement, although in simulation no losses are observed on TCTPV.4L1.B1, the vertical TCT in IR1, above a β* of 0.8 m. At larger β* values the signals on the TCT BLMs are below the noise levels, so we are not able to record the losses. Figures 13 and 14 show the losses on the IR5 TCTs due to horizontal and vertical excitation of the beam. For horizontal excitation we see good agreement for TCTPH.4L5.B1, but higher losses in simulation for TCTPV.4L5.B1 than in the BLM data. For vertical excitation Merlin++ reproduces the losses well.
To investigate the loss fall-off on TCTPV.4L1.B1 (Fig. 12) we look at the positions of the collimators that act in the vertical plane, projected into phase space. At the smallest β* values, the losses on TCTPV.4L1.B1 are dominated by particles scattered from the IR7 TCSGs, with the highest losses coming from TCSG.D4L7.B1. Figure 15 shows the vertical collimators with their phase advance from TCSG.D4L7.B1. It can be seen that, due to the retraction in jaw gap and the change in phase advance, the TCT is shadowed behind TCLA.C6R7.B1 for β* of 1 m and larger. In the LHC, TCTPV.4L1.B1 is positioned downstream of TCTPH.4L1.B1, so its BLM will see local showers from the horizontal TCT even when the vertical TCT is not directly hit. This explains the discrepancy between the simulation and the BLM data.
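The shadowing mechanism can be illustrated with a toy model in normalized phase space (the jaw settings in σ and phase advances below are made up for illustration; they are not the 2016 optics values). A particle emerging from a scatter at a jaw edge, with normalized position X₀ and normalized angular kick X₀′, arrives at phase advance µ with X(µ) = X₀ cos µ + X₀′ sin µ, and is intercepted by the first downstream collimator whose jaw setting it exceeds.

```python
import math

def first_intercept(n0, collimators, kicks):
    """Track particles leaving a scatter at the jaw edge (X = n0 sigma)
    with a range of normalized angular kicks, recording which downstream
    collimator each one hits first. collimators: list of
    (name, phase_advance_deg, jaw_sigma) ordered by increasing phase
    advance. Returns hit counts per collimator name."""
    hits = {name: 0 for name, _, _ in collimators}
    hits["survives"] = 0
    for xp in kicks:
        for name, mu_deg, jaw in collimators:
            mu = math.radians(mu_deg)
            x = n0 * math.cos(mu) + xp * math.sin(mu)
            if abs(x) >= jaw:
                hits[name] += 1
                break
        else:
            hits["survives"] += 1  # never exceeded any jaw
    return hits
```

With a TCLA at a similar phase advance but a much tighter normalized gap than the TCT, every kick large enough to reach the TCT jaw exceeds the TCLA jaw first, so the TCT records no direct losses; this is the shadowing seen in the simulation.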
With this good modelling of proton losses we can now use Merlin++ to make predictions for future collimation configuration such as the HL-LHC.

VI. PERFORMANCE OF THE HL-LHC COLLIMATION SYSTEM
The HL-LHC upgrade introduces several changes to the lattice [3]. Among other changes, the inner triplets are replaced with higher-gradient magnets of larger aperture to allow a smaller β* at the IPs [27], and a new achromatic telescopic squeeze (ATS) optics scheme is used [28]. In the dispersion matching section downstream of the betatron cleaning, additional absorbers (TCLD) have been placed by splitting two of the bending magnets into shorter high-field magnets [10,12,29]. For each beam, an additional pair of TCTs has been placed in cell 6 upstream of the experiments in IR1 and IR5 to improve protection [16]. In order to maximise the integrated luminosity while limiting the maximum pileup, the HL-LHC will use a luminosity levelling scheme [30]. If the accelerator configuration is kept constant during data taking then the luminosity falls over the length of the fill, due to the gradual reduction in beam current. Levelling is achieved by adjusting the machine configuration to compensate for the change in beam current; in the baseline this is done by changing β* at the IPs.
This leads to another situation where dynamic changes of the collimators could be needed, although in this case with the beams in collision.
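The β*-levelling idea can be sketched with the simplification L ∝ N²/β* (neglecting the crossing-angle geometric factor and hourglass effect, which the real levelling must include). Holding L constant as the bunch intensity N decays then requires β* ∝ N², clipped to the available levelling range.

```python
def levelled_beta_star(n_rel, beta_max=0.64, beta_min=0.15):
    """beta* (metres) needed to hold the luminosity constant as the
    intensity decays, under the simplification L ~ N^2 / beta*.
    n_rel = N(t)/N(0); the 64 cm - 15 cm range is the levelling
    scheme considered in the text."""
    beta = beta_max * n_rel**2
    return max(beta_min, min(beta_max, beta))
```

Once β* reaches the 15 cm floor, levelling ends and the luminosity decays naturally for the rest of the fill.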
We consider a levelling scheme that utilises changes in β* from 64 cm to 15 cm, while keeping the crossing angle fixed [31]. In this case the TCTs and TCLs are held at a fixed position in mm. The jaws are fixed at the position that gives a TCT gap of 10.5 σ and a TCL gap of 12 σ at the minimum β* of 15 cm, using a normalised beam emittance of 3.5 µm. For example, TCTPH.4L1.B1 will have a gap of 15.5 mm for all β* values. Table V shows the collimator settings used. For this work we use the HL-LHC version 1.2 optics, with 2 TCLDs per beam in IR7. Figures 16 and 17 show the simulated loss maps at 3 steps in the HL-LHC luminosity levelling, for the full ring and IR7 respectively. Again the main losses occur in the collimation regions at IR3 and IR7. Smaller loss peaks can also be seen at IR1, 2 and 5. The TCT losses get larger as β* at the IPs is reduced. Figures 18 and 19 show the Beam 1 losses on the TCTs at the main IPs as a function of β*.
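Converting a collimator setting in beam sigma to a physical gap uses σ = √(ε_N β / γ) (with relativistic β ≈ 1). A sketch is below; the β function at the collimator used in the test is an assumed illustrative value, not the HL-LHC optics value for any particular TCT.

```python
import math

def halfgap_mm(n_sigma, beta_coll, eps_norm=3.5e-6, gamma=7460.0):
    """Half-gap in mm for a collimator set at n_sigma beam sigma.
    beta_coll: Twiss beta (m) at the collimator; eps_norm: normalised
    emittance (3.5 um from the text); gamma: Lorentz factor (~7460 for
    7 TeV protons)."""
    sigma = math.sqrt(eps_norm * beta_coll / gamma)  # beam size in metres
    return n_sigma * sigma * 1e3
```

Freezing the jaws in mm amounts to evaluating this once, at the minimum-β* optics, and holding the result while the optics (and hence σ at the collimator) continue to change, so the gap in sigma units grows as β* is reduced further elsewhere in the levelling range.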
Losses in the cold magnets in the rest of the ring are significantly lower than in the LHC configurations, due to the TCLDs catching the dispersive losses. Although there are only very few direct proton losses in cold elements, the showers from the collimators could potentially still quench magnets during the 1 MW loss scenario; this has been studied with energy deposition simulations in [29]. Studies of the response of the collimators themselves to these loads have been performed in [32].

VII. CONCLUSION
The LHC collimation system is essential to protect the machine from beam losses during operation. Its performance is continuously monitored by the BLM system. We can use existing measurements to validate simulations, which can then be used to make predictions of the future performance of the HL-LHC.
In this paper we show that Merlin++ is able to model the proton losses around the LHC. It reproduces the patterns of losses around the LHC ring and in the interaction regions seen in measured data. It gives good agreement with measurements taken with the BLM system during the beam squeeze for Run 1 and 2 operation at 4 and 6.5 TeV, both in the overall loss patterns and in the changes on the TCTs as the optics configuration is changed. The agreement is within the expected precision, given the uncertainties from the showering to the BLMs, which is not modelled in Merlin++.
In addition to the SixTrack code, already used successfully for collimation studies, we can therefore also use Merlin++ to predict losses in the future HL-LHC configuration. The possibility of using different simulation tools provides increased flexibility and a means of increasing confidence in the final results. We find that the losses on the cold magnets are acceptable, although the loads in the 1 MW scenario also imply the need for energy deposition studies of the magnet coils, as well as thermomechanical studies of the most loaded collimators.

FIG. 16: Merlin++ Beam 1 loss map for 3 IR1/5 β* value steps during HL-LHC luminosity levelling.