Time evolution of the luminosity of colliding heavy-ion beams in BNL Relativistic Heavy Ion Collider and CERN Large Hadron Collider

We have studied the time evolution of the heavy ion luminosity and bunch intensities in the Relativistic Heavy Ion Collider (RHIC), at BNL, and in the Large Hadron Collider (LHC), at CERN. First, we present measurements from a large number of RHIC stores (from Run 7), colliding 100 GeV/nucleon Au beams without stochastic cooling. These are compared with two different calculation methods. The first is a simulation based on multi-particle tracking taking into account collisions, intrabeam scattering, radiation damping, and synchrotron and betatron motion. In the second, faster, method, a system of ordinary differential equations with terms describing the corresponding effects on emittances and bunch populations is solved numerically. Results of the tracking method agree very well with the RHIC data. With the faster method, significant discrepancies are found since the losses of particles diffusing out of the RF bucket due to intrabeam scattering are not modeled accurately enough. Finally, we use both methods to make predictions of the time evolution of the future Pb beams in the LHC at injection and collision energy. For this machine, the two methods agree well.


I. INTRODUCTION
During the design and operation of a heavy-ion collider, e.g., the Relativistic Heavy Ion Collider (RHIC) [1] or the Large Hadron Collider (LHC) [2], one of the main goals is to maximize the time-integral of the luminosity L at the experiments. As usual, luminosity is defined as the event rate per unit cross section and depends only on the spatial density distributions of the colliding beams at the collision points.
The time evolution of the spatial densities is determined via single-particle dynamics from that of the phase-space distributions, i.e., the kinetics of the beams. This, in turn, is determined by the combined influence of several inter-dependent physical processes. Some of these (e.g., beam-beam collisions, scattering on residual gas) principally remove particles from the beams. Others (e.g., intrabeam scattering (IBS), radiation damping) predominantly change their distribution in space and momentum. To maximize ∫L dt, losses caused by processes other than the collisions themselves need to be minimized, and the beam sizes should stay small. These processes therefore need to be understood and modeled in quantitative detail.
There have been numerous previous studies of the time evolution of luminosity (in colliders) or other performance parameters such as the emittances (e.g., in synchrotron light sources or the damping rings of linear colliders). Among many possible references, we cite [3][4][5][6], which are particularly close to the applications of this paper.
Many of these are based on the solution of systems of coupled ordinary differential equations (ODEs) that describe the time evolution of a few parameters characterizing the beam distributions, typically the intensities and the first- and second-order moments of the distributions (beam centroids and emittances). Such systems can only be closed with additional assumptions on the nature of the beam distributions. Typically these are assumed to be gaussian in all three degrees of freedom. Besides closing the system of equations, this can also allow convenient analytical forms for some of the terms, e.g., those describing intrabeam scattering or the way in which the luminosity is modified by crossing angles at the interaction point. This approach was applied to the evolution of heavy-ion luminosity in the LHC in Ref. [6]. Alternative approaches involve solving the Fokker-Planck equation for the beam distribution functions [7,8] and single-particle tracking. In Ref. [9] both the ODE method and particle tracking were implemented in the same simulation code.
In this article, we study the time evolution of colliding heavy-ion beams in RHIC and LHC. In Sec. II, we present measurements of luminosity and beam intensities during a large number of physics stores during Run-7 [11] in RHIC and give a summary of the running conditions. The data are compared with simulations based on two different models:
• Multi-particle tracking: This method, described in Sec. III, is rather direct and accurate but slow. We use an extended version of the code introduced in Refs. [12][13][14], motivated by the fact that the longitudinal bunch distributions in RHIC are not gaussian, as discussed in Sec. III E. It is important to have a simulation that can model IBS for arbitrary profiles of bunched beams.
• ODEs: Numerical solution of a coupled system of ODEs describing the beam emittances and bunch populations as in Ref. [6]. This method, discussed in Sec. IV, is fast but lacks accuracy when the beam distributions are not gaussian.
In Sec. V, both methods are compared with the data from RHIC, and in Sec. VI we make predictions for the LHC.

II. MEASURED LIFETIMES IN RHIC
The RHIC Run-7 consisted of 191 physics stores in 2007, colliding 197Au79+ beams at an energy of 100 GeV/nucleon. The main beam and machine parameters for Run-7 are summarized in Table I. During Run-7, longitudinal stochastic cooling became operational in the Yellow ring [14] (the two rings of RHIC are called Blue and Yellow). Stochastic cooling counteracts the longitudinal diffusion caused by intrabeam scattering (IBS), which eventually pushes particles outside the longitudinal acceptance (RF bucket). This loss process, discussed in Sec. III E, is responsible for a large fraction of the beam losses in RHIC. In this article, we focus on stores without stochastic cooling in order to make a comparison with the LHC.
The time-dependent average bunch intensities N_i (i = 1, 2 for Blue and Yellow, respectively) were measured using DC transformers [1] and can be fitted empirically by an exponential function

N_i,fit(t) = N_i(0) exp(−t/T_N).    (1)

The beam lifetimes T_N fitted to data from the first 3 h of all physics stores are shown in Fig. 1, and the mean and standard deviation are listed in Table II. Regular physics stores last 5 h, but some stores terminate prematurely; almost all stores have data for 3 h.

The luminosity was recorded during every store by detecting neutrons emitted by ions that have undergone mutual electromagnetic dissociation in the collisions [15,16]. It turns out that it can be well fitted by a sum of fast- and slow-decaying exponential functions,

L_fit(t) = A exp(−t/T_f) + B exp(−t/T_s),    (2)

with the fit parameters A, T_f, B, T_s. Since the measured initial luminosity fixes L_fit(0) = A + B, this is effectively a 3-parameter fit. The fit function (2) is purely phenomenological: it is not chosen on the basis of any particular physical model but rather because it generally fits rather well and is convenient to implement. The fit parameters for all stores are shown in Fig. 2 and the means and standard deviations in Table II. An example of the measured and fitted luminosity and bunch intensity is shown in Fig. 3.
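As an illustration of the fitting procedure (the sample times, intensities, and the lifetime value below are invented for the example, not Run-7 data), the lifetime T_N of a purely exponential decay can be extracted with a log-linear least-squares fit:

```python
import math

# Invented example: intensity samples following N(t) = N(0) exp(-t/T_N),
# with an assumed lifetime T_N = 25 h (only the order of magnitude matters).
T_N_true = 25.0                       # beam lifetime in hours (assumed)
times = [0.5 * k for k in range(7)]   # first 3 h, one sample every 30 min
N = [1.1e9 * math.exp(-t / T_N_true) for t in times]

# Log-linear least-squares fit: log N(t) = log N(0) - t/T_N
logs = [math.log(n) for n in N]
t_mean = sum(times) / len(times)
l_mean = sum(logs) / len(logs)
slope = sum((t - t_mean) * (l - l_mean) for t, l in zip(times, logs)) \
    / sum((t - t_mean) ** 2 for t in times)
T_N_fit = -1.0 / slope   # recovers the assumed lifetime
```

The double-exponential luminosity fit requires a nonlinear solver, but the principle is the same.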

III. PARTICLE TRACKING SIMULATION
The tracking program is based on a code initially used for stochastic cooling [12][13][14], which has been developed further to simulate the time evolution of two circulating and colliding bunches.
Each bunch contains a number of macro particles (5 × 10^4 were used in the simulations), which are tracked in 6D phase space. Normalized coordinates (x, p_x, y, p_y) are used in the transverse planes and (t, p_t), with p_t = γ − γ_0, in the longitudinal plane (γ_0 is the Lorentz factor of the reference particle). The particle coordinates are updated on a turn-by-turn basis as a function of the physical processes acting on them. The following processes are taken into account:
• Synchrotron and betatron motion: All coordinates are updated deterministically on every turn, taking into account the machine tune, chromaticity, and RF voltage. Particles outside the time acceptance of the RF bucket are removed. In RHIC and LHC, such coasting particles are cleaned by extracting them resonantly through a few dipole kicks, sending them onto the collimators; therefore they have a negligible influence on the beam dynamics. Linear betatron coupling is taken into account in thin-lens approximation. A detailed description can be found in Ref. [12].
• Collisions: collision probabilities are sampled as a function of the local density of the opposing beam at (x, y). This is described in Sec. III A.
• IBS: Each particle is given a random kick, calculated using the Piwinski model, but modulated by the local particle density in order to account for non-gaussian longitudinal bunch profiles. This is described in Sec. III C.
• Radiation damping and quantum excitation: All particles receive a deterministic amplitude decay and a random excitation. This is described in Sec. III D.
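The structure of such a turn-by-turn update can be sketched as follows. This is a toy model in the longitudinal plane only, with invented parameters; it is not the actual code of Refs. [12][13][14]:

```python
import math
import random

random.seed(2)

# Toy illustration of the turn-by-turn update (longitudinal plane only).
# All parameters are invented for this sketch, not RHIC values.
n_macro = 1000
t_acceptance = 1.0         # RF-bucket half width in the normalized time coordinate
mu = 2 * math.pi * 0.002   # synchrotron phase advance per simulation turn
p_coll = 1e-4              # per-turn collision-removal probability (assumed constant)
ibs_rms = 3e-3             # rms IBS momentum kick per simulation turn (assumed)
damp = 1e-5                # first-order radiation-damping decrement per turn
c, s = math.cos(mu), math.sin(mu)

particles = [(random.gauss(0, 0.2), random.gauss(0, 0.2)) for _ in range(n_macro)]

for turn in range(2000):
    survivors = []
    for t, p in particles:
        # linearized synchrotron motion: rotation in normalized (t, p) space
        t, p = t * c + p * s, -t * s + p * c
        if abs(t) > t_acceptance:      # outside the time acceptance: removed
            continue
        if random.random() < p_coll:   # lost in a collision
            continue
        p = (p + random.gauss(0, ibs_rms)) * (1.0 - damp)  # IBS kick, then damping
        survivors.append((t, p))
    particles = survivors
```

In the real code, the collision probability depends on the particle's transverse coordinates and the IBS kicks on the local line density, as described in the following subsections.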
Other processes are neglected. Beam loss rates in RHIC do not change visibly when the beams are brought into collision, and the beam-beam parameter is even smaller for ions in the LHC [2]. Therefore, the beam-beam effect is neglected in our simulations. A detailed treatment of beam-gas scattering has been done elsewhere for RHIC [17,18] and LHC [19,20]. In both cases, lifetimes of the order of 100 or several hundreds of hours were found. The emittance blowup at RHIC is fractions of a percent per hour [18] and standard formulas [21] give a similar result for the LHC with predicted vacuum conditions [2,19]. Therefore, beam-gas scattering has a negligible impact on the dynamics in RHIC and LHC when compared to collisions or IBS. Furthermore, calculations with the mad-x [22] program show that the Touschek effect (large-angle scattering from the center of the bucket bringing particles outside the energy acceptance) is negligible in both machines, with lifetimes of hundreds or thousands of hours. However, small-angle scattering of particles already close to the acceptance is important and included as discussed in Sec. III C.
The strengths of the processes (collision probabilities, inverse radiation damping times, and kicks from intrabeam scattering and quantum excitation) are scaled up to account for the smaller number of macro particles and, by an additional user-defined factor chosen to reduce the computational time, so that one turn in the simulation corresponds to a much larger number of turns in the real machine. Typically, one simulation turn corresponds to 2 × 10^4 real machine turns, which achieves relatively fast execution without loss of precision.

A. Collisions
A collision probability P_1 is calculated for every particle on every turn, and a random number is sampled to determine if an interaction takes place. In that case, the particle is removed. To calculate P_1, we perform an overlap integral of the density of the opposing bunch with a Dirac δ-function that represents the trajectory of a single particle. The details are shown in Appendix A. The most general collision routine makes no assumptions on the beam distributions and uses a discrete binning of the particles; the integration is then replaced by a sum over the bins. This routine is slow, and a much faster code is obtained with some simplifications.
First, we assume gaussian distributions in x and y, which is generally a very good approximation (in RHIC, the longitudinal distribution is, however, non-gaussian, as explained in Sec. III E). Furthermore, we assume a common luminosity reduction factor instead of computing it for every particle, so that P_1 is a function of the transverse coordinates only. In the transverse plane, we use action-angle variables (J_x, φ_x), defined in the standard way for a particle in bunch 1, with the angle variables given at the IP, and an analogous definition in y. Here β*_xy is the value of the optical function at the IP (assumed to be the same in x and y), where we define the longitudinal coordinate to be s = 0 and x′ = dx/ds. We note that α*_xy = −β′_xy(0)/2 = 0. Since every simulation turn corresponds to a large number of machine turns, over which φ has a uniform distribution on the interval [0, 2π], we average P_1 over φ. The resulting collision probability for a particle in bunch 1, derived in Appendix A, is a function of the betatron actions J only (Eq. (4)). Here σ is the interaction cross section, N_2 the intensity of bunch 2, I_0 a modified Bessel function, (ε_2x, ε_2y) the geometric rms emittances of bunch 2, and R_tot the global luminosity reduction factor, which takes into account the hourglass effect and a crossing angle 2θ. Writing the longitudinal distributions of the two bunches, normalized to unity, as ρ_zi(z_i), with z_i being the distance to the center of bunch i and i = 1, 2, the reduction factor is given by the overlap integral of Eq. (5). When collisions occur at several IPs, P_1 is summed over all of them (only the factor R_tot/β*_xy varies). Furthermore, P_1 has to be scaled to account for the smaller number of macro particles and the shorter time scale in the simulation.
If the transverse action in bunch 1 is assumed to be distributed as ρ_1u = exp(−J_u/ε_1u)/ε_1u for u = x, y, equivalent to a gaussian distribution in u_1 and p_u1, we calculate the luminosity as

L = k_b f_rev N_1 N_2 R_tot / [2π β*_xy √((ε_1x + ε_2x)(ε_1y + ε_2y))],    (6)

where k_b is the number of bunches circulating at frequency f_rev in each ring. Eq. (6) agrees with standard formulas [21].
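For illustration, the standard gaussian-beam luminosity formula with a common β* in both planes can be written as a short function; the parameter names are ours and the example values are placeholders, not the Table I values:

```python
import math

def luminosity(kb, frev, n1, n2, e1x, e1y, e2x, e2y, beta_star, r_tot):
    """Gaussian-beam luminosity with a common beta* in x and y.

    kb: bunches per ring, frev: revolution frequency, n1/n2: bunch populations,
    e1x..e2y: geometric rms emittances of the two beams, r_tot: reduction factor.
    """
    return (kb * frev * n1 * n2 * r_tot
            / (2 * math.pi * beta_star * math.sqrt((e1x + e2x) * (e1y + e2y))))

# Placeholder numbers, chosen only to exercise the function:
L_example = luminosity(103, 78e3, 1.0e9, 1.0e9,
                       2.0e-9, 2.0e-9, 2.0e-9, 2.0e-9, 0.8, 0.8)
```

The result scales linearly in each bunch population, as expected from the overlap-integral definition of luminosity.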

B. Interaction cross section
To calculate P_1, we need the total cross section σ for interactions in which colliding particles are lost from the beam. Apart from inelastic hadronic interactions, bound-free pair production (BFPP) and electromagnetic dissociation (EMD) have to be taken into account. These electromagnetic processes create particles with a charge-to-mass ratio different from that of the main beam, which eventually leads to particle loss. Detailed discussions of these loss mechanisms in heavy-ion colliders can be found in Refs. [2,5,6,23-25].
Cross sections were calculated for BFPP in Ref. [26] and for EMD in Ref. [27]. We have taken standard values for the inelastic cross sections, as used by the RHIC and LHC experiments [2,28-30]. In Table III, we summarize the cross sections of the different processes that were used in the simulation. As can be seen, the majority of the luminosity is used up by BFPP and EMD rather than by the inelastic interactions, which are the usual object of study of the experiments.


C. Intrabeam scattering

The standard formalisms for describing IBS [31-36] assume gaussian profiles. To simulate RHIC, where the longitudinal distributions are known to be non-gaussian, an IBS model has to go beyond this assumption. Such a model has been implemented in Refs. [12,14]; we summarize it here for completeness. It assumes that the longitudinal and transverse degrees of freedom are independent and that the transverse distributions remain gaussian.
For simplicity and speed, the IBS routine starts from the Piwinski model [31]. Our code gives the user the choice between a smooth lattice approximation, resulting in fast execution, or a slower but more precise calculation taking the full lattice into account. In the latter case, the optical functions are read in from an external table created by mad-x. Furthermore, the IBS rise times can be calculated with the term η²/β_u (η being the dispersion and β_u the optical lattice function) replaced by

[η² + (β_u η′ − β_u′ η/2)²] / β_u,

as proposed in Ref. [35]. Here primed quantities denote derivatives with respect to the path length s. We call this model the modified Piwinski method and the model presented in Ref. [31] the original Piwinski method.

We define the rise times T_IBS,u, u = x, y, z, by

1/T_IBS,u = (1/σ_pu) dσ_pu/dt.    (7)

The rise times are calculated for gaussian distributions but are then modulated for every particle by the local beam density to account for arbitrary longitudinal profiles. Thus, particles are given normally distributed momentum kicks Δp_u in each plane on every simulation turn:

Δp_u = r σ_pu √[(2 T_rev/T_IBS,u) 2√π σ_t ρ_t(t)],    (8)

where r is a gaussian random number with zero mean and unit standard deviation, T_rev the revolution time, σ_t the standard deviation of t, and σ_pu the standard deviation of the momentum p_u in plane u. It can be shown that if the longitudinal distribution ρ_t, normalized to unity, is gaussian, integrating over all particles yields exactly the growth given by Eq. (7). The strengths of the IBS kicks are also scaled up by the time ratio of the simulation.

The rise times at the beginning of a store, calculated with the different IBS models in the tracking code and the parameters in Table I, are shown in Table IV. We also show results from mad-x, as discussed in Sec. IV, and, for comparison, the rise times calculated using the Bjorken-Mtingwa method [32] (with the Coulomb logarithm calculated by mad-x). As can be seen, IBS is strong in RHIC and at injection energy in the LHC.
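The density-modulated kick can be sketched as follows. The normalization used here is an assumption chosen so that, averaged over a gaussian bunch, the modulation factor is unity and the nominal gaussian-beam growth rate is recovered; all parameter values are illustrative:

```python
import math
import random

random.seed(3)

# Density-modulated IBS kick: the modulation factor is normalized (our
# assumption) so that its average over a gaussian bunch is one, recovering
# the gaussian-beam rise time. Parameter values are illustrative.
T_rev = 12.8e-6      # revolution time in s (approximate RHIC value)
T_ibs = 3600.0       # IBS rise time in s (illustrative)
sigma_t = 2e-9       # rms bunch length in s
sigma_p = 1e-3       # rms relative momentum spread

def rho_gauss(t):
    """Gaussian line density, normalized to unity."""
    return math.exp(-t * t / (2 * sigma_t ** 2)) / (math.sqrt(2 * math.pi) * sigma_t)

def ibs_kick(t):
    """Momentum kick for a particle at time offset t from the bunch center."""
    modulation = 2.0 * math.sqrt(math.pi) * sigma_t * rho_gauss(t)  # averages to 1
    return random.gauss(0, 1) * sigma_p * math.sqrt(2 * T_rev / T_ibs * modulation)

# Check: averaged over a gaussian bunch, the kicks reproduce the nominal
# variance growth 2*T_rev/T_ibs*sigma_p**2 per machine turn.
n = 200000
growth = sum(ibs_kick(random.gauss(0, sigma_t)) ** 2 for _ in range(n)) / n
nominal = 2 * T_rev / T_ibs * sigma_p ** 2
```

Particles in the dense bunch core thus receive stronger kicks than those in the sparse tails, as required for a non-gaussian longitudinal profile.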
For the cases studied here, the Piwinski models taking the lattice into account agree well with the Bjorken-Mtingwa method, which is considered more general [35]. The smooth lattice approximation, with the average β-function calculated from the tune, gives significantly shorter rise times, thus overestimating the strength of IBS. The average difference between the modified Piwinski method, taking the lattice into account, and the Bjorken-Mtingwa method is 2.9%, and the maximum difference is 5.1%. Despite some known deficiencies in mad-x [37], the code agrees well with the other methods for the cases studied here.
Because of the better agreement with the other, more general methods, we use the modified Piwinski method for all simulations unless otherwise stated. We take the found discrepancies as a guide to the maximum uncertainty of the rise times used in the simulation. The influence of the uncertainty of the IBS model on the time evolution of the luminosity and bunch population is very small compared to other error sources. A comparison between two of the models is shown in Appendix B.


D. Radiation damping and quantum excitation

Radiation damping and quantum excitation are modeled in each plane u by a deterministic decay, given by the emittance damping time T_rad,u, and a random excitation, as described in Ref. [38]. Since T_rad,u ≫ T_rev, we expand to first order in T_rev/T_rad,u, so that the decay coefficient for one turn becomes exp(−T_rev/T_rad,u) ≈ 1 − T_rev/T_rad,u. The quantum excitation is determined by the usual radiation integrals over the lattice [21] and would lead, in the absence of other dissipative effects, to stationary gaussian distributions with rms sizes σ_eq,u. The corresponding one-turn map for the momentum coordinate p_t is therefore

p_t → p_t (1 − T_rev/T_rad,z) + σ_eq,z r √(2 T_rev/T_rad,z),    (9)

where r is a gaussian random number with zero mean and unit standard deviation. Similar maps are used in the transverse planes.
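A minimal numerical check of a map of this form can be written as follows, with an exaggerated damping decrement so that the equilibrium is reached within a few thousand turns; all numbers are illustrative:

```python
import math
import random

random.seed(4)

# One-turn map of the Eq. (9) form, with an exaggerated damping decrement
# d = T_rev/T_rad (illustrative, far larger than in a real machine).
d = 8e-3          # per-turn damping decrement (assumed for the demonstration)
sigma_eq = 1.0    # equilibrium rms size in normalized units

p = [random.gauss(0, 5.0) for _ in range(1000)]   # start far from equilibrium
for turn in range(2000):
    p = [x * (1 - d) + sigma_eq * random.gauss(0, 1) * math.sqrt(2 * d) for x in p]

# The rms relaxes toward sigma_eq (up to statistical noise): the damping
# removes variance at rate 2d per turn while the excitation adds 2d*sigma_eq**2.
rms = math.sqrt(sum(x * x for x in p) / len(p))
```

The balance between the deterministic decay and the random excitation reproduces the stationary gaussian distribution with rms size σ_eq mentioned above.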
In both RHIC and LHC, the photons emitted by heavy nuclei are very soft [2], so the σ_eq,u are several orders of magnitude smaller than the real beam sizes, and quantum excitation has no significant effect on the dynamics. In RHIC, radiation damping is also negligible, but in the LHC at collision energy it is important, enough to counter IBS [2,6]. The damping times are summarized in Table V.


E. Starting conditions

It is important to choose an appropriate, realistic starting distribution of the particles in the tracking; otherwise the agreement between measurement and simulation is poor. In the transverse plane, the beams are well approximated by gaussian distributions, both in RHIC and (it is expected) in the LHC, and the initial coordinates are generated by the code from the starting emittances. The data from the RHIC stores do not include measured transverse beam sizes, so these have been inferred from the logged initial bunch populations and luminosity using Eq. (6). A strong coupling between the horizontal and vertical planes, keeping the beams round on a short time scale, was assumed. This has been observed empirically in RHIC [12], while for the future ion operation in the LHC, it corresponds to the somewhat idealized situation described in Ref. [39]. We also assumed equal transverse beam sizes in every bunch of the two beams. Although not justified by measurements, this reduces the dimensionality of the problem to a tractable level; there is, in any case, insufficient data to attempt a fit to the small variations in intensity and size of individual bunches. Indeed, we have found that, if different beam sizes are used in the simulation, the luminosity remains approximately unchanged while the bunch currents show small variations.
In the LHC, the bunch length is much shorter than the RF bucket size and an approximation of a gaussian distribution was used. This is not the case in RHIC, as can be seen in Fig. 4, where an example of the bunch current measured at RHIC when the beams were put into collision is shown. To understand the bunch shape, we discuss briefly the synchrotron motion in RHIC.
RHIC uses a double RF system (see Table I), and the longitudinal emittance is comparable to the bucket size of the h = 2520 system. The combined RF voltage from both systems is shown in Fig. 5. We also show its negative time integral, which is proportional to the "potential energy" term in the Hamiltonian of the synchrotron motion. One can thus picture the particles "sliding" on this surface. As a particle in the central bucket continuously receives small momentum kicks from IBS, it oscillates with higher and higher amplitude until it has enough energy to enter the next h = 2520 bucket. When it has a high enough amplitude to leave the h = 360 bucket, which defines the acceptance, the motion becomes unbounded and the particle is considered lost by the simulation. This loss process is called debunching and is very strong in RHIC. For this reason, very significant improvements of the lifetimes and integrated luminosity were possible through the implementation of stochastic cooling [12][13][14].
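The double-RF potential picture can be reproduced in a few lines. The voltage amplitudes below are illustrative, not the actual RHIC settings; only the harmonic ratio h2/h1 = 2520/360 = 7 is taken from the text:

```python
import math

# Double-RF "potential" sketch: amplitudes are illustrative placeholders.
h1, V1 = 360, 0.3     # accelerating system (assumed amplitude)
h2, V2 = 2520, 3.0    # storage system (assumed amplitude)

def voltage(phi):
    """Combined RF voltage as a function of the h1-system phase phi."""
    return V1 * math.sin(phi) + V2 * math.sin(phi * h2 / h1)

# Potential U(phi) = -integral of V over phi, sampled over one h1 period
n = 7001
phis = [-math.pi + 2 * math.pi * k / (n - 1) for k in range(n)]
dphi = phis[1] - phis[0]
U, u = [], 0.0
for phi in phis:
    u -= voltage(phi) * dphi   # simple left-hand Riemann sum
    U.append(u)

# Interior local minima of U: the wells of the h2 system inside the h1 period
minima = sum(1 for i in range(1, n - 1) if U[i] < U[i - 1] and U[i] < U[i + 1])
```

Each local minimum of the potential is an h = 2520 bucket into which a particle can spill as IBS increases its oscillation amplitude.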
The profile in Fig. 4 results from the RF gymnastics performed after the energy ramp, when the storage RF system is turned on. To make the bunches fit in the h = 2520 buckets, they are shortened through a rotation in the longitudinal phase space, but it is inevitable that some particles escape into neighboring buckets [40].
To reproduce the profile in Fig. 4 and keep the phase space consistent with the RF motion, we used as starting conditions coordinates generated by tracking an initially gaussian bunch, located only in the central h = 2520 bucket, under the influence of IBS only. In Fig. 6 we show the longitudinal phase space from this simulation on different turns. Only particles with an amplitude high enough to escape from the central h = 2520 bucket are present in the neighboring buckets; therefore, the centers of these buckets are empty in phase space. This gives a time profile similar to the expected result of the bunch shortening (see Ref. [40]). Fig. 4 shows the resulting bunch current for the population that was used as starting distribution in the RHIC simulations. It corresponds well to the measured profile except around t = ±5 ns, at the centers of the first side buckets (see Fig. 5). The discrepancy means that in the measurement, the phase space distribution is less hollow than in Fig. 6.
Even though it would be possible to recreate the measured time profile exactly, this ad-hoc construction would depend on an arbitrary choice of the energy deviations of the extra particles in the side buckets. We prefer to keep the starting conditions as simple as possible in order to test the predictive power of the code for a large number of different conditions and therefore use the profile shown in Fig. 4.
In Fig. 7 we show an example of the simulated losses caused by IBS and collisions in RHIC. Simulations show that if no particles are present in the outer h = 2520 buckets in the starting population, losses from debunching do not occur from the beginning of the store. With such a longitudinal starting distribution, the initial slope of N i (t) is less steep and the agreement with measurements is significantly worse.
It should be noted that the assumptions discussed in this section fully determine all free parameters in the simulation.

IV. ODE SIMULATION
As an alternative to the relatively slow tracking simulation, we use a faster method that does not follow single particles but instead the rms emittances and bunch populations. This is done by numerically solving a system of coupled ODEs, similar to Refs. [6,9,41]. In all cases treated here, we assume no coherent oscillations of the bunches, so that the first-order moments vanish. We then assume both beams to be gaussian in all degrees of freedom, which is obviously not true for the longitudinal plane in RHIC but is expected to hold in the LHC. Solutions for RHIC are nevertheless presented in Sec. V for comparison.
We assume round beams (ε_xi = ε_yi ≡ ε_xyi for beams i = 1, 2) due to the strong coupling between the horizontal and vertical planes, as discussed in Sec. III E. As in the tracking simulation, we take only collisions, intrabeam scattering, and radiation damping into account. In total, we have six dynamical variables ε_xyi, ε_li, N_i, where ε_li is the longitudinal emittance. The time evolution of the system is given by a set of coupled ODEs (Eq. (10)), where T_IBS,xy and T_IBS,l are the instantaneous IBS rise times, and T_IBS,N and T_L the instantaneous lifetimes from debunching and collisions. Furthermore, T_coll is the emittance rise time caused by core depletion in the collisions [42]. If the transverse distribution is gaussian, the collision probability is much higher for particles in the core of the beam; when these particles are removed, the remaining ones have a larger transverse emittance. Core depletion is included automatically in the tracking through the calculation of individual collision probabilities for every particle but has to be added explicitly in the ODE system. The core depletion effect is discussed in detail in Ref. [42].

(Caption of Fig. 6: Each point represents one particle out of 50000. The center of the bunch is located in the central bucket at t = 0. The longitudinal momentum is defined as p_t = γ − γ_0, where γ_0 is the Lorentz factor of the synchronous particle. The figure shows an idealized situation, where all particles start in the central bucket; reality is different because of the RF gymnastics. The distribution at 67 minutes was used as starting conditions in the longitudinal plane for the full tracking, due to its similarity with the measured bunch profile in Fig. 4. For speed, the smooth lattice approximation was used.)
By adding additional terms to the equations, e.g., as was done for beam-gas scattering in Ref. [6], the method can easily be extended to account for other effects leading to particle loss or emittance growth. To solve the system numerically or, in certain special cases, analytically, we use Mathematica [43]. Our implementation allows the kinetics of unequal beam intensities and emittances to be treated, although we restrict ourselves to equal beams in this paper. When using the method of pre-calculated and interpolated values of T_IBS,l and T_IBS,xy, as explained below, the gain in execution speed when simulating the time evolution in a store is more than a factor of 1000 compared with the tracking. A 10 h store takes on the order of a second to solve on a normal desktop computer.
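A minimal sketch of such an ODE system, integrated with a simple Euler scheme, is given below. The rise times and their scalings with emittance and intensity are invented placeholders (not mad-x values), and plain Python replaces Mathematica here:

```python
# Euler integration of an Eq. (10)-style ODE system. All rates and their
# dependences on the dynamical variables are toy assumptions chosen only
# to produce qualitatively sensible curves.
dt = 10.0                 # time step in s
t_end = 5 * 3600.0        # a 5 h store
eps_xy, eps_l, N = 2.0e-9, 1.0, 1.0e9
eps_xy0, eps_l0, N0 = eps_xy, eps_l, N

T_rad = 12 * 3600.0                              # emittance damping time (assumed)
T0_ibs_xy, T0_ibs_l = 20 * 3600.0, 5 * 3600.0    # initial IBS rise times (assumed)
T_debunch, T_lumi = 60 * 3600.0, 30 * 3600.0     # intensity lifetimes (assumed)

t = 0.0
while t < t_end:
    # toy scaling: IBS rates proportional to N and decreasing with emittance
    r_ibs_xy = (N / N0) * (eps_xy0 / eps_xy) ** 1.5 / T0_ibs_xy
    r_ibs_l = (N / N0) * (eps_l0 / eps_l) ** 1.5 / T0_ibs_l
    eps_xy += dt * eps_xy * (r_ibs_xy - 1.0 / T_rad)     # IBS growth vs. damping
    eps_l += dt * eps_l * (r_ibs_l - 1.0 / T_rad)
    N += dt * (-N / T_debunch - N / T_lumi)              # debunching and collisions
    t += dt
```

With these placeholder numbers, the longitudinal emittance grows toward an IBS-damping equilibrium, the transverse emittance shrinks, and the intensity decays; in the real implementation the rates are recomputed from the interpolated mad-x rise times at every step.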
For convenience, we used mad-x to calculate T_IBS,l and T_IBS,xy. The theoretical framework [44] is an extended version of the Conte-Martini model [33], which also includes vertical dispersion. Gaussian beam distributions are assumed in all planes. The influence of the IBS model on the simulation result is discussed in Appendix B, where we make a comparison between mad-x and the modified Piwinski model. The evaluation of T_IBS,l and T_IBS,xy for just one set of values N_i(t), ε_xyi(t), ε_li(t) requires a significant amount of computer time, so it is impractical to do this at runtime.
We therefore use pre-calculated values of the rise times, evaluated on a grid of points in the relevant region of (ε_xy, ε_l). Since T_IBS ∝ 1/N_i, it is only necessary to evaluate the rise times for one typical value of N_i and then scale. The result is a smooth surface, which can be interpolated with cubic polynomials in the two emittances. This initial step, which has to be done for each optical configuration and beam energy under consideration, takes less than an hour of computer time on a normal desktop machine. In the Mathematica framework, T_IBS,xy and T_IBS,l are replaced by rapidly evaluated interpolating functions with no loss of precision. This is the key step that allows a fast interactive analysis of many cases of interest. Since we assume round beams, we use 1/T_IBS,xy = (1/T_IBS,x + 1/T_IBS,y)/2 (above transition in a flat accelerator lattice, the uncoupled IBS calculation gives a large positive growth rate in the horizontal plane and a small damping rate in the vertical). The calculated rise times from mad-x for the initial distributions are shown in Table IV.
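The pre-computed grid with 1/N scaling can be sketched as follows; bilinear interpolation stands in for the cubic interpolation of the text, and the table values are placeholders for mad-x output:

```python
import bisect

# Pre-computed rise-time grid with 1/N scaling. The grid values are
# placeholders standing in for mad-x output; queries are assumed to lie
# inside the grid.
eps_xy_grid = [1.0, 2.0, 3.0, 4.0]   # transverse emittance axis (arbitrary units)
eps_l_grid = [0.5, 1.0, 1.5]         # longitudinal emittance axis
N_ref = 1.0e9                        # reference intensity used for the grid

# placeholder table of T_IBS(eps_xy, eps_l) evaluated at N_ref
table = [[2.0 * ex * el for el in eps_l_grid] for ex in eps_xy_grid]

def t_ibs(eps_xy, eps_l, N):
    """Bilinear interpolation on the grid, then scaling by N_ref/N (T ∝ 1/N)."""
    i = min(bisect.bisect_right(eps_xy_grid, eps_xy), len(eps_xy_grid) - 1) - 1
    j = min(bisect.bisect_right(eps_l_grid, eps_l), len(eps_l_grid) - 1) - 1
    fx = (eps_xy - eps_xy_grid[i]) / (eps_xy_grid[i + 1] - eps_xy_grid[i])
    fy = (eps_l - eps_l_grid[j]) / (eps_l_grid[j + 1] - eps_l_grid[j])
    v = ((1 - fx) * (1 - fy) * table[i][j] + fx * (1 - fy) * table[i + 1][j]
         + (1 - fx) * fy * table[i][j + 1] + fx * fy * table[i + 1][j + 1])
    return v * N_ref / N
```

Evaluating the interpolant costs a few arithmetic operations per call, which is what makes the runtime of the ODE integration negligible compared with a full IBS evaluation at every step.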
The debunching effect is in principle similar to Touschek scattering, with the difference that in the standard formulas for the Touschek effect [45] all scattering events leading to a loss are assumed to occur with large momentum transfers in the center of the bucket. The debunching losses are however diffusive in nature, since most of the lost particles were already very close to the separatrix before the scattering. To model this we use a method developed by Piwinski [46], as follows.
If we assume that the beam distributions change slowly, we can approximate them as being in a steady state during each time step of the numerical integration. The instantaneous lifetime arising when the longitudinal aperture cuts the tails of the gaussian distribution is then obtained in analogy with Ref. [46]. Here δ_max is the maximum allowed fractional energy deviation ΔE/E_0 (E_0 being the reference energy), and the standard deviation of ΔE/E_0 is related to the bunch length by

σ_{ΔE/E0} = σ_t Ω_S / |η_c|,    (12)

with Ω_S being the angular synchrotron frequency and η_c the frequency slip factor. Eq. (12) is derived for small oscillations, which limits the accuracy of the model. It should be noted that a similar effect exists in the transverse planes due to the cutoff at the physical and dynamic aperture; in the ideal case considered here, without significant impact of resonances and non-linearities, this effect is negligible.

We determine the instantaneous partial lifetime due to the collisions, T_L, from the total number of particles removed from each beam per unit time (Eq. (13)). Here L is given by Eq. (6) and σ by Table III. The reduction factor is calculated from Eq. (5), assuming gaussian distributions in the longitudinal plane. Finally, the emittance rise time T_coll for beam i due to core depletion has been calculated in Ref. [42] (Eq. (14)); here i = 1, 2 and j = 2 for i = 1 and vice versa.

V. COMPARISON BETWEEN SIMULATIONS AND MEASUREMENTS IN RHIC
As an example of the detailed time evolution in a store, Fig. 8 shows the measured and simulated luminosity and bunch populations for store 8908 (parameters are given in Table VI). This store was above average in luminosity performance but not exceptionally good. Results from both simulation methods (tracking and ODEs) are shown. A very good agreement is obtained with the tracking method. Using ODEs, the agreement with data is significantly worse; this is clearly because of the limitations in the debunching model and the assumption of gaussian longitudinal profiles. In the remainder of this section, we therefore focus on the tracking method only.

TABLE VI. Starting parameters for the example store 8908. The transverse starting emittance was not measured but inferred from the measured luminosity, bunch populations, and hourglass factor using Eq. (A9). It was assumed to be equal for both beams and planes.
Parameter                       Unit                Value
N1 (Blue)                       10^9 particles      1.056
N2 (Yellow)                     10^9 particles      1.004
L                               10^27 cm^-2 s^-1    3.0
Transverse geom. rms emittance  mm mrad             2.34
FWHM of bunch length            ns                  2.6

In order to have more statistics, we simulated, using the tracking method, 139 stores without stochastic cooling, with varying bunch populations and luminosity, from RHIC Run-7. We use the fits to the measurements, Eqs. (1) and (2), for the comparisons, as they are rather accurate during the first 3 hours. We used the same bunch current profile, shown in Fig. 4, for all stores; this introduces an error but allows us to better test the predictive ability of the code.

To measure the goodness of the tracking simulation, we use two benchmark parameters, ξ and ψ, defined as the average instantaneous ratio and the ratio of the integrals, respectively, between a simulated and measured quantity U over the first three hours:

ξ = (1/t_max) ∫_0^{t_max} [U_sim(t)/U_fit(t)] dt,    ψ = ∫_0^{t_max} U_sim(t) dt / ∫_0^{t_max} U_fit(t) dt,    (15)

with t_max = 3 h. Here U is one of the quantities L, N_1, N_2. Histograms of ξ and ψ for the 139 stores are shown in Fig. 9. We have deselected 9 stores, where the automatically calculated fit parameters were unrealistic or the beam current or luminosity dropped too rapidly to possibly be accounted for by collisions and IBS. The mean values and standard deviations of ξ and ψ are shown in Table VII. An excellent agreement is found for the bunch populations, while the integrated luminosity is on average overestimated by 13%. Given the realities of practical machine operation, we still consider this a good agreement.

Several possible sources of the discrepancies between measured and simulated luminosity exist. Apart from the uncertainty resulting from the approximations in the simulation model, and the use of the same longitudinal initial conditions in all stores, variations of the machine optics, in particular β*_xy, between different stores could give rise to errors. The logged data show that in some stores, the luminosity varies between the IPs at STAR and PHENIX, although they have nominally the same β*_xy.
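Written out for discretely sampled data, the two benchmark quantities take the following form; the "simulated" and "measured" series below are invented so that the expected result is known:

```python
import math

# Benchmark ratios for sampled data. The "measured" series is an invented
# exponential stand-in for the fitted data, and the "simulated" one is a
# uniform 10% overestimate, so both benchmarks should come out as 1.1.
t_max = 3.0                        # hours
n = 300
dt = t_max / n
times = [dt * (k + 0.5) for k in range(n)]

U_meas = [3.0 * math.exp(-t / 2.0) for t in times]   # stand-in for the fit
U_sim = [1.1 * u for u in U_meas]                    # 10% overestimate

# xi: time average of the instantaneous ratio U_sim/U_meas
xi = sum(s / m for s, m in zip(U_sim, U_meas)) / n
# psi: ratio of the time integrals of U_sim and U_meas
psi = (sum(U_sim) * dt) / (sum(U_meas) * dt)
```

The two measures differ in how they weight the store: ξ weights all times equally, while ψ weights times by the magnitude of U, i.e., mostly the beginning of the store.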
The interaction cross sections have uncertainties of the order of a few tens of percent. Decreasing σ to 180 barn in the simulation changes the luminosity by 4% after 3 h. The cross section for mutual EMD, which was used to convert the event rate to measured luminosity, has an uncertainty of at least about 5% [15]. The uncertainties in the cross sections could therefore explain a significant part of the overestimation of the luminosity in the simulation.
Calculating IBS rise times in the smooth lattice approximation, which we consider a less accurate model that in this case overestimates the influence of IBS, improves the agreement with the measured luminosity (ψ = 1.10) while keeping the excellent agreement with the measured intensities (ξ ≈ 1). This does not necessarily mean that the more detailed IBS models underestimate the true growth rates. It could also be that some other process, not included in the simulation, causes additional beam losses. Such unaccounted-for processes include orbit variations, nonlinearities, RF noise, ground motion, triplet quadrupole errors, current noise, and the effect of dynamic aperture. These processes cannot easily be included in the simulation, since their strengths are not well known.

VI. PREDICTIONS FOR THE LHC
As the overall agreement with data from RHIC is very good, we use the tracking code to make predictions for the LHC, both at injection and collision energy. Run parameters are presented in Table I. There are important differences with respect to RHIC. The LHC uses a single RF system and bunches are expected to have a gaussian distribution in the longitudinal plane, meaning that the ODE method also has the potential to work well.
In Fig. 10 we show simulations of the time evolution at the injection plateau, where bunches are kept circulating without colliding while the ring is filled. This is expected to take about 10 minutes per ring but, to cover cases where there may be some delay in starting the ramp, we have simulated 1 h. IBS rise times are longer than in RHIC, especially in the transverse plane (see Table IV). The transverse emittance is expected to be 2% larger after 20 minutes and 7% larger after 1 h. The simulated bunch current is 1.4% lower after 20 min and 5.2% after 1 h because of debunching losses. The profile stays approximately gaussian.
Simulations of the time evolution at collision energy using ODEs were done in Ref. [6]. Here we redo the calculations, including also core depletion, and compare with tracking for 1-3 active IPs (the three experiments ALICE, ATLAS, and CMS are expected to study heavy-ion collisions). The results are presented in Fig. 11. At collision energy the dynamics in the LHC changes significantly, since radiation damping (see Table V) becomes important and compensates the emittance blowup from IBS. As particles are removed from the beams by collisions, IBS becomes weaker while the damping stays constant. Therefore, the emittance shrinks during the store. The agreement between the two simulation methods is excellent. Losses from debunching are predicted to be negligible in the conditions simulated for the LHC; beam losses from collisions are by far the dominant effect.
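The competition between burn-off, IBS, and radiation damping can be illustrated with a deliberately simplified, dimensionless ODE model. This is a sketch under assumed toy parameters: the rate constants, the forward-Euler stepping, and the round-beam luminosity scaling L ∝ N²/ε below are our own illustrative assumptions, not the parameters of the actual simulations:

```python
def evolve_store(n0=1.0, eps0=1.0, t_end=10.0, dt=0.001,
                 k_ip=3, lam=0.02, r_ibs=0.10, r_rad=0.15):
    """Toy dimensionless model of a store at collision energy.

    n   : bunch population (in units of its initial value)
    eps : transverse emittance (in units of its initial value)

    dn/dt   = -k_ip * lam * n**2 / eps   # burn-off in k_ip IPs, L ~ n^2/eps
    deps/dt = eps * (r_ibs * n - r_rad)  # IBS growth (weakens as n drops)
                                         # vs. constant radiation damping
    All rate constants are illustrative; integrated with forward Euler."""
    n, eps = n0, eps0
    for _ in range(int(t_end / dt)):
        dn = -k_ip * lam * n * n / eps
        deps = eps * (r_ibs * n - r_rad)
        n += dn * dt
        eps += deps * dt
    return n, eps
```

With the damping rate chosen larger than the initial IBS growth rate, the emittance decreases monotonically while the intensity burns off, qualitatively mimicking the collision-energy behavior described above.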

VII. CONCLUSIONS AND OUTLOOK
We have presented measurements of the time evolution of the luminosity and bunch intensities in both beams during RHIC Run-7 and compared them with two simulation methods: tracking and ODEs. Tracking simulations of 139 stores without stochastic cooling show that the parameter ξ, defined as the average ratio between a simulated and measured quantity, is close to 1 with a standard deviation of only 3% for the bunch populations. An average overestimate of the measured integrated luminosity by around 13% was found. Possible sources of the discrepancies include the use of the same longitudinal starting profile in all stores, variations in β*_xy, the uncertainty of the cross section for 2-neutron electromagnetic dissociation (used to convert the measured luminosity), and the neglect of effects such as RF noise and dynamic aperture.
The ODE simulations show significant discrepancies with measurements in RHIC, which come from the gaussian approximation of the longitudinal bunch profile and the limitations in the debunching model. Including nongaussian profiles in this method is not possible in the present framework.
We have also made predictions of the time evolution in the LHC, where gaussian bunches are expected and the two simulation methods are in very good agreement. In this case, the ODE method increases the execution speed by a factor of 1000. The dynamics in the LHC is, however, significantly different, since radiation damping is a stronger effect than IBS. This gives rise to a shrinking emittance at collision energy (although this benefit will vanish rapidly if the LHC is operated at less than full energy). Consequently, debunching is not an issue in the LHC, and the collisions are the main source of particle losses.
Both simulation methods are suitable as testing tools to optimize the luminosity. If stochastic cooling were to be included, the tracking could be used to make precise predictions of the improvement in luminosity due to stochastic cooling in one or both beams in RHIC.
In the most general collision routine in the tracking code, ρ_2 is determined by sorting the particles in bunch 2 into discrete bins along the directions (x, y, z_2). We assume that the transverse distributions at s = τ = 0 are independent, writing them as ρ*_2x(x) ρ*_2y(y), and that the longitudinal density ρ_2z is independent of x and y. The transverse binnings are performed at s = τ = 0.
However, we must take into account that the transverse distributions change along s with the optical function β_xy(s), which we assume to be equal in both planes in the proximity of the IP. In the drift space around the IP, β_xy(s) = β*_xy (1 + s²/β*_xy²), where β*_xy is the minimum at the IP. At a given s close to the IP, the transverse distributions are wider by a factor that we call κ(s), defined as

κ(s) = sqrt( β_xy(s)/β*_xy ) = sqrt( 1 + s²/β*_xy² ).

Inserting ρ_p1 into Eq. (A1), multiplying by σ, and integrating gives P_1 for a particle with given coordinates (x_1, x′_1, y_1, y′_1, z_1):

P_1 = σ ∫ ρ*_2x( (x_1 + x′_1 s)/κ(s) ) ρ*_2y( (y_1 + y′_1 s)/κ(s) ) ρ_2z(2s − z_1) / κ(s)² ds.    (A5)

In the code, the integral in Eq. (A5) is replaced by a sum over all bins that a specific particle passes through.
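A minimal, longitudinal-only sketch of this binning is given below. It is a hypothetical simplification of the actual routine, which bins in x and y as well: ρ_2z is estimated by a histogram of bunch-2 particle coordinates, and the per-particle integral ∫ ρ_2z(2s − z_1)/κ(s)² ds (the hourglass integral appearing in Eq. (A8)) is replaced by a sum over bins:

```python
import math

def hourglass_integral_binned(z1, samples, beta_star, n_bins=400,
                              lo=-6.0, hi=6.0):
    """Approximate the per-particle integral
        I(z1) = integral of rho_2z(2s - z1) / kappa(s)^2 ds
    with rho_2z estimated by binning bunch-2 particle coordinates.
    Longitudinal-only simplification of the full (x, y, z_2) binning."""
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for z in samples:
        if lo <= z < hi:
            i = min(int((z - lo) / width), n_bins - 1)
            counts[i] += 1
    norm = len(samples) * width          # histogram -> normalized density
    total = 0.0
    for i in range(n_bins):
        z2 = lo + (i + 0.5) * width      # bin center in z2
        s = 0.5 * (z1 + z2)              # collision point: z2 = 2s - z1
        kappa_sq = 1.0 + (s / beta_star) ** 2
        # substitute z2 = 2s - z1  =>  ds = dz2 / 2
        total += (counts[i] / norm) * (width / 2.0) / kappa_sq
    return total
```

For β*_xy much larger than the bunch length, κ(s) ≈ 1 over the overlap region and the sum approaches 1/2, the value of ∫ ρ_2z(2s − z_1) ds for a normalized density; a finite β*_xy reduces it, which is the hourglass effect.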
This method is, however, slow. A faster code is obtained by assuming gaussian distributions in x and y. Eq. (A5) then becomes

P_1 = σ/(2π σ_2x σ_2y) ∫ exp( −(x_1 + x′_1 s)²/(2σ_2x² κ(s)²) ) exp( −(y_1 + y′_1 s)²/(2σ_2y² κ(s)²) ) ρ_2z(2s − z_1) / κ(s)² ds.    (A6)

Here the standard deviation of coordinate u in bunch 2 at s = 0 is given by σ_2u = sqrt(β*_xy ε_2u). We then change to action-angle variables (J_x, φ_x) using Eq. (3) and average P_1 over φ_x. The x_1-dependent exponential in Eq. (A6) becomes

⟨ exp( −x(s)²/(2σ_2x² κ(s)²) ) ⟩_φx = exp( −J_x/(2ε_2x) ) I_0( J_x/(2ε_2x) ),    (A7)

where I_0 is the modified Bessel function of the first kind, and analogously for y, so that

P_1(J_x, J_y, z_1) = [σ/(2π β*_xy sqrt(ε_2x ε_2y))] e^{−J_x/(2ε_2x)} I_0( J_x/(2ε_2x) ) e^{−J_y/(2ε_2y)} I_0( J_y/(2ε_2y) ) ∫ ρ_2z(2s − z_1)/κ(s)² ds.    (A8)

Eq. (A8) is the exact phase-averaged collision probability for a particle in bunch 1 with coordinates (J_x, J_y, z_1). We note that the integral represents the hourglass factor for a particle at a given z_1. Furthermore, we approximate the hourglass factor to be equal for all particles by averaging over all z_1 values. Changing the integration variable from s to z_2 = 2s − z_1 for ease of implementation, the global hourglass factor R_h is

R_h = ∫∫ ρ_1z(z_1) ρ_2z(z_2) / κ( (z_1 + z_2)/2 )² dz_1 dz_2.    (A9)

It can be shown that, if ρ_1z and ρ_2z are gaussian, Eq. (A9) is equivalent to the well-known formulas in Ref. [48]. With this approximation, P_1 becomes

P_1 = [σ R_h/(4π β*_xy sqrt(ε_2x ε_2y))] e^{−J_x/(2ε_2x)} I_0( J_x/(2ε_2x) ) e^{−J_y/(2ε_2y)} I_0( J_y/(2ε_2y) ).    (A10)

Simulations show that the errors introduced for single particles by using a global R_h are insignificant when considering the whole bunch. For the cases considered in this article, only negligible changes in the simulated time evolution of the luminosity and bunch intensity appear if Eq. (A10) is used instead of Eq. (A5), while the execution time drops by a factor of 15. Therefore, we use only the faster method for the results in this article.
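For two equal gaussian bunches of rms length σ_z, the numerical evaluation of Eq. (A9) can be cross-checked against the standard closed form R_h = √π u e^{u²} erfc(u) with u = β*_xy/σ_z. The sketch below compares the two; the reduction to a single integral (for gaussians, s = (z_1 + z_2)/2 is itself gaussian with rms σ_z/√2) and the trapezoidal quadrature are our own implementation choices:

```python
import math

def hourglass_numeric(beta_star, sigma_z, n=20001, cut=8.0):
    """Evaluate R_h of Eq. (A9) for two equal gaussian bunches of rms
    length sigma_z. The double integral reduces to one dimension:
        R_h = integral of g(s) / (1 + s^2/beta_star^2) ds,
    with g gaussian of rms sigma_z/sqrt(2); trapezoidal rule over
    +-cut standard deviations."""
    sig_s = sigma_z / math.sqrt(2.0)
    a, b = -cut * sig_s, cut * sig_s
    h = (b - a) / (n - 1)
    total = 0.0
    for i in range(n):
        s = a + i * h
        g = math.exp(-0.5 * (s / sig_s) ** 2) / (math.sqrt(2 * math.pi) * sig_s)
        w = 0.5 if i in (0, n - 1) else 1.0   # trapezoid end weights
        total += w * g / (1.0 + (s / beta_star) ** 2) * h
    return total

def hourglass_closed_form(beta_star, sigma_z):
    """Closed form for equal gaussian bunches:
    sqrt(pi) * u * exp(u^2) * erfc(u), with u = beta_star / sigma_z."""
    u = beta_star / sigma_z
    return math.sqrt(math.pi) * u * math.exp(u * u) * math.erfc(u)
```

As expected, R_h approaches 1 when β*_xy is much larger than the bunch length, and drops below 1 as the β-function varies appreciably over the luminous region.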
If the bunches collide with a small crossing angle 2θ, we replace R_h by a more general luminosity reduction factor R_tot, which accounts for both the crossing angle and the hourglass effect. Such a factor is derived in Ref. [49] for equal gaussian beams by integration of Eq. (A1). Our calculation is completely analogous, with the generalization that we assume unequal transverse beam sizes and unknown longitudinal distributions. The integrations over (y, τ) can be carried out directly, yielding Eq. (6) with R_tot given by Eq. (5).

[Figure caption fragment: Parameters as in Table I. The subscript m indicates that MAD-X was used for the IBS calculations, while p stands for the modified Piwinski model. The emittance shows the largest deviations (around 1% after 10 h), which is small compared to the overall error. The IBS rise times as a function of transverse emittance are shown for both models in Fig. 14.]