Comparison with experimental data of different theoretical approaches to high-energy electron bremsstrahlung including quantum coherence effects

The basic expressions for the differential nuclear bremsstrahlung cross section at high electron energy, as derived under different theoretical approaches and approximations to quantum coherence effects, are compared. The Baier-Katkov treatment is reformulated to allow introduction of the same value of the radiation length in all calculations. A dedicated Monte Carlo code is employed for obtaining photon energy spectra in the framework of the Baier-Katkov approach taking into account multiphoton emission, attenuation by pair production, and pile-up with photons from the background. The results of Monte Carlo simulations for both the Migdal and Baier-Katkov descriptions are compared to all available data that show the Landau-Pomeranchuk-Migdal suppression. The issue of the sensitivity of the experiments to the difference of the two approaches is investigated.


I. INTRODUCTION
Ever since the pioneering papers by Sauter [1], Bethe and Heitler [2, 3], and Racah [4], the bremsstrahlung process undergone by electrons and positrons in crossing matter has attracted the interest of theorists and experimentalists. As a matter of fact, a huge range of disciplines directly or indirectly benefits from any progress in this field, from elementary particle physics to cosmic-ray studies, which need accurate simulations of shower development. At the highest energies, one of the most important steps forward in the theory was the recognition by Landau and Pomeranchuk [5,6] of the possible suppression of the intensity of the bremsstrahlung radiation due to a reduction of the coherence length caused by the multiple scattering suffered by the radiating particle. These authors used purely classical arguments, while Migdal [7] developed a fully quantum-mechanical theory of the bremsstrahlung process including the mentioned suppression. This effect, named the LPM effect after its discoverers (Landau-Pomeranchuk-Migdal), becomes progressively more important as the energy of the electron/positron increases, since the suppression extends up to a larger fraction of the total radiated spectrum [8]. Early review papers further clarifying the LPM effect were written by Feinberg and Pomeranchuk [9] and Galitsky and Gurevich [10], while more recent surveys are due to Klein [8] and Baier and Katkov [11]. Although the LPM effect was postulated for QCD [12][13][14][15][16] and weak interactions [17], direct experimental evidence under controlled conditions was gathered only in the QED case [18][19][20][21][22].
Conceptually, the LPM effect is interesting because it goes beyond the standard perturbative formulation of quantum field theory describing the point interaction of free particles: the photon emission requires a formation length, which can also be interpreted as a coherence length, to be fully emitted and to behave as a particle independent of the radiating electron. Any disturbance during this phase reduces the coherence length, thereby leading to a suppression. In the QED case, the electromagnetic interaction has an infinite range, and this gives rise to a nonperturbative effect also lying outside the standard perturbative formulation: the Coulomb correction. Bethe and Maximon discovered that such nonperturbative corrections can be treated, with appropriate wave functions, to all orders in αZ in the high-energy limit [23]. Data with systematic uncertainties low enough to validate this conclusion were only recently obtained [24]. The original quantum treatment of the LPM suppression by Migdal [7] did not include the Coulomb correction, which has since been taken into account semiempirically by expressing his cross section in terms of the radiation length and then adding the Coulomb correction to the latter [18]. Needless to say, without this rescaling, Migdal's theory cannot reproduce the data for high-Z elements. A complete treatment of the LPM effect directly embodying the Coulomb correction was developed by Baier and Katkov [11]. Recent work has also been published improving the original approach by Migdal and essentially justifying the renormalization procedure through the radiation length [25]. However, the comparison with data of these two approaches (the one by Migdal, with rescaling, and the one by Baier and Katkov, which deals directly with the Coulomb correction) has so far been performed rather unsystematically.
The Migdal treatment was implemented in several Monte Carlo codes, making it possible to take accurately into account the emission of multiple photons by the same electron crossing the target and the absorption of the photons by pair production. Moreover, in some experimental conditions, it is necessary to consider the pile-up of photons from the target with photons from the background, which reach the calorimeter simultaneously. In the case of the Baier-Katkov approach, only multiphoton emission was taken into account, by an approximate analytic treatment [11], while the other effects were never accounted for and are, altogether, actually larger than the difference between the theories. The first Monte Carlo implementation of the Baier-Katkov description is performed in the present work, allowing a comparison with measurements on the same footing as was done for the Migdal one. Moreover, again for the first time, the Baier-Katkov formulae are rewritten in a form that allows one to use the same value of the radiation length introduced in the Migdal ones. Only then is it possible to find out whether the data have enough sensitivity to show a clear preference. We cover all available good-quality accelerator experiments on amorphous targets: SLAC E-146 [18][19][20], CERN LPM [21,22], and CERN LOW-Z [26].
From the point of view of applications, it should be stressed that, to the best of our knowledge, all publicly available Monte Carlo codes for particle physics (GEANT4 [27], EGS5 [28], and EPICS [29]) or cosmic rays (AIRES [30], CORSIKA [31], and COSMOS [32]) implement the Migdal cross sections. The highest-energy single-photon spectra measured under controlled laboratory conditions were collected by the LHCf collaboration [33], reaching up to ≈ 3 TeV. The authors found discrepancies in the shower profile between experiment and EPICS simulations, especially in the initial part of the shower, where the LPM suppression is strong. They attribute these discrepancies to problems in the channel-to-channel calibration or in the description of the LPM suppression (based, as mentioned, on the Migdal approach). Important motivations to improve as far as possible the basic electromagnetic cross sections used in simulations under strong LPM suppression also derive from the recent opening of PeV gamma-ray astronomy [34,35] and from the present differences between experiments in the measured electron and positron primary cosmic-ray fluxes in the TeV region [36,37].
The paper is structured as follows. The available theoretical descriptions of the LPM suppression are presented in Sec. II, with particular emphasis on the Migdal (see Sec. II C) and Baier-Katkov (see Sec. II D) ones. The reformulation of the Baier-Katkov approach to use a given radiation length is described in Sec. II D 3. A first simplistic comparison between the theories, without the Monte Carlo, is discussed in Sec. III. Then the need for Monte Carlo simulations is fully motivated in Sec. IV by showing that the multiphoton effect (see Sec. IV A) and the pile-up with the background (see Sec. IV C) are more important than the difference between the approaches. The implementation of the Monte Carlo code is briefly summarized in Sec. V, while most of the technical details are reported in Appendixes A, B, C, and D. The comparison with the data is presented in Sec. VI and, finally, the conclusion is offered in Sec. VII.

II. THE BASIC FORMULAE FOR THE DIFFERENT THEORETICAL APPROACHES
The most important expressions that different authors proposed over the years to account for bremsstrahlung at high energies are described in the following, with particular focus on those that take into account quantum coherence effects. The basic quantity is the probability per unit length dp_γ/dx for an ultrarelativistic electron of total energy E to emit a photon of energy k, differential in the fractional photon energy x = k/E, while crossing matter. For a thin amorphous and homogeneous target of number density n this can be related to the usual differential cross section per atom by the expression

dp_γ/dx = n dσ/dx.   (1)

The expressions may not coincide with those in the original papers and are reported in the precise form utilized in our calculations, since they have been "normalized" (as far as possible) to a common (modern) radiation length as given in Tsai [38,39], in order to allow a meaningful detailed comparison. The reader interested in the historical development of the theory can find more detailed information in the review papers quoted above.
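As a minimal numerical illustration of Eq. (1) (the number density and cross-section values below are round numbers chosen purely for the example, not values taken from this work):

```python
# Illustration of Eq. (1): dp_gamma/dx = n * dsigma/dx.
def probability_per_unit_length(n_cm3: float, dsigma_dx_cm2: float) -> float:
    """Photon-emission probability per unit path length (1/cm),
    differential in the fractional photon energy x = k/E."""
    return n_cm3 * dsigma_dx_cm2

# Hypothetical inputs: n ~ 3.3e22 atoms/cm^3 (order of magnitude for a heavy
# metal) and dsigma/dx = 1 barn = 1e-24 cm^2 at some fixed x.
dp_dx = probability_per_unit_length(3.3e22, 1.0e-24)
print(dp_dx)  # ~0.033 photons per cm per unit x
```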

A. Bethe-Heitler
As the simplest expression for the radiation probability in the collision of a high-energy electron with a single atom we will adopt the one appearing as Eq. (3.84) in the review paper by Tsai [38] (corresponding to Eq. (11) by Klein [8]),

dp^BH_γ/dx = [x² + 2 (1 + (1 − x)²)] / (3 X₀ x),   (2)

valid in the so-called "complete screening limit". In the following, we refer to this result as the Bethe-Heitler (BH) expression. For the radiation length X₀ we assume the numerical values tabulated in Ref. [38] according to the definition

1/X₀ = 4 α r_e² n { Z² [φ₁/4 − (ln Z)/3 − f(αZ)] + Z [ψ₁/4 − (2/3) ln Z] },   (3)

where α is the fine-structure constant, r_e the classical electron radius, and Z the atomic number of the material. The function f(αZ) describes the Coulomb correction, which is due to the distortion suffered by the impinging plane wave of the electron as it approaches the atomic nucleus [23], while φ₁ and ψ₁ are Z-independent quantities characterizing the amount of screening of the electric fields of the nucleus and of the atomic electrons, respectively. Their detailed expressions, calculated using the Molière representation of the Thomas-Fermi model, are also given in Ref. [38]. In particular, in the case of complete screening, φ₁/4 = 5.216 ≃ ln(184) ≡ ln(B) [40]. For later use we also define a "simplified" radiation length X^c₀, which includes neither the Coulomb correction f(Zα) nor the inelastic-scattering term (proportional to Z), namely

1/X^c₀ = 4 α r_e² n Z² [φ₁/4 − (ln Z)/3].   (4)

Equation (2) constitutes a very useful reference, commonly adopted to characterize quantitatively the amount of radiation suppression occurring when the influence of the medium on the bremsstrahlung process is taken into account [41].
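The defining property of the radiation length can be checked directly on Eq. (2): the mean fractional energy loss per unit length, ∫₀¹ x (dp/dx) dx, must equal 1/X₀. A short numerical sketch (the X₀ value for lead is a tabulated number used here only as an illustrative input):

```python
import numpy as np

def dpdx_bh(x, X0):
    """Complete-screening Bethe-Heitler probability per unit length, Eq. (2):
    dp/dx = [x^2 + 2(1 + (1-x)^2)] / (3 X0 x)."""
    return (x**2 + 2.0 * (1.0 + (1.0 - x)**2)) / (3.0 * X0 * x)

X0 = 0.5612                       # cm, radiation length of lead (illustrative)
x = np.linspace(1e-6, 1.0, 200001)
f = x * dpdx_bh(x, X0)            # integrand of the energy-loss moment
loss = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))  # trapezoidal rule
print(loss * X0)                  # ~1.0: mean energy loss per unit length = 1/X0
```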
To arrive at Eq. (2), it is necessary to neglect a term that does not scale with X₀ [compare Eq. (3.83) with Eq. (3.84) in Ref. [38]], amounting to a maximum reduction of the cross section at x ≈ 0 of between ≈ 1.6% at low Z and ≈ 2.4% at high Z. Moreover, the complete-screening condition, implying that the atomic elastic and inelastic form factors are evaluated in the low-momentum-transfer limit, is applied for all values of x. While this is certainly correct in the high-energy limit for E and at small x, it is bound to fail when x ≈ 1 (i.e. at the high-energy end of the spectrum, denoted as the short-wavelength limit (SWL) in the rest of this work), where the momentum transfer is not small. It is then interesting to test Eq. (2) against a more complete calculation, including the Coulomb correction at the leading order [23,44,45] with the Furry-Sommerfeld-Maue wave functions [46,47] and screening with the Olsen-Maximon-Wergeland [48] additivity rule. Such an approach was recently validated against accurate experimental data measured for 500-MeV electrons at MAMI in Ref. [24] and against exact partial-wave calculations at low energies [49]. The comparison is shown in Fig. 1, adopting the realistic Hartree-Fock-Slater atomic form factors by Hubbell et al. [42,43]. When x is small, Eq. (2) is indeed accurate as expected, and any limitation due to the Molière approximation of the form factors or to the exclusion of the term not proportional to X₀ is hardly visible. However, for E = 500 MeV the discrepancy increases for x ≳ 0.5, reaching ≈ 10% for x ≈ 0.9. It grows further for x > 0.9. For E = 2 GeV, the situation is better and at x ≈ 0.9 the discrepancy is about half the value quoted for E = 500 MeV.

B. The formation length
The BH expression for the probability of emission of a quantum of radiation of energy k refers to the interaction of a high-energy electron with a single atom and as such does not take into account the effect of the environment (typically represented by the presence of nearby atoms in a target bombarded by an electron beam) on the radiative process.

[FIG. 1: comparison with Eq. (2). The power spectrum k dσ/dk is actually shown, to cancel the BH divergence at low values of k. The SM-LO results take into account the screening correction with the Olsen-Maximon-Wergeland additivity rule and the atomic form factors by Hubbell et al. [42,43].]

The idea of a formation (or coherence) length of the photon, which was first conceived by Ter-Mikaelyan in the early fifties of the past century [50], proved to be of crucial importance in providing physical
insight in the description of high-energy electromagnetic processes and in estimating the effect of the different influences of the environment or medium on the radiated energy spectrum. The concept was described at length in the old and recent literature (see e.g. Refs. [8-10, 41, 51]). In its simplest form, which considers an electron with a high energy E colliding with a single atom and neglects the small scattering angle of the order of mc²/E = 1/γ, it is based on the recognition that, for γ ≫ 1, the minimum value of the uncertainty of the component of the momentum delivered to the atom, parallel to the momentum of the impinging electron, is given approximately by the expression [52]

q_min ≃ k (mc²)² / [2 c E (E − k)].   (5)

It follows, from the Heisenberg uncertainty relation, that the actual longitudinal dimension of the region in which the photon is created is of the order of

l_f0 = ħ/q_min = 2 ħ c E (E − k) / [(mc²)² k].   (6)

This is usually called the "vacuum formation length" and is thought of as the distance over which the coherence of the emitted wavelets is maintained. It can be shown (see Ref. [10]) that the radiation intensity (as well as the cross section per atom differential in the photon energy) emitted in the whole solid angle, for any given k and E, is proportional to the formation length, meaning that any influence of the medium in altering this length will reflect as a factor on the radiated intensity. Limiting the analysis to radiative electromagnetic processes occurring in amorphous media, it turns out that the presence of the medium inevitably results in a shortening of the formation length compared to l_f0. Strictly speaking, this only applies to the nuclear bremsstrahlung process, namely the one having a Z² dependence, while the possible suppression of the bremsstrahlung involving atomic electrons is still an open question, perhaps to be investigated by studying bremsstrahlung on low-Z elements.
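For orientation, the vacuum formation length can be evaluated numerically from the standard expression l_f0 = 2ħcE(E−k)/[(mc²)²k]; the beam and photon energies below are illustrative choices, not tied to a specific experiment:

```python
HBARC = 1.9733e-11   # MeV*cm
ME = 0.511           # MeV, electron rest energy m c^2

def l_f0_cm(E_MeV, k_MeV):
    """Vacuum formation length: l_f0 = 2 hbar c E (E - k) / ((m c^2)^2 k)."""
    return 2.0 * HBARC * E_MeV * (E_MeV - k_MeV) / (ME**2 * k_MeV)

# A 25-GeV electron emitting a 125-MeV photon: l_f0 is of the order of
# micrometers, i.e. enormous compared to atomic dimensions.
print(l_f0_cm(25e3, 125.0))   # ~7.5e-4 cm
```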
In turn, the relation l f ≤ l f0 suggests that the BH expression, given above for the emission probability in a collision with a single atom, is to be interpreted as a kind of maximum for the radiation intensity (see e.g. Fig. 1 in Ref. [9]). Such an upper limit must be respected by any theory utilizing the same basic physical ingredients and parameters (like electron screening and radiation length) as Eq. (2) and aimed at quantitatively describing the influence of the medium in reducing (by a factor depending both on E and k) the intensity of the associated bremsstrahlung. It is also worth noting that, for a fixed energy E of an electron impinging on a given medium, the radiation probability given in Eq. (2) is proportional to the formation length l f0 (up to first-order terms in x = k/E). This further supports the idea of an upper limit for the radiation emission, although no definite and rigorous argument valid for the whole radiated spectrum is known to the authors.
Leaving aside the case in which an external field is present (typical of synchrotron-radiation facilities), the two most important "influences" of the environment in modifying the formation length, and the ones most relevant to the present work, are the multiple scattering affecting the electron during the time it takes for the photon to be formed [5,6] and the so-called "polarization" effect [50,53]. The main features of the two processes are briefly discussed hereafter. As is well known, the average multiple-scattering angle θ_ms for an electron of energy E crossing a distance d of a medium characterized by the radiation length X₀ is given by θ_ms = (E_s/E) √(d/X₀), where E_s = mc² √(4π/α). This results in an additional contribution to the longitudinal momentum transferred to the nucleus (to be added to that given in Eq. (5)) which is of the order of ≃ k θ²_ms/c, and thereby in an overall formation length l_f < l_f0. Obviously, the multiple scattering is expected to be important when this contribution to q exceeds the quantity in Eq. (5). In particular, when the latter is much smaller than the former, the coherence length is found to be given by the expression l_f = (mc²/E_s) √(2 X₀ l_f0), and a suppression factor due to the multiple scattering can be defined as

S_ms ≡ l_f/l_f0 ≃ √(k/k_LPM),   (7)

which actually reduces the section of the spectrum below a photon energy k_LPM ≃ E² E_s² ħc/(m⁴c⁸ X₀). For instance, k_LPM ≃ 125 MeV for 25-GeV electrons in lead.
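The scales quoted above can be reproduced with a few lines. The radiation length of lead is a tabulated value used as input, and the overall prefactor of k_LPM is only an order-of-magnitude estimate, so the result should be read as "of order 10² MeV", consistent with the ≃ 125 MeV quoted in the text:

```python
import math

ALPHA = 1.0 / 137.036
ME = 0.511e6                 # eV, m c^2
HBARC = 1.9733e-5            # eV*cm

E_s = ME * math.sqrt(4.0 * math.pi / ALPHA)   # ~21.2 MeV

def k_lpm_eV(E_eV, X0_cm):
    """Order-of-magnitude estimate k_LPM ~ E^2 E_s^2 hbar c / ((m c^2)^4 X0)."""
    return E_eV**2 * E_s**2 * HBARC / (ME**4 * X0_cm)

k = k_lpm_eV(25e9, 0.5612)   # 25-GeV electrons in lead (X0 = 0.5612 cm)
print(E_s / 1e6, k / 1e6)    # ~21.2 MeV, ~1e2 MeV
```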
The polarization effect (also called the "longitudinal density effect") is caused by a phase change in the wave function of the emitted photon due to a sequence of forward Compton scatterings on the electrons of the medium. This can be macroscopically described, to a good approximation (see page 128 in Ref. [51]), by considering the modified phase velocity of the photon v = c/N(k), where N(k) is the refraction index of the medium for the photon energy k. The relevant medium parameter in this case is the well-known plasma frequency ω_p, given by ħω_p = [4π n Z α (ħc)³/(mc²)]^{1/2}. For photon frequencies far exceeding the atomic ones (as is the case here, the minimum photon energy we are interested in being ≃ 1 MeV) one can write N(k) = 1 − k²_ε/k², where k_ε = ħω_p. Again, the process implies the addition of one more term to the longitudinal momentum imparted to the atom, leading to a reduced formation length and finally resulting in an attenuation factor

S_p = k² / (k² + k_p²).   (8)

Clearly, this effect is particularly important for photon energies k ≤ k_p = γ k_ε. As it turns out (see page 118 in Ref. [51]), the polarization effect significantly reduces the intensity of the photon spectrum only for k ≲ 10⁻⁴-10⁻⁵ E.

We have only given a short summary of the individual main effects leading to a reduced intensity of the radiated photon spectrum, whereas they should be considered simultaneously in a proper description of their influence on the shape of the photon spectrum. Such a discussion can be found in Ref. [10]. Finally, it should also be clear that the above considerations are only meant to supply a general guide to the shape of the photon spectrum to be expected in the bremsstrahlung process for high-energy electrons crossing an amorphous medium, while detailed theories are needed for a quantitative comparison with experimental data. The description of some of them is summarized in the following sections.
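The plasma energy ħω_p and the characteristic photon energy k_p = γ k_ε can be evaluated explicitly; the lead density parameters are standard tabulated inputs used purely for illustration, and the result confirms the k ≲ 10⁻⁴ E scale quoted above:

```python
import math

ALPHA = 1.0 / 137.036
ME = 0.511e6          # eV, m c^2
HBARC = 1.9733e-5     # eV*cm

def plasma_energy_eV(n_atoms_cm3, Z):
    """hbar*omega_p = sqrt(4 pi n Z alpha (hbar c)^3 / (m c^2))."""
    return math.sqrt(4.0 * math.pi * n_atoms_cm3 * Z * ALPHA * HBARC**3 / ME)

# Lead: n ~ 3.30e22 atoms/cm^3, Z = 82  ->  hbar*omega_p ~ 61 eV
hw_p = plasma_energy_eV(3.30e22, 82)
E = 25e9                     # eV, a 25-GeV electron
k_p = (E / ME) * hw_p        # gamma * hbar*omega_p, ~3 MeV
print(hw_p, k_p / E)         # ~61 eV, ~1.2e-4
```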
C. The Migdal approach

In the papers by Landau and Pomeranchuk [5,6], starting from the classical expression for the energy radiated (per unit solid angle and unit frequency interval) by a charged particle moving along a path r(t), −∞ < t < ∞ (see e.g. page 676 in Ref. [54]), approximate expressions were given for the modification to the BH photon energy spectrum induced by the multiple scattering of the particle in crossing the formation length l_f0 in a medium. Shortly after, Migdal [55], still in a completely classical framework, derived a kinetic equation of the Fokker-Planck type obeyed (when the momentum of the radiated photon is neglected) by the distribution function of electrons suffering multiple scattering in atomic collisions determined by a screened Coulomb potential. The solution of this equation (see page 164 in Ref. [51]) leads to an expression describing the reduced intensity of the low-energy section of the photon spectrum, fully consistent with the reduction factor given in Eq. (7) above.

Effect of multiple scattering
The decisive step in obtaining a fully quantum-mechanical description of the bremsstrahlung process (taking into account the momentum of the radiated photon and the influence of the medium), and one valid over the whole radiated spectrum, was taken by Migdal in Ref. [56]. Indeed, he realized that the state of the radiating particle in the presence of multiple scattering is of mixed type and is therefore properly described by a density matrix in the momentum representation of the positive-energy spinor eigenfunctions of the free-particle Hamiltonian. A quantum kinetic equation obeyed by the density matrix (including the spin degree of freedom and averaged over the spatial coordinates of the scattering centers) was set up in Ref. [56] using the Born approximation. This was then utilized in Ref. [7], where the transition probability from the initial to the final state was connected to the density matrix. The integral equation satisfied by the average density matrix is then simplified by a series expansion up to second order in the small parameter given by the change, in a single scattering process, of the momentum component of the electron transverse to the photon momentum, obtaining again a differential equation of the Fokker-Planck type (see Eq. (24) in Ref. [7]).
After a long series of transformations, this equation is solved in terms of the functions Φ(s) and G(s) (to be discussed below), where the basic quantity s, defined by (see Eq. (47) in Ref. [7])

s = [ k (mc²)⁴ X₀ / (8 ħc E (E − k) E_s²) ]^{1/2},   (9)

essentially expresses (when polarization is neglected) the square root of the ratio of the minimum longitudinal momentum given in Eq. (5) to the additional longitudinal momentum transferred to the nucleus due to multiple scattering (see also pages 157-158 in Ref. [51]). It follows that s ≫ 1 corresponds to the high-energy section of the photon spectrum, where the effect of the multiple scattering is negligible, while s ≪ 1 characterizes the suppressed low-energy section of the spectrum. A further refinement, aimed at taking into account the weak energy dependence of the ratio of the maximum to minimum scattering angle in the single bremsstrahlung process, led Migdal to define implicitly a parameter s_M, slightly different from that given in Eq. (9), through

s_M = s / √ξ(s_M).   (10)
The function ξ(s_M) is a slowly varying monotonic function of s_M defined by

ξ(s_M) = 2 for s_M ≤ s₁;  ξ(s_M) = 1 + ln(s_M)/ln(s₁) for s₁ < s_M < 1;  ξ(s_M) = 1 for s_M ≥ 1.   (11)

In Migdal's paper the quantity s₁ was defined as s₁ = (Z^{1/3}/190)², while, in order to comply as far as possible with our constraint to refer consistently to a common value of the radiation length, we must adopt, as explained in detail in Ref. [57], the value defined by ln(s₁) = −(2 α r_e² Z² X₀ n)^{−1}. Notice that the first derivative of ξ(s_M) is discontinuous at s_M = 1 and s_M = s₁. Equation (10) should in principle be solved recursively for s_M. As the final expression for the probability per unit target length of emitting a photon, differential in the fractional photon energy x = k/E, the following relation is adopted:

dp^M_γ/dx = ξ(s_M)/(3 X₀ x) { x² G(s_M) + 2 [1 + (1 − x)²] Φ(s_M) }.   (12)

The functions Φ(s) and G(s), introduced by Migdal [7] through the integral representations constituting Eqs. (13), describe the influence of the LPM effect. It can be seen that G(s) and Φ(s) approach unity for high values of the variable s, and in this limit Eq. (12) goes over to Eq. (2).
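The standard piecewise form of Migdal's ξ (constant 2 below s₁, constant 1 above 1, logarithmic in between, consistent with the derivative discontinuities noted above) and the high-s limit of Eq. (12) are easy to verify numerically. In the sketch below, Φ and G are simply set to their asymptotic value 1 (the s ≫ 1 regime); the gold radiation length and Migdal's original s₁ are used purely as illustrative inputs:

```python
import math

def xi(s, s1):
    """Migdal's xi(s): 2 below s1, 1 above 1, logarithmic interpolation
    in between (first derivative discontinuous at s = s1 and s = 1)."""
    if s <= s1:
        return 2.0
    if s >= 1.0:
        return 1.0
    return 1.0 + math.log(s) / math.log(s1)

def dpdx_migdal(x, X0, s, s1, Phi=1.0, G=1.0):
    """Eq. (12) with Phi(s) and G(s) replaced by supplied values
    (defaults: the asymptotic value 1, valid for s >> 1)."""
    return xi(s, s1) / (3.0 * X0 * x) * (x**2 * G + 2.0 * (1.0 + (1.0 - x)**2) * Phi)

def dpdx_bh(x, X0):
    """Bethe-Heitler limit, Eq. (2)."""
    return (x**2 + 2.0 * (1.0 + (1.0 - x)**2)) / (3.0 * X0 * x)

s1 = (79 ** (1.0 / 3.0) / 190.0) ** 2    # Migdal's original s1 for gold (Z = 79)
X0 = 0.335                               # cm, radiation length of gold (illustrative)
for x in (0.1, 0.5, 0.9):
    # For s >> 1 the Migdal spectrum must go over to the BH one
    assert abs(dpdx_migdal(x, X0, 10.0, s1) - dpdx_bh(x, X0)) < 1e-12
```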
As a matter of fact, the original treatment by Migdal, whose solution of the quantum kinetic equation for the density matrix is based on the Born approximation, did not include the Coulomb correction, which is, however, automatically brought in when a modern value of X₀ is adopted in Eq. (12). Such a procedure has been applied since the first detailed comparison with experimental data performed by Anthony et al. [18,20], but received full theoretical support only recently, in the work by Voskresenskaya et al. [25]. As a first step, the Molière multiple-scattering theory was improved to take into account the Coulomb correction in the elastic cross section [58]. This was then used for the calculation of the LPM suppression. From Fig. 2 of Ref. [25], it is possible to see that the approximation of a Coulomb correction independent of x (or s), as implied by Eq. (12), is only approximately true. However, for gold, the largest variation shown is a few percent of the asymptotic value of the Coulomb correction, which is itself only several percent of the total cross section, well below present experimental uncertainties.

Effect of the polarization of the medium
In his fundamental paper [7], Migdal also tackled the problem of including the polarization effect at low photon energies. To this end, he introduced the quantity Γ = 1 + (k_p/k)², where k_p = γ ħω_p. It was then shown that, to account for the suppression due to the polarization effect, it suffices to replace Φ(s_M) in Eq. (12) with Φ(s_M Γ)/Γ, neglecting the term containing the function G, which is of minor importance at low photon energies. When the substitution s_M → s_M Γ is applied consistently to all the s_M-dependent quantities appearing in Eq. (12), we obtain

dp^M_γ/dx = ξ(s_M Γ)/(3 X₀ x) { x² G(s_M Γ) + 2 [1 + (1 − x)²] Φ(s_M Γ)/Γ }.   (14)

D. The Baier-Katkov approach

Baier and Katkov (BK in the following) also studied the LPM effect in a series of papers [41,59-61], trying to improve on the original treatment by Migdal [7] and including several additional features, like boundary radiation and multiphoton effects. In order to simplify the comparison with other theoretical approaches, we restrict the description of their work to the case of a semi-infinite target. In addition, the polarization effect will be neglected for the moment and introduced at a later stage.

Effect of multiple scattering
Starting from the general expression (based on the quasiclassical operator method [62]) for radiation emission by high-energy electrons along a given classical trajectory, Baier and Katkov solve the problem of averaging over all possible trajectories in an amorphous medium by using the classical kinetic-equation method, arriving at the expression of Eq. (15) for the mean transition probability differential in the fractional photon energy x = k/E, where the auxiliary quantities R₁ and R₂ are defined in Eqs. (16), p is the momentum operator in the two-dimensional space transverse to the original electron momentum, and the functions ϕ_µ ≡ (ϕ₀, ϕ) satisfy a Schrödinger-type equation in the two-dimensional space of the impact parameter ρ = b/γ, where b is measured in units of the reduced Compton wavelength λ_c = ħ/(mc). The Schrödinger-type equation, Eq. (17), is arrived at by introducing the Fourier transform of the scattering cross section on a screened Coulomb potential evaluated in the Born approximation (using nondimensional variables, as detailed in Ref. [11]); it is to be solved using the initial conditions ϕ₀(ρ, 0) = δ(ρ) and ϕ(ρ, 0) = p δ(ρ). In Eq. (17), the "potential" V (which, owing to the azimuthal symmetry around the initial electron momentum, depends only on the magnitude of ρ) reads

V(ρ) = −Q ρ² [ln(γ² θ₁²) + ln(ρ²/4) + 2C],   (18)

where Q = 2π (ħc)³ Z² α² n E (1 − x)/(m⁴c⁸ x), C is the Euler-Mascheroni constant (C ≈ 0.5772), and θ₁ = ħc/(E a_s) is the angular parameter appearing in the scattering cross section (a_s = 0.81 Z^{−1/3} Å is the screening radius). The potential V is then refined to take into account the Coulomb correction by defining a parameter θ₂ = θ₁ e^{f(Zα)−1/2}, where f is the Bethe-Maximon Coulomb correction, rederived by a different method in Appendix A of Ref. [59]. The authors then determine the effective impact parameter ρ_c giving the main contribution to the cross section.
As it turns out, when the root-mean-square scattering angle over a distance of a formation length is much smaller than 1/γ, one can assume ρ_c = 1, whereas, in the opposite case, ρ_c can be found by solving the transcendental equation given in Eq. (19), which always results in a value ρ_c ≤ 1. The authors then define the all-important function L_c ≡ L(ρ_c) = −ln(γ² θ₂² ρ_c²). This function has an obvious minimum value L₁ ≡ L(1) = −ln(γ² θ₂²), which is estimated in Eq. (20). The function L_c also has a maximum value since, if ρ_c becomes comparable to the nuclear radius R_n, the form of the potential V(ρ) changes completely (it acquires a harmonic-oscillator form [11]) and the expressions derived below would no longer be valid. The maximum value of L_c is currently estimated as 2L₁. The next step is to decompose the potential V(ρ) in Eq. (18) (with the substitution θ₁ → θ₂) as a sum of two terms: a main one, given by V_c = Q L(ρ_c) ρ², and a minor one (that will be treated as a perturbation), given by v(ρ) = −Q ρ² [2C + ln(ρ²/(4 ρ_c²))] (notice that a term ln(1/ρ_c²) has been added and subtracted in Eq. (18), so that the final result for the total radiation probability is independent of ρ_c up to first order in v). Through a lengthy and complex calculation, Baier and Katkov were able to solve Eq. (17) with only the main term V_c retained and, after inserting this solution into Eq. (15), arrive at the expression of Eq. (21) for the probability of radiation per unit length, differential in the fractional photon energy x, where R₁ and R₂ have been defined in Eqs. (16) and the functions G and Φ are those defined in Eqs. (13). A discontinuity in the slope of the photon energy spectrum is expected at the minimum value of k resulting in ρ_c = 1.
It is interesting to compare the radiation intensity given in Eq. (12) by Migdal with that given in Eq. (21) by Baier and Katkov. However, since the Coulomb correction and inelastic scattering were not included in the Migdal theory, a proper comparison is only obtained by setting f(Zα) = 0 in Eq. (20) and substituting the previously defined (in Sec. II A) X^c₀ for X₀ both in Eq. (10) [which influences the definition of ξ(s_M)] and in Eq. (12). The ratio R of the main term of the BK approach to the probability of the Migdal theory is then given by Eq. (22). As an example, the value of R as a function of the photon energy k for a 287-GeV electron beam crossing an iridium medium is shown in Fig. 2. It is seen that differences of up to 13% occur, with the Migdal approach always lying above the main term of the BK theory. The discontinuities of the first derivative of R around k = 30 GeV and k = 153 GeV are due to the discontinuities of the derivative of ρ_c(k) (at ρ_c = 1) and of ξ(s_M) (at s_M = 1), respectively.
As anticipated above, the correction v(ρ) to the potential is treated perturbatively by expanding the electron propagator and keeping terms up to first order in v(ρ).
The final expression for the correction term to the radiation probability requires the definition of several auxiliary functions, in which H and Li₂ represent the Heaviside unit step function and the dilogarithm function, respectively. In addition, it is necessary to introduce the functions D₁(ν₀) and D₂(ν₀) via integral representations in which p = 1/(√2 ν₀). The final expression for the correction term of the radiation probability per unit length, dp^BKC_γ/dx, is then given in Eq. (25).

Effect of the polarization of the medium
Baier and Katkov also estimated the influence of the medium polarization on the radiation probability. They first introduce the basic quantity κ, similar to the quantity Γ of the Migdal approach introduced in Sec. II C 2. As it turns out, to include the polarization effect it suffices to implement the substitutions listed on page 298 of Ref. [11] in Eq. (21) and Eq. (25) of the previous subsection. The main consequence of these substitutions is that the modified ρ_c can assume the value 1 not only for high enough values of x = k/E but also for small enough values of x. In addition, the modified ν₀ is no longer a monotonically decreasing function of k but, starting from zero (due to the fact that the modified Q → 0 as k → 0), reaches a maximum and then decreases monotonically. This in turn implies that the slope of the photon energy spectrum can possibly exhibit, depending on the energy E, two discontinuities. As stated in Sec. II B, the polarization effect results in a strong suppression of the radiation only for photon energies below ≃ 10⁻⁴ E.

Normalization to an arbitrary radiation length
In presenting the BK approach for the spectral distribution of the radiation in an infinite amorphous medium, no reference has yet been made to the concept of radiation length, since the really basic quantity of the theory is L_1. However, as remarked at the beginning of Sec. II, in order to compare different theories on the same basis we need to refer them, for any given material, to the same radiation length, which we have chosen to be the one reported in Ref. [38]. Fortunately, the authors of Ref. [11], when discussing the integral characteristics of bremsstrahlung, provide a very useful relation between the radiation length and L_1 (see Eq. (2.36) in Ref. [11]), reported here in our notation, where L_rad^0 = (m^2 c^4)/(2 Z^2 α^3 n (ħc)^2 L_1). Indeed, one can invert this relation and derive L_1 from the value of the radiation length given in Ref. [38]. This value of L_1 is then substituted for the one given by Eq. (20), which defines the minimum value assumed by the function L(ρ_c), whenever appropriate, to obtain the final expressions actually used in our calculations (see Eqs. (21) and (25)).
It is nevertheless interesting to compare (for each atomic number Z) the radiation length obtained from Eq. (28), using for L_1 the value of Eq. (20), with the one given in Ref. [38]. The percentage deviation between the two values is reported in Fig. 3 as a function of Z. It is seen that important differences arise for low-Z materials, so that a direct comparison of the BK approach with different theories would have been misleading had we not "normalized" to a common radiation length. The discrepancy is most probably due to the different atomic form factors adopted. In Ref. [11], a simple parametrization of the atomic screening function with a single exponential was used, resulting in the Schiff approximation for the form factor (see Eq. (2.14) in Ref. [11]). The parameter of the exponential is adjusted to reproduce on average all atoms, with a Z dependence taken from the Thomas-Fermi model. In Ref. [38], a sum of three exponentials is used to approximate the screening function, leading to the Molière approximation for the form factor (see Eq. (3.66) in Ref. [38]). Again, the parameters of the three exponentials are adjusted to reproduce on average all atoms, with a Z dependence taken from the Thomas-Fermi model. The Molière approximation is in general expected to be superior to the Schiff one [64].

E. The Blankenbecler-Drell approach
A completely different approach to the problem of the LPM suppression was taken by Blankenbecler and Drell [65] (BD in the following). They started by solving a Klein-Gordon equation in an eikonal approximation, including terms up to first order (in the inverse electron initial and final momentum) in the total phase of the wave function while neglecting these terms in the expression for the amplitude. This approximation also makes it possible to include electron spin effects in a simple way at a later stage of the calculation (see Ref. [15] in Ref. [65]). Such an approach basically takes into account, in the stated approximation, the total phase difference and momentum variation accumulated by the wave function as the electron propagates between all possible pairs of points with coordinates (z_1, z_2) (with z_1 ≤ z_2) along the axis of cylindrical symmetry provided by the initial direction of the impinging electron; these points can even lie outside the physical dimensions of the target. The approach does not include the concept of cross section per atom, as it considers the whole target as a single unit. The theory is valid for any target thickness t and even for inhomogeneous amorphous targets, including the case of several slabs of any composition separated by gaps [66]. For the simplest case of a homogeneous target of thickness t and radiation length X_0 (for which no "adjustment" is necessary, so that the above-mentioned tabulated values of Tsai [38] can safely be used), the final result for the probability of emission per unit length, differential in the fractional photon energy x = k/E, can be concisely written in terms of a function J(t, x) whose calculation requires, for each value of x, the careful evaluation of a double integral of oscillating functions of two variables. All the details can be found in Ref. [66].
The proper evaluation of the integral [which is conveniently split over four integration regions in the (b_1, b_2) plane] requires the introduction of a convergence factor and has been implemented in a dedicated code. It must also be mentioned that in a subsequent paper [67], Blankenbecler added a correction term (usually referred to as the δ-term) to the previous formula in an attempt to take into account correlations between amplitude and phase change in the wave function during the electron propagation. This generally leads to a significant predicted reduction (up to about 20 %; see Ref. [67]) of the radiation intensity. The question of whether the inclusion of the δ-term could result in an improved matching of the theory to the experimental data remained somewhat controversial for some time [8,26,68]. However, the authors of Ref. [69] eventually not only presented convincing experimental data which clearly support the superiority of the approach developed in Ref. [65] without corrections, but also raised some doubts about a possible inconsistency in the procedure of adding the δ-term, given that terms of first order in the inverse electron momentum had been neglected in the amplitude of the wave function in the original eikonal approximation. This is why the results of our calculations including the δ-term (which also point to an excessive suppression) are not reported in this paper. Unfortunately, the BD approach does not include the dielectric suppression at low photon energies and, while naturally taking surface effects into account, cannot easily be corrected for multiphoton emission, which plays an important role in almost all experimental conditions, as remarked by Klein [8]. Finally, no discontinuity in the slope of the calculated radiation spectra appears anywhere in this approach.
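The convergence-factor technique mentioned above can be illustrated on a toy one-dimensional analogue (this is a sketch, not the actual two-dimensional BD integrand, which is detailed in Ref. [66]): a damping factor e^(-εb) renders the oscillatory integral absolutely convergent, and the ε → 0 limit recovers the regularized value.

```python
import math

def damped_osc_integral(eps, bmax=200.0, n=200_000):
    """Trapezoidal rule for the toy integral I(eps) = ∫_0^bmax sin(b) e^{-eps b} db.

    The factor e^{-eps b} is the convergence factor: for eps -> 0 the
    result tends to the regularized value of ∫_0^∞ sin(b) db = 1.
    """
    h = bmax / n
    total = 0.0
    for i in range(n + 1):
        b = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.sin(b) * math.exp(-eps * b)
    return total * h

# analytic check: ∫_0^∞ sin(b) e^{-eps b} db = 1 / (1 + eps^2)
for eps in (0.2, 0.1, 0.05):
    assert abs(damped_osc_integral(eps) - 1.0 / (1.0 + eps * eps)) < 1e-3
```

In the actual BD calculation the same idea is applied to a double integral over (b_1, b_2), with the damping removed only after the numerical result has stabilized.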

F. Other theoretical approaches
The accurate measurement of the LPM effect at SLAC [20] encouraged theoreticians to find alternative and possibly more accurate methods to describe the suppression effects. We very briefly describe the ones not mentioned above and explain why they are not considered any further in the present work.
Baier et al. [70] based their theory on the multiple scattering of high-energy electrons on a large number of scattering centers fixed in the medium. The LPM suppression was ascribed to the destructive interference between radiation amplitudes from a number of scattering centers. However, only the soft photon limit was considered and no general formula was given for an accurate description of the radiation process valid over the whole photon energy range. Moreover, different mechanisms for suppression, like that due to the polarization of the medium, were not considered.
Beginning in 1996, Zakharov [12] introduced the so-called "light-cone path integral approach" to describe the LPM suppression effect. The intensity of the photon radiation is expressed in terms of the Green function of a two-dimensional Schrödinger equation. Simple expressions for the radiation probability are only obtained in the soft photon limit for an infinite medium (see Eq. (45) in Ref. [71]) and, in that case, the results almost coincide with those of Migdal. In the general case, very complicated expressions for the radiation probability, involving multiple integrals, are given (see, e.g., Eq. (52) in Ref. [71]). These were successfully compared both to the SLAC E-146 data in Ref. [71] and to the high-energy CERN LPM data in Ref. [72]. However, the lack of adequate information in the description of the relevant formulae, as well as the absence of any consideration of dielectric suppression, led us to forgo the implementation of a Monte Carlo code for this approach.
As a last issue, it should be mentioned that Baier and Katkov [11], as well as Zakharov [71], developed versions of their approaches capable of dealing with the presence of boundary surfaces for finite targets, as the BD approach does. These effects are very interesting and lead to a direct measurement of the formation length with structured targets [26]. However, these formulations are incompatible with the basic Monte Carlo philosophy, since they do not localize the emission of a photon in a particular region of the target [8].

III. GENERAL COMPARISON OF THE THEORIES
We are now in a position to compare the results of the three approaches including quantum coherence effects for a few specific cases. A general preliminary remark is in order: in the BD theory, one has to introduce a specific value for the target thickness, and no restriction on the validity of the results is introduced by the consideration of the (k-dependent) coherence length. On the contrary, an infinite medium is assumed both in the Migdal approach and in the version of the BK theory considered in the present work, so that the comparison with the BD approach is only meaningful for the section of the spectral probability distribution for which the coherence length is smaller than the target thickness assumed in the BD theory. It must also be noted that a comparison of this type was previously reported in Fig. 9 of Ref. [26] for a 178-GeV electron beam impinging on a thin (1.97 % X_0) carbon target. However, in that study no attempt was made to compare the three approaches for the same value of the radiation length, as is done in the present work, so that the conclusions drawn there might differ in some details from those presented hereafter. Figure 4 shows the spectral distribution of the quantity x dp_γ/dx in units of mm^-1 for a 287-GeV electron beam impinging on a 128-µm-thick iridium foil (one of the cases covered by the CERN LPM experiment). The BH result is compared to the three theories considered here for the LPM suppression (for the BK approach, the contributions of the main and correction terms are shown separately together with their sum). In this case, the coherence length, as given in Eq. (6), is larger than the target thickness for photon energies lower than 1 GeV. The polarization effect is quite negligible for the photon energy range displayed in the figure, so that a valid comparison with the results of the BD approach, which does not include this effect, is possible. Comments on this figure follow.
1. All approaches agree with the BH limit at the SWL, where the formation length is smallest.
2. The Migdal theory overshoots the BH approach in the 45-125-GeV range, with a maximum deviation of 2.6 % at 72 GeV. This was already pointed out at lower energies in Ref. [20]. The most probable origin is the numerous approximations in the theory, even though it is almost impossible to pinpoint the reason for the discrepancy in that particular energy range.
3. The Migdal approach exceeds the BK theory (main+correction term) by 5 % in the photon energy range 15-45 GeV, suggesting the possibility of an experimental check to discriminate between the two theories.
4. The correction term in the BK approach reaches a maximum of 8 % of the total probability at 31 GeV, where a discontinuity in the derivative of the spectrum is apparent. A less apparent discontinuity (of the opposite sign) is also present at the same energy in the main term, so that the total spectrum appears smooth at the 1 % level.

5. Remarkably, for photon energies larger than 1 GeV the BK and BD theories, based on quite different approaches, agree to within 4 %.
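The coherence-length figures quoted above can be cross-checked with the textbook vacuum formation length l_f = 2ħc E(E - k)/(k (mc^2)^2) (a sketch; the paper's Eq. (6) is not reproduced here and may include additional medium-dependent factors):

```python
HBARC = 1.97327e-16   # hbar*c in GeV * m
ME = 0.000511         # electron rest energy in GeV

def formation_length(E, k):
    """Vacuum formation length in meters for beam energy E and photon energy k (GeV)."""
    return 2.0 * HBARC * E * (E - k) / (k * ME**2)

# for a 287-GeV beam, the formation length at k = 1 GeV is about 124 µm,
# i.e. comparable to the 128-µm iridium target: consistent with the statement
# that the comparison with the BD theory is only meaningful for k above ~1 GeV
lf = formation_length(287.0, 1.0)
```

The formation length shrinks as k grows, so the infinite-medium assumption of the Migdal and BK treatments becomes safer at the hard end of the spectrum.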
As an example illustrating the influence of the polarization effect on the predicted theoretical photon spectrum, we report in Fig. 5 the calculated spectra for a 25-GeV electron beam impinging on a 3 % X_0 aluminum target (one of the cases covered by the SLAC E-146 data), showing separately the Migdal and BH spectra and, for the BK approach, the main term, the correction term, and their sum, both including and excluding the polarization effect. The above-mentioned condition of a formation length (reduced by the LPM and polarization effects) smaller than the target thickness is satisfied for photon energies larger than ≈ 0.3 MeV. One notes that the polarization effects are indeed most important in the k ≲ 5-MeV region.

IV. CONSIDERATIONS ON MULTIPHOTON EFFECTS
The theoretical calculations of the bremsstrahlung cross section described in Sec. II cannot be directly compared to experimental data because of the thickness of the targets considered here. In particular, a lower limit on the thickness is imposed by the neglect of the presence of boundary surfaces, as mentioned in Sec. III. Three main sources of distortion then have to be taken into account; they are discussed in turn in the following subsections. The most straightforward way to handle all of them, and the one followed in the present work, is to implement the theoretical cross sections in a Monte Carlo program, as described in Sec. V. To justify the need for such an effort, the magnitude of the three distortions is illustrated under typical conditions by using such a code.

A. Direct multiphoton effects
A single crossing of the target by one electron can result in the emission of multiple photons, which, reaching the calorimeter, give rise to a signal proportional to the sum of their energies. For this reason, it is necessary to clearly distinguish between the energy of the photon radiated in a single bremsstrahlung act, indicated by k, and the total energy deposited in the calorimeter, i.e., the sum of the energies of all photons emitted by a single electron, indicated by K hereafter. The area of the spectrum is not altered by multiphoton emission (if no photon is emitted, no energy is deposited in the calorimeter), but its shape is. In particular, photon pile-up tends to deplete the low-energy part of the spectrum and enrich the high-energy one [20]. A typical example is shown in Fig. 6 for the BK approach and the CERN LPM data. It is apparent that the bare bremsstrahlung cross section is far from the data, while the Monte Carlo, accounting for the emission of multiple photons, is much closer.
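The pile-up distortion can be reproduced with a toy Monte Carlo (a sketch only: the 1/k single-photon spectrum, the mean multiplicity, and the energy threshold below are all invented, not the actual BK cross section or experimental values). Sampling a Poisson number of emissions per crossing and histogramming both the individual photon energies k and their per-event sum K exhibits the low-energy depletion described above.

```python
import math, random

random.seed(12345)

KMIN, KMAX = 1e-3, 1.0   # photon energies in units of the beam energy (toy values)
MEAN_N = 2.0             # assumed mean number of bremsstrahlung acts per crossing

def sample_k():
    # log-uniform sampling, i.e. a dN/dk ∝ 1/k toy spectrum between KMIN and KMAX
    return KMIN * (KMAX / KMIN) ** random.random()

def poisson(mean):
    # Knuth's multiplicative algorithm, adequate for small means
    limit, n, prod = math.exp(-mean), 0, random.random()
    while prod > limit:
        n += 1
        prod *= random.random()
    return n

singles, sums = [], []
for _ in range(50_000):
    ks = [sample_k() for _ in range(poisson(MEAN_N))]
    singles.extend(ks)
    if ks:
        sums.append(sum(ks))   # the calorimeter registers only the summed energy K

frac_k_low = sum(k < 0.01 for k in singles) / len(singles)
frac_K_low = sum(K < 0.01 for K in sums) / len(sums)
assert frac_K_low < frac_k_low   # pile-up depletes the low-energy bins
```

The total number of events with a nonzero deposit equals the number of radiating electrons, so the spectrum area is preserved while its shape shifts toward high K.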
The need to include multiphoton emission was recognized both in the experimental [20,22] and theoretical works [11,60]. However, while the former employed Monte Carlo simulations, which are essentially exact, the latter developed analytic approximations giving explicit correction formulae. The accuracy of such expressions was investigated in detail in our previous publication [73] for the BH and Migdal cross sections. Here we extend that study by considering the BK approach.
A multiplicative multiphoton correction factor f is defined in the analytic calculations [11,60,72] in terms of N, the total number of events in the dN/dK spectrum (not to be confused with the total number of impinging electrons N_0 used in Sec. VI), and of the target thickness t. The same quantity can be obtained from the Monte Carlo simulations [73], where the multiphoton ("mph") and first-photon-only ("fph") spectra refer to the same set of events and λ is the mean free path for the bremsstrahlung interaction. All other processes are disabled for the simulations considered in the present subsection. Baier and Katkov [11,60] started from the Landau solution of the kinetic equation, valid under the assumption that the particle energy loss is much smaller than the particle energy. They were then able to find an expression for f, valid in the case of the BH cross section even when an arbitrary number of hard photons is emitted, in which β = 4t/(3X_0). Baier and Katkov claim that this expression is valid for all photon energies and all target thicknesses, but this could not be confirmed by Monte Carlo simulations [73]; it is, however, a reasonable approximation for the thicknesses considered in the present work. Baier and Katkov [11,60], starting again from the Landau solution, also found an expression for f in the case of strong LPM suppression (described by their approach) valid for β ≪ 1, where Γ is the Euler gamma function, the constant c_1 can be calculated, once and for all, from an integral, and k_c is the photon energy below which the LPM suppression shows up. By assumption, Eq. (33) is valid only for k ≪ k_c, where the LPM suppression is strong. Finally, Baier and Katkov [11,60], still employing the Landau solution, give an expression for a thin target (β ≪ 1) and an arbitrary cross section, Eq. (36); here, this integral has been evaluated numerically employing the cross section corresponding to the sum of Eqs. (21) and (25). All three approximations, Eqs.
(32), (33), and (36), are compared to Monte Carlo simulations in Fig. 7 for the case of a 287-GeV electron beam impinging on a 128-µm-thick iridium target. The expression for the LPM case [Eq. (33)] works with an error of ≈ 2 % for K ≲ 10 GeV, but then fails more and more severely close to the SWL. The expression for the BH case [Eq. (32)] works essentially in the complementary region K ≳ 10 GeV. Finally, Eq. (36) performs better than Eq. (33) for K ≳ 30 GeV, but still fails more than Eq. (32) close to the SWL. Baier and Katkov [11] state that to compare their approach with the CERN LPM data they employed Eq. (36). For the SLAC E-146 measurements, they mention [11,60] an interpolation of Eqs. (32) and (33) with no further details. Given the present comparison with exact Monte Carlo simulations, it is difficult to imagine that their accuracy is better than ≈ 2 %, and it gets worse close to the SWL.

B. Multiphoton effects coupled to self-absorption
Beyond multiphoton emission, a Monte Carlo code makes it possible to take into account other sources of distortion in the shape of the spectrum that are not included (and hardly could be) in the approximate analytic approaches [11,60]: namely, pair production, Compton scattering, and photoelectric absorption in the target. To illustrate their importance in one typical case, a 25-GeV electron beam impinging on a 3.12-mm-thick aluminum target is considered in Fig. 8. The difference between a simulation with bremsstrahlung (BK approach) and pair production enabled and one where only the former is taken into account (solid squares) clearly indicates that for K ≳ 20 MeV an attenuation is present, which increases up to ≈ 2 % at the SWL. The enhancement for K ≲ 20 MeV is less expected and is an example of a hard-to-predict consequence of multiphoton emission [73]. Consider an event where one low-energy and one high-energy photon have been emitted by the same electron crossing the target. The calorimeter only registers the sum of the two energies, resulting in the depletion of the low-energy part of the spectrum. However, if the high-energy photon is absorbed in the target by pair production, the surviving low-energy one can reach the calorimeter and produce a signal (remember that electrons and positrons are swept away by the magnets present in the setup and do not reach the calorimeter). If, beyond pair production, Compton scattering and photoelectric absorption are also enabled, the difference with respect to a simulation with only bremsstrahlung (solid circles) becomes a reduction of the spectrum for all values of K, with a steep increase in attenuation at the lower end. It can be mentioned that a few simulations have also been performed taking into account the influence of the Fermi motion of bound electrons inside atoms on Compton scattering with the GLECS extension by Kippen [74]. No differences were found for the photon energy range of interest in the present work.
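The size of the pair-production attenuation can be estimated with a back-of-the-envelope sketch based on the standard high-energy conversion probability 1 - exp(-7x/9X_0), assuming the emission point to be uniformly distributed in the target (the thickness and radiation length below are the quoted aluminum values; the full simulation of course includes geometry and energy dependence):

```python
import math

T = 3.12e-3   # aluminum target thickness in m
X0 = 0.089    # radiation length of aluminum, ~8.9 cm

def conversion_prob(x):
    """High-energy photon pair-conversion probability over a path x (7/9 rule)."""
    return 1.0 - math.exp(-7.0 * x / (9.0 * X0))

# a photon emitted at the front face traverses the whole target: worst case
worst = conversion_prob(T)

# averaging over a uniform emission point gives the mean attenuation
N = 1000
mean = sum(conversion_prob(T * (i + 0.5) / N) for i in range(N)) / N
```

This crude estimate gives a worst-case attenuation near 3 % and a mean near 1.4 %, i.e. the same order as the ≈ 2 % effect seen in the simulation.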
We note that the photon path was kept in vacuum in the SLAC E-146 setup to allow photon energies as low as 200 keV to be reached. The only material present between the target and the calorimeter was a thin aluminum window with a thickness of 0.7 % X_0 [20]. However, since pair production in this material hardly results in a loss of signal (the window acting like a preshower), no account of it in the simulation is necessary. The photon path was not kept in vacuum in the CERN LPM experiment, but the lowest photon energy considered was 2 GeV [22]. Thus, the only relevant attenuation process is pair production, which leads to a loss of collected energy in the calorimeter only if it happens in the small distance between the target and the deflecting magnet. Background estimates [22] suggest that all the crossed materials are equivalent to 0.7 % X_0. The setup of the CERN LOW-Z experiment was also not kept in vacuum, and the minimum photon energy considered was about 0.05 GeV. The background intensity was somewhat worse, corresponding to 3 % X_0 [26]. The attenuation in such media is approximately taken into account when the secondary target, representing the background, is placed after the primary one in the Monte Carlo simulations (a detailed account would require inclusion of the full setup). To summarize, the neglect of secondary processes can produce an error in the theoretical predictions of at least around 1 to 2 %, depending on the value of K and the amount of background. Such a correction was taken into account in previous comparisons of the experiments with the Migdal approach [22,57,73,75], but not in the only available confrontation with the BK one [11,60], which was based on the analytic approximations. Thus, the uncertainty in the inclusion of the multiphoton and secondary processes in the results presented by Baier and Katkov is possibly rather close to the discrepancy between their approach and the Migdal one.
The present work is therefore the first to compare the Migdal and BK approaches on the same footing.

C. Multiphoton effects coupled to background subtraction
The CERN LPM experiment covered the highest energies reached so far in measurements of the LPM suppression. As a result of the more difficult experimental conditions, despite all efforts [21,22], the photons from the background amounted to 0.7 % X_0, as mentioned. The thicknesses of the targets employed were ≈ 4 % X_0, leading to a non-negligible probability of photons from bremsstrahlung interactions in the target and the background media simultaneously reaching the calorimeter. As a consequence, their energies are summed, causing a distortion of the measured spectrum closely resembling that originated by multiphoton emission in a single event. As first shown in Ref. [68], the simple subtraction of a "no-target" run is not a full solution to this problem and does not completely remove all distortions resulting from pile-up of photons emitted from the target with those from the background. Thus, it is necessary to apply the same background subtraction in the simulation: one run is performed with the nominal target (e.g., iridium with a thickness of 128 µm) and a secondary target (e.g., carbon, because most of the elements producing the background have low Z) and another run with the secondary target alone. The two results are then subtracted, to reproduce the procedure employed in the experiment. The comparison with measurements of the target-only Monte Carlo and of the more correct procedure just described is shown in Fig. 9. The distortion produced by the background subtraction is not negligible and is comparable to the difference between the Migdal and BK approaches. It was not taken into account in any previous publication [11,21,22,57,73,75] and is included for the first time in the present work.
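Why the no-target subtraction leaves a residual distortion can be seen in a two-line probability exercise (a toy model with one fixed photon energy from the target and one from the background; all numbers below are made up for illustration):

```python
# toy model: the target radiates a photon of energy 10 with probability pt,
# the background a photon of energy 1 with probability pb (hypothetical values);
# the calorimeter registers the summed energy K of whatever photons arrive
pt, pb = 0.04, 0.007

run_with_target = {10: pt * (1 - pb), 11: pt * pb, 1: (1 - pt) * pb}
run_no_target = {1: pb}

subtracted = {K: run_with_target.get(K, 0.0) - run_no_target.get(K, 0.0)
              for K in set(run_with_target) | set(run_no_target)}
target_only = {10: pt}

# the subtraction underestimates the true target yield at K = 10 by pt*pb,
# leaves a spurious pile-up bin at K = 11, and even a negative bin at K = 1
assert subtracted[10] < target_only[10]
assert subtracted[11] > 0.0 and subtracted[1] < 0.0
```

Since the residual terms are of order pt·pb, they survive the subtraction no matter how precisely the no-target run is measured, which is why the same subtraction must be replayed inside the simulation.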
On account of the distortions produced by background subtraction, as realized in Ref. [68], this procedure was deliberately not applied in the analysis of the CERN LOW-Z data [26]. Consequently, those data can only be compared with Monte Carlo simulations including the pile-up with the background and cannot be handled by the analytic approximations discussed in Sec. IV A.

V. DEDICATED MONTE CARLO
The first numerical implementations of approaches describing the LPM effect were developed for Monte Carlo codes in the seventies by three groups: one in Japan, one in the U.S., and one in Bulgaria. The main emphasis was the study of electromagnetic showers initiated by ultra-high-energy cosmic rays. The first group worked alone and validated its results against analytic approximations [76], while the other two joined forces [77], arriving at a common consensus that has been the basis of all subsequent efforts. Nowadays, the Migdal approach is available in Monte Carlo codes for the simulation of extensive air showers, like AIRES [30], CORSIKA [31], or COSMOS [32], and even in general-purpose packages, like GEANT4 [27], EGS5 [28], or EPICS [29]. We also developed our own implementation of the Migdal approach [57,75], which improves on all the others in its careful handling of the discontinuity present in the first derivative of the cross section. It is based on the GEANT3 code [78], since, when the work began, GEANT4 had not yet gained widespread acceptance. It was used for comparison with the CERN LPM data [57]. To the best of our knowledge, no implementation of the BK approach in a Monte Carlo code has yet been reported. Thus, we extend our previous work on the Migdal approach [57] to the BK one by reusing, in particular, the careful handling of the discontinuity of the first derivative of the cross section.
As discussed in Sec. IV B, the emitted photon can be absorbed inside the target through the pair production process. As a consequence, the program needs to handle positrons as well. In the present implementation, the same cross sections as for electrons are applied to positrons. Due to the different kinematics [8], the LPM suppression in pair production is small for the energies of interest here (e.g., for 1-TeV gamma rays in iridium, the maximum reduction of the cross section over the full electron-positron energy range is ≈ 1 % [57]) and is therefore not taken into account.
In general, dielectric suppression is only important for low photon energies (see Sec. II B) and its inclusion is not necessary under some conditions, like those of the comparison with the CERN LPM data. Therefore, the program has the option to disable dielectric suppression to simplify the logic and speed up the computation.

A. Implementation
Two main difficulties have to be overcome for an efficient implementation of the BK approach in a Monte Carlo code.
First, the functions G and Φ [see Eqs. (16)] and the functions D_1 and D_2 [see Eqs. (24)] need to be evaluated each time the photon energy k is sampled, so that direct use of their definitions is very inefficient. Approximate methods for the evaluation of G and Φ employing rational functions were given by Stanev et al. [77] and were widely utilized in essentially all programs written afterwards [27,28,57,75], one notable exception being Ref. [30], where other approximations, still based on rational functions, were proposed. Here, the original one by Stanev et al. [77] is maintained. A fit to the functions D_1 and D_2 in terms of rational functions of ν_0 has been developed for different ranges of the variable to ease their repetitive evaluation in the calculations presented here; an analysis of its accuracy, together with the values of the coefficients employed, can be found in Appendix A.
Second, for a given material, electron energy E, and photon energy k, the corresponding value of ρ̄_c has to be determined by solving Eq. (19). The simple bisection method has been employed because, once proper bracketing points are selected, its convergence is always guaranteed. Typically ≈ 20 iterations are needed to achieve a relative precision better than 10^-8, which can be very fast given the simplicity of the function that needs to be evaluated at each step. As remarked, ρ̄_c cannot exceed 1. Note that the efficiency can be improved by checking the value of the variable ν̄_1² = 4Q̄L_1: if it is greater than 1, ρ̄_c can be set to 1 and the bisection procedure skipped.
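The bisection step can be sketched as follows. Since Eq. (19) is not reproduced here, a placeholder callable stands in for its left-minus-right side (assumed positive below the root and negative above it), and the ν̄₁² shortcut is left to the caller:

```python
def solve_rho_c(eq19_residual, tol=1e-8, max_iter=60):
    """Find rho_c in (0, 1] with eq19_residual(rho_c) = 0 by bisection.

    eq19_residual is a hypothetical stand-in for Eq. (19); it is assumed
    positive below the root and negative above it. With a relative
    tolerance of 1e-8, roughly 20-30 iterations suffice.
    """
    lo, hi = 1e-12, 1.0
    if eq19_residual(hi) >= 0.0:
        return 1.0                      # rho_c saturates at its maximum value
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if eq19_residual(mid) > 0.0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol * hi:
            break
    return 0.5 * (lo + hi)

# sanity check on a trivial residual with a known root at 0.3
assert abs(solve_rho_c(lambda r: 0.3 - r) - 0.3) < 1e-6
```

The early return implements the saturation at 1 mentioned in the text; the guaranteed halving of the bracket at every step is what makes the convergence unconditional.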
As discussed in Sec. II D 2, a discontinuity in the first derivative of the cross section is present both in the main term and in the correction term at the photon energy k = k_d corresponding to ρ̄_c = 1. Although the effect on the total cross section is much less severe than in the case of the Migdal approach (see Sec. III), an explicit handling has still been implemented, following closely Ref. [57]. More details are given on the determination of the location of the discontinuity in Appendix B, on the evaluation of the total cross section in Appendix C, and on the efficient sampling of the photon energy in Appendix D.

B. Simulation parameters
The developed program has been run with 2·10^8 events with the GEANT3 ABAN flag set to 0 (the default being 1), as recommended in Ref. [79], to enforce effective tracking of all electrons and positrons down to the chosen low-energy cut T_min = 50 MeV. The low-energy cut for photons, TCUT, has been set to 10 keV, 50 MeV, and 10 keV in the simulations for the SLAC E-146, CERN LPM, and CERN LOW-Z data, respectively. Dielectric suppression is enabled, disabled, and enabled for the same sets, respectively. The angular cuts have been set to 1.4, 1, and 0.6 mrad for the SLAC E-146 [20], CERN LPM [22], and CERN LOW-Z [26] data, respectively, according to the geometry of the setup described in the corresponding publications. Although the program can include the effect of the resolution of the calorimeter, this has been found to be negligible for all datasets [57,75] and has thus been switched off. The complete GEANT3 configuration can be found in Ref. [73] (see, in particular, the rightmost column of Table III). Typical execution times on a modern multicore 64-bit unit running at 2.0 GHz (an AMD® Opteron 6128 HE, processing one simulation per core) are of the order of 21.0 and 21.2 µs per event for the implementations of the Migdal and BK approaches, respectively, in the case of the simulations for the SLAC E-146 data (these timings include tracking with one cylindrical radiator volume and histogram filling).

C. Tests of the implementation
The program contains a sophisticated logic to monitor the tracking status, including inconsistencies due to an electron entering a volume multiple times without an intermediate exit. This can happen due to the finite precision of the tracking steps; with the selected tracking parameters, it is however limited to a few cases out of 2·10^8 events. Multiphoton emission can be investigated by constructing a series of histograms like those in the detailed studies presented in Ref. [73]. The first-photon spectrum can be employed to validate the present implementation, since a direct match with the BK approach is expected. The case of a 25-GeV electron impinging on iridium is shown in Fig. 10. The differences of the simulation from theory are well within the estimated statistical uncertainties (the bars indicate one standard deviation) in almost all cases. More than five orders of magnitude in photon energy are covered in Fig. 10. A problem in the evaluation of the total cross section would result in a systematic shift of all points, while a mistake in the sampling would produce local deviations from zero incompatible with the statistical fluctuations. Due to the simultaneous LPM and dielectric suppressions, the number of events at low energies decreases dramatically, resulting in larger error bars. To definitively confirm that the total cross section is properly calculated, it is interesting to compare the average deviation over all bins with the corresponding statistical uncertainty. For iridium, copper, and carbon, under the same conditions as in Fig. 10, these values are (−0.034 ± 0.067) %, (0.020 ± 0.050) %, and (0.027 ± 0.027) %, respectively. The same test has been repeated for all the simulations employed for the comparison with data in Sec. VI, always finding very similar results.
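The average-deviation statistic quoted above can be written compactly as an inverse-variance weighted mean of the per-bin relative deviations (a sketch; the histogram contents in the example are invented for illustration, not the actual simulation output):

```python
import math

def weighted_mean_deviation(mc_counts, expected):
    """Inverse-variance weighted mean of (mc - exp)/exp with Poisson errors.

    Returns (mean, error); a mean compatible with zero within the error
    indicates that the sampled total cross section matches the theory.
    """
    devs, weights = [], []
    for n, e in zip(mc_counts, expected):
        sigma = math.sqrt(max(n, 1.0)) / e   # Poisson error on the ratio
        devs.append((n - e) / e)
        weights.append(1.0 / sigma**2)
    wsum = sum(weights)
    mean = sum(d * w for d, w in zip(devs, weights)) / wsum
    return mean, 1.0 / math.sqrt(wsum)

# perfect agreement gives a zero mean deviation with a finite uncertainty
mean, err = weighted_mean_deviation([10000] * 5, [10000.0] * 5)
assert mean == 0.0 and err > 0.0
```

Applied to the real histograms, this yields numbers of the same form as the (−0.034 ± 0.067) % quoted for iridium: a local sampling mistake inflates individual deviations, while a normalization error shifts the mean away from zero.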
Finally, before accepting the results of the simulations, it is fundamental to verify their stability. The most important test concerns the effect of the electron and photon energy cuts E min and TCUT. For the SLAC E-146 case, TCUT and E min have been increased from 10 to 50 keV and from 50 to 250 MeV, respectively, finding no change within the statistical uncertainty for all the cases compared to the data in Sec. VI. We do not show the corresponding figures here, but they are very similar to those displayed in Ref. [73] for the Migdal approach (see in particular Figs. 13 and 14). For the CERN LPM case, TCUT and E min have been decreased from 50 to 5 MeV, finding again no difference, within the statistical uncertainty, for all the cases considered in Sec. VI. The cuts for the CERN LOW-Z simulations are identical to those of the SLAC E-146 ones, but the beam energy is 178 versus 25 GeV and the lowest photon energy covered is 50 MeV against 100 keV; thus any further testing has been deemed unnecessary. The targets used in the SLAC E-146, CERN LPM, and CERN LOW-Z experiments are rather thin, so that a check of the impact of the boundary-crossing precision EPSIL on the final result is in order. Full stability has been found for EPSIL values of 0.1 µm and below. The adopted value for all the simulations compared with data in Sec. VI is EPSIL = 0.1 µm.

VI. COMPARISON WITH ALL EXPERIMENTAL DATA
The direct measurements of the LPM suppression are essentially limited to the SLAC E-146 [18][19][20], CERN LPM [21,22], and CERN LOW-Z [26] experiments. The main differences are the following.
1. The SLAC E-146 experiment used 8- and 25-GeV electron beams, covering more than 3 orders of magnitude in photon energy but limited to below 500 MeV, thus leaving the region close to the SWL unmeasured. The CERN LPM measurements reached the highest beam energies so far, up to 287 GeV, and covered the spectral region of the SWL, but were limited to photon energies above 2 GeV. The CERN LOW-Z data employed 178-GeV electrons but again covered only a part of the spectrum, from 50 MeV to 3.8 GeV.
From the considerations exposed in Sec. III, the highest discriminating power between the Migdal and BK approaches is expected for the CERN LPM results.

2. No inclusion of dielectric suppression is necessary for the CERN LPM data, so it has been disabled in the corresponding simulations. On the contrary, it is always taken into account when comparing with the SLAC E-146 and CERN LOW-Z results.
3. The SLAC E-146 experiment reached photon energies down to 200 keV, where transition radiation is present and should be incorporated into the theoretical description. The dominance of such a contribution can easily be spotted by the upturn of the spectrum. The Monte Carlo employed in the present work does not include transition radiation, so that the part of the spectrum at lower photon energies must be excluded from the comparison to the data.

4. The SLAC E-146 study included rather thin gold targets, corresponding to 0.1 % and 0.7 % X 0 , to investigate surface effects. These cannot be compared with the present Monte Carlo simulations, which are based on approaches valid for infinite targets (see Sec. III).
5. The photon flight path, from the target to the calorimeter, was ≈ 50 m in the SLAC E-146 experiment and was kept in vacuum, except for a very thin exit window in front of the calorimeter [20], as mentioned. Note that even if this length can be decreased by using a larger-field magnet to deflect the electrons that crossed the target, this comes at the price of stronger and higher-energy synchrotron radiation. As a matter of fact, synchrotron radiation was present in the SLAC E-146 setup and had to be removed by a special cut on the reconstructed position of photon impact on the calorimeter (which was segmented) [20]. The final amount of residual background was quoted in Ref. [20] as, on average, 1 photon in the 200-keV to 500-MeV energy range per 1000 electrons. The spectra shown in Fig. 4 of Ref. [20] do not appear to have a definite BH profile, but by assuming such a shape as a reasonable approximation, an equivalent background of 0.01 % X 0 can be estimated. Thus, it is not necessary to handle background subtraction in the Monte Carlo code for the SLAC E-146 setup. As noted in Sec. IV C, the same solution could not easily be applied to the CERN LPM or CERN LOW-Z setups, employed at energies about an order of magnitude larger. Indeed, the length of the photon path in the CERN LPM case was ≈ 80 m, part in helium and part in air [21,22]. The corresponding background, mostly due to the aluminized mylar windows of the first drift chamber located between the target and the deflecting magnet, was found in Refs. [21,22] to be equivalent to 0.7 % X 0 . As demonstrated in Sec. IV C, such a background level must be explicitly handled in the processing of the Monte Carlo results by following the same subtraction procedure adopted to derive the data. As shown in Fig. 5 of Ref. [22], the background spectrum has, to quite a good approximation, a BH shape. For all the results shown here, we use a secondary carbon target (placed downstream of the primary one) with a thickness of 0.7 % X 0 .
To speed up the simulation, a BH spectrum without LPM suppression is assumed for the secondary target. The CERN LOW-Z case was similar, except that background subtraction was not performed, to avoid any distortion; thus the background has to be explicitly added in the simulations to compare with the data. The background was equivalent to 3 % X 0 [26] and again had a BH shape to quite a good approximation in the energy range covered by the final data (see Fig. 11 of Ref. [26]). Again, a secondary carbon target with a thickness of 3 % X 0 and a BH spectrum has been placed downstream of the primary one in all simulations.
It should be emphasized that all comparisons to be presented here are absolute, i.e. no free overall normalization factor is present. Moreover, both the Migdal and BK approaches have been reformulated, as described in Secs. II C and II D, respectively, to adopt the same radiation lengths tabulated by Tsai [38,39].
The vertical scales of the data follow the original publications, to ease the comparison. Unfortunately, different choices were made by the original authors. In general, the number of events in each channel dN has always been normalized to the total number of impinging electrons N 0 (thus including those that do not radiate). In the case of the SLAC E-146 values, a normalization to the target thickness t in units of X 0 , i.e. t/X 0 , has been included and, in that of the CERN LOW-Z ones, the bin width w b has been taken into account.
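The normalization conventions just described amount to simple divisions; a hypothetical helper illustrating them (the function names are ours, not taken from the original analyses):

```python
def normalize_slac(dN, N0, t_over_X0):
    """SLAC E-146 convention: events per impinging electron
    and per unit target thickness t/X0."""
    return dN / (N0 * t_over_X0)

def normalize_lowz(dN, N0, bin_width):
    """CERN LOW-Z convention: events per impinging electron
    and per unit bin width w_b."""
    return dN / (N0 * bin_width)
```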
The comparisons between the final full Monte Carlo simulations (see Sec. V B for the parameters) and the SLAC E-146 data, collected for E = 8- and 25-GeV electron beams, are shown in Figs. 11 and 12, respectively. The main apparent feature is that the Migdal and BK spectra are rather close, especially at low Z and for the lowest beam energy E = 8 GeV. As mentioned, the discriminating power of the experiment is small, given also the statistical and systematic uncertainties [20], so that we deemed it unnecessary to compare with all the measured targets; we have excluded uranium and kept only one thickness for each element. In agreement with the discussion of the basic cross sections (see Sec. III), it remains true, even after the full Monte Carlo, that the Migdal predictions are above the BK ones in the photon energy region of the transition between the BH limit at the SWL and the strong LPM suppression on the opposite side of the spectrum. On the other hand, the discontinuity in the first derivative of the basic cross section does not appear in the simulations due to the smearing brought about by the multiphoton emission. The effect of transition radiation is also very clear; note in particular the tungsten target for E = 25 GeV. Since it is not included in the Monte Carlo, the simulated spectra steadily drop towards low photon energies and do not show the characteristic upturn of the experiment. A more objective comparison of the simulation with the SLAC E-146 data can be made on the basis of the χ 2 normalized to the number of degrees of freedom n DF . The simulation has the same logarithmic binning as the data (25 per decade) and a direct calculation of χ 2 is possible without interpolation. The contribution of the statistical error of the simulations is negligible when compared to the experimental one. The values of χ 2 /n DF are reported in Fig. 13 and Table I. Unfortunately, no clear-cut general preference for either approach is apparent. Considering first the data for E = 8 GeV, most of the targets show rather similar values of χ 2 /n DF , except for tungsten and lead, which favor the Migdal and BK theory, respectively. Direct examination of Fig. 11 confirms these cases: in particular for lead, there is no doubt that the Migdal approach is closer to the data. However, it is also necessary to note that for iron, tungsten, and most notably lead, the experiment has a tendency to overshoot all simulations at the high-energy end of the spectrum. Even considering all targets at E = 8 GeV, half have a lower χ 2 /n DF for the Migdal (carbon, aluminum, and tungsten) and half for the BK (iron, gold, and lead) approach, respectively.
For E = 25 GeV, the clearest preferences appear for iron, tungsten, gold, and lead, of which only the first is in favor of the BK approach. Inspection of Fig. 12 clearly shows that, in the case of iron, the BK approach is particularly successful. Again, the tendency of the measurements to stay higher in the uppermost energy section of the spectrum results in the other three cases being better reproduced by the Migdal formulae. Considering all targets, again half have a lower χ 2 /n DF for the Migdal (tungsten, gold, and lead) and half for the BK (carbon, aluminum, and iron) approach, respectively. Only iron shows the same preference for the same theory at both energies. It is therefore safe to conclude that the SLAC E-146 data do not allow one of the approaches to be unambiguously singled out as the best.

The comparisons of the final full Monte Carlo simulations and the CERN LPM data are displayed in Figs. 14, 15, and 16 for beam energies of E = 149, 207, and 287 GeV, respectively. All targets have been considered (note that even though a carbon one was measured, it was used to determine the efficiency of the calorimeter [22] and therefore does not offer independent information). The background subtraction procedure has been applied to the simulations to be consistent with the data (see Sec. IV C). Especially for Ta and Ir, the Migdal approach remains clearly higher than the BK one, by ≈ 5 % around the maximum of the power spectrum. Both approaches merge into the BH limit at the SWL and get again rather close towards the low-energy end: no discriminating power is expected in those regions. Therefore, it is important to measure the part of the spectrum around the maximum, which was done only by the CERN LPM experiment.
The χ 2 /n DF of the comparison between simulations and data is plotted in Fig. 17 and the values are given in Table II (again both have the same 25 logarithmic bins per decade and no interpolation is necessary in calculating the χ 2 ). Although the difference between the Migdal and BK theories in the region of the maximum is apparent, as mentioned above, it is unfortunately of the same order as the statistical uncertainties of the measurements, so that, overall, the results of the comparison are not different from those of the SLAC E-146 case: indeed, of the 9 target-energy combinations, 5 favor the BK approach (clustered at E = 149 and 207 GeV). Note that one of these cases is iridium at E = 149 GeV, which shows the worst χ 2 /n DF of all (as a matter of fact, the point corresponding to the Migdal theory falls outside the scale of Fig. 17). Inspection of iridium at E = 149 GeV in Fig. 14 indeed reveals that the measurements remain well below the simulations even in the SWL limit, where both cross section options merge into the BH value. In general, the values of χ 2 /n DF are all reasonable except for the mentioned case of iridium at 149 GeV, for both theories, and tantalum at E = 149 GeV, for the Migdal one. Unfortunately, from the CERN LPM experiment it is also not possible to conclude that one of the approaches is to be preferred.
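The χ 2 /n DF figure of merit used throughout this section can be sketched as follows (a simplified version: experimental errors are assumed uncorrelated between bins, and the statistical error of the simulation is neglected, as stated in the text):

```python
def chi2_per_ndf(sim, data, data_err):
    """chi^2 per degree of freedom between a simulated spectrum and
    data sharing the same binning (25 logarithmic bins per decade in
    the text, so no interpolation is needed). Bins with a non-positive
    experimental error are skipped."""
    chi2, ndf = 0.0, 0
    for s, d, e in zip(sim, data, data_err):
        if e > 0.0:
            chi2 += ((s - d) / e) ** 2
            ndf += 1
    return chi2 / ndf
```

For the CERN LOW-Z case, where the experimental binning differs from that of the simulation, a linear interpolation of the simulated spectrum onto the experimental bin centers would precede this calculation.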
The comparison of the final full Monte Carlo simulations and the CERN LOW-Z data for the beam energy of E = 178 GeV is depicted in Fig. 18. The aluminum reference target has not been included because it consisted of 80 foils with a thickness of 25 µm, to avoid any LPM suppression, and it was used in the calibration of the calorimeter [26]. The LDPE one is also not considered since our code cannot handle mixtures, but again there is no loss of information since its shape is almost BH. The background was not subtracted in the experimental analysis to avoid distortions, as mentioned, and is included in the simulation (mind that the scale does not start from zero in Fig. 18). As in the SLAC E-146 case, the Migdal and BK approaches give very close results because the covered energy range is below the maximum of the power spectrum and it is, once more, not possible to discriminate between them within the statistical and systematic uncertainties of the measurements.
The χ 2 /n DF of the comparison between simulations and data is reported in Table III (in this case the binning of the experiment does not follow the 25 logarithmic bins per decade adopted in the simulations and a linear interpolation has been performed). There is a general preference for the Migdal approach, with only one target, Ta, showing a preference for the BK one. Unfortunately, the larger values of χ 2 /n DF , possibly indicating somewhat higher systematic uncertainties, once more do not allow a firm conclusion to be reached about which is the best theory.

VII. CONCLUSIONS
Quantum coherence effects, resulting in the LPM suppression, are an essential ingredient to reproduce bremsstrahlung measurements for high-energy electrons. Traditionally, the Coulomb correction has been introduced by rescaling the Migdal cross section, which does not include such a feature, with a realistic radiation length. The Baier-Katkov approach to quantum coherence effects embodies the Coulomb correction in a consistent way. The first Monte Carlo implementation of this latter theory has been presented. Enough details have been provided to allow an independent development of a new code. We have also shown how the Baier-Katkov formulae can be rewritten to allow reference to an accurate radiation length.
The Monte Carlo simulations have enabled the comparison of the Migdal and Baier-Katkov approaches with all available data collected for amorphous targets with accelerators under controlled conditions, including multiphoton emission, attenuation by pair production, and, finally, pile-up with photons from the background. When all these effects are accounted for, the two theories end up being very close and, unfortunately, currently available experiments cannot discriminate between them within the present statistical and experimental inaccuracies. It would require a large effort to reduce those uncertainties to the point of supporting one of the two versions. If such an endeavor is undertaken in the future, the present work clearly demonstrates the need to cover the photon energy range around the maximum of the power spectrum, where the LPM suppression sets in. It thus appears that the best results could be achieved by an experiment carried out under conditions similar to those that produced the CERN LPM data. However, it would be necessary to attain a background suppression at the level of a fraction of a percent of X 0 and combined statistical and systematic uncertainties better than 2 % around the maximum of the power spectrum. All in all, there are no compelling reasons to modify the Monte Carlo codes generally used to simulate the response of calorimeters at the LHC or extensive air showers, all based on the Migdal approach.
The LHCf collaboration reported inconsistencies in the initial profile of the shower, where the LPM suppression is strong. If this evidence should gain more support, the present work suggests that the reasons should be looked for in additional limitations of the current theories. In particular, the general arguments about the formation length would lead one to predict a correlation between the magnitude of the scattering angle and the amount of suppression. No experimental evidence was found within present uncertainties [22], but theoretical predictions for the angular distribution under LPM suppression were made [80]. For the accurate simulation of extensive air showers, the issue of the LPM suppression in electron-electron bremsstrahlung, which is always assumed to be the same as in nuclear bremsstrahlung, should also receive attention in the future. Finally, it would be interesting to develop a treatment of the LPM suppression beyond the full screening approximation, especially for the simulation of showers, when it is necessary to follow the degradation of the initial energy down to the detection threshold.

Appendix A: Evaluation of the functions D 1 and D 2

As anticipated in Sec. II D, we discuss here the evaluation of the functions D 1 (ν 0 ) and D 2 (ν 0 ) together with their fit by means of rational functions of ν 0 . For the numerical integration of the functions appearing in Eqs. (24a) and (24b), the routines included in the QUADPACK quadrature package appropriate for each case are used [81]. For D 1 , the integral is split into two ranges, namely 0 ≤ z ≤ 1 and 1 < z < ∞, and no problem arises in reaching a relative precision of 10 −5 for both ranges. The integral appearing in the expression of D 2 is more problematic, as the functions d(z) and G(z) defined in Sec. II D both suffer from a logarithmic divergence which cancels in the limit z → 0. Again, the integral has been split into two ranges, 0 ≤ z ≤ z min and z min < z < ∞, where the choice of z min will be explained shortly.
The numerical integration over the second range is easily performed. To execute the first integral, instead, the integrand is expanded in a Maclaurin series up to third order in z and the integral is computed analytically, choosing z min in such a way that the percentage relative difference between the integral including the third-order term and the one including only up to the second-order term is smaller than 0.001. The numerical values of D 1 and D 2 , obtained by the procedure described, are shown in Fig. 19 for ν 0 from zero up to 20. Note in particular the discontinuity at ν 0 = 1 and the negative minimum of D 1 close to ν 0 ≈ 0.5. It has been checked that the values calculated by the numerical procedure described here agree with those shown in Fig. 2 of Ref. [11].
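The choice of z min can be illustrated with a toy integrand whose Maclaurin coefficients c 2 and c 3 stand in for the actual expansion of the D 2 integrand (an assumption of this sketch; the real coefficients follow from d(z) and G(z)):

```python
def choose_zmin(c2, c3, z0=1.0, tol=0.001):
    """Find z_min such that the percentage relative difference between
    the analytic integral of the series up to third order and the one
    up to second order stays below `tol` (in percent), halving z_min
    until the criterion is met. c2, c3 are illustrative Maclaurin
    coefficients of the (divergence-cancelled) integrand near z = 0."""
    z = z0
    while True:
        i2 = c2 * z**3 / 3.0               # integral of the c2*z^2 term
        i3 = i2 + c3 * z**4 / 4.0          # adding the third-order term
        if abs(100.0 * (i3 - i2) / i3) < tol:
            return z
        z *= 0.5
```

Since the third-order contribution scales with one extra power of z, halving z reliably drives the relative difference below any fixed tolerance.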
Since the functions D 1 and D 2 have to be used in lengthy Monte Carlo calculations, a suitable approximation to the "exact" calculated values is in order. A fit by means of rational functions is chosen for each of three ranges of ν 0 [Eqs. (A1) and (A2)]. Actually, for the lowest range of ν 0 , the coefficients p 1 and p 2 have been determined by a fitting procedure and subsequently rescaled to satisfy the requirement of continuity of D 1 at ν 0 = 0.1 (as fixed by the fit in the range 0.1 ≤ ν 0 ≤ 1). The coefficient q 1 has been determined by requiring the continuity of D 2 at ν 0 = 0.1. The numerical values of the parameters appearing in Eqs. (A1) and (A2) are listed in Table IV. In principle, this adjustment procedure does not guarantee the continuity of the first derivative at ν 0 = 0.1. The fits to the functions D 1 and D 2 are plotted in Fig. 20 for the range 0 ≤ ν 0 ≤ 0.2. It is seen that, nevertheless, the slope of the fits to D 1 and D 2 in the 0.1 ≤ ν 0 ≤ 1 region joins smoothly at ν 0 = 0.1 to the slope for ν 0 < 0.1.
Finally, the percentage deviation of the fitted values from the exact ones is reported for both D 1 and D 2 in Fig. 21. It is seen that for D 1 the maximum deviation is below 0.3 %, while for D 2 it reaches 3.5 % for the lowest values of ν 0 . It should be noted that this deviation refers to a first-order correction to the main term of the total radiation intensity, i.e. to a contribution that is, in the worst case, no larger than 10 % of the main term (see Fig. 4 in Sec. II D). Moreover, the mentioned largest deviation is reached at small values of ν 0 , where the correct behavior in the limit ν 0 → 0, i.e. quadratic for D 1 and linear for D 2 , is more important than the exact reproduction of the numerical values of the functions: this ensures, when the fits are inserted in Eq. (25), that the correction term to the cross section vanishes in the short-wavelength limit at the tip of the spectrum (see again Fig. 4 in Sec. II D). For this reason, it is not convenient to increase the order of the parametrizations for ν 0 ≤ 0.1.
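The rescaling that enforces the continuity of D 1 at ν 0 = 0.1 can be sketched as follows, assuming for illustration a low-range form p 1 ν 0 ² + p 2 ν 0 ³ with the quadratic leading behavior mentioned above (Eq. (A1) itself is not reproduced in this text, so this form is an assumption):

```python
def rescale_for_continuity(p1, p2, target, nu=0.1):
    """Rescale fitted coefficients of an assumed low-range form
    D1(nu0) = p1*nu0**2 + p2*nu0**3 by a common factor so that the
    parametrization is continuous at nu0 = 0.1, matching `target`
    (the value fixed by the fit in the 0.1 <= nu0 <= 1 range).
    A common factor preserves the quadratic leading behavior."""
    scale = target / (p1 * nu**2 + p2 * nu**3)
    return scale * p1, scale * p2
```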
Appendix B: Location of the discontinuity

For simplicity, in what follows, we refer to k = k d as the discontinuity, it being understood that we mean the one
in the first derivative. The behavior of the discontinuity is markedly different according to whether dielectric suppression is taken into account or not. The two cases are correspondingly handled differently in the program. When dielectric suppression is not enabled, for a given material and a given impinging electron energy E, there is always one unique minimum value of the photon energy k = k d resulting in ρ c = 1 (see Sec. II D 1). It can actually be determined analytically (this is in contrast to the Migdal approach, where k d has to be found numerically even when no dielectric suppression is present [57]) and it is shown in Fig. 22 with a dashed line. It clearly increases with E and, for a fixed E, it decreases for lower atomic numbers Z. To quote a value, for E = 300 GeV electrons impinging on iridium, k d ≈ 35 GeV, well in the region covered by the CERN LPM data, and for E = 8 GeV, k d ≈ 29 MeV, again well in the region covered by the SLAC E-146 data.
When dielectric suppression is present, three cases are possible. For a certain unique value of E = E d , there is again one unique minimum value of the photon energy k = k d resulting in ρ c = 1. However, for E < E d , it is always ρ c < 1, while for E > E d , there is an interval of values k l d < k < k u d over which ρ c > 1, resulting in the presence of two discontinuities: one at k = k l d and another at k = k u d (see Sec. II D 2). The values of k l d and k u d are plotted in Fig. 22 with dotted and continuous lines, respectively: they have to be determined numerically. The simple bisection method is used because of its guaranteed convergence; here again, typically ≈ 20 steps are required to achieve a relative precision better than 10 −8 . This part of the code works in double-precision (i.e. 64-bit) arithmetic to further improve the reliability. For E above a few times E d , the values of k u d are very close to the corresponding ones of k d for no dielectric suppression. Around E d , k u d gets reduced since it has to merge with k l d : this behavior is actually used in the code to tune the bracketing strategy of the bisection method. An interesting feature of k l d apparent in Fig. 22 is that it rises much more slowly with energy than k u d and it is also much less sensitive to the atomic number Z of the target material. To quote a value, for an E = 300 GeV electron impinging on iridium, k l d ≈ 6 MeV. This is a kind of "worst case" scenario for the simulations of the CERN LPM data (not very sensitive to Z), where a low-energy cut of TCUT = 50 MeV is applied for photon propagation. For E = 25 GeV, k l d ≈ 1 MeV, which is not below the value of TCUT = 10 keV used in the simulations for the SLAC E-146 data. This situation is very similar to what was found for the Migdal approach in Ref. [57].
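A minimal sketch of the bisection used to locate the discontinuities (the function f stands in for the evaluation of ρ c − 1, which is not reproduced here; with a bracketed root, ≈ 20-30 halvings give the quoted 10⁻⁸ relative precision):

```python
def locate_kd(f, k_lo, k_hi, rel_tol=1e-8):
    """Bisection for the photon energy k_d where f(k) = rho_c(k) - 1
    changes sign. Guaranteed to converge once the root is bracketed,
    which is the reason quoted in the text for choosing bisection."""
    f_lo = f(k_lo)
    assert f_lo * f(k_hi) <= 0.0, "k_d must be bracketed"
    while (k_hi - k_lo) > rel_tol * k_hi:
        k_mid = 0.5 * (k_lo + k_hi)
        if f_lo * f(k_mid) <= 0.0:
            k_hi = k_mid          # root is in the lower half
        else:
            k_lo = k_mid          # root is in the upper half
            f_lo = f(k_mid)
    return 0.5 * (k_lo + k_hi)
```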
Finally, the values of E d for both the Migdal and BK approaches are compared in Fig. 23. The general behavior is quite similar, but the values for the former are approximately one order of magnitude larger than for the latter.

Appendix C: Total cross section
The probability for an impinging electron with energy E to emit a photon of energy k is controlled by the total cross section, which has to be supplied to the Monte Carlo code to fill the internal tables. It is given basically by the sum of Eqs. (21) and (25), multiplied by the factor 1/(n E) to convert the radiation probability per unit length into a differential cross section [Eq. (C1)], where R 1 and R 2 have been defined in Eqs. (16), the functions G and Φ in Eqs. (13), and the functions D 1 and D 2 in Eqs. (24). The dielectric suppression is taken into account with the substitutions presented in Sec. II D 2. A low-energy cut TCUT is applied for photon propagation in the Monte Carlo code, so that Eq. (C1) must be integrated numerically over the range TCUT to E. Actually, the maximum allowed value of k is given by the kinetic energy of the electron T E corresponding to E, but this is a small difference. In the present case, all formulae are valid in the ultrarelativistic limit, but the same correction described in Appendix D is applied to enforce energy conservation under all conditions during the simulations. The adaptive Gaussian quadrature routine DADAPT from the CERN mathlib library [82] is used after a change of integration variable from k to ln(k), performed to regularize the 1/k BH divergence of Eq. (C1), contained in ν 2 0 . To improve the stability, if the discontinuity is located in the region of integration, i.e. if k d > TCUT, the integral is split into two parts: one from TCUT to k d − δk d and another from k d + δk d to E, where δk d is the estimated uncertainty in the numerical localization of k d . The contribution from k d − δk d to k d + δk d is simply handled by one application of the trapezoidal rule. If dielectric suppression is active, only the upper discontinuity is handled explicitly, following Ref. [57], and in such a case k d = k u d .
Explicit handling of k l d , which falls within the interval of integration only for the simulations to be compared with the SLAC E-146 data (see Appendix B), would increase the complexity of the code too much. No problems have been detected during the tests described in Sec. V C. All the calculations are again made with double-precision (i.e. 64-bit) arithmetic to further improve the reliability, with the absolute tolerance in DADAPT set to zero and the relative tolerance to 10 −6 . A typical value for δk d /k d is 10 −8 , as mentioned. The minimum number of initial subdivisions performed by DADAPT is set to 25.
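The integration strategy, log-variable substitution plus splitting around the discontinuity, can be sketched as follows. A fixed-step trapezoidal rule replaces the adaptive DADAPT routine (an assumption of this sketch), and the caller-supplied dsigma_dk stands in for Eq. (C1):

```python
import math

def integrate_cross_section(dsigma_dk, tcut, E, kd=None, dkd=None, n=2000):
    """Total cross section as the integral of dsigma/dk from TCUT to E.
    The 1/k BH divergence is regularized by the substitution u = ln(k),
    i.e. dsigma/dk dk = (dsigma/dk) k du, and the neighbourhood
    [k_d - dk_d, k_d + dk_d] of a discontinuity is covered by a single
    trapezoid, as in the text."""
    def integral_in_log(a, b):
        ua, ub = math.log(a), math.log(b)
        h = (ub - ua) / n
        s = 0.5 * (dsigma_dk(a) * a + dsigma_dk(b) * b)  # endpoint terms
        for i in range(1, n):
            k = math.exp(ua + i * h)
            s += dsigma_dk(k) * k                        # integrand in u
        return s * h
    if kd is None or not (tcut < kd < E):
        return integral_in_log(tcut, E)
    # single trapezoid over the gap of width 2*dkd around k_d
    gap = dkd * (dsigma_dk(kd - dkd) + dsigma_dk(kd + dkd))
    return integral_in_log(tcut, kd - dkd) + gap + integral_in_log(kd + dkd, E)
```

For a pure 1/k integrand the log substitution makes the integrand constant, so the trapezoidal rule is essentially exact, which illustrates why the substitution stabilizes the quadrature.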
Typical results are shown in Fig. 24. It is apparent how the LPM effect brings the initial growth of the total cross section to saturation for increasing E and then steadily reduces it. The inclusion of the dielectric suppression further enhances this trend. Finally, a comparison of the Migdal and BK approaches reveals that they are quite close, which is to be expected from the discussion in Sec. III.

FIG. 24. Total bremsstrahlung cross section for the radiation of a photon with energy k > TCUT = 10 keV by an electron with energy E impinging on the indicated material. The vertical scale on the left is used for copper and iridium and the one on the right for carbon. Three approaches are considered: the BK one without dielectric suppression (dashed lines), the BK one with dielectric suppression (solid lines), and, finally, the one by Migdal with dielectric suppression (dotted lines).
Appendix D: Sampling of the differential cross section

Once a bremsstrahlung emission has been selected to happen by the code, it is necessary to sample the photon energy k according to the differential cross section given by Eq. (C1). Of course, it is not possible to find the primitive of Eq. (C1) analytically and use the transformation method directly. The straightforward application of the acceptance-rejection method would be quite inefficient because of the 1/k BH divergence in Eq. (C1). This problem was tackled by Butcher and Messel to perform the first Monte Carlo simulations of electromagnetic showers in the fifties: they introduced the "composition and rejection" method [83]. The natural variable for the sampling is the ratio y = k/T E , which spans the interval TCUT/T E ≤ y ≤ 1, where TCUT is the low-energy cut imposed on photon propagation in the Monte Carlo code. Then y can be sampled from the function 1/(y ln(T E /TCUT)) by the transformation

y = exp(r 1 ln(TCUT/T E )) , (D1)

where the random number r 1 is uniformly distributed over the 0-1 range. Afterwards, the acceptance-rejection method is applied to the rejection function

r(y) = q(y)/q max (D2)
by drawing a second random number r 2 , also uniformly distributed between 0 and 1, and accepting y if r 2 ≤ r(y). If this is not the case, the whole procedure is repeated from the sampling of r 1 . Since the 1/k BH divergence of Eq. (C1) is already taken into account in Eq. (D1), the corresponding expression for q(x) is given in Eq. (D3). The dielectric suppression is included by means of the substitutions given in Sec. II D 2. Although Eqs. (C1) and (D3) are only valid in the ultrarelativistic limit, it is necessary to enforce energy conservation exactly in the Monte Carlo code; thus, it is assumed that x = y T E /E. This distinction is always negligible except very close to the lower cut T min = 50 MeV imposed for electron propagation. Indeed, the details of the cross section close to this threshold do not affect the final result of the simulation appreciably. The value of q max in Eq. (D2) has to be determined by finding the maximum of Eq. (D3). Of course, q max depends on the target material and the electron energy E. The task is performed by using the subroutine DMINFC from the CERN mathlib library [84]. All the calculations are again made with double-precision (i.e. 64-bit) arithmetic to further improve the reliability; the relative accuracy parameter is set to 10 −8 and the tolerance for boundary checking is 10 times larger, following the recommendations given in Ref. [84]. Unpredictable behavior of DMINFC, with jumps towards the limits of the search intervals, has been observed due to the discontinuity at k d . Therefore, it is mandatory to search for the maximum separately in two regions: the first below ln(k d /T E ) and the second between ln(k d /T E ) and 0. It has been found advantageous to perform the search in the variable ln(k/T E ), since k d /T E → 0 for decreasing Z. The maximum is found to move from the branch above k d to the branch below k d when the energy E increases.
This is true whether dielectric suppression is enabled or disabled, contrary to the case of the Migdal approach, where, without dielectric suppression, the maximum is always found below k d [57]. In the case of dielectric suppression and for the energy range of interest here, where T min = 50 MeV, it is sufficient to consider k d = k u d , disregarding the presence of k l d . To avoid searching for the maximum during each step of the Monte Carlo, this is done once for every specific material at initialization time for 1750 energy values between T min = 50 MeV and 10 TeV. Then a fit is performed with a sixth-degree polynomial to represent the evolution of q max with E. To give an idea of the inaccuracies involved, the deviation of the value calculated from the polynomial with respect to the true one is shown, over the full E range considered for the fit, for the usual representative target elements in Fig. 25. Even in the worst case, the deviation does not exceed 0.3 %. Due to the smoother behavior of the BK approach, it has not been found necessary to use two separate sixth-degree polynomials to represent the evolution of q max below and above k d , as is required for the Migdal approach [57]. To save memory, the 1750 values used for the fit are stored in a temporary ZEBRA bank, which is dropped afterwards, and only the coefficients of the polynomial are stored in the JMATE data bank of GEANT for each specific material. These parameters can then be easily retrieved at tracking time and employed to evaluate Eq. (D2).
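The composition-and-rejection sampling of Appendix D can be sketched as follows (q and q max stand in for Eq. (D3) and its fitted maximum; this is an illustration of the Butcher-Messel scheme, not the GEANT3 implementation):

```python
import math
import random

def sample_photon_energy(q, q_max, tcut, T_E, rng=random.random):
    """Butcher-Messel composition-and-rejection sampling of y = k/T_E.
    The 1/k BH divergence is absorbed in the analytic sampling of
    1/(y ln(T_E/TCUT)) via Eq. (D1); the remaining smooth factor
    q(y) <= q_max is handled by rejection, Eq. (D2)."""
    while True:
        # Eq. (D1): inverse-transform sampling of the 1/y density
        y = math.exp(rng() * math.log(tcut / T_E))   # y in [TCUT/T_E, 1]
        # Eq. (D2): rejection step against the smooth factor
        if rng() <= q(y) / q_max:
            return y * T_E                           # photon energy k
```

The efficiency of the rejection step is governed by how close q(y) stays to q max on average, which is why an accurate parametrization of q max , as described above, matters for the simulation speed.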