TREDI simulations for high-brilliance photoinjectors and magnetic chicanes

The TREDI Monte Carlo program is briefly described, devoting some emphasis to the Lienard-Wiechert potentials approach followed to account for self-field effects and to the covariant technique devised to achieve regularization of electromagnetic fields. Some guidelines for the choice of the correct parameters to be used in the simulation are also sketched. The predictions obtained for the reference working point of the space-charge compensated SPARC photoinjector and for a benchmark chicane designed to study coherent synchrotron radiation effects in a magnetic compressor are compared to those of other well-established simulation codes.


I. INTRODUCTION
The main issue in the development of coherent ultrabrilliant x-ray sources is the generation of ultrahigh peak brilliance electron beams. One of the basic motivations of the SPARC [1] experiment is the investigation of the quality of the beam delivered at the undulator entrance. The scheme proposed for SPARC's accelerating system consists of a BNL/UCLA/SLAC type, 1.6 cell rf gun operated at 2.856 GHz with a high peak field (~120 MV/m) on the cathode (Cu or Mg). The gun, surrounded by a focusing solenoid, delivers a 6 MeV beam and is followed by a drift (up to 1.5 m from the cathode) and two S-band traveling wave linac sections boosting the beam to the final 150 MeV energy required to avoid further emittance growth due to space-charge effects.
According to theoretical predictions [2], the working point for high brightness rf photoinjectors and the velocity bunching technique [3] were chosen in order to achieve both longitudinal compression and emittance preservation. The relevant parameters of this scheme are summarized in Table I. Figure 1 shows the profiles of the rf gun (unnormalized) and solenoid fields.
For the above mentioned scheme, theory and simulations (done typically with 2D axisymmetric codes assuming instantaneous propagation of space-charge effects) nicely agree in predicting for transverse emittance both compensation and a double minimum, mainly due to a chromatic effect between the solenoid and the beam energy spread. The region close to the local maximum between the two minima is the optimal position for the first linac section [2]. This choice, in fact, minimizes the beam emittance on a slice basis, a concept deeply connected to that of the cooperation length in free-electron laser (FEL) dynamics.
In this paper we essentially discuss and compare the predictions mentioned above with those obtained with TREDI, a fully 3D Monte Carlo program devoted to simulations of charged beam dynamics by direct integration of particle trajectories, accounting for self-fields through Lienard-Wiechert retarded potentials [4]. The development of TREDI was originally motivated [5,6] by the necessity of simulating, e.g., (i) rf injectors in non-axisymmetric conditions (like those encountered in the high aspect ratio injectors proposed for future colliders); (ii) the effects on emittance compensation schemes of axial symmetry breaking, possibly amplified by nonlinearities of the system; (iii) emittance growth in magnetic beam compressors due to radiative/acceleration fields.
A detailed description of the simulation code and its capabilities can be found in Ref. [7]. Notwithstanding, it is worth remarking here that first-principles ("prima principia") Monte Carlo codes usually model the beam as a collection of mutually interacting objects ("macroparticles"), whose number, because of practical computer limitations, is necessarily bounded to a few thousand or, at most, a few million. As a consequence, except perhaps for dilute systems where self-fields can be neglected, suitable techniques must be devised to cancel the effects of many possible numerical artifacts, either leading to unreliable results or posing stability concerns [8]. For example, since in all practical cases each macroparticle mimics a fairly large number (>10^4) of "real life" electrons, one needs to subtract the unphysical collisional contribution lurking in the model because of the huge electromagnetic fields that develop whenever macroparticles are close to each other. While closeness poses a serious concern at low energies, where static (velocity) fields dominate, collinearity [see Eq. (9)] can well be a source of noise in fully relativistic regimes, making difficult any prediction about, e.g., coherent synchrotron radiation (CSR) effects in magnetic compressors. In both cases numerical noise appears to be concentrated essentially in the higher region of the frequency domain, possibly limiting the code's ability to correctly reproduce genuine phenomena occurring at smaller scales (e.g., microbunching). The next section thoroughly describes the procedure adopted in TREDI in order to achieve "regularization" (smoothing) of the velocity dependent fields. Regularization of the acceleration terms is also briefly sketched. In the third section the problem of tuning this procedure will be discussed.

[*Electronic address: giannessi@frascati.enea.it]
The fourth and fifth sections are devoted to the comparison of results with those obtained through other, well-established numerical codes for photoinjectors and magnetic compressors, respectively.

II. SMOOTHING OF EM FIELDS
The smoothing approach followed in TREDI to regularize the divergences of the electromagnetic (EM) fields developing during simulations is that of dressing macroparticles with a form factor, i.e., assuming the elementary dynamical objects to be extended rather than pointlike charge distributions. This technique has proven to be very effective at suppressing the high frequency noise directly related to the well-known divergences of electrodynamics. Quite expectedly, it turns out that the choice of the form factor's size and functional shape (typically Gaussian or alike) largely determines the properties of the smoothing applied to the model. As a consequence, a careful tuning is needed both to avoid masking genuine effects and to weaken the dependence on the number of macroparticles. It should be noted, en passant, that a suitable initialization of the phase space [6] by sophisticated (quasi-)random generators [9] or purposely designed form factors [10] may also significantly help to speed up convergence. Moreover, while the working mechanism of this regularization technique is not new [11] in a computational context, and dates back to the earliest models of the electron conceived by Thomson, Poincaré, Lorentz, Abraham, and Schott [12], only recently have efforts been devoted to rendering the smoothing procedure fully covariant (see [13], and references therein). A covariant treatment is certainly preferable, for it gives more confidence in the overall validity of the method. On the other hand, the procedure (as will be clear in the following) is devised on a particle-to-particle basis in the reference frame where the source particle is at rest.
Since this is obviously not the frame where the kinematical variables are directly available, a covariant formulation presents at least two valuable properties: (i) Computational efficiency: the calculation can actually be carried out wherever the dynamical variables of the particles are directly available, namely, the accelerator ("lab") reference frame, avoiding the necessity of a huge number [O(N^2)] of costly Lorentz transformations from one frame to another.
(ii) Consistency/generality: in different reference frames, the smoothing yields values of the fields connected through Lorentz transformations, as they should be. As a consequence, the regularization's effects turn out to be naturally independent of the reference frame itself.
In order to explain how the mechanism of covariant smoothing works, let us consider the expression of the EM field strength tensor produced by a moving charge q following a universe line r(\tau) [14]:

  F^{\mu\nu}(x) = \frac{q}{V\cdot(x-r)}\,\frac{d}{d\tau}\left[\frac{(x-r)^{\mu}V^{\nu}-(x-r)^{\nu}V^{\mu}}{V\cdot(x-r)}\right],   (1)

where \tau is the charge's proper time; r and x are the 4-positions (events) of field emission and observation, respectively; V^{\mu} = dr^{\mu}/d\tau is the 4-velocity (hereafter c = 1). Note that r and x must fulfill the "retarded time" condition

  (x-r)^2 = (x^0-r^0)^2 - (\vec x - \vec r\,)^2 = 0,  with  x^0 - r^0 > 0,   (2)

usually expressed in a nonmanifestly covariant form [14]. Carrying out the derivative in (1) leads to the familiar splitting of the electric field into velocity and acceleration parts:

  \vec E_V = q\,\frac{\hat n - \vec\beta}{\gamma^2 (1-\hat n\cdot\vec\beta)^3 R^2}\bigg|_{ret},  \qquad  \vec E_A = q\,\frac{\hat n \times [(\hat n - \vec\beta)\times \dot{\vec\beta}\,]}{(1-\hat n\cdot\vec\beta)^3 R}\bigg|_{ret}.   (3)

The same result, however, can be obtained by carrying out the derivative with respect to proper time in (1) in a fully covariant way. Introducing the shorthands S_U \equiv U\cdot(x-r) and T^{\mu\nu}(U) \equiv (x-r)^{\mu}U^{\nu}-(x-r)^{\nu}U^{\mu} (where U is an arbitrary 4-vector and W \equiv dV/d\tau is the 4-acceleration), Eq. (3) can be recast as follows:

  F_V^{\mu\nu} = q\,\frac{T^{\mu\nu}(V)}{S_V^3},  \qquad  F_A^{\mu\nu} = \frac{q}{S_V^2}\left[T^{\mu\nu}(W) - \frac{S_W}{S_V}\,T^{\mu\nu}(V)\right].   (7)

One can easily check that -F_V^{0i} and -F_A^{0i} yield back E_V^i and E_A^i as defined in (3). Direct inspection of Eq. (7) suggests that both velocity and acceleration fields become very large whenever

  S_V = (x-r)\cdot V = \gamma R\,(1 - \hat n\cdot\vec\beta) \to 0.   (8)

It is clear that S_V \to 0 for R \to 0 or 1 - \hat n\cdot\vec\beta \to 0, or both. The velocity term describes nothing but the static, Coulomb part of the field. Its \propto 1/R^2 behavior poses problems only at short distances, because of a "genuine" divergence at R = 0. The acceleration field diverges at short distances too, but with a weaker grade because of its typical \propto 1/R behavior. On the other hand, the acceleration field can experience a dramatic growth for

  \hat n\cdot\vec\beta \to 1  \;\Rightarrow\;  1 - \hat n\cdot\vec\beta \to 0.   (9)

In fact, even though strictly speaking 1 - \hat n\cdot\vec\beta can never vanish, for relativistic particles with \hat n nearly parallel to \vec\beta it can get so close to zero as to make the fields change by several orders of magnitude. This "collinear blazing" can extend to large distances from the radiating charge because of the slow falloff of the field [\propto 1/R, as compared to the 1/(\gamma^2 R^2) behavior of the static part]. The remarks made above show that the blowing up of velocity and acceleration fields occurs in inherently different kinematic regions.
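The magnitude of this collinear enhancement is easy to quantify. The following sketch (our illustration, not TREDI code; all parameter values are arbitrary) evaluates the Doppler factor 1 - n·beta and the resulting (1 - n·beta)^-3 amplification of the acceleration field as the angle between the observation direction and the velocity shrinks:

```python
import math

def doppler_factor(gamma: float, theta: float) -> float:
    """Return 1 - n.beta for observation angle theta (rad) w.r.t. the velocity."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return 1.0 - beta * math.cos(theta)

gamma = 1000.0                          # ~0.5 GeV electrons (arbitrary choice)
on_axis = doppler_factor(gamma, 0.0)    # minimum value, ~ 1/(2 gamma^2)
off_axis = doppler_factor(gamma, 0.1)   # 100 mrad off axis

# The acceleration field scales as 1/(1 - n.beta)^3: the "collinear blazing".
amplification = (off_axis / on_axis) ** 3
print(f"1 - n.beta on axis   : {on_axis:.3e}")
print(f"1 - n.beta at 0.1 rad: {off_axis:.3e}")
print(f"on-axis/off-axis field ratio: {amplification:.3e}")
```

For gamma = 1000 the on-axis Doppler factor is of order 10^-7, so a few-thousandths change in direction moves the acceleration field by many orders of magnitude, which is why collinear configurations are such a potent noise source.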
This must be taken into account when devising a procedure to regularize the EM fields produced by macroparticles in a Monte Carlo simulation. What seems most sensible in this respect is to put (in a covariant fashion) in front of F_V and F_A "regularizing" terms such that

  F_V \to 0  \text{ for }  R \to 0   (10)

and

  F_A \to 0  \text{ for }  \hat n\cdot\vec\beta \to 1.   (11)

In both cases, preserving the tensor structure of the fields requires resorting either to a Lorentz scalar or to a 2nd-rank tensor to be partially saturated with T(V) (for velocity fields) or with T(W) - (S_W/S_V)\,T(V) (for acceleration fields). In order to control the divergence at R = 0, the simplest and physically most cogent choice seems to be the introduction of an "effective" charge (i.e., a Lorentz scalar)

  q \to Q_{eff}^{V}(R)   (12)

(the V superscript stands for velocity) such that

  Q_{eff}^{V}(R) \to 0  \text{ for }  R \to 0,   (13)

vanishing fast enough (\propto R^3 for any smooth charge distribution) to keep F_V finite. It is easily seen from Eq. (7) that Eq. (13) implies regularization of F_V and, a fortiori, of F_A. Note, however, that the acceleration terms need to be regularized as well in the kinematical region singled out by Eq. (11).
In order to complete the program described above we must devise an expression for the effective charge (12) matching the requisites of covariance and fulfilling Eqs. (10) and (13). To this purpose, consider a macroparticle described in its own rest frame by the static charge density

  \rho(\vec x) = \frac{q}{\sqrt{\det\Sigma}}\,\Theta(\Delta\vec x^{\,T}\Sigma^{-1}\Delta\vec x),   (14)

where

  \Delta\vec x \equiv \vec x - \vec x_S   (15)

[do not confuse the function \Theta with Heaviside's "step" function, often indicated with the same symbol]. Here and in the following, the subscript S ("source") will be attached to quantities of the macroparticle generating the fields, so that \Delta\vec x = \vec x - \vec x_S is the distance from the macroparticle's center. The shape and size of the macroparticle's form factor are described, respectively, by the function \Theta and by the matrix \Sigma. For the latter, we only assume it to be symmetric and positive definite. In other words, we assume the charge density to be a convex function. As an example, for a 3D Gaussian charge distribution

  \Theta(\xi) = (2\pi)^{-3/2}\,e^{-\xi/2}   (16)

and \Sigma is the covariance matrix, while for an upright "hard spheroid" of semiaxes a, b, c

  \Theta(\xi) = \frac{3}{4\pi}  \text{ if }  \xi \le 1,  \quad 0 \text{ elsewhere},   (17)

and \Sigma = \mathrm{diag}(a^2, b^2, c^2) is directly connected to the ellipsoid's semiaxes (note that \sqrt{\det\Sigma} = abc). Now, the charge density \rho(\vec x) is well known to be the time component of the charge density 4-vector J^{\mu}(x) = [\rho(t,\vec x), \vec J(t,\vec x)], which depends, in general, both on \vec x and on t. Moreover, signals propagate at the speed of light, i.e., the fields experienced by an "observer" at (t_O, \vec x_O) build up from the sum of contributions generated by infinitesimal "source" charges \rho(\vec x)\,dV centered at events (t, \vec x) on the observer's (past) light cone:

  t_O - t = |\vec x_O - \vec x|.   (19)

It must be stressed that the retardation condition (19) must be fulfilled anyway, including the case of a fixed charge distribution (that is, a distribution observed in its own reference frame), where only the geometrical shape as described by (14) matters and time delays are not relevant, because the contribution from any given infinitesimal charge does not change over time.
In the macroparticle's reference frame, where an observer at rest at \vec x_O experiences a purely static electric field, an "effective charge" can be defined as the total charge included within the isodensity surface associated with the value of \rho at the observer point \vec x_O:

  Q_{eff}(\vec x_O) = \int_{\rho(\vec x)\,\ge\,\rho(\vec x_O)} \rho(\vec x)\,dV,   (20)

where

  R^2(\vec x) \equiv \Delta\vec x^{\,T}\Sigma^{-1}\Delta\vec x,  \qquad  R_O \equiv R(\vec x_O).   (21)

On the other hand, consistency with special relativity suggests the quantity defined in Eq. (20) to be a Lorentz scalar (for it describes an electric charge), making it desirable to cast it in a form which is manifestly covariant. By standard manipulations (see Appendix A) the effective charge can be reduced to the following one-dimensional integral:

  Q_{eff}(R_O) = 4\pi q \int_0^{R_O} R^2\,\Theta(R^2)\,dR,   (22)

so that for the distributions (16) and (17) Q_{eff} reads (see Fig. 2)

  Q_{eff}(R_O) = q\left[\mathrm{erf}(R_O/\sqrt{2}) - \sqrt{2/\pi}\,R_O\,e^{-R_O^2/2}\right]   (23)

and

  Q_{eff}(R_O) = q\,\min(R_O^3, 1),   (24)

respectively. Note that the term \sqrt{\det\Sigma} in Eq. (14) has simplified in Eq. (22), which is the rationale for explicitly factoring it out in Eqs. (14) and (15), where it must be present to make the charge density transform according to the prescriptions of special relativity [see (35) and (36)]. Moreover, the result (22) clearly shows that the heuristic requirement of covariance of the effective charge can be fulfilled by assuming that R_O in (22) be a Lorentz scalar. A closer inspection of Eq. (21) strongly suggests generalizing both R_O^2 and R^2(\vec x) by casting them as completely saturated tensor products between the (lightlike) space-time intervals

  \Delta x^{\mu} \equiv (|\Delta\vec x|, \Delta\vec x)

and a 4D tensor \Sigma^{-1}_{\mu\nu} that in the macroparticle's rest frame must reduce to the special form

  \Sigma^{-1} = \begin{pmatrix} 0 & 0 \\ 0 & \Sigma^{-1}_{3\times 3} \end{pmatrix},   (25)

so that R^2 = \Delta x^{\mu}\,\Sigma^{-1}_{\mu\nu}\,\Delta x^{\nu}. [Fig. 2: fractional effective charge as a function of R, as defined in (21), for a macroparticle with Gaussian (solid line) and "hard spheroid" (dashed line) form factors possessing the same covariance matrix; note that a hard sphere of radius r has covariance matrix (r^2/5)\,\mathbb{1}.] The vanishing of \Sigma^{-1}'s time components explains why, in the rest frame of the distribution, time delays are irrelevant to the value of the charge density and fields. The requirement that Eq.
(25) be invariant implies that the generalized covariance matrix \Sigma (the inverse of \Sigma^{-1}) possesses the expected tensor properties with respect to Lorentz transformations. In fact, in an arbitrary inertial frame where the macroparticle moves at a constant speed (for instance, the lab frame) the following equality holds:

  R'^2 = R^2,

that is,

  \Delta x'^{\mu}\,\Sigma'^{-1}_{\mu\nu}\,\Delta x'^{\nu} = \Delta x^{\mu}\,\Sigma^{-1}_{\mu\nu}\,\Delta x^{\nu}.

Since \Delta x and \Delta x' are obviously connected by a Lorentz boost, \Delta x' = \Lambda\,\Delta x, it follows that

  \Sigma'^{-1} = (\Lambda^{-1})^{T}\,\Sigma^{-1}\,\Lambda^{-1},

or, equivalently,

  \Sigma' = \Lambda\,\Sigma\,\Lambda^{T}.

Note that the symmetry of \Sigma^{-1} and of Lorentz transformations implies that of \Sigma'^{-1} (and \Sigma') as well.
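The frame independence of R^2 under this transformation law can be verified numerically. The sketch below (pure-Python illustration with arbitrary sizes and boost; not TREDI code) boosts a lightlike interval and the tensor of Eq. (25) along x, and checks that the quadratic form is unchanged:

```python
import math

def matmul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def transpose(A):
    return [[A[j][i] for j in range(4)] for i in range(4)]

def quad_form(v, M):
    """v^T M v for a 4-vector v and 4x4 matrix M."""
    return sum(v[i] * M[i][j] * v[j] for i in range(4) for j in range(4))

def boost_x(beta):
    """Lorentz boost matrix along x (components ordered t, x, y, z)."""
    g = 1.0 / math.sqrt(1.0 - beta * beta)
    return [[g, g * beta, 0.0, 0.0],
            [g * beta, g, 0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

# Rest-frame inverse covariance, Eq. (25): vanishing time components,
# spatial block for a macroparticle with rms sizes (a, b, c) -- arbitrary here.
a, b, c = 0.3, 0.2, 0.5
Sinv = [[0.0, 0.0, 0.0, 0.0],
        [0.0, 1 / a**2, 0.0, 0.0],
        [0.0, 0.0, 1 / b**2, 0.0],
        [0.0, 0.0, 0.0, 1 / c**2]]

# Lightlike interval: dx^0 = |dx_vec|
dxv = (0.11, -0.07, 0.23)
dx = [math.sqrt(sum(u * u for u in dxv)), *dxv]

beta = 0.8
L = boost_x(beta)        # rest frame -> lab
Linv = boost_x(-beta)    # its inverse
dx_lab = [sum(L[i][j] * dx[j] for j in range(4)) for i in range(4)]
Sinv_lab = matmul(transpose(Linv), matmul(Sinv, Linv))  # Sigma'^{-1}

R2_rest = quad_form(dx, Sinv)
R2_lab = quad_form(dx_lab, Sinv_lab)
print(R2_rest, R2_lab)  # equal up to rounding
```

This is exactly why the effective charge can be evaluated directly in the lab frame: the scalar R entering Eq. (22) comes out the same in every frame, with no per-pair Lorentz transformation of the kinematical variables.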
It can be shown (see Appendix B) that the (inverse) generalized covariance matrix in the laboratory frame takes the form

  \Sigma'^{-1} = \begin{pmatrix} \vec\beta^{\,T}\sigma'^{-1}\vec\beta & -\vec\beta^{\,T}\sigma'^{-1} \\ -\sigma'^{-1}\vec\beta & \sigma'^{-1} \end{pmatrix},   (29)

where \sigma^{-1} denotes the 3\times 3 spatial block of (25) in the rest frame,

  \sigma'^{-1} = [\mathbb{1} + (\gamma-1)\hat\beta\hat\beta^{T}]\,\sigma^{-1}\,[\mathbb{1} + (\gamma-1)\hat\beta\hat\beta^{T}]

is the (inverse) form factor in the laboratory frame, and \vec\beta, \mathbb{1}, and T are shorthands, respectively, for the macroparticle's velocity, the 3\times 3 identity matrix, and matrix transposition. This implies

  \det\sigma' = \det\sigma/\gamma^2.   (30)

On the other hand, since the shape function is frame independent, \Theta'(R'^2) = \Theta(R^2), it follows that \rho'(\vec x') = \gamma\,\rho(\vec x), as required by charge invariance and by the Lorentz contraction of the volume element; that is, Eqs. (25), (29), and (30) allow one to cast (14) in a covariant form. Equation (22) can be used to regularize the electric field (in the macroparticle's rest frame) by the formula

  \vec E(\vec x_O) = Q_{eff}(R_O)\,\frac{\Delta\vec x_O}{|\Delta\vec x_O|^3}   (37)

(see Fig. 3). Note that the electric field as defined in Eq. (37) is always radial, which is strictly true only when the charge distribution is spherical (so that one can make use of Gauss's theorem). Since in numerical simulations the most sensible choice seems to be to assume the macroparticles' form factor to be a down-scaled copy (i.e., same aspect ratio, reduced size) of the whole beam, in general this is not the case, and Eq. (37) is only an approximation. Notwithstanding, the basic result (see, e.g., [15]) that for a charge distribution of the form (14) the charge outside an isodensity surface does not contribute to the fields inside suggests that the main concept of the effective charge as the key quantity shaping the fields remains valid. In a rigorous approach the effective charge Q_{eff} should be replaced by a tensor reflecting the form factor's anisotropy. Explicit evaluation of such a tensor, however, can be an awkward task for all but the simplest distributions [like (17)]. In Fig. 2 the effective charge vs R, as defined in Eqs. (23) and (24), is shown. Figure 3 shows the effective electric field E_z vs R for a spherically symmetric macroparticle (i.e., for R \propto |\Delta\vec x_O|).
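For the Gaussian form factor the closed form (23) is easy to cross-check against a direct numerical evaluation of the one-dimensional integral (22). The sketch below is an illustration (not part of TREDI) and also exhibits the small-R behavior required by (13):

```python
import math

def q_eff_gaussian(R, q=1.0):
    """Effective charge, Eq. (23): fraction of a 3D Gaussian macroparticle's
    charge enclosed by the isodensity surface of (covariant) radius R."""
    return q * (math.erf(R / math.sqrt(2.0))
                - math.sqrt(2.0 / math.pi) * R * math.exp(-0.5 * R * R))

def q_eff_numeric(R, q=1.0, n=20000):
    """Midpoint-rule evaluation of the integral (22) for the Gaussian shape
    function Theta(xi) = (2 pi)^(-3/2) exp(-xi/2) of Eq. (16)."""
    h = R / n
    acc = 0.0
    for i in range(n):
        r = (i + 0.5) * h
        acc += 4.0 * math.pi * q * r * r * (2.0 * math.pi) ** -1.5 * math.exp(-0.5 * r * r) * h
    return acc

for R in (0.5, 1.0, 2.0, 4.0):
    print(f"R={R}: closed form {q_eff_gaussian(R):.6f}  Eq.(22) numerically {q_eff_numeric(R):.6f}")
```

Near the origin Q_eff grows as R^3, so the regularized velocity field (37), proportional to Q_eff(R)/R^2, vanishes linearly instead of blowing up; for R of a few units essentially the whole charge q is recovered and the bare Coulomb field is restored.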
The approach followed in the regularization of acceleration fields is somewhat different from that discussed above. The remarks following Eqs. (7) and (9) provide a hint of a possible strategy. The basic idea is to give the macroparticles a spread in transverse momentum in order to reduce the contribution of collinear effects as given by (9). From a practical point of view this is equivalent to giving a finite size to the target macroparticle, and translates into substituting the "bare" values of 1/S_V^2 and 1/S_V^3 in (7) with "smoothed" versions that remain bounded as 1 - \hat n\cdot\vec\beta \to 0 (assuming again Gaussian-shaped macroparticles). While this approach proved to be quite effective at suppressing the effects of collinearity, it is not yet completely satisfactory, mainly for the lack of both relativistic covariance and strong physical cogency. Moreover, the considerations about the particle size drawn in the following section do not apply rigorously to this method, and the choice of a correct smoothing factor still remains an open question.

III. THE ''IMPACT'' PARAMETER
As mentioned above, a down-scaled copy of the whole beam seems the most reasonable choice when deciding the detailed structure of the form factor assigned to macroparticle charge distributions like (14). This statement must be understood, however, only in a statistical sense, for the shape of the beam becomes distorted during time evolution with respect to the usually (but not necessarily) simple geometries assumed at the initial time. Besides that, the use of different functional forms during time evolution can be mathematically awkward and quite impractical. For this reason a fixed functional structure has been chosen, assuming that macroparticles are three-dimensional Gaussian distributions of the type (16). Heuristically, we will assume the "size" of such a distribution to be proportional to the rms values of the beam as a whole multiplied by a scale factor 1/\sqrt[3]{N}, where N is the number of macroparticles. This scaling law is consistent with the expectation that the macroparticles are homogeneously distributed over the beam volume. The scale factor can be "tuned" in order for the superposition of the macroparticles (the "discrete approximation") to reproduce the features of the continuous distribution representing the beam. To this purpose, assume that the whole beam is described by a continuous charge distribution \rho(\vec x) normalized to a total charge Q, approximated by the superposition of N identical "localized" microdistributions (i.e., macroparticles) of the type (14), each carrying a charge Q/N, centered at positions \vec x_1, \vec x_2, \ldots, \vec x_N:

  \rho(\vec x) \simeq \sum_{i=1}^{N} \rho_{mp}(\vec x - \vec x_i),   (38)

with the understanding that the approximation turns into an equality in the limit N \to \infty. Note that in Eq. (38) the symbol \rho has been used to identify the charge density of the macroscopic distribution, while in Eq. (14) the same symbol was used for the microscopic charge distribution of the single macroparticle.
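The scaling law above amounts to a one-line rule; a minimal sketch (illustrative, with the overall tuning constant left as a free argument playing the role of the impact parameter discussed below):

```python
def macroparticle_sigma(beam_rms: float, n_macro: int, scale: float = 1.0) -> float:
    """Per-axis rms size of the macroparticle form factor.

    With macroparticles spread homogeneously over the beam volume, the
    average spacing per axis shrinks as N^(-1/3), and the Gaussian form
    factor is chosen to follow it; `scale` absorbs the tuning factor.
    """
    return scale * beam_rms / n_macro ** (1.0 / 3.0)

# Example: a 1 mm rms beam sampled with 10^6 macroparticles gives
# macroparticles 100 times smaller than the beam itself.
print(macroparticle_sigma(1e-3, 10**6))
```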
For any given finite value of N, however, one wants to "tune" the form factor's parameters in such a way that Eq. (38) turns out to be at least a reasonably good approximation. In order to evaluate the accuracy (as a function of N), it is useful to take the Fourier transform of (38):

  \tilde\rho(\vec k) \simeq \tilde\rho_{mp}(\vec k) \sum_{i=1}^{N} e^{-i\vec k\cdot\vec x_i},   (39)

where \tilde\rho_{mp} is the Fourier transform of the single macroparticle's form factor. Assuming that \vec x_1, \vec x_2, \ldots, \vec x_N are distributed according to \rho(\vec x), it turns out that

  \frac{1}{N}\sum_{i=1}^{N} e^{-i\vec k\cdot\vec x_i} \to \frac{\tilde\rho(\vec k)}{Q}   (40)

[with the same understanding as in (38)]. Equations (39) and (40) imply that the result

  \tilde\rho_{mp}(\vec k) \to \frac{Q}{N}   (41)

must hold for (38) to turn into an equality as the number of macroparticles becomes infinitely large, i.e., the macroparticles' form factor must converge to a 3D delta function. Since in simulations one always deals with a finite number of objects, so that (39) and (40) are affected by numerical noise, the optimal form factor is somewhat different from the white-spectrum choice represented by (41).

[PRST-AB 6, 120101 (2003)]

The reason for these fairly obvious remarks is that in some cases it is quite easy to evaluate the left-hand side (lhs) of (40) as a function of N and extend the result to other distributions for which one can guess the same form factor to work equally well. Assume, in fact, that the lhs of (40) can be cast in a closed form; then the problem of finding the optimal form factor can be cast as the variational condition

  \delta\,\|\tilde\rho_d - \tilde\rho_c\| = 0,   (43)

where the variational symbol \delta must be understood as differentiation with respect to any relevant parameter characterizing the form factor, namely, the \Sigma. Here we intentionally disregard the possibility of considering the \vec x_1, \vec x_2, \ldots, \vec x_N as parameters. In tracking codes, the positions of macroparticles are rather an outcome of the simulation, except perhaps at the initial time, when a careful "preparation" of phase space can preemptively suppress the numerical noise at an early stage, preventing it from fully developing and being amplified (as, e.g., the quiet start in FEL simulations [16]). As an example, consider a box of charge of uniform density and sides equal to S_x, S_y, S_z (along the x, y, z directions) and suppose one wants to represent it as the superposition of a number of macroparticles having one and the same Gaussian form factor. Assume, for the sake of simplicity, that these macroparticles are placed at equally spaced positions along the three axes. In other words, we approximate the continuous distribution

  \rho_c(\vec x) = \frac{Q}{S_x S_y S_z}  \text{ inside the box},  \quad 0 \text{ outside},   (44)

by the following "discrete" superposition of N = N_x N_y N_z localized macroparticles:

  \rho_d(\vec x) = \frac{Q}{N} \sum_{i,j,k} G_{\sigma_x}(x - x_i)\,G_{\sigma_y}(y - y_j)\,G_{\sigma_z}(z - z_k),   (45)

where G_\sigma denotes a unit-normalized 1D Gaussian of rms \sigma and

  x_i = (i - 1/2)\,\Delta x,   (46)

etc.
Let us evaluate the Fourier transforms of (44) and (45). We obtain

  \tilde\rho_c(\vec k) = Q \prod_{u=x,y,z} e^{-i k_u S_u/2}\,\frac{\sin(k_u S_u/2)}{k_u S_u/2}   (47)

and

  \tilde\rho_d(\vec k) = \frac{Q}{N} \prod_{u=x,y,z} e^{-k_u^2\sigma_u^2/2} \sum_{i=1}^{N_u} e^{-i k_u u_i},   (48)

respectively. The sums in (48) can be readily evaluated:

  \sum_{i=1}^{N_u} e^{-i k_u u_i} = e^{-i k_u S_u/2}\,\frac{\sin(k_u S_u/2)}{\sin(k_u \Delta u/2)},

where \Delta x = S_x/N_x, etc. The Fourier transform of the sampled distribution becomes

  \tilde\rho_d(\vec k) = Q \prod_{u=x,y,z} e^{-k_u^2\sigma_u^2/2}\,e^{-i k_u S_u/2}\,\frac{\sin(k_u S_u/2)}{N_u \sin(k_u \Delta u/2)}.   (49)

Equation (49) clearly shows that the ideal result would be to manage things in such a way that

  \frac{\tilde\rho_d(\vec k)}{\tilde\rho_c(\vec k)} = \prod_{u=x,y,z} e^{-k_u^2\sigma_u^2/2}\,\frac{k_u \Delta u/2}{\sin(k_u \Delta u/2)} = 1,  \text{ i.e., }  e^{-k_u^2\sigma_u^2/2} \simeq \frac{\sin(k_u \Delta u/2)}{k_u \Delta u/2}.   (50)

This is strictly true, of course, only when (along a given axis, say x) \sigma_x, \Delta x \to 0. Equation (50) clearly suggests that one wants to "match" the Fourier transform of the macroparticle's form factor with that of the "building blocks" tiling the macroscopic distribution \rho(\vec x), that is, the N (microscopic) flat distributions of sides \Delta x\,\Delta y\,\Delta z (the average spacings of macroparticles along the x, y, z directions). On the other hand, the comparison takes place at a "local" ("microscopic") level, which means that an optimal tuning of (50) should work as well for a charge distribution differing from (44) only at the boundaries (as a uniform distribution over a homogeneous cylinder), or for a distribution not uniform at all, provided there is a large enough number of macroparticles over a scale at which the distribution itself can be considered "smooth." A closer inspection of (50) shows that one is comparing the Fourier transform of the macroparticle to that of a uniform density distribution over a length scale equal to \Delta, i.e., the average spacing of the macroparticles along a given direction.
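The matching condition (50) can be inspected directly. This sketch (our illustration, with arbitrary parameters) compares the Gaussian form-factor transform with the building-block sinc over the resolved band for a few values of the impact parameter defined below in Eq. (51):

```python
import math

def gaussian_ft(k, sigma):
    """Fourier transform of the Gaussian form factor, lhs of Eq. (50)."""
    return math.exp(-0.5 * (k * sigma) ** 2)

def block_ft(k, delta):
    """Fourier transform of the flat 'building block' of width delta, rhs of (50)."""
    x = 0.5 * k * delta
    return math.sin(x) / x if x != 0.0 else 1.0

def mismatch(P, delta=1.0, n=200):
    """Worst-case deviation between the two sides of Eq. (50) for k < pi/delta,
    with the macroparticle rms set by the impact parameter P, Eq. (51)."""
    sigma = P * delta / math.sqrt(12.0)
    ks = [(i + 1) * math.pi / (delta * n) for i in range(n)]
    return max(abs(gaussian_ft(k, sigma) - block_ft(k, delta)) for k in ks)

for P in (0.5, 1.0, 2.0):
    print(f"P = {P}: max mismatch over the band = {mismatch(P):.4f}")
```

Among these trial values the mismatch is smallest at P = 1, in line with the series-expansion argument of the next paragraphs: a form factor much narrower or much wider than the average spacing spoils the match at the band edge.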
Given N (i.e., the \Delta's), finding the values of the \sigma's minimizing the distance of \tilde\rho_d from \tilde\rho_c in the sense of (43) is a difficult task for which a number of workarounds can be exploited: (i) choose values of the \sigma's that nullify the first nonzero coefficient (apart from the 0th term) in the series expansion of (50). To this aim, we define a (dimensionless) "impact parameter" P such that

  \sigma = P\,\frac{\Delta}{\sqrt{12}}.   (51)

The rationale of the \sqrt{12} in (51) is to define P so that it compares directly with the rms value of a flat distribution of width \Delta (the "building block"). Since

  e^{-k^2\sigma^2/2}\,\frac{k\Delta/2}{\sin(k\Delta/2)} = 1 + \frac{k^2\Delta^2}{24}\,(1 - P^2) + O(k^4\Delta^4),   (52)

it is easily seen that the value

  P = 1   (53)

not only nullifies the k^2 term in (52), but minimizes the coefficient of the k^4 term (which is always positive) as well. Equation (53) clarifies the remarks made about Eq. (50), suggesting that one must choose the macroparticle's \sigma to be the same as the rms value of a flat distribution of width \Delta (the building block).
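The cancellation behind Eqs. (52) and (53) can be checked numerically. The sketch below (illustrative only) estimates the order of the residual |lhs of (52) - 1| by halving k: at P = 1 the deviation should scale as k^4 (the quadratic term cancels), while away from P = 1 it scales only as k^2:

```python
import math

def ratio(k, P, delta=1.0):
    """lhs of Eq. (52): exp(-k^2 sigma^2/2) * (k delta/2)/sin(k delta/2),
    with sigma fixed by the impact parameter P as in Eq. (51)."""
    sigma = P * delta / math.sqrt(12.0)
    x = 0.5 * k * delta
    return math.exp(-0.5 * (k * sigma) ** 2) * x / math.sin(x)

def scaling_order(P, k=0.4):
    """Estimated exponent n such that |ratio - 1| ~ k^n, from halving k."""
    d1 = abs(ratio(k, P) - 1.0)
    d2 = abs(ratio(k / 2.0, P) - 1.0)
    return math.log2(d1 / d2)

print("residual order at P = 1.0:", scaling_order(1.0))   # quartic residual
print("residual order at P = 1.5:", scaling_order(1.5))   # quadratic term survives
```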
(ii) Choose a value of P that minimizes the functional "distance" between the numerator and the denominator of (50), i.e., make the two sides of (50) agree as closely as possible over the relevant band of wave numbers or, more rigorously, minimize

  \int \left| e^{-k^2\sigma^2/2} - \frac{\sin(k\Delta/2)}{k\Delta/2} \right|^2 dk   (58)

[or its 3D variant (59)].

(iii) A different approach consists of minimizing in coordinate space the following functional distance:

  \chi^2 = \int \left| \rho_c(\vec x) - \rho_d(\vec x) \right|^2 d^3x.   (60)

This approach, while more rigorous than the ones above, has the drawback of yielding a value of the impact parameter P that depends, albeit very weakly, on the number of macroparticles used in the simulation. For the sake of simplicity, the calculations will be made in the 1D case (the extension to 3D being straightforward), for which Eq. (60) reduces to

  \chi^2_{1D} = \int \left| \rho_c(x) - \rho_d(x) \right|^2 dx.   (61)

In the case considered so far, \rho_c(x) is a homogeneous distribution normalized to Q with edges at x = 0 and x = S,

  \rho_d(x) = \frac{Q}{N} \sum_{i=1}^{N} G_\sigma(x - x_i)   (62)

(G_\sigma being a unit-normalized Gaussian of rms \sigma), and the x_i's are defined according to (46). After introducing the impact parameter as in (51), Eq. (61) reduces (apart from the irrelevant normalization factor Q^2) to a function of P depending on N only through the sum over macroparticles; the optimal value of P corresponds to the minimum of this function. The "optimal value" of P turns out to be (see Fig. 4)

  P \simeq 1.6\text{--}1.7   (65)

for values of N corresponding in 3D to 5\times 10^4, 10^6, and 10^9 particles, respectively. A similar approach can be followed for other distributions. As a relevant example, consider a zero-centered Gaussian macroscopic distribution

  \rho_c(x) = \frac{Q}{\sqrt{2\pi}\,\sigma_b}\,e^{-x^2/2\sigma_b^2}.   (66)

It can be shown that the distance \chi^2_{1D} in this case reduces (neglecting, as above, the irrelevant normalization factor Q^2) to a closed-form function of P, Eq. (68). In Fig. 5, Eq. (68) is plotted as a function of P; the minimum occurs at a value P \simeq 3.5. Some remarks are in order at this point.
(i) A close inspection of Fig. 4 suggests that a value of P too small can lead to an overestimation of space-charge effects much more dramatic than the underestimation resulting from a value larger than the optimum, for the steepness of the curves is much higher for small values than for large values of P.
(ii) One could think that the optimal values (53), (58) [or its 3D variant (59)], and (65) have been derived from the somewhat ad hoc assumption that the beam is a continuous distribution of homogeneous density and that the macroparticles are placed at equally spaced positions. It turns out that this assumption is in fact very conservative, since it leads to a value of P much smaller than one would need to describe accurately a beam as the superposition of randomly distributed macroparticles, as can be seen in Fig. 6. Since in a simulation the positions of particles happen to be distributed at random, one would expect a realistic value of P to be higher than the optimal values obtained in the previous analysis.
(iii) It turns out that enlarging the impact parameter by a factor of 2 or 3 around the value (65), with the caveats of point (i), has almost no effect on the final results of typical simulations. Quite remarkably, all the relevant quantities (rms values, energy spread, emittance, etc.) remain visually indistinguishable, except perhaps for the very small emittances achieved in simulations of layouts such as that described in Sec. IV. In those cases, the modest differences one observes at different values of P can be ascribed more to the amplification effect of statistical fluctuations (much stronger in a simulation than in reality) on a slightly enhanced or suppressed self-field interaction than to a lack of validity of the approximation itself.
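Both the coordinate-space criterion (61)-(62) and the effect of random placement lend themselves to a direct numerical experiment. The sketch below (our illustration with convenience parameters, not TREDI's actual tuning machinery) builds the 1D discrete density for a uniform beam, evaluates the distance (61) on a grid, and scans the impact parameter P of Eq. (51), once for equally spaced centers and once averaging over random draws:

```python
import math, random

def chi2(centers, sigma, S=1.0, n_grid=600):
    """Distance (61) between a uniform, unit-charge beam on [0, S] and the
    discrete density (62): Gaussian macroparticles at `centers`, charge 1/N each."""
    n = len(centers)
    norm = 1.0 / (n * sigma * math.sqrt(2.0 * math.pi))
    lo, hi = -5.0 * sigma, S + 5.0 * sigma
    h = (hi - lo) / n_grid
    acc = 0.0
    for j in range(n_grid):
        x = lo + (j + 0.5) * h
        rho_d = sum(norm * math.exp(-0.5 * ((x - c) / sigma) ** 2) for c in centers)
        rho_c = 1.0 if 0.0 <= x <= S else 0.0
        acc += (rho_d - rho_c) ** 2 * h
    return acc

def optimal_P(randomized, n_mp=40, trials=6, seed=1):
    """Scan the impact parameter P of Eq. (51) and return the chi^2-minimizing value."""
    rng = random.Random(seed)
    delta = 1.0 / n_mp
    ps = [0.6 * 1.25 ** i for i in range(18)]   # P from 0.6 up to ~27
    best_val, best_p = float("inf"), None
    for P in ps:
        sigma = P * delta / math.sqrt(12.0)     # Eq. (51)
        if randomized:
            val = sum(chi2([rng.random() for _ in range(n_mp)], sigma)
                      for _ in range(trials)) / trials
        else:
            val = chi2([(i + 0.5) * delta for i in range(n_mp)], sigma)
        if val < best_val:
            best_val, best_p = val, P
    return best_p

p_eq = optimal_P(randomized=False)
p_rand = optimal_P(randomized=True)
print("optimal P, equally spaced centers:", round(p_eq, 2))
print("optimal P, random centers        :", round(p_rand, 2))
```

For these (arbitrary) parameters the equally spaced optimum falls in the P ~ 1-2 range, while random placement pushes the optimum up by a large factor, illustrating the remark that much larger values of P would be needed to suppress fluctuations entirely.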
The basic result of this section is that an "impact" parameter tuning the strength of the self-field effects must be introduced to suppress the numerical artifacts described in the Introduction. It turns out that this parameter must only be not smaller than P ≈ 1.6-1.7 and can be larger by a factor of 2-3 without visually changing the results, even though much larger values would be required to suppress fluctuations effectively. On the other hand, fluctuations must, to a certain degree, be present, for they exist in the real beam too: a macroscopic electron beam is likely to exhibit features at scales much larger than one would expect considering, e.g., the average distance between electrons in the beam. It seems reasonable to allow a certain degree of "spikiness" in the simulations, since this is likely to reproduce the real physical system more faithfully than an artificially "flattened" version of the beam. For these reasons the simulations described in Sec. IV have been run with Gaussian macroparticles and a conservative value P = 1.7, although the effect of smaller and larger values is briefly discussed.

IV. BENCHMARK RESULTS I: THE RF PHOTOINJECTOR
In this section some results obtained on the SPARC benchmark case mentioned in the Introduction are shown in comparison with those obtained by other numerical codes. Figure 7 plots the energy spread behavior obtained with TREDI (solid lines) and the HOMDYN [17] simulation code (dotted lines). The results are almost indistinguishable. In Fig. 8 the transverse envelope (rms) dimensions are shown. In this case the peak dimension falls between the values predicted by HOMDYN and PARMELA [18] (dotted and dashed lines, respectively), the situation being reversed at the waist (PARMELA has the highest maximum and the lowest minimum, and HOMDYN the other way round). The overall agreement is fairly good, and it is worth remarking that both the maximum and the minimum of the envelope occur at the same positions for all the codes. The same holds for the longitudinal size and emittance (Figs. 9 and 11, respectively).
The differences are much more relevant for the radial emittance (Fig. 10), since neither TREDI nor PARMELA exhibits a double-minimum effect as pronounced as HOMDYN's. It is worth remarking, however, that all the codes considered more or less agree in predicting the emittance compensation and the value of the minimum. The double-minimum effect can be enhanced in TREDI as well by slightly changing the physical parameter set.
A final note on the effect of the impact parameter. All the results in Figs. 7-10 have been obtained with P = 1.7. Figure 12 shows the radial emittance obtained for the four different values P = 1, 1.7, 2, and 3. The differences between P = 1.7 and P = 3 are negligible. All other quantities except the emittances at the minima are indistinguishable.

V. BENCHMARK RESULTS II: THE MAGNETIC CHICANE
Benchmarking of TREDI against other codes specifically designed for CSR calculations has been done on a test case designed on the occasion of the ICFA Beam Dynamics mini-workshop of Zeuthen [19]. The test was based on an idealized compressor composed of four bends, as sketched in Fig. 13. A list of the main compressor parameters is given in Table II. The simulations have been performed at electron beam energies of 0.5 and 5 GeV with both Gaussian and stepwise electron pulse shapes. We report here the main results concerning the 5 GeV, Gaussian beam case. The input normalized emittances are 1 mm mrad in both planes; other electron beam parameters for the simulation are shown in Table III. Beam parameters at the end of the chicane are summarized in Table IV. The average projected emittance growth amounts to 62%. Figure 14 shows the beam energy loss as a function of the bunch coordinate along the beam line. The TREDI output is compared to the simulations obtained by other codes based on the Lienard-Wiechert approach, such as TRAFIC4 [20] and that by Li [21], as well as to predictions from ELEGANT [22] and from a program by Emma [19] taking into account CSR effects by means of semianalytical formulas [23]. The total energy loss amounts to 0.045%. The sharp energy increase at the entrance of the fourth magnet is due to the beam interaction with the radiation emitted at the exit edge of the third dipole. The field produced in the third dipole is indeed almost transverse to the direction of motion between the third and the fourth dipole, and it is oriented, for a negatively charged particle, toward the internal side of the chicane (see Fig. 15).
At the entrance of the fourth dipole the bending of the trajectories produces two effects: (i) the retarded condition between heading electrons and trailing electrons radiating from the end of the third magnet is suddenly fulfilled; (ii) the bending itself induces a transverse velocity component with opposite orientation with respect to the field, which results in the energy gain observed in the simulations [19]. An analogous behavior is observed in Fig. 16, where the beam energy spread versus the average bunch coordinate along the beam line is shown.
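The retarded condition invoked in point (i) can be made concrete: the retarded time t_r at which an observer at position x and time t "sees" a source on trajectory x_s(t) solves c(t − t_r) = |x − x_s(t_r)|, which has a unique root for subluminal motion and can therefore be bracketed and bisected. A minimal Python sketch (a hypothetical helper, not TREDI code):

```python
import math

C = 299792458.0  # speed of light in m/s

def retarded_time(traj, t, x_obs, t_lo, tol=1.0e-15):
    """Solve c*(t - t_r) = |x_obs - traj(t_r)| for the retarded time t_r
    by bisection.  traj(t) returns the source position (x, y, z); for
    subluminal motion the root in (t_lo, t) is unique."""
    def f(tr):
        return C * (t - tr) - math.dist(x_obs, traj(tr))
    lo, hi = t_lo, t
    assert f(lo) > 0.0 >= f(hi), "root not bracketed"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For an ultrarelativistic source heading toward the observer, the small factor (1 − β) pushes t_r far into the past, which is why radiation emitted at the exit of the third dipole only catches up with the head of the bunch inside the fourth.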
The agreement between TREDI and the other codes on energy loss and projected emittance seems qualitatively reasonable, with TREDI on the lower end for the energy loss and on the higher end for the emittance. The discrepancies on the relative energy spread, where TREDI only partially reproduces the sharp decrease at the entrance of the fourth bend, may well be an effect of the different field regularization procedures and require further investigation. In Table IV the second column (ΔE/E0) refers to the beam relative energy loss at the end of the chicane, while the third (Δσ_E/σ_E0) and fourth (ε_x) columns refer to the variation of the relative rms energy spread and the final emittance, respectively [19].

VI. CONCLUSIONS
In this paper the TREDI Monte Carlo program is described, which is based on retarded potentials to account for self-field effects and on a covariant smoothing technique to control the numerical artifacts associated with the model. The only free parameter is the impact parameter P discussed in Sec. III, which turns out to be, for Gaussian macroparticles, a number P ≳ 1.6. The simulations discussed in Sec. IV have been obtained with these assumptions. A noncovariant, essentially phenomenological procedure to regularize the acceleration fields is also briefly discussed. The predictions obtained for the reference work point of the space-charge compensated SPARC photoinjector seem to be in quantitative agreement with those from other codes, while for a benchmark chicane designed to study coherent synchrotron radiation effects in a magnetic compressor the agreement is less satisfactory and in some cases mainly qualitative. In a forthcoming paper the effect of the finite propagation speed of signals in rf photoinjectors will be addressed by comparison with results obtained assuming instantaneous interactions. A ''static'' version of the screening mechanism described in Sec. II will also be discussed, along with a novel approach to the problem of covariant smoothing of acceleration fields based on renormalization group techniques.

ACKNOWLEDGMENTS
The authors wish to thank M. Ferrario, V. Fusco, P. Musumeci, C. Ronsivalle, J. B. Rosenzweig, and L. Serafini for many helpful discussions and suggestions, for providing data for comparison, and for revising the code. The comparison between TREDI and the other codes described in Sec. V required additional work to cast the results in a single consistent format, necessary to show the data in the same plot. The authors made their best efforts to reproduce the results obtained with the other codes faithfully, and apologize in advance for any imprecision. In any case, we are indebted to M. Borland, M. Dohlus, P. Emma, A. Kabel, R. Li, and T. Limberg for making the results of their simulations available.

APPENDIX A
In order to show how Eq. (22) follows from (20), observe first that Σ and Σ⁻¹ have the same eigenvectors, with the associated eigenvalues in one-to-one correspondence by inversion. Let λ_j⁻¹ be the jth eigenvalue of Σ⁻¹, and v_j (j = 1, 2, 3) the associated (normalized) eigenvector. Symmetry and positiveness of Σ imply that the λ_j must all be real and strictly positive, and the eigenvectors real valued and mutually orthogonal. Let then B be the matrix whose columns are the scaled eigenvectors √λ_j v_j. It is readily seen that B Bᵀ = Σ and Bᵀ Σ⁻¹ B = 1. Let us now make the change of variable (choosing the name with a look ahead) x = B x′. We then obtain Eq. (22), which completes the proof.
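The construction can be checked numerically: with B built from the orthonormal eigenvectors of Σ scaled by the square roots of the eigenvalues, both identities B Bᵀ = Σ and Bᵀ Σ⁻¹ B = 1 hold to machine precision. A short numpy sketch (illustrative only, with a hypothetical helper name):

```python
import numpy as np

def whitening_matrix(sigma):
    """Return B whose columns are sqrt(lambda_j) * v_j, built from the
    eigendecomposition of the symmetric positive-definite matrix sigma,
    so that B @ B.T equals sigma and B.T @ inv(sigma) @ B the identity."""
    lam, v = np.linalg.eigh(sigma)  # real eigenvalues, orthonormal eigenvectors
    assert np.all(lam > 0.0), "sigma must be positive definite"
    return v @ np.diag(np.sqrt(lam))
```

This is the usual "whitening" change of variable: in the new coordinates the Gaussian macroparticle becomes isotropic with unit covariance.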

APPENDIX B
In this section we derive the explicit form of Σ′⁻¹ in a generic inertial frame where the macroparticle moves at a constant velocity β. Let