New physics and tau $g-2$ using LHC heavy ion collisions

The anomalous magnetic moment of the tau lepton $a_\tau = (g_\tau -2)/2$ strikingly evades measurement, yet is highly sensitive to new physics such as compositeness or supersymmetry. We propose using ultraperipheral heavy ion collisions at the LHC to probe modifications of the tau magnetic moment $\delta a_\tau$ and electric dipole moment $\delta d_\tau$. We introduce a suite of one electron/muon plus track(s) analyses, leveraging the exceptionally clean photon fusion $\gamma\gamma \to \tau\tau$ events to reconstruct both leptonic and hadronic tau decays sensitive to $\delta a_\tau, \delta d_\tau$. Assuming 10% systematic uncertainties, the current 2 nb$^{-1}$ lead-lead dataset could already provide constraints of $-0.0080<a_\tau<0.0046$ at 68% CL. This surpasses 15-year-old lepton collider precision by a factor of three while opening novel avenues to new physics.

However, $a_\tau$ continues to evade measurement because the short tau proper lifetime $\sim 10^{-13}$ s precludes use of spin precession methods [6]. The most precise single-experiment measurement $a_\tau^{\rm exp}$ is from DELPHI [16,17] at the Large Electron Positron Collider (LEP), but is remarkably an order of magnitude away from the theoretical central value $a_{\tau,\rm SM}^{\rm pred}$, predicted to $10^{-5}$ precision [18]:

$$a_\tau^{\rm exp} = -0.018\,(17), \qquad a_{\tau,\rm SM}^{\rm pred} = 0.001\,177\,21\,(5). \quad (1)$$

The poor constraints on $a_\tau$ present striking room for BSM physics, especially given other lepton sector tensions [19-26], and motivate new experimental strategies. This Letter proposes a suite of analyses to probe $a_\tau$ using heavy ion beams at the LHC. We leverage ultraperipheral collisions (UPC), where only the electromagnetic fields surrounding lead (Pb) ions interact. Tau pairs are produced from photon fusion PbPb $\to$ Pb$(\gamma\gamma \to \tau\tau)$Pb, illustrated in Fig. 1, whose sensitivity to $a_\tau$ was suggested in 1991 [27]. We introduce the strategy crucial for experimental realization and importantly show that the currently recorded dataset could already surpass LEP precision. The LHC cross-section enjoys a $Z^4$ enhancement ($Z = 82$ for Pb), with over one million $\gamma\gamma \to \tau\tau$ events produced to date. Existing proposals using lepton beams require future datasets (Belle-II) or proposed facilities (CLIC, LHeC) [28-34], while LHC studies focus on high luminosity proton beams [35-40]. No LHC analysis of $\gamma\gamma \to \tau\tau$ exists, as the taus have insufficient momentum for ATLAS/CMS to record or reconstruct.

FIG. 1. Pair production of tau leptons $\tau$ from ultraperipheral lead ion (Pb) collisions in two of the most common decay modes: $\pi^\pm \pi^0 \nu_\tau$ and $\ell \nu_\ell \nu_\tau$. New physics can modify tau-photon couplings, affecting the magnetic moment by $\delta a_\tau$.

Our proposal overcomes these obstructions in the clean UPC events [41], enabling selection of individual tracks from tau decays with no other detector activity, akin to LEP [16]. We exploit recent advances in low momentum electron/muon identification [42-44] to suppress hadronic backgrounds. We then present a shape analysis sensitive to interfering SM and BSM amplitudes to enhance $a_\tau$ constraints. Our strategy also probes tau electric dipole moments $d_\tau$ induced by charge-parity (CP) violating new physics. This opens key new directions in the heavy ion program amid reviving interest in photon collisions [45-47] for light-by-light scattering [48-51], standard candle processes [52-56], and BSM dynamics [57-67].
To introduce BSM modifications of $a_\tau$ and $d_\tau$, we use SM effective field theory (SMEFT) [68]. This assumes the scale of BSM physics $\Lambda$ is much higher than the probed momentum transfers $q$, i.e., $q^2 \ll \Lambda^2$. At scale $q$, two dimension-six operators in the Warsaw basis [69] modify $a_\tau$ and $d_\tau$ at tree level, as discussed in Ref. [68]:

$$\mathcal{L}_{\rm SMEFT} \supset \frac{C_{\tau B}}{\Lambda^2}\, (\bar{L}_\tau \sigma^{\mu\nu} \tau_R) H B_{\mu\nu} + \frac{C_{\tau W}}{\Lambda^2}\, (\bar{L}_\tau \sigma^{\mu\nu} \tau_R) \sigma^I H W^I_{\mu\nu} + \text{h.c.}$$

Here, $B_{\mu\nu}$ and $W^I_{\mu\nu}$ are the U(1)$_Y$ and SU(2)$_L$ field strengths, $H$ ($L_\tau$) is the Higgs (tau lepton) doublet, and $C_i$ are dimensionless, complex Wilson coefficients. We fix $C_{\tau W} = 0$ to parameterize the two modified moments $(\delta a_\tau, \delta d_\tau)$ using two real parameters $(|C_{\tau B}|/\Lambda^2, \varphi)$ [33], where $\varphi$ is the complex phase of $C_{\tau B}$. We define $M = \Lambda^2/(\sqrt{2}\, v \cos\theta_W)$, where $\theta_W$ is the electroweak Weinberg angle and $v = 246$ GeV.
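For orientation, the tree-level mapping from $(|C_{\tau B}|, \varphi)$ to the physical moments can be written as follows. This is a schematic sketch assuming the conventions of Ref. [33] (exact prefactors should be taken from there), with $M = \Lambda^2/(\sqrt{2}\, v \cos\theta_W)$ as defined above:

```latex
\delta a_\tau \;=\; \frac{2 m_\tau}{e}\,\frac{|C_{\tau B}|}{M}\,\cos\varphi\,,
\qquad
\delta d_\tau \;=\; \frac{|C_{\tau B}|}{M}\,\sin\varphi\,.
```

A purely real $C_{\tau B}$ ($\varphi = 0$) shifts only the magnetic moment, while an imaginary part generates the CP-violating electric dipole moment.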
In the SM, pair production of electrically charged particles $X$ from photon fusion $\gamma\gamma \to XX$ has analytic cross-sections $\sigma_{\gamma\gamma\to XX}$ [64,70,71]. For BSM variations, we employ the flavour-general SMEFTsim package [72], which implements Eq. (3) in FeynRules [73]. This allows a direct interface with MadGraph 2.6.5 [74,75] for cross-section calculation and Monte Carlo simulation. To model interference between SM and BSM diagrams, we generate $\gamma\gamma \to \tau\tau$ events with up to two BSM couplings $C_{\tau B}$ in the matrix element.
Turning to the source of photons: these are emitted coherently from the electromagnetic fields surrounding the ultrarelativistic ions, known as the equivalent photon approximation [76]. We follow the MadGraph implementation in Ref. [77], which assumes the LHC exclusive cross-section $\sigma^{\rm (PbPb)}_{\gamma\gamma \to XX}$ factorizes into a convolution of $\sigma_{\gamma\gamma \to XX}$ with the ion photon fluxes $n(x_i)$:

$$\sigma^{\rm (PbPb)}_{\gamma\gamma \to XX} = \int dx_1\, dx_2\, n(x_1)\, n(x_2)\, \sigma_{\gamma\gamma \to XX}, \quad (5)$$

where $x_i = E_i/E_{\rm beam}$ is the ratio of the energy $E_i$ of the photon emitted from ion $i$ to the beam energy $E_{\rm beam}$. In this factorized prescription, $n(x)$ assumes an analytic form from classical field theory [77,78]:

$$n(x) = \frac{2 Z^2 \alpha}{\pi x} \left[ \bar{x}\, K_0(\bar{x}) K_1(\bar{x}) - \frac{\bar{x}^2}{2} \left( K_1^2(\bar{x}) - K_0^2(\bar{x}) \right) \right], \quad (6)$$

where $\bar{x} = x\, m_N\, b_{\min}$, $m_N = 0.9315$ GeV is the nucleon mass, and $Z = 82$ for Pb. We set the minimum impact parameter $b_{\min}$ to the nuclear radius $b_{\min} = R_A \simeq 1.2 A^{1/3}~{\rm fm} = 6.09 A^{1/3}~{\rm GeV}^{-1}$, where $A = 208$ is the mass number of Pb used at the LHC. We use Ref. [79] to numerically evaluate the modified Bessel functions of the second kind of order zero ($K_0$) and one ($K_1$).
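To make Eq. (6) concrete, it can be evaluated with the Python standard library alone by computing $K_0$ and $K_1$ from their integral representation. This is a minimal numerical sketch; the function names, trapezoidal settings, and hand-rolled Bessel routine (in place of the library of Ref. [79]) are our own choices:

```python
import math

def bessel_k(n, x, t_max=30.0, steps=4000):
    """Modified Bessel function K_n(x) via the integral representation:
    integral of exp(-x cosh t) cosh(n t) for t from 0 to infinity,
    evaluated with the trapezoidal rule."""
    h = t_max / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 0.5 if i in (0, steps) else 1.0  # trapezoid endpoint weights
        total += w * math.exp(-x * math.cosh(t)) * math.cosh(n * t)
    return total * h

# Constants from the text
Z = 82                       # lead charge number
ALPHA = 1 / 137.035999       # fine-structure constant
M_N = 0.9315                 # nucleon mass [GeV]
A = 208                      # lead mass number
B_MIN = 6.09 * A ** (1 / 3)  # minimum impact parameter R_A [GeV^-1]

def photon_flux(x):
    """Equivalent photon flux n(x) of Eq. (6) for a Pb ion."""
    xb = x * M_N * B_MIN
    k0, k1 = bessel_k(0, xb), bessel_k(1, xb)
    return (2 * Z**2 * ALPHA / (math.pi * x)) * (
        xb * k0 * k1 - 0.5 * xb**2 * (k1**2 - k0**2))
```

The flux falls steeply with $x$, reflecting the coherence condition set by the large Pb radius; in practice a library routine (e.g. `scipy.special.kn`) would replace the hand-rolled integral.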
We modify MadGraph to use the photon flux of Eq. (6) for evaluating $\sigma^{\rm (PbPb)}_{\gamma\gamma\to XX}$. This prescription neglects a nonfactorizable term in Eq. (5), which models the probability $P_{|b_1-b_2|}$ of hadronic interactions, where $b_i$ is the impact parameter of ion $i$. The Superchic 3.02 program [80] includes a complete treatment of $P_{|b_1-b_2|}$, along with nuclear overlap and thickness. Using this, we validate that these simplifications in MadGraph do not significantly impact the distributions relevant for this work, namely the tau $p_T$. We generate 3 million $\gamma\gamma \to \tau\tau$ events for each coupling variation at $\sqrt{s_{NN}} = 5.02$ TeV. For the SM, we find $\sigma^{\rm (PbPb)}_{\gamma\gamma\to\tau\tau} = 5.7 \times 10^5$ nb. To improve generator statistics, we impose $p_T^\tau > 3$ GeV in MadGraph, which has a 21% efficiency. Due to destructive interference, $\sigma^{\rm (PbPb)}_{\gamma\gamma\to\tau\tau}$ falls to a minimum of $4.7 \times 10^5$ nb at $\delta a_\tau \approx -0.04$ before returning to $5.7 \times 10^5$ nb at $\delta a_\tau \approx -0.09$. Further validation of these effects is in Appendix A. We employ Pythia 8.230 [81] for decay, shower and hadronization, then use Delphes 3.4.1 [82] for detector emulation.
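Because at most two insertions of the BSM coupling enter the matrix element, the cross-section is exactly quadratic in $\delta a_\tau$. As a back-of-envelope illustration (not part of the analysis chain), one can fit a parabola through the three cross-sections quoted above and recover the interference minimum:

```python
# Newton divided differences: fit sigma(da) = Aq*da^2 + Bq*da + Cq through
# the three cross-sections quoted in the text (in nb).
pts = [(0.0, 5.7e5), (-0.04, 4.7e5), (-0.09, 5.7e5)]
(x0, y0), (x1, y1), (x2, y2) = pts
f01 = (y1 - y0) / (x1 - x0)        # first divided difference
f12 = (y2 - y1) / (x2 - x1)
Aq = (f12 - f01) / (x2 - x0)       # quadratic coefficient
Bq = f01 - Aq * (x0 + x1)
Cq = y0 - Bq * x0 - Aq * x0 ** 2

da_min = -Bq / (2 * Aq)            # location of the cross-section minimum
sigma_min = Cq - Bq ** 2 / (4 * Aq)  # interference minimum [nb]
```

The vertex lands at $\delta a_\tau \approx -0.045$ with $\sigma_{\min} \approx 4.7 \times 10^5$ nb, consistent with the quoted values; the asymmetry about zero is why negative $\delta a_\tau$ is harder to constrain.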

III. PROPOSED ANALYSES
To record $\gamma\gamma \to \tau\tau$ events, dedicated UPC triggers are crucial for our proposal. With no other detector activity, the ditau system receives negligible transverse boost, and each tau $p_T$ reaches a few to tens of GeV at most. Taus always decay to a neutrino $\nu_\tau$, which further dilutes the visible momenta, rendering the usual hadronic tau triggers ($p_T^{\tau\text{-jet}} \gtrsim 20$ GeV) unfeasible [83,84]. However, UPC events without pileup enable exceptionally low trigger thresholds by requiring a small summed calorimeter transverse energy, $\sum E_T < 50$ GeV [51]. Other minimum bias triggers are also possible [85,86]. A recent UPC dimuon analysis additionally requires at least one track and imposes no explicit $p_T$ requirement on the trigger muon [56]. The light-by-light observation also considers ultralow $E_T > 1$ GeV calorimeter cluster thresholds at trigger level [51], which can similarly benefit electrons.
We design our event selection around two objectives. First, we consider standard objects already deployed by ATLAS/CMS to efficiently reconstruct tau decays in the dominant modes [17]: leptonic decays $\tau \to \ell\nu\nu$ ($\approx 35\%$ combined for $\ell = e, \mu$), one-charged-pion modes ($\approx 50\%$), and three-charged-pion modes ($\approx 15\%$). We develop signal regions (SR) targeting these decays based on expected signal rate and background mitigation strategies. We impose the lowest trigger and reconstruction thresholds supported by ATLAS/CMS: $p_T^{e/\mu} > 4.5/3$ GeV, $|\eta_{e/\mu}| < 2.5/2.4$ [42,43]. Second, we optimize sensitivity to the different couplings $\delta a_\tau, \delta d_\tau$, where interfering SM and BSM amplitudes impact the tau kinematics, which propagates to e.g. the lepton $p_T$.

FIG. 2. Distributions of lepton $p_T$ in SR1 1T (left) and the 3-track system $p_T$ in SR1 3T (center) for benchmark signals with various $\delta a_\tau, \delta d_\tau$ couplings, normalized to unit integral to illustrate shape changes. The lepton-track azimuthal angle $|\Delta\phi(\ell, {\rm trk})|$ in SR1 1T (right) is shown for backgrounds (filled) and the SM signal $\delta a_\tau = \delta d_\tau = 0$ (line), illustrating powerful discrimination against dilepton processes.
Dilepton analysis. Requiring two leptons is expected to give the highest signal-to-background ratio $S/B$, with half the events being different-flavor $e\mu$ pairs free of $ee/\mu\mu$ backgrounds. But even using the low $p_T^{e/\mu}$ thresholds, we find insufficient signal yields at 2 nb$^{-1}$ to pursue this further.
1 lepton + 1 track analysis (SR1 1T). This requires exactly 1 lepton and 1 other track that is not 'matched' to the lepton (the matched track is the highest $p_T$ track with $\Delta R(\ell, {\rm track}) < 0.02$). Tracks must satisfy the standard requirements $p_T^{\rm track} > 500$ MeV and $|\eta_{\rm track}| < 2.5$. This topology targets the high branching ratio of the single charged pion decay mode and the background suppression from lepton identification. The track also recovers events failing the dilepton analysis, in which one lepton is too soft to be reconstructed. We divide this SR into two bins, $p_T^{e/\mu} \leq 6$ GeV and $p_T^{e/\mu} > 6$ GeV, to exploit the shape differences shown in Fig. 2 (left). We require a nonplanar lepton-track system $|\Delta\phi(\ell, {\rm trk})| < 3$ to suppress back-to-back $ee/\mu\mu$ processes, as demonstrated in Fig. 2 (right). We veto invariant masses $m_{\ell,{\rm trk}} \in [3, 3.2], [9, 11]$ GeV to reject dilepton decays of $J/\psi$ and $\Upsilon$ resonances.
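The SR1 1T requirements above can be summarized as a single predicate over reconstructed objects. This is a minimal sketch using hypothetical event records; the field names and data model are illustrative, not an experiment's actual event format:

```python
J_PSI, UPSILON = (3.0, 3.2), (9.0, 11.0)  # vetoed m(lep, trk) windows [GeV]

def passes_sr1_1t(lep, tracks):
    """lep: dict with pt/eta/flavor; tracks: list of dicts with
    pt, eta, dr_lep (dR to lepton), dphi_lep, m_lep (invariant mass)."""
    # Lowest trigger/reco thresholds: pT > 4.5 (3) GeV, |eta| < 2.5 (2.4) for e (mu)
    pt_min, eta_max = (4.5, 2.5) if lep["flavor"] == "e" else (3.0, 2.4)
    if lep["pt"] < pt_min or abs(lep["eta"]) > eta_max:
        return False
    # Standard track quality, excluding the lepton-matched track (dR < 0.02)
    good = [t for t in tracks
            if t["pt"] > 0.5 and abs(t["eta"]) < 2.5 and t["dr_lep"] >= 0.02]
    if len(good) != 1:                    # exactly one non-lepton-matched track
        return False
    trk = good[0]
    if abs(trk["dphi_lep"]) >= 3.0:       # reject back-to-back dilepton topology
        return False
    for lo, hi in (J_PSI, UPSILON):       # veto J/psi and Upsilon resonances
        if lo <= trk["m_lep"] <= hi:
            return False
    return True
```

The $p_T^{e/\mu} \leq 6$ / $> 6$ GeV binning would then be applied on top of events passing this predicate.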
1 lepton + multitrack analysis (SR1 2/3T). We augment the previous analysis with 3 non-lepton-matched tracks. This targets the distinctive 3 charged pion decay. We also construct an orthogonal 2-track SR to recover misreconstructed 3-pion decays. The non-lepton-matched tracks define the tau candidate as the vectorial sum of the tracks, whose $p_T$ is shown in Fig. 2 (center) for SR1 3T. We find that removing the lepton identification requirement significantly increases hadronic backgrounds.
Leptonic backgrounds are dominated by dielectron/dimuon production $\gamma\gamma \to \ell\ell$, $\ell \in \{e, \mu\}$. The single flavor cross-section is sizable, $\sigma^{\rm (PbPb)}_{\gamma\gamma\to\ell\ell} = 4.2 \times 10^5$ nb, which includes a generator level $|\eta_\ell| < 2.5$ requirement. The back-to-back leptons are suppressed by the $|\Delta\phi(\ell, {\rm trk})| < 3$ requirement, which we verify by generating 1 million events per flavor. Photon radiation from leptons $\ell \to \ell\gamma$ is only expected to modify the tails marginally. Track impact parameters exploiting displaced tau decays could further suppress this background.
Hadronic backgrounds arise from diquark production $\gamma\gamma \to q\bar{q}$, and we generate 1 million events for each of the 5 flavors. For $q \in \{u, d, s\}$, assuming massless quarks gives cross-sections $\sigma^{\rm (PbPb)}_{\gamma\gamma\to u\bar{u}\,(d\bar{d}, s\bar{s})} = 3.0 \times 10^5\ (1.9 \times 10^4)$ nb. Parton showering produces more tracks than tau decays, which we suppress using lepton isolation and by requiring at most 4 tracks. For $q \in \{c, b\}$, heavy flavor $B$ and $D$ mesons undergo semileptonic decays, e.g. $D \to \pi \ell \nu_\ell$. The default MadGraph parameters assume massless charm quarks (which is conservative, as a finite mass decreases cross-sections), yielding $\sigma^{\rm (PbPb)}_{\gamma\gamma\to c\bar{c}} = 3.0 \times 10^5$ nb. Bottom quarks assume a finite mass, resulting in a smaller cross-section $\sigma^{\rm (PbPb)}_{\gamma\gamma\to b\bar{b}} = 1.5 \times 10^3$ nb. The semileptonic branching fraction $D \to \pi \ell \nu_\ell$ is of order a few percent, so is under control, and is further suppressed by isolation.
Smaller potential backgrounds include $\gamma\gamma \to WW$, but the cross-section $\sigma^{\rm (PbPb)}_{\gamma\gamma\to WW} = 14$ pb implies this is safely neglected. Exchange of digluon color singlets (Pomerons) also contributes to diquark backgrounds. These involve strong interactions, and as the binding energy per nucleon is very small, $\sim 8$ MeV [77], the Pb ions emit more neutrons than in QED processes, which can be vetoed by the Zero Degree Calorimeter [87]. Soft survival for Pomeron exchange is also lower [77]; such events give greater activity in the calorimeter and tracker and are suppressed by our stringent exclusivity requirements.
Systematic uncertainties will require reliable quantification by the LHC collaborations, but we discuss expected sources and suggest control strategies. Current UPC PbPb dimuon measurements have experimental systematics of around 10%, dominated by luminosity and trigger [56]. Systematics from lepton reconstruction are $p_T$-dependent and thus sensitive to $\delta a_\tau$. These are most significant at low $p_T$, but are currently determined in high luminosity proton collisions with challenging backgrounds from fakes [88,89], and could be better controlled using clean $\gamma\gamma \to \ell\ell$ events.
Theoretical uncertainties are expected to be dominated by modeling of the photon flux, nuclear form factors, and nucleon dissociation. Fortunately, these initial state effects are independent of the QED process and final state. Experimentalists could therefore use a control sample of $\gamma\gamma \to \ell\ell$ events to constrain these universal nuclear systematics, or eliminate them in a ratio analysis with dileptons, $\sigma_{\gamma\gamma\to\tau\tau}/\sigma_{\gamma\gamma\to\ell\ell}$. Hadronic backgrounds are susceptible to uncertainties from modeling the parton shower, but are subdominant given $S/B \gg 1$ in our analyses.

IV. RESULTS & DISCUSSION
We now estimate the sensitivity of our analyses to the modified tau moments $\delta a_\tau, \delta d_\tau$. Assuming the observed data correspond to the SM expectation, we calculate

$$\chi^2 = \frac{\left( S_{\rm SM+BSM} - S_{\rm SM} \right)^2}{S_{\rm SM} + B + \left( \zeta_s S_{\rm SM} \right)^2 + \left( \zeta_b B \right)^2}.$$

Here, $B$ is the background rate, and $S_{\rm SM}$ ($S_{\rm SM+BSM}$) is the signal yield assuming SM couplings (nonzero $\delta a_\tau, \delta d_\tau$). At $L = 2$ nb$^{-1}$, we find $S_{\rm SM} = 1280$, $B = 7.6$ for SR1 1T before binning in $p_T$; $S_{\rm SM} = 520$, $B = 15$ for SR1 2T; and $S_{\rm SM} = 370$, $B = 4$ for SR1 3T. We denote the relative signal (background) systematic uncertainties by $\zeta_s$ ($\zeta_b$) and study $\zeta_s = \zeta_b \in [5\%, 10\%]$ as benchmarks. For simplicity, we assume identical $\zeta_s$ for all couplings, and combine the four SRs (SR1 1T has two $p_T$ bins) using $\chi^2 = \sum_{\rm SR} \chi^2_{\rm SR}$, assuming uncorrelated systematics. We define the 68% CL (95% CL) regions as couplings satisfying $\chi^2 < 1$ ($\chi^2 < 3.84$). Appendix B details cutflows for signals and backgrounds, and the $\chi^2$ distributions.

Figure 3 summarizes our projected $a_\tau = a^{\rm pred}_{\tau,\rm SM} + \delta a_\tau$ constraints (green) compared with existing measurements and predictions. Assuming the current dataset $L = 2$ nb$^{-1}$ with 10% systematics, we find $-0.0080 < a_\tau < 0.0046$ at 68% CL, surpassing DELPHI precision [16] (blue) by a factor of three. Negative values of $\delta a_\tau$ are more difficult to constrain given the destructive interference. We estimate prospects assuming halved systematics, giving $-0.0022 < a_\tau < 0.0037$ (68% CL). A tenfold dataset increase at the High Luminosity LHC (HL-LHC) reduces this to $-0.00044 < a_\tau < 0.0032$ (68% CL), an order of magnitude improvement beyond DELPHI. Importantly, these advances start constraining the sign of $a_\tau$ and become comparable to the predicted SM central value for the first time.
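The statistical procedure can be sketched end to end: the $\chi^2$ per signal region with the quoted yields and systematics, combined and scanned over a coupling. As an assumption for illustration only, we scale all signal yields by the quadratic cross-section dependence implied by the numbers quoted earlier (minimum near $\delta a_\tau \approx -0.045$), i.e. a rate-only model; the real analysis uses binned shapes from simulation:

```python
# Signal regions: (S_SM, B) at L = 2 nb^-1 from the text (SR1 1T unbinned here).
REGIONS = [(1280.0, 7.6), (520.0, 15.0), (370.0, 4.0)]
ZETA_S = ZETA_B = 0.10                  # 10% systematics benchmark

def sigma_ratio(da):
    # Illustrative quadratic sigma(da)/sigma_SM through the quoted values:
    # sigma(0) = 5.7e5 nb, minimum 4.7e5 nb near da = -0.045 (assumption)
    return (5e7 * da**2 + 4.5e6 * da + 5.7e5) / 5.7e5

def chi2(da):
    total = 0.0
    for s_sm, b in REGIONS:
        s_bsm = s_sm * sigma_ratio(da)  # rate-only signal model (assumption)
        var = s_sm + b + (ZETA_S * s_sm) ** 2 + (ZETA_B * b) ** 2
        total += (s_bsm - s_sm) ** 2 / var
    return total

# 68% CL region: scan for chi2 < 1
grid = [i * 1e-4 for i in range(-2000, 2001)]
allowed = [da for da in grid if chi2(da) < 1.0]
lo, hi = min(allowed), max(allowed)
```

Note that this rate-only toy leaves a degenerate allowed island near $\delta a_\tau \approx -0.09$, where the cross-section returns to its SM value; the interval edges here illustrate the mechanism rather than reproduce the quoted shape-based limits, and resolving that degeneracy is precisely what the binned shape information provides.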
substructure [15]. Nonetheless, our analyses are highly model-independent, and we defer the study of sensitivity to other BSM scenarios to future work. It would be interesting to correlate $a_\tau$ with models that simultaneously explain tensions in $a_e$ and $a_\mu$ [19-21] or B-physics lepton universality tests [22-26].
Our proposal opens numerous avenues for extension. Lowering lepton/track thresholds to increase statistics would enable more optimized differential or multivariate analyses. Recently, ATLAS considered tracks matched to lepton candidates failing quality requirements, allowing $p_T^{\rm track}(e/\mu) > 1/2$ GeV [44]. Moreover, the 500 MeV track threshold is conservative given that $p_T^{\rm track} > 100$ MeV is successfully used in ATLAS [51]. Reconstructing soft calorimeter clusters could enable hadron/electron identification, or the use of neutral pions to improve the tau momentum resolution. Proposed timing detectors may offer more robust particle identification in ATLAS/CMS [93], while ALICE already has such capabilities [94]. Ultimate $a_\tau$ precision requires a coordinated worldwide program led by LHC efforts, combined with proton-lead collisions at $\sqrt{s_{NN}} = 8.76$ TeV, the Relativistic Heavy Ion Collider (RHIC), and lepton colliders.

To summarize, we proposed a strategy of lepton plus track(s) analyses to surpass LEP constraints on tau electromagnetic moments using heavy ion data already recorded by the LHC. The clean photon collision events provide excellent opportunities to further optimize low momentum reconstruction and control systematics. We encourage the LHC collaborations to open these cornerstone measurements and precision pathways to new physics.
Acknowledgements - We thank the hospitality of the LHC Forward and Diffractive Physics Workshop at CERN, where this work began. We are grateful to Alan Barr, Lucian Harland-Lang, Larry Lee Jr, and Valery Khoze for interesting discussions. LB is supported by St John's College, Oxford. JL is supported by STFC.

Appendix A

We present additional material to validate that the technical implementation of our simulation setup models the intended physics effects within the scope of our work. This includes the photon flux we implemented in MadGraph 2.6.5 [74,75], and the interface with SMEFTsim [72] for BSM modifications and interference with the SM. Figure 4 displays generator level differential distributions of $p_T(\tau)$ for $\gamma\gamma \to \tau\tau$ considering various photon fluxes from proton and lead (Pb) beams. The distribution generated in MadGraph with Pb uses our custom implementation of the Pb ion photon flux. We validate this against the corresponding distribution generated in Superchic 3.02 [80]. The latter includes a full treatment of nuclear effects that are neglected by the factorized prescription in MadGraph. These two distributions are in reasonable agreement for the scope of our work. Also shown are the corresponding distributions for proton beams. This illustrates that the impact of a nucleus of comparatively larger size is to soften the $p_T(\tau)$ spectrum relative to proton beams.

FIG. 4. Unit normalized generator level tau $p_T$ distributions for $\gamma\gamma \to \tau\tau$ using SM couplings. These are generated in Superchic 3.02, which includes a full treatment of nuclear effects for lead (Pb) ions (orange). Also shown is the corresponding sample with protons (dark blue). The MadGraph 2.6.5 samples use a factorized photon flux prescription for protons (light blue) and our implementation of the Pb ion flux (red). The ratio panel is with respect to the Superchic Pb ion sample.

Appendix B

We provide technical material supporting the results presented in the main text. This includes signal and background counts after sequentially applying kinematic requirements (cutflows), and $\chi^2$ distributions as functions of $\delta a_\tau$ and $\delta d_\tau$ used to derive the final constraints. For the $\gamma\gamma \to \tau\tau$ signal processes, we show these for benchmark points with parameter values $(\delta a_\tau, \delta d_\tau)$ displayed in the column header. Backgrounds are shown for the dilepton ($\mu\mu$, $ee$) and diquark processes, where the letters denote the flavor. The initial value in each cutflow is the cross-section $\sigma$ times luminosity $L$, followed by the efficiency of the filter applied at generator level to the $\gamma\gamma \to \tau\tau$ samples. Table I presents the set of cutflows for the different analyses, sequentially displaying the yields normalized to 2 nb$^{-1}$ after each signal region requirement. Three benchmark signals are shown for the $\gamma\gamma \to \tau\tau$ samples: the SM values $(\delta a_\tau, \delta d_\tau) = (0, 0)$ and values near the threshold of 68% CL sensitivity, $(\delta a_\tau, \delta d_\tau) \in \{(0.005, 0), (-0.01, 0)\}$. Figure 6 shows the $\chi^2$ distributions as functions of $\delta a_\tau$ and $\delta d_\tau$ (assuming the other is zero) for the separate signal regions.
These are shown assuming 10% systematics and 2 nb$^{-1}$ to allow comparison of the constraining power of the different analyses presented in the main text. Figure 7 displays the combined $\chi^2 = \sum_i \chi^2_i$ distributions. These are shown for 10% systematics at 2 nb$^{-1}$, together with prospects using 5% systematics and an extrapolation to 20 nb$^{-1}$. The red lines show the results from combining the three track SRs. The final combined $\chi^2$ for the results in the main text is given by the green lines, which combine all four signal regions (SR1 1T is divided into two orthogonal $p_T$ bins). The final 68% CL and 95% CL intervals are defined by where the $\chi^2$ distributions intersect $\chi^2 = 1$ and $\chi^2 = 3.84$, respectively.