Search for flavor changing neutral current interactions of the top quark in final states with a photon and additional jets in proton-proton collisions at √s = 13 TeV

A search for the production of a top quark in association with a photon and additional jets via flavor changing neutral current interactions is presented. The analysis uses proton-proton collision data recorded by the CMS detector at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 138 fb^-1. The search is performed by looking for processes where a single top quark is produced in association with a photon, or a pair of top quarks where one of the top quarks decays into a photon and an up or charm quark. Events with an electron or a muon, a photon, one or more jets, and missing transverse momentum are selected. Multivariate analysis techniques are used to discriminate between signal and standard model background processes. No significant deviation is observed over the predicted background. Observed (expected) upper limits are set on the branching fractions of top quark decays: B(t → uγ) < 0.95 × 10^-5 (1.20 × 10^-5) and B(t → cγ) < 1.51 × 10^-5 (1.54 × 10^-5) at 95% confidence level, assuming a single nonzero coupling at a time. The obtained limit for B(t → uγ) is similar to the current best limit, while the limit for B(t → cγ) is significantly tighter than previous results.


Introduction
In the standard model (SM) of particle physics, flavor changing neutral current (FCNC) interactions are not present at leading order (LO) and proceed through loop diagrams, which are strongly suppressed by the Glashow-Iliopoulos-Maiani (GIM) mechanism [1]. Because of this, the predicted branching fraction for a top quark decaying to an up or charm quark and a photon, B(t → qγ) with q = u or c, is of the order of 10^-14 [2][3][4]. While these branching fractions are too small to be measured by current experiments, some extensions to the SM allow for significant enhancements to them. For example, supersymmetry with R parity violation, two-Higgs-doublet models, and technicolor can enhance B(t → qγ) by many orders of magnitude compared to the SM value [5][6][7]. As a result, an observation of this signature would be a clear sign of physics beyond the SM.
The tqγ FCNC interactions can be described in the effective field theory (EFT) framework in terms of dimension-six operators added to the SM Lagrangian [8]. The most general effective Lagrangian up to dimension-six operators, L_eff, used to describe the FCNC tqγ vertex has the following form:

L_eff = Σ_{q=u,c} e κ_tqγ q̄ (σ_μν q^ν / m_t) (λ^L_tqγ P_L + λ^R_tqγ P_R) t A^μ + h.c.,   (1)

where e is the electric charge of the electron, q^ν is the four-momentum of the photon, m_t is the top quark mass, σ_μν = (i/2)[γ_μ, γ_ν], P_L and P_R are the left- and right-handed projection operators, t is the top quark field, λ^L_tqγ and λ^R_tqγ are the fractions of the FCNC couplings for left- and right-handed chiralities, q̄ with q = u or c is the antiquark field, A^μ is the electromagnetic field, and h.c. refers to the Hermitian conjugate. The strengths of the FCNC couplings are denoted by κ_tqγ, which are proportional to C_ij/Λ², where C_ij are the Wilson coefficients defined in Ref. [8], Λ is the energy scale of new physics, and κ_tqγ and C_ij are dimensionless coefficients. In this analysis, no chirality is assumed for the FCNC interaction of tqγ, and |λ^L_tqγ|² + |λ^R_tqγ|² = 1. The values of κ_tqγ vanish in the SM at tree level.
Searches for the FCNC interactions tuγ and tcγ have been performed by several experiments, with no evidence of signal as yet [9][10][11][12][13][14]. The latest results of a search for tqγ FCNC couplings through single top quark production in association with a photon by the CMS experiment are B(t → uγ) < 1.3 × 10^-4 and B(t → cγ) < 1.7 × 10^-3 [13] at 95% confidence level (CL), based on proton-proton (pp) collisions collected at a center-of-mass energy of 8 TeV corresponding to an integrated luminosity of 19.8 fb^-1. The most stringent 95% CL limits to date have been obtained by the ATLAS Collaboration at a center-of-mass energy of 13 TeV and corresponding to an integrated luminosity of 139 fb^-1 [14]. They report 95% CL upper limits of B(t → uγ) < 0.85 × 10^-5 (1.2 × 10^-5) and B(t → cγ) < 4.2 × 10^-5 (4.5 × 10^-5) for left-handed (right-handed) chiralities, respectively. This paper describes the search for tuγ and tcγ FCNC couplings based on the effective Lagrangian approach introduced in Eq. (1). The analysis considers both the production of a single top quark with a photon, referred to as ST, and the decay of a top quark to a photon and a light-flavor quark (u or c) in top quark pair production, referred to as TT. The representative Feynman diagrams at LO for ST and TT are shown in Fig. 1.
We focus on final states where the top quark decays into a W boson and a b quark, followed by the decay of the W boson to a neutrino and an electron or muon, with electrons or muons from leptonic decays of tau leptons also being included. The data for this analysis were collected by the CMS detector between 2016 and 2018 in pp collisions at a center-of-mass energy of 13 TeV, and correspond to an integrated luminosity of 138 fb^-1. Tabulated results are provided in the HEPData record for this analysis [15].

Figure 1: LO Feynman diagrams for the production of a single top quark in association with a photon (left), and the decay of a top antiquark to a photon and a light antiquark in top quark pair production (right) via a tqγ FCNC, where q = u, c. The leptonic decay of the W boson from the top quark decay is included. The charge conjugate diagrams are also included. The FCNC interaction vertex is marked as a filled red circle.

The CMS detector
The central feature of the CMS apparatus is a superconducting solenoid of 6 m internal diameter, providing a magnetic field of 3.8 T. Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections. Forward calorimeters extend the pseudorapidity (η) coverage provided by the barrel and endcap detectors, which improves the measurement of the imbalance in transverse momentum (p_T). Muons are detected in gas-ionization chambers embedded in the steel flux-return yoke outside the solenoid. In the barrel section of the ECAL, an energy resolution of about 1% is achieved for unconverted or late-converting photons that have energies in the range of tens of GeV. The remaining barrel photons have a resolution of about 1.3% up to a pseudorapidity of |η| = 1, rising to about 2.5% at |η| = 1.4 [16]. The electron momentum is estimated by combining the energy measurement in the ECAL with the momentum measurement in the tracker. The momentum resolution for electrons with p_T ≈ 45 GeV from Z → ee decays ranges from 1.6 to 5%. It is generally better in the barrel region than in the endcaps, and also depends on the bremsstrahlung energy emitted by the electron as it traverses the material in front of the ECAL [17]. Muons are measured in the pseudorapidity range |η| < 2.4, with detection planes made using three technologies: drift tubes, cathode strip chambers, and resistive plate chambers. Matching muons to tracks measured in the silicon tracker results in a relative transverse momentum resolution, for muons with p_T up to 100 GeV, of 1% in the barrel and 3% in the endcaps [18]. Events of interest are selected using a two-tier trigger system. The first level (L1), composed of custom hardware processors, uses information from the calorimeters and muon detectors to select events in a fixed time interval of less than 4 µs [19]. The second level, called the high-level trigger, consists of a farm of processors running a version of the full event reconstruction software optimized for fast processing, and decreases the event rate from around 100 kHz to less than 1 kHz before data storage [20]. A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in Ref. [21].

Signal and background simulation
To account for the changes in conditions related to the CMS detector over the three years in which the data were taken, the data from each year are analyzed independently using the corresponding corrections and calibrations.
Signal events and background processes are simulated for each of the three data-taking years using various Monte Carlo (MC) generator packages. The signal samples corresponding to ST production associated with a photon, and tt production with an FCNC decay of one of the top quarks, are generated at LO with MADGRAPH5_aMC@NLO 2.4.2 [22]. Based on the cross section calculation with the TOP++ 2.0 program [23] at next-to-next-to-LO in quantum chromodynamics (QCD), including soft-gluon resummation to next-to-next-to-leading-logarithmic order, the top quark pair production cross section is taken as 832 pb. Two signal scenarios are generated: (κ_tuγ = 0.1, κ_tcγ = 0.0) and (κ_tuγ = 0.0, κ_tcγ = 0.1), with corresponding LO ST cross sections of 0.707 and 0.100 pb, respectively, and a TT cross section times FCNC branching fraction of 1.367 pb for both scenarios. The σ_NLO/σ_LO K factors of 1.37 and 1.41 [24] are used to account for next-to-LO (NLO) QCD corrections for the tuγ and tcγ couplings, respectively. Since the final-state kinematic distributions are independent of the FCNC tqγ couplings, we only need to generate ST and TT signal samples for the two scenarios described above and scale them as necessary.
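As an illustration of this normalization, the per-event weight follows from the LO cross section, the K factor, and the integrated luminosity. The sketch below uses the numbers quoted above; the function name and the generated-event count are illustrative assumptions, not part of the CMS software.

```python
def sample_weight(sigma_lo_pb, k_factor, lumi_fb, n_generated):
    """Per-event weight sigma_NLO * L / N_gen, with 1 fb^-1 = 1000 pb^-1."""
    sigma_nlo_pb = sigma_lo_pb * k_factor
    return sigma_nlo_pb * lumi_fb * 1000.0 / n_generated

# ST sample for the (kappa_tugamma = 0.1) scenario at 138 fb^-1,
# assuming an illustrative 10^6 generated events:
w_st = sample_weight(sigma_lo_pb=0.707, k_factor=1.37,
                     lumi_fb=138.0, n_generated=1_000_000)
# about 0.13 expected events in data per simulated event
```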
The CMS detector response for both signal and background processes is modeled with the GEANT4 program [33]. Simulated minimum-bias interactions in nearby or in the same pp bunch crossing, referred to as "pileup", are included. Events in the simulation are reweighted to reproduce the pileup distribution observed in the data [34].

Event reconstruction
The particle-flow (PF) algorithm is used to reconstruct and identify the particles (photons, electrons, muons, charged and neutral hadrons) produced in the event by combining the information from various subdetectors [35]. The primary vertex (PV) is taken to be the vertex corresponding to the hardest scattering in the event, evaluated using tracking information alone, as described in Section 9.4.1 of Ref. [36]. In this analysis, events of interest contain one isolated charged lepton (either a muon or an electron), an isolated photon, at least one jet, and missing transverse momentum. Quality criteria are applied to improve the purity of events containing genuine leptons and photons and to help reject candidates that originate from misidentified jets or from pileup events.
Photon candidates are identified according to the presence of an energy deposit in the ECAL with no associated charged-particle tracks matched to this deposit [17]. The photon energy is obtained from the ECAL measurement. Corrections to account for the zero suppression and energy scale are applied in both simulation and data [17]. The reconstructed photons must satisfy quality criteria based on the following quantities: the relative amount of deposited energy in the ECAL and HCAL; a "shower shape" variable quantifying the lateral development of the shower [17]; separate isolation variables for charged hadrons, neutral hadrons, and photons; and a variable that quantifies whether the photon candidate is consistent with a hit in the pixel detector.
Electron reconstruction relies on a match between the clusters in the ECAL and the tracks in the tracker. The energy of an electron is estimated from the combination of the electron momentum as determined by the tracker at the PV and the energy of the corresponding cluster in the ECAL, including the energy sum of all the bremsstrahlung photons compatible with the electron track [17]. Electrons are required to satisfy identification criteria related to the track impact parameters with respect to the PV, the matching between the ECAL cluster and the associated track, the ratio of hadronic to electromagnetic energy deposits, and the shower shape in the ECAL. A veto on electrons originating from photon conversions is applied [17].
Muon reconstruction is based on the combination of information from the tracker and the muon system [18]. A global track fit is produced using the combined information, and the muon momentum is obtained from the track curvature. Identification criteria are based on the impact parameters with respect to the PV, the number of hits in the muon systems and the tracker, the number of matched muon detector planes, as well as the fit quality of the combined muon track.
The photon and charged lepton candidates are required to be isolated from other particles in an event. For electrons and muons (ℓ), a relative isolation variable is defined as

I_rel = [ Σ p_T^charged + max(0, Σ p_T^neutral + Σ p_T^γ − p_T^pileup) ] / p_T^ℓ,

where the sums run over the p_T of charged and neutral hadrons, as well as the photons, in a cone of ∆R < 0.3 (0.4) centered on the electron (muon), where ∆R = √((∆η)² + (∆ϕ)²). Here, ∆η is the difference in pseudorapidity and ∆ϕ is the difference in azimuthal angle between the direction of the lepton and the other particles inside the isolation cone. To reduce the pileup effects, only charged hadrons compatible with originating at the PV are taken into account. For the sums corresponding to photons and neutral hadrons, an estimate of the expected pileup contribution (p_T^pileup) is subtracted. For photons, three separate isolation variables, corresponding to charged hadrons, neutral hadrons, and other photons, are defined within a cone of ∆R < 0.3 [17]. The effect of pileup in each variable is subtracted [37].
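The clamped pileup subtraction in this definition can be sketched as follows; the function name and the scalar sums (in GeV) are illustrative inputs, not part of the CMS software:

```python
def rel_iso(pt_lepton, sum_charged, sum_neutral, sum_photon, pt_pileup):
    """I_rel = [sum_ch + max(0, sum_nh + sum_ph - pt_pileup)] / pt_lepton."""
    # The pileup estimate is subtracted only from the neutral part,
    # and the neutral part is clamped at zero.
    neutral_part = max(0.0, sum_neutral + sum_photon - pt_pileup)
    return (sum_charged + neutral_part) / pt_lepton

iso = rel_iso(pt_lepton=40.0, sum_charged=1.2, sum_neutral=0.8,
              sum_photon=0.5, pt_pileup=0.6)  # ≈ 0.0475
```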
The muon and electron requirements, including the identification and isolation criteria mentioned above, define the "tight" selection, and are intended to efficiently select leptons originating from the decay of gauge bosons [17,18]. A less stringent set of requirements, referred to as the "loose" selection, is also used in the analysis for the purpose of vetoing events with additional leptons and to define a control region (CR) enriched in non-genuine leptons. The tight leptons are a subset of the loose leptons.
Jets are clustered from the PF candidates using the infrared- and collinear-safe anti-k_T algorithm [38,39] with a distance parameter of ∆R = 0.4. To correct for the contributions from pileup, charged particles originating from vertices other than the PV are discarded, and a correction is applied to the neutral contributions. Jet energy corrections are derived from simulation so that the reconstructed jet energy matches, on average, the summed energy of the truth-level particles in the jet. In situ measurements of the momentum balance in dijet, γ+jet, Z+jet, and multijet events are used to account for any residual differences in the jet energy scale between data and simulation [40]. The jet energy resolution (JER) is typically 15-20% at 30 GeV, 10% at 100 GeV, and 5% at 1 TeV [40]. Additional selection criteria are applied to each jet to remove jets potentially dominated by anomalous contributions from various subdetector components or by reconstruction failures [41].
The missing transverse momentum vector (p⃗_T^miss) is computed as the negative vector p_T sum of all the PF candidates, and its magnitude is denoted as p_T^miss [42]. The p⃗_T^miss is modified to account for corrections to the reconstructed energy scale of jets.
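As a sketch of this definition, with candidates reduced to (px, py) pairs, the negative vector sum gives both the magnitude and the direction; the inputs are illustrative, not real PF objects:

```python
import math

def missing_pt(candidates):
    """Negative vector sum of the transverse momenta of all candidates.

    candidates: iterable of (px, py) pairs in GeV.
    Returns (magnitude, (px_miss, py_miss)).
    """
    px = -sum(c[0] for c in candidates)
    py = -sum(c[1] for c in candidates)
    return math.hypot(px, py), (px, py)

pt_miss, vec = missing_pt([(30.0, 0.0), (-10.0, 5.0)])
# vec == (-20.0, -5.0); pt_miss ≈ 20.6 GeV
```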

Event selection and analysis strategy
Events are collected with single-lepton triggers, with minimum p_T thresholds for electrons of 27 GeV in 2016 and 32 GeV in 2017 and 2018, and for muons of 24 GeV in 2016 and 27 GeV in 2017 and 2018.
Events are selected by requiring the presence of either a tight electron with p_T > 35 GeV and |η| < 2.5 or a tight muon with p_T > 30 GeV and |η| < 2.4 that matches the triggering lepton. Events with only a tight electron candidate in the ECAL barrel-endcap transition region of 1.44 < |η| < 1.57 are rejected because the reconstruction of electrons in this region is suboptimal. Events are discarded if they contain an additional loose electron with p_T > 20 GeV, or a loose muon with p_T > 15 GeV. Photons are required to have p_T > 30 GeV and |η| < 1.44, and to pass the medium identification and isolation requirements [17]. Photons in the endcap region |η| > 1.57 are not considered, as the signal purity is lower in this region.
For both the electron and muon channels, we require p_T^miss > 30 GeV to reduce the background from events without neutrinos. In the electron channel, due to the higher probability for an electron to be misidentified as a photon, a veto on events with 81 < m_eγ < 101 GeV is applied to suppress the contribution of Z → e+e− events.
Events are required to have at least one reconstructed jet with p_T > 30 GeV and |η| < 2.7, or p_T > 60 GeV and 2.7 < |η| < 3.0. The higher p_T threshold in the forward region mitigates an excess of jets caused by noise in the ECAL endcaps. The selected jets are required to be separated from the selected lepton by ∆R > 0.4. The DEEPCSV tagging algorithm [43] is used to identify jets originating from the hadronization of b quarks. The chosen selection requirements correspond to an efficiency of around 70% in identifying b jets, and to misidentification rates of 12% for c quark jets and 1% for light-quark and gluon jets. The region over which b jets can be identified increased from |η| < 2.4 in 2016 to |η| < 2.5 in 2017-2018, following an upgrade of the pixel detector [44]. The top quark is reconstructed using the lepton, the b-tagged jet, and p⃗_T^miss, where the neutrino p_T is assumed to be equal to p_T^miss. The longitudinal component of the neutrino momentum is obtained by constraining the invariant mass of the lepton and the neutrino to the nominal W boson mass [45], and the top quark momentum is reconstructed from the combination of the momenta of the reconstructed W boson and the b jet [46][47][48].
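The W mass constraint yields a quadratic equation for the neutrino p_z. A minimal sketch of one common solution strategy is given below; the choice of the smaller-|p_z| root, and taking the real part when the discriminant is negative, are conventions assumed here, as the text does not spell them out:

```python
import math

MW = 80.379  # nominal W boson mass in GeV

def neutrino_pz(lep_px, lep_py, lep_pz, lep_e, met_px, met_py):
    """Solve m(lepton, neutrino) = m_W for the neutrino p_z.

    On a negative discriminant, return the real part of the complex
    solution pair; otherwise return the smaller-|p_z| root.
    """
    ptl2 = lep_px**2 + lep_py**2
    mu = MW**2 / 2.0 + lep_px * met_px + lep_py * met_py
    a = mu * lep_pz / ptl2
    disc = a**2 - (lep_e**2 * (met_px**2 + met_py**2) - mu**2) / ptl2
    if disc < 0.0:
        return a
    root = math.sqrt(disc)
    return min(a - root, a + root, key=abs)
```

With an on-shell configuration, the reconstructed lepton-neutrino invariant mass reproduces m_W by construction.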
In addition to the above requirements, photons are required to be well separated from the selected lepton and jets by ∆R > 0.5. This requirement helps to avoid distortion of the photon energy measurement due to the presence of close-by jets. Furthermore, it reduces the contributions of photons emitted from top quark decay products.
Two statistically independent signal regions (SRs) are defined for each lepton channel:
• SR1: exactly one b-tagged jet and no additional jets (N_j = 1 and N_b = 1). This region targets the signal from FCNC single top quark production in association with a photon.
• SR2: at least two jets, exactly one of which is b-tagged (N_j ≥ 2 and N_b = 1). This region targets TT events with the FCNC decay of one of the top quarks.

Background estimation
Several processes with final states identical or similar to the signal are sources of background for this analysis. We distinguish four different sources of background: nonprompt-photon, nonprompt-lepton, misidentified-photon, and irreducible backgrounds. Nonprompt-photon backgrounds come from events in which a jet is misidentified as a photon, usually the result of a high-p_T π^0 or η meson in the jet decaying to a pair of photons. Nonprompt-lepton backgrounds are from events in which the selected lepton is from the decay of a hadron. Misidentified-photon backgrounds originate from events in which an electron is misidentified as a photon. Irreducible backgrounds are events in which a genuine photon and charged lepton originate from the pp collision vertex. Expected and observed event yields for all background processes are presented in Table 1, where all background contributions are predicted from data except for tγ and VVγ. The estimated backgrounds are described in the rest of this section.

Nonprompt-photon background
Nonprompt-photon events arise primarily from W+jets and tt events when the jets have a high fraction of their energy deposited in the ECAL, and thus mimic prompt photons. Because the probability for jets to satisfy the photon identification criteria is not well modeled by the simulation, this source of background is estimated from data.
The SR photon requirements include selections on the charged-hadron isolation, neutral-hadron isolation, and photon isolation, as well as requirements on the shower shape and the ratio of hadronic to electromagnetic energy. These five requirements are placed at the "medium operating point" of Ref. [17], with an average signal efficiency of 80%. Each of these variables also has a "loose operating point", with an average signal efficiency of 90%, which is less stringent than the medium one.
We begin by using the matrix "ABCD" method, with the two axes being the charged-hadron isolation variable and the shower shape variable, which are found to be uncorrelated. The A region is the SR, containing events in which the photon candidate passes all medium selections. The B, C, and D regions are similar to the SR except for these two variables. The B and C regions contain events where either the charged-hadron isolation or the shower shape variable fails the loose selection, and the other one passes the medium selection. The D region is constructed from events that fail the loose selection of both the charged-hadron isolation and shower shape variables. An estimate of the number of nonprompt-photon events in region A (the SR) can be obtained from N_B N_C / N_D.
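Numerically, the ABCD estimate, together with a simple Poisson (√N) error propagation, can be sketched as follows; the region yields are invented, and the uncertainty treatment is an illustrative simplification of the full analysis:

```python
def abcd_estimate(n_b, n_c, n_d):
    """ABCD background estimate N_A = N_B * N_C / N_D.

    Returns the estimate and a naive statistical uncertainty from
    propagating sqrt(N) errors on the three observed yields.
    """
    n_a = n_b * n_c / n_d
    rel_unc = (1.0 / n_b + 1.0 / n_c + 1.0 / n_d) ** 0.5
    return n_a, n_a * rel_unc

est, unc = abcd_estimate(n_b=200, n_c=150, n_d=120)
# est == 250.0, unc ≈ 35.4
```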
In addition to the normalization of the nonprompt-photon background in the signal region, a full set of background distributions is needed. Therefore, we also use a CR enriched in nonprompt photons, called the photon-like jet (PLJ) sample. The PLJ sample is created by selecting events in which the photon candidate fails exactly one of the five selection criteria at the loose operating point while passing the other criteria at the medium operating point.
We then define an extrapolation factor,

EF(p_T) = N_A^nonprompt(p_T) / N_PLJ(p_T),

where the numerator is the ABCD estimate of the nonprompt-photon yield in the SR and the denominator is the PLJ yield in the same bin. The factor is calculated in p_T bins, as the fraction of nonprompt photons is known to depend on p_T. All of the events in the PLJ sample are weighted by EF(p_T) to obtain the estimated distribution of nonprompt-photon events in the SR. The estimation is performed separately for each lepton channel and for each data-taking year.
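The per-bin reweighting can be sketched as below; the binning and all yields are invented, and the per-bin ABCD estimates are assumed to come from the method of the previous paragraph:

```python
def extrapolation_factors(abcd_estimates, plj_counts):
    """EF(p_T) per bin: ABCD nonprompt estimate divided by PLJ yield.

    Both inputs are lists binned identically in photon p_T.
    """
    return [a / p for a, p in zip(abcd_estimates, plj_counts)]

# Invented yields in three photon p_T bins:
ef = extrapolation_factors([50.0, 20.0, 5.0], [500.0, 250.0, 100.0])
# per-bin weights applied to PLJ events: [0.1, 0.08, 0.05]
```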
The method is validated using simulated samples. In the first step, the extrapolation factors are recalculated using simulated samples. Then the contribution of the W+jets sample, which is one of the main sources of nonprompt-photon background in the SR, is compared to the value estimated for this sample by the method described above. The comparison shows agreement within 10-15%, depending on the year and lepton channel.

Nonprompt-lepton background
The nonprompt-lepton background contribution is estimated from data using the "tight-to-loose ratio" method [49]. A dijet CR enriched in nonprompt-lepton events is selected by requiring exactly one loose lepton, exactly one jet with p_T > 30 GeV and |η| < 2.5, p_T^miss < 30 GeV, ∆R(ℓ, jet) > 0.3, and a transverse mass of the lepton and p_T^miss of less than 20 GeV. The last criterion is used to suppress W+jets events. The p_T^miss requirement ensures that this CR is disjoint from the SRs. The contribution of events with prompt leptons is estimated from simulation and subtracted from the sample.
The nonprompt rate (NPR) is calculated from this sample as the fraction of events in which the selected lepton passes the tight selection. This rate is measured in bins of lepton p_T and η and converted to a weight w = NPR/(1 − NPR). A second CR is defined by requiring all of the SR criteria, except that the lepton requirement is changed to select leptons that pass the loose requirement and fail the tight requirement. The contribution of events with prompt leptons is estimated from simulation and subtracted from this sample. The weight w(p_T^ℓ, η^ℓ) is applied to all events in this CR to estimate the number of nonprompt-lepton events in the SR.
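The conversion of the measured rate into an event weight is a one-line formula; the sketch below uses an invented NPR value for a single (p_T, η) bin:

```python
def nonprompt_weight(npr):
    """Tight-to-loose weight w = NPR / (1 - NPR) for loose-not-tight events."""
    return npr / (1.0 - npr)

# e.g. a measured nonprompt rate of 0.2 in some (p_T, eta) bin:
w = nonprompt_weight(0.2)  # -> 0.25
```

Each loose-not-tight event in the second CR then contributes w to the nonprompt-lepton estimate in the SR.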
The method is validated in simulation using a γ+jets sample, which is the dominant source of the nonprompt-lepton background. The contribution of γ+jets events in the SR is compared to the estimate from the above method applied to the same simulated events, and is found to deviate by no more than 20%.

Misidentified-photon background
The misidentified-photon background arises from events in which an electron is misidentified as a photon. This can originate in both channels from tt, diboson, and single top quark tW-channel production. In addition, Z(→ e+e−)+jets events contribute to the electron channel.
The normalization for this background source is estimated using two CRs. The first CR has the same criteria as the signal region, except that only the electron channel is considered and the invariant mass of the electron and photon is required to be in the range |m_eγ − m_Z| < 10 GeV (the inverse of the veto used in the SR definition). The second CR also uses the SR criteria, except that instead of requiring a photon and an electron, we require two electrons passing the tight selection with an invariant mass of |m_ee − m_Z| < 10 GeV. Both regions select Z(→ e+e−)+jets events, with the second CR containing two correctly identified electrons and the first CR containing one electron misidentified as a photon. The ratio of the number of events in the first CR to the number of events in the second CR is calculated for both data and MC simulation. From these two ratios, the double ratio is calculated as

SF = (N_CR1 / N_CR2)_data / (N_CR1 / N_CR2)_MC,

which is applied as a scale factor to simulated background events in each SR in which the reconstructed photon is matched to a generated electron.
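The double-ratio scale factor is straightforward to compute; the yields below are invented for illustration:

```python
def misid_photon_sf(n1_data, n2_data, n1_mc, n2_mc):
    """Double ratio (N_CR1/N_CR2)_data / (N_CR1/N_CR2)_MC."""
    return (n1_data / n2_data) / (n1_mc / n2_mc)

# Invented control-region yields: the misidentification rate in data
# is 20% higher than in simulation in this example.
sf = misid_photon_sf(n1_data=300.0, n2_data=10000.0,
                     n1_mc=250.0, n2_mc=10000.0)  # close to 1.2
```

Because the Z yield cancels in each ratio, the scale factor isolates the data-to-simulation difference in the electron-to-photon misidentification rate.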

Irreducible backgrounds
Irreducible backgrounds arise from ttγ, Wγ+jets, Zγ+jets, tγ, and VVγ processes. The shapes of these background sources are obtained from simulation. The normalizations for the tγ and VVγ processes are obtained from their respective NLO cross sections, while the normalizations for the other processes are constrained from data using CRs. The ttγ CR is constructed with the same requirements as in the SRs, but with N_j ≥ 2 and N_b ≥ 2. A single-bin fit is made in this CR to obtain the correction factor that is used in the SRs to correct the estimated number of ttγ events from MC. The Wγ+jets and Zγ+jets CRs use the same criteria as the SRs but with N_j ≥ 1 and N_b = 0, and are distinguished from each other by a requirement on the invariant mass of the lepton and photon: in the electron channel, m_ℓγ < m_Z − 10 GeV for the Zγ+jets CR and m_ℓγ > m_Z + 10 GeV for the Wγ+jets CR; in the muon channel, m_ℓγ < m_Z for the Zγ+jets CR and m_ℓγ > m_Z for the Wγ+jets CR. The reason that Zγ+jets events populate the low-m_ℓγ region is that in many cases the selected photon is radiated off the lepton from the Z boson decay with high angular separation. The normalization of each process in the SR is corrected by scale factors obtained from a simultaneous fit of the m_ℓγ distribution in the two CRs. For all three CRs, the background from other sources is estimated using the methods described above for nonprompt-photon, nonprompt-lepton, and misidentified-photon backgrounds, and from simulation for the remaining sources.

Summary of the backgrounds
Event yields in data and MC simulation for all background processes are presented in Table 1. The irreducible backgrounds from ttγ, Wγ+jets, and Zγ+jets are normalized with the control regions. The remaining irreducible backgrounds (tγ and VVγ) are estimated from simulation. The nonprompt-photon, nonprompt-lepton, and misidentified-photon backgrounds are estimated from data.

Discrimination of signal and background
To distinguish the potential FCNC signal from backgrounds, a number of discriminating variables are combined into a multivariate classifier based on boosted decision trees (BDTs) [50]. Separate BDT classifiers are trained for the two signal scenarios (tuγ and tcγ), the two SRs (SR1 and SR2), and the two channels (electron and muon), for a total of eight BDTs.
The BDT input variables include the photon p_T and η, the lepton η, the reconstructed top quark mass (m_ℓνb), the transverse mass of the W boson, defined as m_T^W = √(2 p_T^ℓ p_T^miss (1 − cos ∆ϕ)) where ∆ϕ is the difference in azimuthal angle between the lepton and p⃗_T^miss, the invariant mass of the photon and the non-b jet (m_jet+γ), p_T^miss, ∆R(ℓ, γ), ∆R(t, γ), the jet multiplicity, ∆R(ℓ, b jet), and ∆R(b jet, γ). In addition, for the tuγ signal, the lepton charge is added as an input variable to provide sensitivity to the asymmetry between top quark and top antiquark production [51], which arises from the different PDFs of the up quark and up antiquark in the colliding protons. Simulated tuγ and tcγ signal processes are used in conjunction with simulated Wγ+jets, ttγ, Zγ+jets, and tγ background events to train the BDTs. The four most discriminating BDT variables for SR1 are the photon p_T, ∆R(ℓ, b jet), the reconstructed top quark mass, and ∆R(t, γ); for SR2 they are m_jet+γ, the photon p_T, the reconstructed top quark mass, and ∆R(ℓ, b jet).
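The W transverse mass input can be sketched directly from its definition; the kinematic inputs below are illustrative:

```python
import math

def w_transverse_mass(pt_lep, pt_miss, dphi):
    """m_T^W = sqrt(2 p_T^l p_T^miss (1 - cos dphi)), all momenta in GeV."""
    return math.sqrt(2.0 * pt_lep * pt_miss * (1.0 - math.cos(dphi)))

# Back-to-back lepton and missing momentum reach the kinematic endpoint,
# which for on-shell leptonic W decays lies near the W mass:
mt = w_transverse_mass(pt_lep=40.0, pt_miss=40.0, dphi=math.pi)  # -> 80.0
```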
The distributions of four representative discriminating BDT input variables are shown for the predicted background events and the observed data events from SR1 and SR2 in Figs. 2 and 3, respectively. The distributions from each channel are shown separately, and the background events are estimated as described in Section 6. The figures also include the expected contribution of a tuγ and tcγ signal for a coupling of κ_tqγ = 0.2. The BDT output distributions of the data, estimated backgrounds, and simulated signals are shown in Figs. 4 and 5, with the BDT trained to select tuγ and tcγ events, respectively, for the combined 2016-2018 data-taking period. Each figure shows the distributions separated by channel and SR. For presentational purposes, the simulated FCNC signals are normalized to κ_tqγ = 0.10 and 0.01 for SR1 and SR2, respectively. A binned maximum likelihood fit is used to combine the 12 BDT output distributions from the two channels, two SRs, and three data-taking years, with systematic uncertainties incorporated as nuisance parameters, for tuγ and tcγ separately.
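The actual fit profiles many nuisance parameters across 12 distributions; as a much-simplified illustration, a binned Poisson likelihood for a single signal-strength parameter μ (no systematic uncertainties, invented three-bin templates) can be minimized by a simple scan:

```python
import math

def nll(mu, sig, bkg, data):
    """Binned Poisson negative log-likelihood, dropping mu-independent terms."""
    total = 0.0
    for s, b, n in zip(sig, bkg, data):
        lam = mu * s + b
        total += lam - n * math.log(lam)
    return total

# Invented signal and background templates and observed counts:
sig = [1.0, 5.0, 20.0]
bkg = [100.0, 50.0, 10.0]
data = [101, 55, 30]  # exactly sig + bkg, so the best fit is mu = 1

# Scan mu in steps of 0.01 for the minimum of the negative log-likelihood:
mu_hat = min((m / 100.0 for m in range(0, 301)),
             key=lambda m: nll(m, sig, bkg, data))
```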

Systematic uncertainties
We consider an extensive list of systematic uncertainties from experimental sources, which primarily affect the background estimation, and from theoretical sources, which primarily affect the signal efficiency. The estimate of each source of uncertainty is determined by varying the relevant quantity and evaluating the effect on the yield and shape of the signal and background BDT distributions. The amount of variation is controlled by nuisance parameters in the maximum likelihood fit. We also account for correlations among the systematic uncertainties. In particular, all uncertainties are correlated between the two SRs. Uncertainties specific to a lepton flavor are uncorrelated between the two lepton-flavor channels, while other uncertainties are correlated. The correlation between the different data-taking years is described below.
The integrated luminosity is used to normalize predictions obtained from simulation. The associated uncertainties for the data collected in 2016, 2017, and 2018 are 1.2, 2.3, and 2.5%, respectively, and are 30% correlated [52][53][54]. The pileup uncertainty affects the distribution of the number of pp collisions per bunch crossing. It is estimated by varying the total pp inelastic cross section by 4.6% [55] and is fully correlated for all three years. Uncertainties in the jet energy scale are derived by recalculating the four-momentum of each jet using variations from a variety of sources, which are binned in p_T and η [40]. The variation is also propagated to p⃗_T^miss. The amount of correlation for each source is evaluated separately across the three years. The uncertainty arising from the jet energy resolution is estimated by varying the simulated JER by its uncertainty in bins of η [40]. This variation is also propagated to p⃗_T^miss. The JER uncertainties are assumed to be uncorrelated between the three data-taking years. The uncertainty from the calibration of the photon energy scale is assessed by varying the photon energy by ±0.1%, which is obtained from a study of Z → e+e− events [17].

Figure 4: The BDT output distributions for the data, the background predictions, and the expected tuγ signal for the electron (left) and muon (right) channels in SR1 (upper) and SR2 (lower). The signal distribution is normalized to a cross section corresponding to κ_tuγ = 0.10 (0.01) for ST (TT) and is stacked on the background expectations. The first bins include underflows, and the last bins include overflows. The vertical bars on the points depict the data statistical uncertainties, and the hatched bands show the combined statistical and systematic uncertainties in the estimated background processes.
The uncertainties associated with the data-to-simulation scale factors for the b tagging efficiency for b jets, as well as for the mistagging efficiencies for c jets and light-flavor and gluon jets, are estimated by varying the scale factors within their uncertainties measured from a dedicated tt sample [43]. Each uncertainty is further split into correlated and uncorrelated components across the three years.
To correct for the observed differences between data and simulation in lepton and photon reconstruction, identification, and isolation efficiencies, p_T- and η-dependent scale factors with their relevant uncertainties are derived using the "tag-and-probe" method [17,18]. The impact of these sources is estimated by varying the scale factors within their uncertainties, which are propagated to the final fitting variables. The statistical component of these uncertainties is considered uncorrelated between the data-taking years, while the systematic components are assumed to be fully correlated. The lepton scale factors are derived from Z+jets events but applied to a kinematic region dominated by tt events, where the lepton isolation efficiency is reduced due to the larger number of jets. Therefore, to cover possible differences in efficiency between these two kinematic regions, an additional uncertainty of 1% for the electron channel and 0.5% for the muon channel is applied [56].
The trigger efficiency uncertainty is assessed by varying the scale factors of each trigger within their uncertainties [17]. The resulting uncertainties are considered uncorrelated among the three years of data taking. To mitigate the gradual shift of the L1 trigger timing in the forward endcap region (|η| < 2.4) of the ECAL during the years 2016-2017, a correction factor is applied [19]. The correction factor and its uncertainty are used to reweight the relevant simulated events, accounting for the trigger inefficiency. The L1 inefficiency uncertainties are fully correlated between the 2016 and 2017 data samples.
The uncertainty in the normalization of WWγ is estimated with MADGRAPH5_aMC@NLO to be 23%. The normalization uncertainties for the WW [57], ZZ [58], WZ [59], tγ [60], single top quark tW-channel [61], and single top quark s-channel [62] processes are 5.8, 4.5, 40, 11, 11, and 4%, respectively, and are assigned to each background source separately. These normalization uncertainties are applied in cases where the background normalization is obtained from simulation.
The systematic uncertainty in the nonprompt-photon background estimation is calculated from three contributions. The first source is the definition of the sideband regions in the ABCD method. To assess this uncertainty, the borders of the sideband regions are changed by 50% of their nominal values and the EFs are recalculated. The choice of 50% is made to keep the statistical uncertainty of the new sideband regions under 10%. The second source is the potential contribution of prompt-photon events in the sideband regions, which is not accounted for in the nominal method. We estimate this effect by subtracting the prompt-photon contribution using simulated events and recalculating the EFs. The third source is the statistical uncertainty of the EFs, especially in the high-p_T photon bins. The total nonprompt-photon background uncertainty is calculated as the sum in quadrature of the three sources, and the corresponding systematic uncertainty is evaluated by varying the EFs within the associated uncertainties. The uncertainty in the nonprompt-lepton background estimation is 20%, as derived from the maximum difference found in the check of the method described in Section 6.2. The uncertainty in the estimation of the misidentified-photon background arises from the associated scale factor uncertainties and varies within the 16-22% range, depending on the data-taking year. The background estimation uncertainties are uncorrelated between the three data-taking years.
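The combination of the three extrapolation-factor (EF) uncertainty sources described above is a plain quadrature sum. A minimal sketch, using hypothetical relative uncertainties for a single photon p_T bin (the actual per-bin values are not quoted in the text):

```python
import math

def total_ef_uncertainty(sideband_shift, prompt_subtraction, statistical):
    """Combine the three EF uncertainty sources in quadrature."""
    return math.sqrt(sideband_shift**2 + prompt_subtraction**2 + statistical**2)

# Hypothetical relative uncertainties for one photon p_T bin
total = total_ef_uncertainty(0.06, 0.03, 0.02)
print(f"{total:.3f}")  # ≈ 0.070, i.e. a 7% variation applied to the EF
```

Because the sum is in quadrature, the largest single source dominates the total unless the contributions are of comparable size.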
Theoretical uncertainties arise from the choice of theoretical assumptions, such as the PDFs and the renormalization and factorization scales, µ_r and µ_f, used in the cross section calculations. These affect both the calculation of the cross section and the signal efficiency. The impact of the uncertainties in the matrix elements of the generators due to µ_r and µ_f is obtained by independently varying the scales up and down by factors of 2 and 0.5. The uncertainty arising from the PDFs used in the simulation is estimated by reweighting the signal events according to the NNPDF sets [63]. In addition, the uncertainty resulting from initial- and final-state radiation in the parton shower of signal events is estimated by shifting the relevant scales by a factor of two. All of these uncertainties are correlated across the three data-taking years.
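A common way to turn such independent µ_r/µ_f variations into an uncertainty band is an envelope over the varied yields, conventionally dropping the two anti-correlated extreme combinations. The sketch below uses hypothetical per-bin yields; the exact prescription applied in the analysis is not spelled out in the text, so this is illustrative only.

```python
def scale_envelope(nominal, varied_yields):
    """Envelope of mu_r/mu_f scale variations for one bin.

    varied_yields maps (mu_r factor, mu_f factor) -> yield; the
    anti-correlated pairs (0.5, 2) and (2, 0.5) are conventionally dropped.
    """
    kept = [y for (r, f), y in varied_yields.items()
            if (r, f) not in {(0.5, 2.0), (2.0, 0.5)}]
    return max(kept) - nominal, nominal - min(kept)

# Hypothetical yields for a single BDT bin with nominal = 100.0
yields = {(0.5, 0.5): 104.0, (0.5, 1.0): 102.5, (1.0, 0.5): 101.8,
          (1.0, 2.0): 97.9, (2.0, 1.0): 98.4, (2.0, 2.0): 96.2}
up, down = scale_envelope(100.0, yields)
print(up, down)  # up ≈ 4.0, down ≈ 3.8
```

The resulting up/down shifts per bin are what the corresponding nuisance parameter interpolates between in the fit.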
Among the systematic uncertainties, those in the integrated luminosity, the background normalizations, and the nonprompt-lepton and misidentified-photon background estimates affect only the rate, while the remaining sources affect both the shape and the normalization of the BDT output distributions used in the fit.

Results
The data and the SM prediction are in agreement within uncertainties, and no excess from FCNC contributions is observed. A modified frequentist (CLs) criterion [64,65] in the asymptotic approximation [66] is used to compute the upper limits on the signal cross sections and branching fractions. A test statistic is defined based on the profile likelihood ratio for the compatibility of the data with the background-only and signal+background hypotheses.
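The full analysis uses the profile likelihood ratio with all nuisance parameters. As a much-simplified illustration of the CLs construction itself, consider a single-bin counting experiment in a Gaussian approximation; this is a sketch of the idea, not the asymptotic formalism of Ref. [66].

```python
import math

def cls_counting(n_obs, b, s):
    """CLs for one counting bin, using the observed yield as test statistic
    and Gaussian lower-tail probabilities (width sqrt(mu))."""
    def p_low_tail(mu):
        z = (n_obs - mu) / math.sqrt(mu)
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return p_low_tail(b + s) / p_low_tail(b)   # CLs = CL_{s+b} / CL_b

def upper_limit(n_obs, b, alpha=0.05):
    """Scan the signal yield until CLs falls below alpha (95% CL)."""
    s = 0.0
    while cls_counting(n_obs, b, s) > alpha:
        s += 0.01
    return s

# With 100 observed events on an expected background of 100,
# the excluded signal yield comes out around 21-22 events
print(round(upper_limit(100, 100.0), 1))
```

Dividing by CL_b protects against excluding signal hypotheses to which the analysis has no real sensitivity, which is the motivation for CLs over a plain CL_{s+b} limit.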
Table 2 presents the expected and observed 95% CL upper limits on the anomalous couplings κ tuγ and κ tcγ, and on the corresponding branching fractions B(t → uγ) and B(t → cγ) with NLO QCD corrections, from the combination of the electron and muon channels, separately for SR1, SR2, and the combination of the two SRs. The one and two standard deviation (±1σ and ±2σ) ranges of the expected limits are also presented. The largest uncertainties affecting the final likelihood fit are the statistical uncertainties from the limited number of simulated events. Among the systematic uncertainties described in Section 8, those due to the nonprompt-photon background, the normalizations of Zγ+jets and Wγ+jets, and the misidentified-photon background have the greatest impact on the final limits. Overall, the postfit values of the nuisance parameters deviate by no more than one standard deviation from their initial values.
Table 2: The expected and observed 95% CL upper limits, using the CLs criterion, on the anomalous couplings κ tuγ and κ tcγ and the corresponding branching fractions B(t → uγ) and B(t → cγ), from the combination of the electron and muon channels at NLO, for SR1, SR2, and their combination (SR1+SR2).
The results show significant improvements with respect to the previous CMS results at 8 TeV [13]. This is due to the addition of a second signal region (SR2) and of the electron channel, the increased signal cross section at the higher center-of-mass energy of 13 TeV, and the larger integrated luminosity.

Summary
The results of a search for flavor changing neutral current (FCNC) interactions in the top quark sector associated with the tuγ and tcγ vertices have been presented. These vertices are probed by a simultaneous evaluation of single top quark production in association with a photon and top quark pair production with one of the top quarks decaying via the FCNC. The search is performed using proton-proton collisions at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 138 fb−1, collected by the CMS detector at the LHC. The results are in agreement with the standard model prediction. Upper limits are set at 95% confidence level on the anomalous FCNC couplings: κ tuγ < 6.2 × 10−3 and κ tcγ < 7.7 × 10−3. The upper limits on the corresponding branching fractions are B(t → uγ) < 0.95 × 10−5 and B(t → cγ) < 1.51 × 10−5. The obtained limit for B(t → uγ) is similar to the current best limit, from the ATLAS experiment [14], while the limit for B(t → cγ) is significantly tighter. The result for B(t → cγ) benefits from the inclusion of the TT signal, which has comparable sensitivity for κ tuγ and κ tcγ, as well as from having two independent SRs, one optimized for ST and one for TT.
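The quoted coupling and branching fraction limits are consistent with the quadratic dependence B(t → qγ) ∝ κ². The proportionality constant below is inferred from the quoted limit pairs themselves, not computed from first principles, so this is only a numerical consistency check of the summary numbers.

```python
def bf_from_coupling(kappa, const=0.25):
    # B ∝ κ²; const ≈ 0.25 is inferred from the quoted limit pairs (assumption)
    return const * kappa**2

print(bf_from_coupling(6.2e-3))  # close to the quoted B(t → uγ) < 0.95e-5
print(bf_from_coupling(7.7e-3))  # close to the quoted B(t → cγ) < 1.51e-5
```

The quadratic scaling means a factor-of-two improvement in the coupling limit corresponds to a factor-of-four improvement in the branching fraction limit.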
institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid and other centers for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC, the CMS detector, and the supporting computing infrastructure provided by the following funding agencies: SC (Armenia); BMBWF and FWF (Austria); FNRS and

Figure 1: LO Feynman diagrams for the production of a single top quark in association with a photon (left), and the decay of a top antiquark to a photon and a light antiquark in top quark pair production (right) via a tqγ FCNC interaction, where q = u, c. The leptonic decay of the W boson from the top quark decay is included. The charge conjugate diagrams are also included. The FCNC interaction vertex is marked as a filled red circle.

Figure 2: From upper to lower, expected and observed distributions of the photon p_T, the transverse mass of the W boson candidate, the reconstructed top quark mass, and ∆R(ℓ, b jet) for the electron (left) and muon (right) channels in SR1. For presentational purposes, the tuγ and tcγ signal distributions are normalized to a cross section corresponding to κ tuγ = κ tcγ = 0.2 and are superimposed on the background expectations. The last bins include overflows. The vertical bars on the points depict the data statistical uncertainties and the hatched bands show the combined statistical and systematic uncertainties in the estimated background processes.

Figure 3: From upper to lower, expected and observed distributions of the photon p_T, the transverse mass of the W boson candidate, the invariant mass of the jet and photon, and the reconstructed top quark mass, for the electron (left) and muon (right) channels in SR2. For presentational purposes, the tuγ signal distributions are normalized to a cross section corresponding to κ tuγ = 0.2 and are superimposed on the background expectations. The tcγ distributions are not shown, as they are identical to the tuγ distributions. The last bins include overflows. The vertical bars on the points depict the data statistical uncertainties and the hatched bands show the combined statistical and systematic uncertainties in the estimated background processes.

Figure 5: The BDT output distributions for the data, the background predictions, and the expected tcγ signal for the electron (left) and muon (right) channels in SR1 (upper) and SR2 (lower). The signal distribution is normalized to a cross section corresponding to κ tcγ = 0.10 (0.01) for ST (TT) and is stacked on the background expectations. The first bins include underflows, and the last bins include overflows. The vertical bars on the points depict the data statistical uncertainties and the hatched bands show the combined statistical and systematic uncertainties in the estimated background processes.

Table 1: Estimated background yields and observed event counts for the electron and muon channels in the signal regions SR1 and SR2. The uncertainties are the statistical and systematic contributions summed in quadrature.