A Guide to Constraining Effective Field Theories with Machine Learning

We develop, discuss, and compare several inference techniques to constrain theory parameters in collider experiments. By harnessing the latent-space structure of particle physics processes, we extract extra information from the simulator. This augmented data can be used to train neural networks that precisely estimate the likelihood ratio. The new methods scale well to many observables and high-dimensional parameter spaces, do not require any approximations of the parton shower and detector response, and can be evaluated in microseconds. Using weak-boson-fusion Higgs production as an example process, we compare the performance of several techniques. The best results are found for likelihood ratio estimators trained with extra information about the score, the gradient of the log likelihood function with respect to the theory parameters. The score also provides sufficient statistics that contain all the information needed for inference in the neighborhood of the Standard Model. These methods enable us to put significantly stronger bounds on effective dimension-six operators than the traditional approach based on histograms. They also outperform generic machine learning methods that do not make use of the particle physics structure, demonstrating their potential to substantially improve the new physics reach of the LHC legacy results.

An important aspect of the legacy of the Large Hadron Collider (LHC) experiments will be precise constraints on indirect signatures of physics beyond the Standard Model (SM), parameterized for instance by the dimension-six operators of the Standard Model effective field theory (SMEFT). The relevant measurements can easily involve tens of different parameters that predict subtle kinematic signatures in the high-dimensional space of the data. Traditional analysis techniques do not scale well to this complex problem, motivating the development of more powerful techniques.
The analysis of high-energy-physics data is based on an impressive suite of simulation tools that model the hard interaction, parton shower, hadronization, and detector response. The community has invested a tremendous amount of effort into developing these tools, yielding the high-fidelity modeling of LHC data needed for precision measurements. Simulators such as Pythia [1] and Geant4 [2] use Monte-Carlo techniques to sample the multitudinous paths through which a particular hard scattering might develop. A single event's path through the simulation can easily involve many millions of random variables. While Monte-Carlo techniques can efficiently sample from the distributions implicitly defined by the simulators, it is not feasible to calculate the likelihood for a particular observation because doing so would require integrating over all the possible histories leading to that observation. Clearly it is infeasible to explicitly calculate a numerical integral over this enormous latent space. While this problem is ubiquitous in high energy physics, it is rarely acknowledged explicitly.
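The distinction between cheap forward sampling and an intractable likelihood can be caricatured in a few lines (a deliberately trivial toy, not any of the simulators above):

```python
import random

def simulate_event(theta, n_latent=1000, seed=None):
    """Toy forward simulator: easy to sample from, but evaluating p(x|theta)
    would require integrating over all n_latent latent random draws."""
    rng = random.Random(seed)
    x = theta
    for _ in range(n_latent):       # each draw is one latent variable z_i
        x += rng.gauss(0.0, 0.01)   # stand-in for shower/detector stochasticity
    return x

# forward mode: generating events is cheap ...
events = [simulate_event(theta=1.0, seed=i) for i in range(100)]

# ... but there is no function returning p(x|theta): the density is only
# defined implicitly by the program, via an intractable latent-space integral
```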
Traditionally, particle physicists have approached this problem by restricting the analysis to one or two well-motivated discriminating variables, discarding the information contained in the remaining observables. The probability density for the restricted set of discriminating variables is then estimated with explicit functions or non-parametric approaches such as template histograms, kernel density estimates, or Gaussian Processes [3]. These low-dimensional density estimates are constructed and validated using Monte-Carlo samples from the simulation. While well-chosen variables may yield precise bounds along individual directions of the parameter space, they often lead to weak constraints in other directions in the parameter space [4]. The sensitivity to multiple parameters can be substantially improved by using the fully differential cross section. This is the forte of the Matrix Element Method [5][6][7][8][9][10][11][12][13][14][15][16][17][18][19] and Optimal Observables [20][21][22] techniques, which are based on the parton-level structure of a given process. Shower and event deconstruction [23][24][25][26] extend this approach to the parton shower. But all these methods still require some level of approximations on the parton shower and either neglect or crudely approximate the detector response. Moreover, even a simplified description of the detector effects requires the numerically expensive evaluation of complicated integrals for each observed event. None of these established approaches scales well to high-dimensional problems with many parameters and observables, such as the SMEFT measurements.
In recent years there has been increased appreciation that several real-world phenomena are better described by simulators that do not admit a tractable likelihood. This appears in fields as diverse as ecology, phylogenetics, epidemiology, cardiac simulators, quantum chemistry, and particle physics. Inference in this setting is often referred to as likelihood-free inference, where the inference strategy is restricted to samples generated from the simulator. Implicitly, these techniques aim to estimate the likelihood. A particularly ubiquitous technique is Approximate Bayesian Computation (ABC) [27][28][29][30][31][32][33]. ABC is closely related to the traditional template histogram and kernel density estimation approach used by physicists. More recently, approximate inference techniques based on machine learning and neural networks have been proposed [34][35][36][37][38][39][40][41][42][43][44][45][46][47][48][49][50][51]. All these techniques have in common that they only take into account simulated samples similar to the actual observables: they do not exploit the structure of the process that generates them.
We develop new simulation-based inference techniques that are tailored to the structure of particle physics processes. The key insight behind these methods is that we can extract more information than just samples from the simulations, and that this additional information can be used to efficiently train neural networks that precisely estimate likelihood ratios, the preferred test statistics for LHC measurements. These methods are designed for scalability to both high-dimensional parameter spaces and many observables. They do not require any simplifying assumptions about the underlying physics: they support state-of-the-art event generators with parton shower, reducible and irreducible backgrounds, and full detector simulations. After an upfront training phase, they are very efficient to evaluate. Our tools directly provide an estimator for the likelihood ratio, an intuitive and easily interpretable quantity. Finally, limits derived from these tools with toy experiments have the reassuring property that even if they might not be optimal, they are never wrong, i.e., no parameter points are claimed to be excluded that should not be excluded at a given confidence level.
In Ref. [52], the companion paper of this publication, we focus on the key ideas and sensitivity enabled by these techniques. Reference [53] presents the methods in a more abstract setting. Here we describe the actual algorithms in detail, developing several different methods side by side. Given the number of discussed variations, this publication might have the look and feel of a review article and we present it as a guide to the interested practitioner. We focus on the main ideas and differences between the approaches and postpone many technical details until the appendices.
We evaluate the performance of these different methods on a specific example problem, the measurement of two dimension-six operators in Higgs production in weak boson fusion (WBF) in the four-lepton mode at the LHC. For part of this analysis, we work in an idealized setting in which we can access the true likelihood function, providing us with a ground truth for the comparison of the different analysis methods. After establishing the precision of the likelihood ratio estimation, we turn towards the more physical question of how strongly the two operators can be constrained with the different techniques. We repeat the analysis with a simplified detector response where the ground-truth likelihood is no longer tractable.
We begin by laying out the problem in Sec. II: we summarize the effective field theory idea, list the challenges posed by EFT measurements, translate the problem from a physics perspective into the language of statistics, and discuss its important structural properties. We also set up the example process used throughout the rest of the paper. The description of the analysis methods is split into two parts: in Sec. III we define the different techniques to estimate the likelihood ratio, which includes most of the conceptual work presented here. Section IV then explains how to set limits on the EFT parameters based on these tools. In Sec. V we evaluate the performance of the different tools in our example process. Finally, in Sec. VI we summarize our findings and give recommendations for practitioners. The appendices describe the different algorithms in more detail and provide additional results. The code and data used for this paper are available online at Ref. [54].

II. THE EFT MEASUREMENT PROBLEM
A. Effective field theory

Effective field theories (EFTs) [55][56][57] parameterize the effects of physics at an energy scale Λ on observables at smaller energies E ≪ Λ as a set of local operators. The form of these operators is fixed by the light particles and the symmetry structure of the theory and is entirely independent of the high-energy model. Systematically expanding the Lagrangian in 1/Λ, equivalent to ordering the operators by their canonical dimension, leaves us with a finite set of operators weighted by Wilson coefficients that describe all possible new physics effects up to some order in E/Λ.
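Schematically, the resulting expansion of the Lagrangian reads (keeping only the dimension-six terms relevant here):

```latex
\mathcal{L}_{\text{eff}} = \mathcal{L}_{\text{SM}}
  + \sum_o \frac{f_o}{\Lambda^2}\, \mathcal{O}_o
  + \mathcal{O}\!\left(\frac{1}{\Lambda^4}\right)
```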
In the absence of new particles at the TeV scale, and assuming the symmetry structure of the SM, we can thus describe any new physics signature in LHC processes in terms of a set of higher-dimensional operators [58][59][60][61][62][63]. In this SM Effective Field Theory (SMEFT), the leading effects beyond the SM come from 59 independent dimension-six operators O_o with Wilson coefficients f_o, where the SM corresponds to all f_o = 0 and any measurement of a deviation hints at new physics. The dimension-six Wilson coefficients are perfectly suited as an interface between experimental measurements and theory interpretations. They are largely model-independent, can parameterize a wide range of observables, including novel kinematic features, and are theoretically consistent beyond tree level. On the technical side, dimension-six operators are implemented in standard Monte-Carlo event generators [64], allowing us to generate predictions for rates and kinematic observables for any combination of Wilson coefficients. Measured values of f_o/Λ² can easily be translated to the parameters of specific models through well-established matching procedures [65]. All in all, SMEFT measurements will likely be a key part of the legacy of the LHC experiments [66].
Let us briefly comment on the question of EFT validity. A hierarchy of energy scales E ≪ Λ is the key assumption behind the EFT construction, but in a bottom-up approach the cutoff scale Λ cannot be known without additional model assumptions. From a measurement f_o/Λ² ≠ 0 we can estimate the new physics scale Λ only by assuming a characteristic size of the new physics couplings √f_o, and compare it to the energy scale E of the experiment. It has been found that dimension-six operators often capture the dominant effects of new physics even when there is only a moderate scale separation E ≲ Λ [67]. All these concerns are not primarily of interest for the measurement of Wilson coefficients, but rather important for the interpretation of the results in specific UV theories.

B. Physics challenges and traditional methods
EFT measurements at the LHC face three fundamental challenges:

1. Individual scattering processes at the LHC are sensitive to several operators and require simultaneous inference over a multi-dimensional parameter space. While a naive parameter scan works well for one or two dimensions, it becomes prohibitively expensive for more than a few parameters.
2. Most operators introduce new coupling structures and predict non-trivial kinematic features. These do not translate one-to-one to traditional kinematic observables such as transverse momenta, invariant masses, or angular correlations. An analysis based on only one kinematic variable typically cannot constrain the full parameter space efficiently. Instead, most of the operator effects only become fully apparent when multiple such variables, including their correlations, are analysed [4,68].

3. The likelihood function of the observables is intractable, making this the setting of "likelihood-free inference" or "simulator-based inference". There are simulators for the high-energy interactions, the parton shower, and detector effects that can generate event samples for any theory parameter values, but they can only be run in the forward mode. Given a set of reconstruction-level observables, it is not possible to evaluate the likelihood of this observation for different theory parameters. The reason is that this likelihood includes the integral over all possible parton shower histories and particle trajectories through the detector as a normalizing constant, which is infeasible to calculate in realistic situations. We will discuss this property in more detail in the following section.
The last two issues are typically addressed in one of three ways. Most commonly, a small set of discriminating variables (also referred to as summary statistics or engineered features) is handpicked for a given problem. The likelihood in this low-dimensional space is then estimated, for instance, by filling histograms from simulations. While well-chosen variables may lead to good constraints along individual directions of the parameter space, there are typically directions in the parameter space with limited sensitivity [4,68].
The Matrix Element Method [5,6,[8][9][10][11][12][13][14][15][17][18][19] or Optimal Observables [20][21][22] go beyond a few specific discriminating variables and use the matrix element for a particular process to estimate the likelihood ratio. While these techniques can be very powerful, they suffer from two serious limitations. The parton shower and detector response are either entirely neglected or approximated through ad-hoc transfer functions. Shower and event deconstruction [23][24][25][26] allow for the calculation of likelihood ratios at the level of the parton shower, but still rely on transfer functions to describe the detector response. Finally, even with such a simple description of the shower and detector, the evaluation of the likelihood ratio estimator requires the numerically expensive computation of large integrals for each observed event.
Finally, there is a class of generic methods for likelihood-free inference. For Bayesian inference, the best-known approach is Approximate Bayesian Computation (ABC) [27][28][29][30][31][32]. Similar to the histogram approach, it relies on the choice of appropriate low-dimensional summary statistics, which can severely limit the sensitivity of the analysis. Different techniques based on machine learning have been developed recently. In particle physics, the most common examples are discriminative classifiers between two discrete hypotheses, such as a signal and a background process. This approach has recently been extended to parameter measurements [34,35]. More generally, many techniques based on the idea of using a classification model, such as neural networks, for inference in the absence of a tractable likelihood function have been introduced in the machine learning community [36][37][38][39][40][41][42][43][44][45][46][47][48][49][50][51]. All of these methods only require samples of events generated according to different parameter points. They do not make use of the structure of the particle physics processes, and thus do not use all available information.
All of these methods come with a price. We develop new techniques that

• are tailored to particle physics measurements and leverage their structural properties,
• scale well to high-dimensional parameter spaces,
• can accommodate many observables,
• capture the information in the fully differential cross sections, including all correlations between observables,
• fully support state-of-the-art simulators with parton showers and full detector simulations, and
• are very efficient to evaluate after an upfront training phase.

Particle-physics structure
One essential step to finding the optimal measurement strategy is identifying the structures and symmetries of the problem. Particle physics processes, in particular those described by effective field theories, typically have two key properties that we can exploit.
First, any high-energy particle physics process factorizes into the parton-level process, which contains the matrix element and with it the entire dependence on the EFT coefficients, and a residual part describing the parton shower and detector effects. In many plausible scenarios of new physics neither the strong interactions in the parton shower nor the electromagnetic and strong interactions in the detector are affected by the parameters of interest. The likelihood function can then be written as

p(x|θ) = ∫ dz p(x|z) p(z|θ) .    (2)

Here and in the following, x are the actual observables after the shower, detector, and reconstruction; θ are the theory parameters of interest; and z are the parton-level momenta (a subset of the latent variables). Table I provides a dictionary of these and other important symbols that we use.
The first ingredient to this likelihood function is the distribution of parton-level four-momenta,

p(z|θ) = 1/σ(θ) · dσ(θ)/dz ,    (3)

where σ(θ) and dσ(θ)/dz are the total and differential cross sections, respectively. Crucially, this function is tractable: the matrix element and the parton density functions can be evaluated for arbitrary four-momenta z and parameter values θ. In practice this means that matrix-element codes such as MadGraph [64] can not only be run in a forward, generative mode, but also define functions that return the squared matrix element for a given phase-space point z. Unfortunately, there is typically no user-friendly interface to these functions, so evaluating them requires some work. Second, the conditional density p(x|z) describes the probabilistic evolution from the parton-level four-momenta to observable particle properties. While this symbol looks innocuous, it represents the full parton shower, the interaction of particles with the detector material, the sensor response and readout, and the reconstruction of observables. Simulators such as Pythia [1], Geant4 [2], or Delphes [69] are often used to generate samples {x} ∼ p(x|z) for given parton-level momenta z. This sampling implicitly involves the Monte-Carlo integration over all possible shower histories and detector interactions,

p(x|z) = ∫ dz_detector ∫ dz_shower p(x|z_detector) p(z_detector|z_shower) p(z_shower|z) .    (4)

This enormous latent space can easily involve many millions of random numbers, and these integrals are clearly intractable, which we denote with the red symbol p (see Table I). In other words, given a set of reconstruction-level observables x, we cannot calculate the likelihood function p(x|z) that describes the compatibility of parton-level momenta z with the observation. By extension, we also cannot evaluate p(x|θ), the likelihood function of the theory parameters given the observation. The intractable integrals in Eq. (4) are the crux of the EFT measurement problem. The factorization of Eq. (2) together with the tractability of the parton-level likelihood p(z|θ) is immensely important.
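As a toy illustration of this tractability, one can define a stand-in "squared matrix element" and normalize it numerically (all functions here are invented for illustration, not a real process):

```python
import numpy as np

def dsigma_dz(z, theta):
    """Invented stand-in for a squared matrix element |M(z|theta)|^2,
    with SM, interference-like, and pure-BSM pieces."""
    return np.exp(-z) * (1.0 + theta * z + theta**2 * z**2)

GRID = np.linspace(0.0, 20.0, 2001)  # one-dimensional toy phase space

def parton_likelihood(z, theta):
    """p(z|theta) = (1/sigma(theta)) dsigma(theta)/dz, cf. Eq. (3)."""
    sigma = np.trapz(dsigma_dz(GRID, theta), GRID)  # 'total cross section'
    return dsigma_dz(z, theta) / sigma

# tractable and properly normalized for any parameter value theta:
print(np.trapz(parton_likelihood(GRID, theta=0.5), GRID))  # 1.0 up to float error
```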
We will refer to the combination of these two properties as particle-physics structure. The far-reaching consequences of this structure for EFT measurements will be the topic of Sec. III B. Many (but not all) of the inference strategies we discuss will rely on this condition.

[Table I: Dictionary defining many symbols that appear in this paper, including the score estimator t̂(x|θ) and the morphing basis points, coefficients, and densities of Eq. (6). Red symbols denote intractable likelihood functions. The last three rows explain our conventions for indices.]
Note that this Markov property holds even with reducible and irreducible backgrounds and when a matching scheme is used to combine different parton-level multiplicities. In these situations there may be different disjoint parts of z space, even with different dimensionalities, for instance when events with n and n + 1 partons in the final state can lead to the same configuration of observed jets. The integral over z then has to be replaced with a sum over "z n spaces" and an integral over each z n , but the logic remains unchanged.

Operator morphing
Effective field theories (and other parameterisations of indirect signatures of new physics) typically contribute a finite number of amplitudes to a given process, each of which is multiplied by a function of the Wilson coefficients. In this case the likelihood can be written as

p(z|θ) = Σ_c w̄_c(θ) f_c(z) ,    (5)

where c labels the different amplitude components, the w̄_c(θ) are known functions of the theory parameters, and the functions f_c(z) are not necessarily positive definite or normalized.
The simplest example is a process in which one SM amplitude M_0(z) interferes with one new physics amplitude M_BSM(z|θ) = θ M_1(z), which scales linearly with a new physics parameter θ. The differential cross section, proportional to the squared matrix element, is then

dσ(z|θ) ∝ |M_0(z)|² + 2θ Re M_0(z)† M_1(z) + θ² |M_1(z)|² .

There are three components, representing the SM, interference, and pure BSM terms, each with its own parameter dependence w̄_c(θ) and momentum dependence f_c(z).
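This decomposition can be made concrete with toy amplitudes (purely illustrative functions, not real matrix elements):

```python
import numpy as np

def m0(z):   # toy 'SM' amplitude (taken real for simplicity)
    return np.exp(-0.5 * z)

def m1(z):   # toy 'BSM' amplitude, entering as theta * m1(z)
    return z * np.exp(-0.5 * z)

# momentum-dependent components f_c(z) and parameter dependences wbar_c(theta)
f_components = [lambda z: m0(z) ** 2,           # SM term
                lambda z: 2.0 * m0(z) * m1(z),  # interference term
                lambda z: m1(z) ** 2]           # pure-BSM term
wbar = [lambda th: 1.0, lambda th: th, lambda th: th ** 2]

def dsigma(z, theta):
    """Reassemble dsigma(z|theta) ~ |m0(z) + theta*m1(z)|^2 from the components."""
    return sum(w(theta) * fc(z) for w, fc in zip(wbar, f_components))

# the component sum reproduces the squared amplitude exactly:
assert np.isclose(dsigma(1.3, 0.7), (m0(1.3) + 0.7 * m1(1.3)) ** 2)
```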
We can then pick a number of basis parameter points θ_c equal to the number of components c in Eq. (5). They can always be chosen such that the matrix W_cc' = w̄_c'(θ_c) is invertible, which allows us to rewrite Eq. (5) as a mixture model

p(z|θ) = Σ_c w_c(θ) p_c(z)    (6)

with weights w_c(θ) = Σ_c' w̄_c'(θ) (W⁻¹)_c'c and (now properly normalized) basis densities p_c(z) = p(z|θ_c). The weights w_c(θ) depend on the choice of basis points and are analytically known. This "morphing" procedure therefore allows us to extract the full likelihood function p(z|θ) from a finite set of evaluations of basis densities p_c(z).
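For the single-parameter example above, with parameter dependences (1, θ, θ²), the morphing construction reduces to a small linear-algebra exercise (a sketch with three hand-picked basis points, not the 15-point basis used later):

```python
import numpy as np

def wbar(theta):
    """Parameter dependences of the three components for the quadratic example."""
    return np.array([1.0, theta, theta ** 2])

# choose as many basis points theta_c as there are components
basis_points = [-1.0, 0.0, 1.0]
W = np.stack([wbar(tc) for tc in basis_points])  # W_{cc'} = wbar_{c'}(theta_c)
W_inv = np.linalg.inv(W)                         # requires an invertible choice

def morphing_weights(theta):
    """w_c(theta) = sum_{c'} wbar_{c'}(theta) (W^-1)_{c'c}, cf. Eq. (6)."""
    return wbar(theta) @ W_inv

# at a basis point the mixture collapses onto the corresponding basis density,
# and for any other theta the weights reproduce wbar(theta) exactly:
assert np.allclose(morphing_weights(0.0), [0.0, 1.0, 0.0])
assert np.allclose(morphing_weights(0.6) @ W, wbar(0.6))
```

With these weights, p(z|θ) = Σ_c w_c(θ) p(z|θ_c) then holds for every θ.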
Calculating the full statistical model through morphing requires the likelihood p(z|θ) to be tractable, which is true for parton-level momenta as argued above. However, the same trick can be applied even when the exact likelihood is intractable, but we can estimate it. For instance, the marginal distribution of any individual kinematic variable v(x) can be reliably estimated through histograms or other density estimation techniques, even when shower and detector effects are taken into account. The morphing procedure then lets us evaluate the full conditional distribution p(v|θ) based on a finite number of Monte-Carlo simulations [70].
Finally, note that Eq. (6) together with Eq. (2) imply

p(x|θ) = Σ_c w_c(θ) p_c(x)

even if the likelihood function p(x|θ) and the components p_c(x) = p(x|θ_c) are intractable. This will later allow us to impose the morphing structure on likelihood ratio estimators. Not all EFT amplitudes satisfy the morphing structure in Eq. (5), so we discuss both measurement strategies that rely on and make use of this property as well as more general ones that do not require it to hold.

Weak-boson-fusion Higgs to four leptons
As an explicit example LHC process we consider Higgs production in weak boson fusion (WBF) with a decay of the Higgs into four leptons, with ℓ = e, µ, as shown in Fig. 1. While this process is rare and is likely to only be observed during the high-luminosity run of the LHC, it has a few compelling features that make it a prime candidate to study the efficient extraction of information. First, the two jets from the quarks and in particular the four leptons can be reconstructed quite precisely in the LHC detectors.

[Figure 1: Feynman diagram for Higgs production in weak boson fusion in the 4ℓ mode. The red dots show the Higgs-gauge interactions affected by the dimension-six operators of our analysis.]

Even after imposing on-shell conditions and energy-momentum conservation, the final-state momenta span a 16-dimensional phase space, giving rise to many potentially informative observables. Second, both the production of the Higgs boson in weak boson fusion and its decay into four leptons are highly sensitive to the effects of new physics in the Higgs-gauge sector. We parameterize these with dimension-six operators in the SMEFT, following the conventions of the Hagiwara-Ishihara-Szalapski-Zeppenfeld basis [62]. For simplicity, we limit our analysis to the two particularly relevant operators

O_W = (ig/2) (D^µ φ)† σ^a (D^ν φ) W^a_µν   and   O_WW = −(g²/4) (φ† φ) W^a_µν W^µν a .

For convenience, we rescale the Wilson coefficients to the dimensionless parameters of interest

θ_1 = f_W v²/Λ²   and   θ_2 = f_WW v²/Λ² ,

where v = 246 GeV is the electroweak vacuum expectation value. As alluded to above, the validity range of the EFT cannot be determined in a model-independent way. For moderately weakly to moderately strongly coupled underlying new-physics models, one would naively expect |f_o| ≲ O(1) and the EFT description to be useful in the range E ≈ v ≪ Λ, or −1 ≲ θ_o ≲ 1. This is the parameter range we analyse in this paper. The interference between the Standard Model amplitudes and the dimension-six operators leads to an intricate relation between the observables and parameters in this process, which has been studied extensively. The precise measurement of the momenta of the four leptons provides access to a range of angular correlations that fully characterize the h → ZZ decay [10,71]. These variables are sensitive to the effects of dimension-six operators. But the momentum flow p through the decay vertex is limited by the Higgs mass, so the relative effects of these dimension-six operators are suppressed by a factor p²/Λ². The Higgs production through two off-shell gauge bosons with potentially high virtuality, on the other hand, does not suffer from this suppression.
The properties of the two jets recoiling against them are highly sensitive to operator effects in this vertex [72][73][74][75].
In Fig. 2 we show example distributions of two particularly informative observables: the transverse momentum of the leading (higher-p_T) jet, p_T,j1, and the azimuthal angle between the two jets, ∆φ_jj. The two quantities are sensitive to different directions in parameter space. Note also that the interference between the different amplitudes can give rise to non-trivial effects. The size of the dimension-six amplitudes grows with momentum transfer, which is strongly correlated with the transverse momentum of the leading jet. If the interference of new-physics amplitudes with the SM diagrams is destructive, this can drive the total amplitude through zero [67].

[Figure 2: Kinematic distributions in our example process for three example parameter points. We assume an idealized detector response to be discussed in Sec. II D 2. Left: transverse momentum of the leading (higher-p_T) jet, a variable strongly correlated with the momentum transfer in the process. The dip around 350 GeV is a consequence of the amplitude being driven through zero, as discussed in the text. Right: separation in azimuthal angle between the two jets.]

The jet momentum
distribution then dips and rises again with higher energies, as seen in the red curve in the left panel of Fig. 2. Such depleted regions of low probability can lead to very small or large likelihood ratios and potentially pose a challenge to inference methods. By analysing the Fisher information in these distributions, it is possible to compare the discrimination power in these two observables to the information contained in the full multivariate distribution or to the information in the total rate. It turns out that the full multivariate distribution p(z|θ) contains significantly more information than the one-dimensional and two-dimensional marginal distributions of any standard kinematic variables [4]. The total rate is found to carry much less information on the two operators, in particular when systematic uncertainties on the cross sections are taken into account. In this study we therefore only analyse the kinematic distributions for a fixed number of observed events.
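Schematically, for a binned analysis with independent Poisson counts, the Fisher information can be assembled as follows (toy numbers, not the actual WBF analysis):

```python
import numpy as np

def fisher_information(nu, dnu_dtheta):
    """Fisher information of independent Poisson bins:
    I_ij = sum_b (d nu_b / d theta_i)(d nu_b / d theta_j) / nu_b,
    where nu_b are expected bin counts and dnu_dtheta has shape
    (n_params, n_bins)."""
    dnu = np.asarray(dnu_dtheta)
    return np.einsum('ib,jb->ij', dnu / nu, dnu)

# toy example: 4 bins, expected counts, and gradients w.r.t. two parameters
nu = np.array([100.0, 80.0, 40.0, 10.0])
dnu = np.array([[10.0, -5.0, 3.0, 2.0],    # d nu / d theta_1
                [ 0.0,  8.0, -4.0, 1.0]])  # d nu / d theta_2

info = fisher_information(nu, dnu)
# larger eigenvalues correspond to directions that are more strongly constrained
```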

Sample generation
Already in the sample generation we can make use of the structural properties of the process discussed in Sec. II C. The amplitude of this process factorizes into a sum of parameter-dependent factors times phase-space-dependent amplitudes, as given in Eq. (5). The effect of the operators O_W and O_WW on the total Higgs width breaks this decomposition, but this effect is tiny and in practice irrelevant when compared to the experimental resolution. The likelihood function of this process therefore follows the mixture model in Eq. (6) to good approximation, and the weights w_c(θ) can be calculated. Since the parton-level likelihood function is tractable, we can reconstruct the entire likelihood function p(z|θ) based on a finite number of simulator runs, as described in Sec. II C 2.
To this end, we first generate a parton-level sample {z_e} of 5.5 · 10⁶ events with MadGraph 5 [64] and its add-on MadMax [76][77][78], using the setup described in Ref. [4]. With MadMax we can evaluate the likelihood p(z_e|θ_c) for all events z_e and for 15 different basis parameter points θ_c. Calculating the morphing weights w_c(θ) finally gives us the true parton-level likelihood function p(z_e|θ) for each generated phase-space point z_e. In Fig. 3 we show the basis points θ_c and two of the morphing weights w_c(θ) with their dependence on θ. In some corners of parameter space the weights easily reach up to |w_c| ∼ O(100), and there are large cancellations between positive and negative weights. This will pose a challenge for the numerical stability of every inference algorithm that directly uses the morphing structure of the process, as we will discuss later. Other basis choices have led to comparable or larger morphing weights.
Parton shower and detector effects smear the observed particle properties x with respect to the parton-level momenta z and make the likelihood function in Eq. (2) intractable. We develop inference methods that can be applied exactly in this case and that do not require any simplifying assumptions on the shower and detector response. However, in this realistic scenario we cannot evaluate their performance by comparing them to the true likelihood ratio. We therefore test them first on an idealized scenario in which the four-momenta, flavor, and charges of the leptons, and the momenta of the partons, can be measured exactly, p(x|z) ≈ δ(x − z). In this approximation we can evaluate the likelihood p(x|θ).
After establishing the performance of the various algorithms in this idealized setup, we will analyse the effect of parton shower and detector simulation on the results. We generate an approximate detector-level sample by drawing events from a smearing distribution p(x|z) conditional on the parton-level momenta z. This smearing function is loosely motivated by the performance of the LHC experiments and is defined in Appendix A 1.
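As a rough illustration, a minimal Gaussian smearing function of this kind might look as follows (the resolutions here are invented placeholders, not the values of Appendix A 1):

```python
import numpy as np

rng = np.random.default_rng(42)

def smear_event(z_partons, jet_res=0.10, lepton_res=0.02):
    """Hypothetical stand-in for p(x|z): Gaussian smearing of parton-level
    four-momenta (E, px, py, pz), with cruder resolution for jets than for
    leptons. Purely illustrative, not the smearing of Appendix A 1."""
    x = {}
    for label, momentum in z_partons.items():
        res = jet_res if label.startswith('j') else lepton_res
        scale = rng.normal(1.0, res)          # one energy-scale factor per object
        x[label] = np.asarray(momentum) * scale
    return x

z = {'j1': [250.0, 120.0, -80.0, 180.0], 'l1': [60.0, 30.0, 20.0, 45.0]}
x = smear_event(z)   # one detector-level event {x} ~ p(x|z)
```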

III. LIKELIHOOD RATIO ESTIMATION
According to the Neyman-Pearson lemma, the likelihood ratio is the most powerful test statistic to discriminate between two hypotheses θ_0 and θ_1. Unfortunately, the integral over the latent space z makes the likelihood function p(x|θ) as well as the likelihood ratio r(x|θ_0, θ_1) intractable. The first and crucial stage of all our EFT measurement strategies is therefore the construction of a likelihood ratio estimator r̂(x|θ_0, θ_1) that is as close to the true r(x|θ_0, θ_1) as possible and thus maximizes the discrimination power between θ_0 and θ_1. This estimation problem has several different aspects that we try to disentangle as much as possible. The first choice is the overall structure of the likelihood ratio estimator and its dependence on the theory parameters θ. We discuss this in Sec. III A. Section III B analyses what information is available and useful to construct (train) the estimators for a given process. Here we will introduce the main ideas that harness the structure of the EFT to increase the information that is used in the training process.
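For a toy model with a tractable likelihood, the Neyman-Pearson statistic can be computed directly; the whole point of the methods below is to estimate this quantity when, unlike here, p(x|θ) is intractable:

```python
import math

def log_gauss(x, mu, sigma=1.0):
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2.0 * math.pi))

def log_likelihood_ratio(x, theta0, theta1):
    """log r(x|theta_0, theta_1) for a toy model with tractable unit-width
    Gaussian likelihoods; at the LHC this is the quantity we must estimate."""
    return log_gauss(x, theta0) - log_gauss(x, theta1)

# by the Neyman-Pearson lemma, thresholding the (log) likelihood ratio of the
# observed events is the most powerful test of theta_0 = 0 against theta_1 = 1
x_observed = [0.2, -0.5, 1.1]
test_statistic = sum(log_likelihood_ratio(x, 0.0, 1.0) for x in x_observed)
```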
These basic concepts are combined into concrete strategies for the estimation of the likelihood ratio in Sec. III C. After training the estimators, there is an optional additional calibration stage, which we introduce in Sec. III D. Section III E describes the technical implementation of these strategies in terms of neural networks. Finally, we discuss the challenges that the different algorithms face in Sec. III F and introduce diagnostic tools for the uncertainties.
A. Modeling likelihood ratios

Likelihood ratios
There are different approaches to the structure of this estimator, in particular to the dependence on the theory parameters θ:

Point by point (PbP): A common strategy is to scan the parameter space, randomly or in a grid. To reduce the complexity of the scan one can keep the denominator θ1 fixed, while scanning only θ0. Likelihood ratios with other denominators can be extracted trivially as r̂(x|θ0, θ2) = r̂(x|θ0, θ1) / r̂(x|θ2, θ1). Instead of a single reference value θ1, we can also use a composite reference hypothesis p(x|θ1) → p_ref(x) = ∫ dθ1 π(θ1) p(x|θ1) with some prior π(θ1). This can reduce the regions in feature space with small reference likelihood p(x|θ1) and improve the numerical stability.
Only the final results are interpolated between the scanned values of θ 0 .
This approach is particularly simple, but discards all information about the structure and smoothness of the parameter space. For high-dimensional parameter spaces, the parameter scan can become prohibitively expensive. The final interpolation may introduce additional uncertainties.
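The denominator-change identity above can be checked numerically in a few lines. This is a minimal sketch with one-dimensional Gaussian toy densities of our own choosing, not part of the analysis chain:

```python
import math

def gauss(x, mu, sigma=1.0):
    # toy likelihood p(x|theta) with theta = mean of a unit Gaussian
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def r(x, theta_num, theta_den):
    return gauss(x, theta_num) / gauss(x, theta_den)

x = 0.7
theta0, theta1, theta2 = 0.0, 1.0, -0.5
# a ratio with a new denominator from two ratios sharing the reference theta1
lhs = r(x, theta0, theta2)
rhs = r(x, theta0, theta1) / r(x, theta2, theta1)
print(abs(lhs - rhs) < 1e-12)  # the identity holds exactly
```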
Agnostic parameterized estimators: Alternatively we can train one estimator r̂(x|θ0, θ1) as the full statistical model, a function of both x and the parameter combination (θ0, θ1) [34,79]. A modification is again to leave θ1 at a fixed reference value (or a fixed composite reference hypothesis with a prior π(θ1)) and only learn the dependence on x and θ0.
This parameterized approach leaves it to the estimator to learn the typically smooth dependence of the likelihood ratio on the physics parameters and does not require any interpolation in the end. No assumptions are made on the functional form of the dependence of the likelihood ratio on the parameters.
Morphing-aware estimators: For problems that satisfy the morphing condition of Eq. (6) and thus also Eq. (7), we can impose this structure and the explicit knowledge of the weights w_c(θ) onto the estimator. Again, one option is to keep the denominator fixed at a reference value (or composite reference hypothesis), leading to

r̂(x|θ0, θ1) = Σ_c w_c(θ0) r̂_c(x),   (12)

where the basis estimators r̂_c(x) = r̂(x|θ_c, θ1) only depend on x.
Alternatively, we can decompose both the numerator and denominator distributions to find [34,80]

r̂(x|θ0, θ1) = Σ_c w_c(θ0) [ Σ_c̃ w_c̃(θ1) / r̂_{c,c̃}(x) ]⁻¹,   (13)

with pairwise estimators r̂_{c,c̃}(x) = r̂(x|θ_c, θ_c̃).
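A minimal sketch of how morphing weights can be obtained and used, for a single coefficient with a quadratic dependence of the squared matrix element on θ. The toy density and the three basis points are our own illustration, not the 15-point basis of the example process:

```python
import numpy as np

# For one coefficient theta, |M|^2(theta) = a + b*theta + c*theta^2, so any
# p(x|theta) is a combination of the densities at three basis points theta_c.
basis = np.array([-1.0, 0.0, 1.0])

def morphing_weights(theta):
    # Solve sum_c w_c(theta) * theta_c^k = theta^k for k = 0, 1, 2:
    # this makes the combination exact for any quadratic dependence.
    A = np.vander(basis, 3, increasing=True).T   # rows: theta_c^k
    return np.linalg.solve(A, np.array([1.0, theta, theta**2]))

def p(x, theta):
    # unnormalized toy density, quadratic in theta
    return (1.0 + theta * x) ** 2

theta0, theta1 = 0.3, 0.0
x = 0.5
w = morphing_weights(theta0)
r_c = np.array([p(x, tc) / p(x, theta1) for tc in basis])  # basis ratios
r_morph = w @ r_c                                          # Eq. (12)
print(np.isclose(r_morph, p(x, theta0) / p(x, theta1)))    # True
```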

Score and local model
One remarkably powerful quantity is the score, defined as the relative tangent vector

t(x|θ) ≡ ∇_θ log p(x|θ).   (14)

It quantifies the relative change of the likelihood under infinitesimal changes in parameter space and can be seen as a local equivalent of the likelihood ratio. In a small patch around θ0 in which we can approximate t(x|θ) as independent of θ, Eq. (14) is solved by the local model

p_local(x|θ) = (1/Z(θ)) p(x|θ0) exp[ t(x|θ0) · (θ − θ0) ]   (15)

with a normalisation factor Z(θ). The local model is in the exponential family, and the t(x|θ0) are the sufficient statistics for p_local(x|θ). This is significant: if we can estimate the vector-valued function t(x|θ0) (with one component per parameter of interest) of the high-dimensional x, we can reduce the dimensionality of our space dramatically without losing any information, at least in the local model approximation [81]. In fact, ignoring the normalization factors, in the local model the likelihood ratio between θ0 and θ1 only depends on the scalar product t(x|θ0) · (θ0 − θ1), which will allow us to take this dimensionality reduction one step further and compress high-dimensional data x into a scalar without loss of power.
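A hedged illustration of the score and the local model with a toy density of our own: a unit Gaussian whose mean plays the role of the theory parameter. For this particular toy the local model happens to be exact, so the score reproduces the full likelihood ratio:

```python
import math

theta0 = 0.0

def log_p(x, theta):
    # toy: x ~ N(theta, 1)
    return -0.5 * (x - theta) ** 2 - 0.5 * math.log(2 * math.pi)

def score(x, theta):
    # t(x|theta) = d/dtheta log p(x|theta) = x - theta for this toy
    return x - theta

def log_r_local(x, theta):
    # Local model around theta0, Eq. (15): log[p0 exp(t*(theta-theta0))/Z(theta)];
    # for this Gaussian toy Z(theta) = exp((theta - theta0)^2 / 2)
    d = theta - theta0
    return score(x, theta0) * d - 0.5 * d ** 2

x, theta = 1.3, 0.4
exact = log_p(x, theta) - log_p(x, theta0)
print(abs(log_r_local(x, theta) - exact) < 1e-12)  # True: score is sufficient here
```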
In our example process, we are interested in the Wilson coefficients of two dimension-six operators, so the score vector has two components. In Fig. 4 we show the relation between these two score components and two informative kinematic variables, the jet p_T and the azimuthal angle ∆φ between the two jets. The score vector is closely related to these two kinematic quantities, but the relation is not quite one-to-one. Larger energy transfer, measured as larger jet p_T, increases the typical size of the score vector. The O_WW component of the score is particularly sensitive to the angular correlation variable, in agreement with detailed studies of this process [4].
B. Available information and its usefulness

General likelihood-free case
All measurement strategies have in common that the estimator r̂(x|θ0, θ1) is learned from data provided by Monte-Carlo simulations (the stochastic generative process). In the most general likelihood-free scenario, we can only generate samples of events {x_e} with x_e ∼ p(x|θ) through the simulator, and base an estimator r̂(x|θ0, θ1) on these generated samples.
One strategy [34] is based on training a classifier with decision function ŝ(x) between two equal-sized samples {x_e} ∼ p(x|θ0), labelled y_e = 0, and {x_e} ∼ p(x|θ1), labelled y_e = 1. The cross-entropy loss functional is minimized by the optimal decision function

s*(x) = p(x|θ1) / ( p(x|θ0) + p(x|θ1) ).   (17)
From the decision function ŝ(x) of a classifier we can therefore extract an estimator for the likelihood ratio as

r̂(x|θ0, θ1) = (1 − ŝ(x)) / ŝ(x).

This idea, sometimes called the likelihood ratio trick, is visualized in the left panel of Fig. 5.
As pointed out in Ref. [34], we can use the weaker assumption of any loss functional that is minimized by a decision function s(x) that is a strictly monotonic function of the likelihood ratio. The underlying reason is that the likelihood ratio is invariant under any transformation s(x) with this property. In practice, the output of any such classifier can be brought closer to the form of Eq. (17) through a calibration procedure, which we will discuss in Sec. III D.

Figure 5: Illustration of some key concepts with a one-dimensional Gaussian toy example. Left: classifiers trained to distinguish two sets of events generated from different hypotheses (green dots) converge to an optimal decision function s(x|θ0, θ1) (in red) given in Eq. (17). This lets us extract the likelihood ratio. Right: regression on the joint likelihood ratios r(x_e, z_e|θ0, θ1) of the simulated events (green dots) converges to the likelihood ratio r(x|θ0, θ1) (red line).
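The likelihood ratio trick can be sketched with a one-dimensional Gaussian toy in the spirit of Fig. 5. The minimal logistic classifier and its gradient-descent training below are our own illustration, not the networks used in the analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy samples from two hypotheses (1D Gaussians with unit width)
n = 20000
x0 = rng.normal(0.0, 1.0, n)   # theta0, label y = 0
x1 = rng.normal(1.0, 1.0, n)   # theta1, label y = 1
x = np.concatenate([x0, x1])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Logistic regression trained with the cross-entropy loss; for Gaussians with
# equal widths the optimal decision function is exactly logistic in x
w, b = 0.0, 0.0
for _ in range(2000):
    s = 1.0 / (1.0 + np.exp(-(w * x + b)))   # decision function s_hat(x)
    grad_w = np.mean((s - y) * x)            # gradient of the cross-entropy
    grad_b = np.mean(s - y)
    w -= 0.1 * grad_w
    b -= 0.1 * grad_b

# Likelihood ratio trick: r_hat = (1 - s_hat) / s_hat
x_test = 1.0
s_hat = 1.0 / (1.0 + np.exp(-(w * x_test + b)))
log_r_hat = np.log((1.0 - s_hat) / s_hat)
log_r_true = -(x_test - 0.5)                 # exact log p(x|theta0)/p(x|theta1)
print(abs(log_r_hat - log_r_true) < 0.2)
```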

Particle-physics structure
As we have argued in Sec. II C, particle physics processes have a specific structure that allows us to extract additional information. Most processes satisfy the factorization of Eq. (2) with a tractable parton-level likelihood p(z|θ). The generators not only provide samples {x_e}, but also the corresponding parton-level momenta (latent variables) {z_e} with (x_e, z_e) ∼ p(x, z|θ0). By evaluating the matrix elements at the generated momenta z_e for different hypotheses θ0 and θ1, we can extract the parton-level likelihood ratio p(z_e|θ0)/p(z_e|θ1). Since the distribution of x conditional on z is independent of the theory parameters, this is the same as the joint likelihood ratio

r(x_e, z_e|θ0, θ1) = p(x_e, z_e|θ0) / p(x_e, z_e|θ1) = p(z_e|θ0) / p(z_e|θ1).

So while we cannot directly evaluate the likelihood ratio at the level of measured observables r(x|θ0, θ1), we can calculate the likelihood ratio for a generated event conditional on the latent parton-level momenta.
The same is true for the score, i. e. the tangent vector or relative change of the (log) likelihood under infinitesimal changes of the parameters of interest. While the score t(x|θ0) = ∇_θ log p(x|θ)|_θ0 is intractable, we can extract the joint score

t(x_e, z_e|θ0) = ∇_θ log p(x_e, z_e|θ)|_θ0 = ∇_θ log p(z_e|θ)|_θ0

from the simulator. Again, all intractable parts of the likelihood cancel. We visualize the score in Fig. 6 and all available information on the generated samples in Fig. 7. It is worth repeating that we are not making any simplifying approximations about the process here: these statements are valid with reducible backgrounds, for state-of-the-art generators including higher-order matrix elements, matching of matrix element and parton shower, and with full detector simulations. But how does the availability of the joint likelihood ratio r(x, z|θ0, θ1) and the joint score t(x, z|θ) (which depend on the latent parton-level momenta z) help us to estimate the likelihood ratio r(x|θ0, θ1), which is the one we are interested in?
Consider the squared L2 loss functional for functions ĝ(x) that only depend on x, but which are trying to approximate a function g(x, z):

L[ĝ] = ∫ dx dz p(x, z|θ) | g(x, z) − ĝ(x) |².   (22)

Figure 7: Left: the full statistical model log r(x|θ, θ1) that we are trying to estimate. Right: the available information at the generated events (x_e, z_e). The dots mark the joint likelihood ratios log r(x_e, z_e|θ0, θ1), the arrows the scores t(x_e, z_e|θ0).
Via calculus of variations we find that the function g*(x) that extremizes L[ĝ] is the conditional expectation [53]

g*(x) = E[ g(x, z) | x ] = (1/p(x|θ)) ∫ dz p(x, z|θ) g(x, z).

We can make use of this general property in our problem in two ways. Identifying g(x_e, z_e) with the joint likelihood ratios r(x_e, z_e|θ0, θ1) (which we can calculate!) and θ = θ1, we find

g*(x) = (1/p(x|θ1)) ∫ dz p(x, z|θ1) r(x, z|θ0, θ1) = (1/p(x|θ1)) ∫ dz p(x, z|θ0) = r(x|θ0, θ1).

By minimizing the squared loss of a sufficiently expressive function r̂(x|θ0, θ1), we can therefore regress on the true likelihood ratio [53]! This is illustrated in the right panel of Fig. 5. Note that to get the correct minimum, the events (x_e, z_e) have to be sampled according to the denominator hypothesis θ1. We can also identify g(x_e, z_e) in Eq. (22) with the joint scores t(x_e, z_e|θ), which can also be extracted from the generator. In this case, g*(x) = t(x|θ).

Quantity | General likelihood-free | Particle physics
Joint likelihood p(x_e, z_e|θ) | — | ✓
Joint likelihood ratio r(x_e, z_e|θ0, θ1) | — | ✓
Joint score t(x_e, z_e|θ) | — | ✓
Likelihood ratio r(x|θ0, θ1) | — | ✓*
Score t(x|θ) | — | ✓*

Table II: Availability of different quantities from the generative process in the most general likelihood-free setup vs. in the particle-physics scenario with the structure given in Eq. (2). Asterisks (*) denote quantities that are not immediately available, but can be regressed from the corresponding joint quantity, as shown in Sec. III B.
Thus minimizing the squared error of a sufficiently expressive function t̂(x|θ) with respect to the joint score allows us to regress on the score t(x|θ) [53]. Now the (x_e, z_e) have to be sampled according to θ. We summarize the availability of the (joint) likelihood, likelihood ratio, and score in the most general likelihood-free setup and in particle physics processes in Table II. This is one of our key results and opens the door for powerful new inference methods. Particle physics processes involve the highly complex effects of parton shower, detector, and reconstruction, modelled by a generative process with a huge latent space and an intractable likelihood. Still, the specific structure of this class of processes allows us to calculate how much more or less likely a generated event becomes when we move in the parameter space of the theory. We have shown that by regressing on the joint likelihood ratios or scores extracted in this way, we can recover the actual likelihood ratio or score as a function of the observables!
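A hedged numeric sketch of this score regression on a toy smearing model (all densities are our own stand-ins: z ~ N(θ, 1) plays the role of the parton-level momenta, x = z + noise that of the detector response):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100000
theta0 = 0.0

# Toy generative process: latent z ~ N(theta, 1), observed x = z + noise
z = rng.normal(theta0, 1.0, n)
x = z + rng.normal(0.0, 1.0, n)      # "detector smearing", width 1

# Joint score, computable from the tractable latent density:
# t(x,z|theta0) = d/dtheta log N(z; theta, 1) at theta0 = z - theta0
t_joint = z - theta0

# Regress the joint score on x alone (here a linear fit suffices);
# the minimizer of the squared loss is the true score t(x|theta0)
slope = np.sum(x * t_joint) / np.sum(x * x)

# Analytic check: x ~ N(theta, 2), so t(x|theta0) = (x - theta0)/2
print(abs(slope - 0.5) < 0.02)
```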

C. Strategies
Let us now combine the estimator structure discussed in Sec. III A with the different quantities available during training discussed in Sec. III B and define our strategies to estimate the likelihood ratio. Here we restrict ourselves to an overview of the main ideas of the different approaches. A more detailed explanation and technical details can be found in Appendix A 2.

General likelihood-free case
Some approaches are designed for the most general likelihood-free scenario and only require the samples {x_e} from the generator:

Histograms of observables: The traditional approach to kinematic analyses relies on one or two kinematic variables v(x), manually chosen for a given process and set of parameters. Densities p̂(v(x)|θ) are estimated by filling histograms with generated samples, leading to the likelihood ratio

r̂(x|θ0, θ1) = p̂(v(x)|θ0) / p̂(v(x)|θ1).
We use this algorithm point by point in θ 0 , but a morphing-based setup is also possible (see Sec. II C 2). We discuss the histogram approach in more detail in Appendix A 2 a.
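A minimal sketch of the histogram approach with one-dimensional Gaussian toy summaries as stand-ins for v(x):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy events under two hypotheses; v(x) is a one-dimensional summary statistic
v0 = rng.normal(0.0, 1.0, 50000)   # v(x_e) for x_e ~ p(x|theta0)
v1 = rng.normal(0.5, 1.0, 50000)   # v(x_e) for x_e ~ p(x|theta1)

# Histogram density estimates with a common binning
bins = np.linspace(-4.0, 4.5, 40)
p0, _ = np.histogram(v0, bins=bins, density=True)
p1, _ = np.histogram(v1, bins=bins, density=True)

def r_hat(v):
    # histogram-based likelihood ratio estimate p_hat(v|theta0)/p_hat(v|theta1)
    i = np.clip(np.digitize(v, bins) - 1, 0, len(p0) - 1)
    return p0[i] / np.maximum(p1[i], 1e-12)

# For these Gaussian toys the true ratio at v = 0 is exp(0.125) ~ 1.13
print(abs(r_hat(0.0) - np.exp(0.125)) < 0.1)
```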
Approximate Frequentist Computation (Afc): Approximate Bayesian Computation (Abc) is currently the most widely used method for likelihood-free inference in a Bayesian setup. It allows one to sample parameters from the intractable posterior, θ ∼ p(θ|x) = p(x|θ)p(θ)/p(x). Essentially, Abc relies on approximating the likelihood function through a rejection probability

p̂(x|θ) = (1/N) Σ_e K_ε( v(x_e) − v(x) )   (29)

with x_e ∼ p(x|θ), a kernel K that depends on a bandwidth ε, and a sufficiently low-dimensional summary statistics v(x).
Inference in particle physics is usually performed in a frequentist setup, so this sampling mechanism is not immediately useful. But we can define a frequentist analogue, which we call "Approximate Frequentist Computation" (Afc). In analogy to the rejection probability in Eq. (29), we can define a kernel density estimate for the likelihood function as

p̂(x|θ) = (1/N) Σ_e K_ε( v(x) − v(x_e) )

with x_e ∼ p(x|θ). The corresponding likelihood ratio estimator is

r̂(x|θ0, θ1) = p̂(x|θ0) / p̂(x|θ1).
We use this approach point by point in θ 0 with a fixed reference θ 1 . As summary statistics, we use subsets of kinematic variables, similar to the histogram approach. We give more details in Appendix A 2 b.
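A minimal Afc sketch with a Gaussian kernel density estimate on one-dimensional toy summaries; the kernel choice and bandwidth are illustrative, not those used in the example process:

```python
import numpy as np

rng = np.random.default_rng(3)

def kde(v_eval, v_samples, eps=0.2):
    # Gaussian kernel density estimate of p_hat(v|theta) with bandwidth eps
    u = (v_eval - v_samples) / eps
    return np.mean(np.exp(-0.5 * u**2)) / (eps * np.sqrt(2 * np.pi))

# Toy summaries v(x_e) under the two hypotheses
v0 = rng.normal(0.0, 1.0, 50000)
v1 = rng.normal(0.5, 1.0, 50000)

def r_hat(v):
    # Afc likelihood ratio estimate from the two kernel density estimates
    return kde(v, v0) / kde(v, v1)

# For these Gaussian toys the true ratio at v = 0 is exp(0.125) ~ 1.13
print(abs(r_hat(0.0) - np.exp(0.125)) < 0.1)
```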
Calibrated classifiers (Carl): As discussed in Sec. III B 1, the decision function ŝ(x|θ0, θ1) of a classifier trained to discriminate samples generated according to θ0 from θ1 can be turned into an estimator for the likelihood ratio,

r̂(x|θ0, θ1) = (1 − ŝ(x|θ0, θ1)) / ŝ(x|θ0, θ1).

This is illustrated in the left panel of Fig. 5.
If the classifier does not learn the optimal decision function of Eq. (17), but any monotonic function of the likelihood ratio, a calibration procedure can improve the performance significantly. We will discuss this in Sec. III D below.
We implement this strategy point by point in θ0, as an agnostic parameterized classifier ŝ(x|θ0, θ1) that learns the dependence on both x and θ0, as well as a morphing-aware parameterized classifier. More details are given in Appendix A 2 c.
Neural conditional density estimators (Nde): Several other methods for conditional density estimation have been proposed, often based on neural networks [36-42]. One particularly interesting class of methods is based on the idea of expressing the target density as a sequence of invertible transformations applied to a simple base density, such as a Gaussian [43-46,51]. The density in the target space is then given by the base density and the Jacobian determinant of the transformation. A closely related and successful alternative is the class of neural autoregressive models [47-50], which factorize the target density into a sequence of simpler conditional densities. Both classes of estimators are trained by maximizing the log likelihood.
We leave a detailed discussion of these techniques for particle physics problems as well as an implementation in our example process for future work.

Particle-physics structure
As we argued in Sec. III B, particle physics simulations let us extract the joint likelihood ratio r(x_e, z_e|θ0, θ1) and the joint score t(x_e, z_e|θ0), giving rise to strategies tailored to this class of problems:

Ratio regression (Rolr, for "regression on likelihood ratio"): We can directly regress on the likelihood ratio with an estimator r̂(x|θ0, θ1). As shown in the previous section, the squared error loss between a function r̂(x_e|θ0, θ1) and the available joint likelihood ratio r(x_e, z_e|θ0, θ1) is minimized by the likelihood ratio r(x|θ0, θ1), provided that the samples (x_e, z_e) are drawn according to θ1. Conversely, the squared error of 1/r̂(x_e|θ0, θ1) with respect to 1/r(x_e, z_e|θ0, θ1), with (x_e, z_e) ∼ p(x, z|θ0), is also minimized by the likelihood ratio. We can combine these two terms into a combined loss function

L[r̂] = Σ_e [ y_e | r(x_e, z_e|θ0, θ1) − r̂(x_e|θ0, θ1) |² + (1 − y_e) | 1/r(x_e, z_e|θ0, θ1) − 1/r̂(x_e|θ0, θ1) |² ],

with y_e = 0 for events generated according to (x_e, z_e) ∼ p(x, z|θ0) and y_e = 1 for (x_e, z_e) ∼ p(x, z|θ1). The factors of y_e and (1 − y_e) ensure the correct sampling for each part of the loss functional. We illustrate this approach in the right panel of Fig. 5.
This strategy is again implemented point by point in θ 0 , in an agnostic parameterized setup, as well as in a morphing-aware parameterized setup. We describe it in more detail in Appendix A 2 d.
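A minimal sketch of the Rolr idea on a toy smearing model. The densities are our own stand-ins, and a binned mean replaces the neural network regressor, which is enough to see the joint ratios average out to the marginal ratio:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200000
theta0, theta1 = 0.5, 0.0

# Events from the denominator hypothesis theta1: latent z, smeared observable x
z = rng.normal(theta1, 1.0, n)
x = z + rng.normal(0.0, 1.0, n)

# Joint likelihood ratio from the tractable latent density (the augmented data)
r_joint = np.exp(-0.5 * (z - theta0)**2 + 0.5 * (z - theta1)**2)

# Minimizing the squared loss |r_joint - r_hat(x)|^2 over functions of x alone
# yields E[r_joint | x] = r(x|theta0, theta1); a binned mean is a crude regressor
mask = np.abs(x - 1.0) < 0.1
r_hat = r_joint[mask].mean()

# Analytic marginal ratio for this toy: r(x) = exp(0.25 x - 0.0625)
print(abs(r_hat - np.exp(0.25 * 1.0 - 0.0625)) < 0.05)
```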
Carl + score regression (Cascal, for "Carl and score approximate likelihood ratio"): The parameterized Carl strategy outlined above learns a classifier decision function ŝ(x|θ0, θ1) as a function of θ0. If the classifier is realized with a differentiable architecture such as a neural network, we can calculate the gradient of this function and of the corresponding likelihood ratio estimator r̂(x|θ0, θ1) with respect to θ0 and derive the estimated score

t̂(x|θ0) = ∇_θ0 log r̂(x|θ0, θ1).   (34)

If the estimator is perfect, we expect this estimated score to minimize the squared error with respect to the joint score data available from the simulator, following the arguments in Sec. III B.
We can turn this argument around and use the available score data during the training. Instead of training the classifier just by minimizing the cross-entropy, we can simultaneously minimize the squared error of this derived score with respect to the true joint score. The combined loss function is given by

L = L_CE[ŝ] + α Σ_e (1 − y_e) | t(x_e, z_e|θ0) − t̂(x_e|θ0) |²,

with t̂(x|θ0) defined in Eq. (34) and a hyperparameter α that weights the two pieces of the loss function relative to each other. Again, y_e = 0 for events generated according to (x_e, z_e) ∼ p(x, z|θ0) and y_e = 1 for (x_e, z_e) ∼ p(x, z|θ1), and the factors of y_e and (1 − y_e) ensure the correct sampling for each part of the loss functional.
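A sketch of the Cascal ingredients on a toy where the optimal parameterized classifier is known in closed form. The densities and the finite-difference gradient are our own stand-ins for a trained, differentiable network and its autodiff gradient:

```python
import numpy as np

# Toy parameterized classifier: for x ~ N(theta0, 1) vs N(0, 1) the optimal
# decision function is known in closed form (stand-in for a trained network)
def s_hat(x, theta0):
    log_r = theta0 * x - 0.5 * theta0**2      # log p(x|theta0)/p(x|0)
    return 1.0 / (1.0 + np.exp(log_r))

def t_hat(x, theta0, eps=1e-4):
    # Estimated score: gradient of log r_hat = log[(1-s)/s] w.r.t. theta0,
    # here via central finite differences (a network would use autodiff)
    def log_r(th):
        s = s_hat(x, th)
        return np.log((1.0 - s) / s)
    return (log_r(theta0 + eps) - log_r(theta0 - eps)) / (2 * eps)

rng = np.random.default_rng(5)
theta0 = 0.7
x0 = rng.normal(theta0, 1.0, 1000)            # y = 0 sample, drawn from theta0
t_joint = x0 - theta0                          # joint score for this toy

# Score piece of the Cascal loss (the cross-entropy piece is the usual one)
score_loss = np.mean((t_joint - t_hat(x0, theta0))**2)
print(score_loss < 1e-4)                       # exact classifier => near-zero
```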
This strategy relies on the parameterized modeling of the likelihood ratio. We implement both an agnostic version as well as a morphing-aware model. See Appendix A 2 e for more details.
Neural conditional density estimators + score (Scandal, for "score and neural density approximate likelihood"): In the same spirit as the Cascal method, neural density estimators such as autoregressive flows can be augmented with score information. We have started to explore this class of algorithms in Ref. [53], but leave a detailed study and the application to particle physics for future work.
Ratio + score regression (Rascal, for "ratio and score approximate likelihood ratio"): The same trick works for the parameterized Rolr approach. If the regressor is implemented as a differentiable architecture such as a neural network, we can calculate the gradient of the parameterized estimator r̂(x|θ0, θ1) with respect to θ0 and calculate the score

t̂(x|θ0) = ∇_θ0 log r̂(x|θ0, θ1).   (36)

Instead of training just on the squared likelihood ratio error, we can minimize the combined loss

L = L_Rolr[r̂] + α Σ_e (1 − y_e) | t(x_e, z_e|θ0) − t̂(x_e|θ0) |²,

with t̂(x|θ0) defined in Eq. (36) and a hyperparameter α. The likelihood ratios and scores again provide complementary information, as shown in Fig. 7. Once more we experiment with both an agnostic parameterized model and a morphing-aware version.
This technique uses all the available data from the simulator that we discussed in Sec. III B to train an estimator of particularly high fidelity. It is essentially a machine-learning version of the Matrix Element Method. It replaces computationally expensive numerical integrals with an upfront regression phase, after which the likelihood ratio can be evaluated very efficiently. Instead of manually specifying simplified smearing functions, the effect of parton shower and detector is learned from full simulations. For more details on Rascal, see Appendix A 2 f.
Local score regression and density estimation (Sally, for "score approximates likelihood locally"): In the local model approximation discussed in Sec. III A 2, the score evaluated at some reference point θ_score is the sufficient statistics, carrying all the information on θ. A precisely estimated score vector (with one component per parameter of interest) is therefore the ideal summary statistics, at least in the neighborhood of the Standard Model or any other reference parameter point.
In the last section we argued that we can extract the joint score t(x_e, z_e|θ_score) from the simulator. We showed that the squared error between a function t̂(x|θ_score) and the joint score is minimized by the intractable score t(x|θ_score), as long as the events are sampled as (x_e, z_e) ∼ p(x, z|θ_score). We can thus use the augmented data to train an estimator t̂(x|θ_score) for the score at the reference point.
In a second step, we can then estimate the likelihood p̂(t̂(x|θ_score)|θ) with histograms, KDE, or any other density estimation technique, yielding the likelihood ratio estimator

r̂(x|θ0, θ1) = p̂( t̂(x|θ_score) | θ0 ) / p̂( t̂(x|θ_score) | θ1 ).

This particularly straightforward strategy is a machine-learning analogue of Optimal Observables that learns the effect of parton shower and detector from data. After an upfront regression phase, the analysis of an event only requires the evaluation of one estimator to draw conclusions about all parameters. See Appendix A 2 g for more details.
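The two Sally steps can be sketched on a toy observable. The Gaussian densities and the analytic score estimator below are our own stand-ins for the simulator and the trained network; the emphasis is on the second step, density estimation in the estimated score space:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200000
theta0, theta1, theta_sc = 0.5, 0.0, 0.0

# Stand-in for the trained score estimator at the reference point theta_sc:
# for x ~ N(theta, 2) the exact score is (x - theta_sc) / 2
def t_hat(x):
    return (x - theta_sc) / 2.0

# Step 2 of Sally: density estimation in the (here one-dimensional) score space
x0 = rng.normal(theta0, np.sqrt(2.0), n)
x1 = rng.normal(theta1, np.sqrt(2.0), n)
bins = np.linspace(-2.5, 2.5, 51)
p0, _ = np.histogram(t_hat(x0), bins=bins, density=True)
p1, _ = np.histogram(t_hat(x1), bins=bins, density=True)

def r_hat(x):
    i = np.clip(np.digitize(t_hat(x), bins) - 1, 0, len(p0) - 1)
    return p0[i] / np.maximum(p1[i], 1e-12)

# True ratio at x = 1 for this toy: exp(0.25 * 1 - 0.0625) ~ 1.21
print(abs(r_hat(1.0) - np.exp(0.1875)) < 0.1)
```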
Local score regression, compression to scalar, and density estimation (Sallino, for "score approximates likelihood locally in one dimension"): The Sally technique compresses the information in a high-dimensional vector of observables x into a lower-dimensional estimated score vector. But for measurements in high-dimensional parameter spaces, density estimation in the estimated score space might still be computationally expensive. Fortunately, the local model of Eq. (15) motivates an even more dramatic dimensionality reduction to one dimension, independent of the number of parameters: disregarding the normalization constants, the ratio r(x|θ0, θ1) only depends on the scalar product between the score and θ0 − θ1.
Given the same score estimator t̂(x|θ_score) developed for the Sally method, we can define the scalar function

ĥ(x|θ0, θ1) = t̂(x|θ_score) · (θ0 − θ1).

Assuming a precisely trained score estimator, this scalar encapsulates all information on the likelihood ratio between θ0 and θ1, at least in the local model approximation. The likelihood ratio can then be estimated as

r̂(x|θ0, θ1) = p̂( ĥ(x|θ0, θ1) | θ0 ) / p̂( ĥ(x|θ0, θ1) | θ1 ),

where the p̂(ĥ) are simple univariate density estimators.

Table III: Overview of the discussed measurement strategies, listing for each strategy the implemented estimator versions (point by point, parameterized, morphing-aware), the loss functions used ("CE" stands for the cross-entropy, "ML" for maximum likelihood, "ratio" for losses of the type |r(x, z) − r̂(x)|², and "score" for terms such as |t(x, z) − t̂(x)|²), and whether the estimator is asymptotically exact. The first three techniques can be applied in the general likelihood-free setup; they only require sets of generated samples {x_e}. The remaining five methods are tailored to the particle physics structure and require the availability of r(x_e, z_e|θ0, θ1) or t(x_e, z_e|θ0) from the generator, as discussed in Sec. III B. Brackets denote possible variations that we have not implemented for our example process. In the Sally and Sallino strategies, "estimator versions" refers to the density estimation step.
This method allows us to condense any high-dimensional observation x into a scalar function without losing sensitivity, at least in the local model approximation. It thus scales exceptionally well to problems with many theory parameters. We describe Sallino in more detail in Appendix A 2 h.
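A hedged sketch of the Sallino compression for a two-parameter toy; the densities and the analytic score estimator are our own stand-ins for the simulator and the trained Sally network:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200000

# Two-parameter toy: x ~ N(theta . v, 2) with a fixed direction v, so the
# exact score at the reference point is t(x) = x * v / 2 (our stand-in for
# the trained score estimator)
v = np.array([1.0, -0.5])
theta0 = np.array([0.4, 0.2])
theta1 = np.array([0.0, 0.0])

def t_hat(x):
    # estimated score vector, one component per parameter of interest
    return np.outer(x, v) / 2.0

def h(x):
    # Sallino: project the score onto theta0 - theta1 -> one scalar per event
    return t_hat(x) @ (theta0 - theta1)

x0 = rng.normal(theta0 @ v, np.sqrt(2.0), n)
x1 = rng.normal(theta1 @ v, np.sqrt(2.0), n)
bins = np.linspace(-1.5, 1.5, 51)
p0, _ = np.histogram(h(x0), bins=bins, density=True)
p1, _ = np.histogram(h(x1), bins=bins, density=True)

i = np.clip(np.digitize(h(np.array([1.0])), bins) - 1, 0, len(p0) - 1)
r_hat = p0[i][0] / p1[i][0]
print(r_hat > 1.0)   # events with larger h prefer theta0 in this toy
```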
Several other variants are possible, including other combinations of the loss functionals discussed above. We leave this for future work. We summarize the different techniques in Table III. With this plethora of well-motivated analysis methods, the remaining key question for this paper is how well they do in practice, and which of them (if any) should be used. This will be the focus of Sec. V. In the next sections, we will first discuss some important additional aspects of these methods.

D. Calibration
While the likelihood ratio estimators described above work well in many cases, their performance can be further improved by an additional calibration step. Calibration takes place after the "raw" or uncalibrated estimators r̂_raw(x|θ0, θ1) have been trained. In general, it defines a function C with the aim that r̂_cal = C(r̂_raw) provides a better estimator of the true likelihood ratio than r̂_raw. We consider two different approaches to defining this function C, which we call probability calibration and expectation calibration.

Probability calibration
Consider the Carl strategy, which trains a classifier with a decision function s(x) as the basis for the likelihood ratio estimation. Even if this classifier separates the two classes of events from θ0 and θ1 well, its decision function ŝ(x|θ0, θ1) might not have a direct probabilistic interpretation: it might be any approximately monotonic function of the likelihood ratio rather than the ideal solution given in Eq. (17). In this case, the Carl approach requires calibration, an additional transformation of the raw output r̂_raw = r̂(x|θ0, θ1) into a calibrated decision function

ŝ_cal(x|θ0, θ1) = p̂(r̂_raw|θ1) / ( p̂(r̂_raw|θ0) + p̂(r̂_raw|θ1) ),

where the densities p̂(r̂_raw|θ) are estimated through a univariate density estimation technique such as histograms. This calibration procedure does not only apply to classifiers, but to any other likelihood ratio estimation strategy.
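A minimal sketch of probability calibration; the distorted classifier and the Gaussian toy densities are our own illustrations. The raw output is monotone in the likelihood ratio but not probabilistic, and histogramming it under both hypotheses recovers a calibrated estimator:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200000

# An uncalibrated "classifier": monotone in the likelihood ratio but distorted
def s_raw(x):
    return 1.0 / (1.0 + np.exp(-(3.0 * x - 1.0)))

x0 = rng.normal(0.0, 1.0, n)       # theta0 events
x1 = rng.normal(1.0, 1.0, n)       # theta1 events

# Probability calibration: estimate densities of the raw output under both
# hypotheses and rebuild a probabilistic decision function from them
bins = np.linspace(0.0, 1.0, 41)
p0, _ = np.histogram(s_raw(x0), bins=bins, density=True)
p1, _ = np.histogram(s_raw(x1), bins=bins, density=True)

def r_cal(x):
    i = np.clip(np.digitize(s_raw(x), bins) - 1, 0, len(p0) - 1)
    s_cal = p1[i] / (p0[i] + p1[i])
    return (1.0 - s_cal) / s_cal    # calibrated likelihood ratio estimate

# True ratio at x = 1 for these Gaussians is exp(-0.5) ~ 0.61
print(abs(r_cal(1.0) - np.exp(-0.5)) < 0.1)
```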

Expectation calibration
Consider some likelihood ratio r(x|θ0, θ1). The expectation value of this ratio assuming θ1 to be true is given by

E[ r(x|θ0, θ1) | θ1 ] = ∫ dx p(x|θ1) p(x|θ0)/p(x|θ1) = 1.

A good estimator for the likelihood ratio should reproduce this property. We can numerically approximate this expectation value by evaluating r̂(x|θ, θ1) on a sample {x_e} of N events drawn according to θ1,

R̂(θ) = (1/N) Σ_e r̂(x_e|θ, θ1).   (43)

If a likelihood ratio estimator r̂_raw(x|θ, θ1) (which might be entirely uncalibrated or already probability calibrated) does not satisfy this condition, we can calibrate it by rescaling it as

r̂_cal(x|θ, θ1) = r̂_raw(x|θ, θ1) / R̂(θ).

For a perfect estimator with r̂(x|θ0, θ1) = r(x|θ0, θ1), we can even calculate the variance of the numeric calculation of the expectation value in Eq. (43). We find

Var[R̂(θ)] = (1/N) ( E[ r(x|θ, θ1) | θ ] − 1 ),

where N is the number of events used to calculate the expectation value R̂(θ), and the expectation E[r(x|θ, θ1)|θ] (under the numerator hypothesis!) can be calculated numerically. This calibration strategy can easily improve classifiers that are off by some θ-dependent factor. However, a few rare events x_e with large r̂(x_e|θ, θ1) can dominate the expectation value. If these are mis-estimated, the expectation calibration can actually degrade the performance of the estimator on the bulk of the distribution with smaller r̂(x|θ, θ1).
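A minimal sketch of expectation calibration on a toy with a known ratio; the θ-dependent mis-normalization factor of 1.3 is invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 100000
theta, theta1 = 0.5, 0.0

# A raw estimator that is off by a theta-dependent normalization factor
def r_raw(x):
    return 1.3 * np.exp(theta * x - 0.5 * theta**2)   # 1.3 times the true ratio

x1 = rng.normal(theta1, 1.0, n)        # events drawn from the denominator theta1

# Expectation calibration: rescale so that the theta1 expectation equals 1
R = np.mean(r_raw(x1))                 # close to 1.3 here
def r_cal(x):
    return r_raw(x) / R

print(abs(np.mean(r_cal(x1)) - 1.0) < 1e-9)   # unit expectation by construction
print(abs(r_cal(0.5) - np.exp(0.5 * 0.5 - 0.125)) < 0.02)
```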
E. Implementation

Neural networks
With the exception of the simple histogram and Afc methods, all strategies rely on a classifier ŝ(x|θ0, θ1), score regressor t̂(x|θ0), or ratio regressor r̂(x|θ0, θ1) that is learnt from training data. For our explicit example, we implement these functions as fully connected neural networks:

• In the point-by-point setup, the neural networks take the features x as input and model log r̂(x|θ0, θ1). For ratio regression, this is exponentiated to yield the final output r̂(x|θ0, θ1).
In the Carl strategy, the network output is transformed to a decision function

ŝ(x|θ0, θ1) = 1 / ( 1 + r̂(x|θ0, θ1) ).   (46)

• In the agnostic parameterized setup, the neural networks take both the features x and the numerator parameter θ0 as input and model log r̂(x|θ0, θ1). In addition to the same subsequent transformations as in the point-by-point case, taking the gradient of the network output with respect to θ0 gives the estimated score.

• In the morphing-aware setup, the estimator takes both x and θ0 as input. The features x are fed into a number of independent networks, one for each basis component c, that model the basis ratios log r̂_c(x). From θ0 the estimator calculates the component weights w_c(θ0) analytically. The components are then combined with Eq. (12). For the Carl approaches, the output is again transformed with Eq. (46), and for the score-based strategies the gradient of the output is calculated. We visualize these architectures in Fig. 8.
All networks are implemented in shallow, regular, and deep versions with 2, 3, and 5 hidden layers of 100 units each and tanh activation functions. They are trained with the Adam optimizer [83] over 50 epochs with early stopping and learning rate decay. We implement them in keras [84] with a TensorFlow [85] backend. Experiments modeling s rather than log r, with different activation functions, adding dropout layers, or using other optimizers and learning rate schedules have led to a worse performance.

Training samples
Starting from the weighted event sample described in Sec. II D 2, we draw events (x e , z e ) randomly with probabilities given by the corresponding p(x e , z e |θ). Due to the form of the likelihood p(x, z|θ) and due to technical limitations of our simulator, individual data points can carry large probabilities p(x e , z e |θ), leading to duplicate events in the training samples. However, we enforce that there is no duplication between training and evaluation samples, so this limitation can only degrade the performance.
For the point-by-point setup, we choose 100 values of θ0, 5 of which are fixed at the SM (θ0 = (0, 0)) and at the corners of the considered parameter space, with the remaining 95 chosen randomly in the parameter space.

For the parameterized strategies, we compare three different training samples, each consisting of 10⁷ events:

Baseline: For 1000 values of θ0 chosen randomly in θ space, we draw 5000 events according to θ0 and 5000 events according to θ1 given in Eq. (47).
Random θ: In this sample, the value of θ 0 is drawn randomly independently for each event. Again we use a flat prior over θ 0 ∈ [−1, 1] 2 .
Morphing basis: For each of the 15 basis hypotheses θ i from the morphing procedure, we generate 333 000 events according to θ 0 = θ i and 333 000 according to θ 1 .
Finally, for the local score regression model we use a sample of 10 7 events drawn according to the SM.
Our evaluation sample consists of 50 000 events drawn according to the SM. We evaluate the likelihood ratio for these events for a total of 1016 values of θ 0 , 1000 of which are the same as those used in the baseline training sample. Again we fix θ 1 as in Equation (47).
Each event is characterized by 42 features:
• the energies, transverse momenta, azimuthal angles, and pseudo-rapidities of all six particles in the final state;
• the energies, transverse momenta, azimuthal angles, pseudo-rapidities, and invariant masses of the four-lepton system as well as of the two-lepton systems that reconstruct the two Z bosons; and
• the invariant mass, separation in pseudorapidity, and separation in azimuthal angle of the di-jet system.
The derived variables in the feature set help the neural networks pick up the relevant features faster, though we did not find that their choice affects the performance significantly.
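The basic kinematic features can be computed from four-momenta as in the following hedged sketch; the function names and the toy lepton momenta are our own, not those of the analysis code:

```python
import numpy as np

def features(p):
    # per-particle features from a four-momentum (E, px, py, pz)
    E, px, py, pz = p
    pt = np.hypot(px, py)          # transverse momentum
    phi = np.arctan2(py, px)       # azimuthal angle
    eta = np.arcsinh(pz / pt)      # pseudo-rapidity
    return E, pt, phi, eta

def invariant_mass(p_list):
    # invariant mass of a multi-particle system, m^2 = E^2 - |p|^2
    E, px, py, pz = np.sum(p_list, axis=0)
    return np.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

# Toy lepton pair (units of GeV)
l1 = np.array([50.0, 30.0, 20.0, 34.6])
l2 = np.array([45.0, -25.0, 10.0, 36.0])
E, pt, phi, eta = features(l1)
m_ll = invariant_mass([l1, l2])
print(pt > 0 and m_ll > 0)
```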

Calibration and density estimation
We calibrate the classifiers for our example process with probability calibration as described in Sec. III D 1. We determine the calibration function C(r) with isotonic regression [86], which constrains C(r) to be monotonic. Experiments with other regression techniques based on histograms, kernel density estimation, and logistic regression did not lead to a better performance. We apply the calibration point by point in θ 0 . It is based on an additional event sample that is independent of the training and evaluation data. The same events are used to calibrate each value of θ, with an appropriate reweighting. This strategy to minimize variance is based on the availability of the parton-level likelihood function p(z|θ).
The techniques based on local score regression require the choice of a reference point at which the score is evaluated. For the EFT problem, the natural choice is θ_score = θ_SM = (0, 0)ᵀ. In the Sally approach, we perform the density estimation based on two-dimensional histograms of the estimated score.

F. Challenges and diagnostics

Uncertainties
Even the most sophisticated and robust estimators will have some deviations from the true likelihood ratio, which should be taken into account in an analysis as an additional modeling uncertainty. Most of the estimators developed above converge to the true likelihood ratio in the limit of infinite training and calibration samples. But with finite statistics, there are different sources of variance that affect some strategies more than others.
Consider the traditional histogram approach. In the point-by-point version, each separate estimator r̂(x|θ0, θ1) is trained on the small subset of the data generated from a specific value of θ0, so the variance from the finite size of the training data, i.e. the statistical uncertainty from the Monte-Carlo simulation, is large. At θ0 values between the training points, there are additional sources of uncertainty from the interpolation. Morphing-aware histograms, on the other hand, use all of the training data to make predictions at all points, and since the dependence on θ0 is known, the interpolation is exact. But the large morphing weights w_c(θ0) and the cancellations between them mean that even small fluctuations in the individual basis histograms can lead to huge errors on the combined estimator.
Similar patterns hold for the ML-based inference strategies. The point-by-point versions suffer from a larger variance due to the small training samples at each point and from interpolation uncertainties. The agnostic parameterized models have more statistics available, but have to learn the more complex full statistical model including the dependence on θ0. The morphing-aware versions make maximal use of the physics structure of the process and of all the training data, but large morphing weights can dramatically increase the errors of the individual component estimators log r̂_c(x). We demonstrate this for our example process in Fig. 9: in some regions of parameter space, in particular far away from the basis points, the error on a morphing-aware estimator log r̂(x|θ, θ1) can easily be 100 times larger than the individual errors on the component estimators log r̂_c(x). The θ0 dependence of this error depends on the choice of the basis points; this uncertainty can thus be somewhat mitigated by optimizing the basis points or by combining several different bases.
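The amplification from large, cancelling morphing weights can be illustrated with a toy error-propagation calculation. The weights below are hypothetical; the point is only that for independent component errors the variances add in quadrature with the squared weights.

```python
import numpy as np

# Toy error propagation, not the full morphing setup: the combined estimator is
# a weighted sum of component predictions, so for independent component errors
# sigma_combined^2 = sum_c w_c(theta_0)^2 * sigma_c^2. Large, cancelling morphing
# weights therefore amplify even small per-component errors.
weights = np.array([120.0, -240.0, 150.0, -80.0, 51.0])  # hypothetical w_c(theta_0)
sigma_component = 0.01                                   # per-component error

sigma_combined = np.sqrt(np.sum(weights**2)) * sigma_component
amplification = sigma_combined / sigma_component
print(f"error amplification: {amplification:.0f}x")  # a few hundred for these weights
```

Note that the weights sum to 1 here (so an unbiased input stays unbiased), yet the error on the combination is two orders of magnitude larger than on any single component.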

Diagnostics
After discussing the sources of variance, let us now turn to diagnostic tools that can help quantify the size of estimator errors and assign a modeling uncertainty for the statistical analysis. These methods are generally closure tests: we can check the likelihood ratio estimators for some expected behaviour and use deviations either to correct the results (as in the calibration procedures described in Sec. III D), to define uncertainty bands, or to discard estimators altogether. We suggest the following tests:

Ensemble variance: Repeatedly generating new training data (or bootstrapping the same training sample) and training the estimators again gives us an ensemble of predictions {r̂(x|θ0, θ1)}. We can use the ensemble variance as a measure of the uncertainty of the prediction due to the variance in the training data and the random seeds used during training.
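A bootstrap ensemble of this kind can be sketched as follows; `train_fn` and `predict_fn` are placeholders for any of the estimators discussed in the text.

```python
import numpy as np

def ensemble_log_ratio(train_fn, predict_fn, train_data, x_eval, n_replicas=5, seed=0):
    """Train estimator replicas on bootstrap resamples of the training data and
    return the median and spread of the predicted log r_hat(x|theta_0, theta_1).

    train_fn(sample, rng) -> model and predict_fn(model, x) -> prediction are
    placeholders for any of the likelihood ratio estimators in the text.
    """
    rng = np.random.default_rng(seed)
    predictions = []
    for _ in range(n_replicas):
        # draw a bootstrap resample of the same size as the training data
        resample = train_data[rng.integers(0, len(train_data), size=len(train_data))]
        model = train_fn(resample, rng)
        predictions.append(predict_fn(model, x_eval))
    predictions = np.array(predictions)
    return np.median(predictions, axis=0), np.std(predictions, axis=0)
```

The returned spread captures the variance from finite training statistics and random initialization, in the spirit of the ensemble diagnostic above.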

Reference hypothesis variation: Any estimated likelihood ratio between two hypotheses should be independent of the choice of the reference hypothesis θ1 used in the estimator r̂. Training several independent estimators with different values of θ1 thus provides another check of the stability of the results [34].
Much like the renormalization and factorization scale variations that are ubiquitous in particle physics calculations, this technique does not have a proper statistical interpretation in terms of a likelihood function. We can still use it to qualitatively indicate the stability of the estimator under this hyperparameter change.
Ratio expectation: As discussed above, the expectation value of the estimated likelihood ratio under the denominator hypothesis should be very close to one. We can numerically calculate this expectation value R̂(θ), see Eq. (43). In Sec. III D 2 we argued that this expectation value can be used to calibrate the estimators, but that this calibration can actually decrease the performance in certain situations.
If expectation calibration is used, the calibration itself has a non-zero variance from the finite sample size used to calculate the expectation value R̂(θ). As we pointed out in Sec. III D 2, we can calculate this source of statistical uncertainty, at least under the assumption of a perfect estimator with r̂(x|θ0, θ1) = r(x|θ0, θ1). The result given in Eq. (45) provides us with a handle to calculate the statistical uncertainty of this calibration procedure from the finite size of the calibration sample. Note that for imperfect estimators, the variance of R̂(θ) may be significantly larger.
Independent of whether expectation calibration is part of the estimator, the deviation of the expectation R̂(θ) from one can serve as a diagnostic tool to check for mismodelling of the estimator. We can take log R̂(θ) as a measure of the uncertainty of log r̂(x|θ, θ1). As in the case of the reference hypothesis variation, there is no consistent statistical interpretation of this uncertainty, but it remains useful as a closure test.
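The ratio-expectation closure test can be sketched in a few lines. Here it is checked on a toy example where the true likelihood ratio between two unit-width Gaussians is known in closed form, so the expectation is exactly one.

```python
import numpy as np

def log_ratio_expectation(log_r_hat, x_denominator):
    """Estimate log R_hat(theta) = log E_{x ~ p(x|theta_1)}[ r_hat(x|theta, theta_1) ],
    which should be close to zero for a well-behaved estimator."""
    return np.log(np.mean(np.exp(log_r_hat(x_denominator))))

# Toy check: for the exact ratio between N(mu, 1) and N(0, 1),
# log r(x) = mu * x - mu^2 / 2, and E_{x ~ N(0,1)}[r(x)] = 1 exactly.
rng = np.random.default_rng(1)
x_denom = rng.normal(0.0, 1.0, 200_000)
mu = 0.5
log_R = log_ratio_expectation(lambda x: mu * x - mu**2 / 2.0, x_denom)
```

With a finite denominator sample, log R̂ fluctuates around zero; a value far from zero flags a mismodelled estimator.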
Reweighted sample comparison: We cannot evaluate p(x|θ0) directly to check whether r̂(x|θ0, θ1) p(x|θ1) reproduces it, but we can sample events {xe} from these distributions. This provides another diagnostic tool [34]: we draw a first sample xe ∼ p(x|θ0), draw a second sample xe ∼ p(x|θ1), and reweight the second with r̂(xe|θ0, θ1). For a good likelihood ratio estimator, the two samples should have similar distributions. This can easily be tested by training a discriminative classifier between them. If a classifier can distinguish the sample from the first hypothesis from the reweighted sample drawn from the second hypothesis, then r̂(x|θ0, θ1) is not a good approximation of the true likelihood ratio r(x|θ0, θ1). Conversely, if the classifier cannot separate the two classes, either the classifier is not efficient or the likelihood ratio is estimated well.
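This classifier-based closure test can be sketched as follows. We use a weighted logistic regression as a simple discriminator; any discriminative classifier would do, and the function name is ours.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def reweighted_two_sample_test(x_theta0, x_theta1, r_hat_on_theta1):
    """Train a classifier between events drawn from p(x|theta_0) and events
    drawn from p(x|theta_1) reweighted with r_hat(x|theta_0, theta_1).
    A weighted AUC close to 0.5 means the two samples are indistinguishable,
    as expected for a good likelihood ratio estimator."""
    x = np.concatenate([x_theta0, x_theta1]).reshape(-1, 1)
    y = np.concatenate([np.zeros(len(x_theta0)), np.ones(len(x_theta1))])
    w = np.concatenate([np.ones(len(x_theta0)), r_hat_on_theta1])
    clf = LogisticRegression().fit(x, y, sample_weight=w)
    return roc_auc_score(y, clf.predict_proba(x)[:, 1], sample_weight=w)
```

For an exact likelihood ratio between two shifted Gaussians, the reweighted AUC comes out consistent with 0.5, while dropping the weights would restore the separability of the samples.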
Note that passing these closure tests is not a guarantee for a good estimator of the likelihood ratio. In Sec. IV B we will discuss how we can nevertheless derive exclusion limits that are guaranteed to be statistically correct, i. e. that might not be optimal, but are never wrong.
In our example process, we will use a combination of the first two ideas of this list: we will create copies of estimators with independent training samples and random seeds during training, as well as with different choices of the reference hypothesis θ 1 , and analyse the median and envelope of the predictions.

IV. LIMIT SETTING
The final objective of any EFT analysis is exclusion limits on the parameters of interest at a given confidence level. These can be derived in one of two ways. The Neyman construction based on toy experiments provides a generic and fail-safe method, which we will discuss in Sec. IV B. But since the techniques developed in the previous section directly provide an estimate for the likelihood ratio, we can alternatively apply existing statistical methods for likelihood ratios as test statistics. This much more efficient approach is the topic of the following section.

A. Asymptotics
Consider the test statistic

q(θ) = −2 ∑_e log [ p(x_e|θ) / p(x_e|θ̂) ] ,  (50)

where θ̂ is the maximum likelihood estimator. In the asymptotic limit, the distribution according to the null hypothesis, p(q(θ)|θ), is given by a chi-squared distribution. The number of degrees of freedom k is equal to the number of parameters θ. This result by Wilks [87] allows us to translate an observed value q_obs(θ) directly into a p-value that measures the confidence with which θ can be excluded:

p_θ = 1 − F_χ²(q_obs(θ) | k) ,  (52)

where F_χ²(x|k) is the cumulative distribution function of the chi-squared distribution with k degrees of freedom. In our example process k = 2, for which this simplifies to

p_θ = exp(−q_obs(θ)/2) .

In particle physics it is common practice to calculate "expected exclusion contours" by calculating the expected value of q_obs(θ) based on a large "Asimov" data set generated according to some θ [88]. With Eq. (52) this value is then translated into an expected p-value.¹²

In practice we cannot access the true likelihood ratio defined on the full observable space, and thus also not q(θ). But if the error of an estimator r̂(x|θ0, θ1) compared to the true likelihood ratio is negligible, we can simply calculate

q̂(θ) = −2 ∑_e log [ r̂(x_e|θ, θ1) / r̂(x_e|θ̂, θ1) ] ,  (54)

with the maximum likelihood estimator θ̂ also based on the estimated likelihood ratio. The p-value can then be read off directly from the estimator output, substituting q̂ for q in Eq. (52). Under this assumption and in the asymptotic limit, constructing confidence intervals is thus remarkably simple and computationally cheap: after training an estimator r̂(x|θ, θ1) as discussed in the previous section, the observed events {x_e} are fed into the estimator for each value of θ on some parameter grid. From the results we can read off the maximum likelihood estimator θ̂ and calculate the observed value of the test statistic q̂(θ) for each θ. Equation (52) then translates these values into p-values, which can then be interpolated between the tested θ points to yield the final contours.
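The translation from q_obs(θ) to a p-value, and its simplification for k = 2, can be checked numerically with scipy. The function name is ours.

```python
import numpy as np
from scipy.stats import chi2

def asymptotic_p_value(q_obs, n_parameters):
    """Wilks' theorem, Eq. (52): translate an observed test statistic into a
    p-value, p = 1 - F_chi2(q_obs | k)."""
    return 1.0 - chi2.cdf(q_obs, df=n_parameters)

# For k = 2 parameters, 1 - F_chi2(q | 2) reduces to exp(-q / 2):
q_obs = 5.99
p = asymptotic_p_value(q_obs, 2)  # approximately 0.05
```

The value q_obs ≈ 5.99 reproduces the familiar 95% CL threshold for two parameters of interest.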
To check whether these asymptotic properties apply to a likelihood ratio estimator, we can use the diagnostic tools discussed in Sec. III F 2. In addition, we can explicitly check whether the distribution of q(θ) actually follows a chi-squared distribution by generating toy experiments for a few θ points. If it does, the asymptotic results are likely to apply at other points in parameter space as well. If the variance of the toy experiments is larger than expected from the chi-squared distribution, the residual variance may be taken as an error estimate on the estimator prediction.

B. Neyman construction
Rather than relying on the asymptotic properties of the likelihood ratio test, we can construct the distribution of a test statistic with toy experiments. This is computationally more expensive, but useful if the number of events is not in the asymptotic regime or if the uncertainty of the estimators cannot be reliably quantified. Constraints derived in this way are conservative: even if the likelihood ratio is estimated poorly, the resulting contours might not be optimal, but they are never wrong (at a specified confidence level).

¹² For k = 1, this standard procedure reproduces the median expected p-value. Note, however, that this is no longer true for more than one parameter of interest. In this case, the median expected p-value can be calculated with a different, but not commonly used, procedure. It is based on the fact that the distribution of q(θ) according to an alternate hypothesis θ′, p(q(θ)|θ′), is given by a non-central chi-squared distribution [89]. In the asymptotic limit, the non-centrality parameter is equal to the expectation value Λ = E[q(θ)|θ′] [88]. This allows us to calculate for instance the median expected q(θ) assuming some value θ′ based on the Asimov data. Combining all the pieces, the median expected p-value with which θ can be excluded under the assumption that θ′ is true is then given by

p_θ^(expected from θ′) = 1 − F_χ²( F⁻¹_χ²nc(1/2 | k, Λ) | k ) ,

where F_χ²(x|k) is the cumulative distribution function of the chi-squared distribution with k degrees of freedom and F⁻¹_χ²nc(p|k, Λ) is the inverse cumulative distribution function of the non-central chi-squared distribution with k degrees of freedom and non-centrality parameter Λ.
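The footnote's prescription for the median expected p-value can be sketched with scipy's non-central chi-squared distribution. The function name is ours.

```python
from scipy.stats import chi2, ncx2

def median_expected_p_value(noncentrality, n_parameters):
    """Median expected p-value for excluding theta when an alternate hypothesis
    is true: the median of q(theta) is taken from the non-central chi-squared
    distribution with Lambda = E[q(theta) | theta'], and then translated into a
    p-value with the central chi-squared CDF."""
    q_median = ncx2.ppf(0.5, df=n_parameters, nc=noncentrality)
    return 1.0 - chi2.cdf(q_median, df=n_parameters)
```

As expected, a larger non-centrality parameter (a stronger expected signal of the tested point being wrong) yields a smaller median expected p-value.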
A good choice for the test statistic is the estimated profile log likelihood ratio q̂(θ) given in Eq. (54), which allows us to compare the distribution of the toy experiments directly to the asymptotic properties discussed in the previous section. However, its construction requires finding the maximum likelihood estimator for every toy experiment. This increases the necessary computation time substantially, especially in high-dimensional parameter spaces. An alternative test statistic is the estimated log likelihood ratio with respect to some fixed hypothesis, which need not be identical to the reference denominator θ1 used in the likelihood ratio estimators. In the EFT approach, the natural choice is the estimated likelihood ratio with respect to the SM. Using this test statistic rather than the profile likelihood ratio defined in Eq. (50) is expected to lead to stronger constraints if the true value of θ is close to the SM point, as expected in the EFT approach, and less powerful bounds if the true value is substantially different from the SM. In practice, we can efficiently calculate the distribution of this test statistic after n events by first calculating its distribution for one event and convolving the result with itself (n − 1) times.
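The convolution trick for the n-event distribution can be sketched on a binned single-event distribution. This assumes a uniform binning of the per-event test statistic; the function name is ours.

```python
import numpy as np

def n_event_distribution(single_event_hist, n_events):
    """Distribution of a per-experiment test statistic (a sum over n events of
    the single-event log likelihood ratio), obtained by convolving the binned
    single-event distribution with itself (n - 1) times. Assumes a uniform
    binning; the support of the result grows with each convolution."""
    single = np.asarray(single_event_hist, dtype=float)
    single = single / single.sum()
    dist = single.copy()
    for _ in range(n_events - 1):
        dist = np.convolve(dist, single)
    return dist  # normalized to sum to one
```

This replaces n-fold toy sampling with (n − 1) discrete convolutions, which is far cheaper when the single-event distribution is already histogrammed.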

C. Nuisance parameters
The tools developed above also support nuisance parameters, for instance to model systematic uncertainties in the theory calculation or the detector model. One strategy is to train parameterized estimators on samples generated with different values of the nuisance parameters ν and let them learn the likelihood ratio r̂ with its dependence on the nuisance parameters. As test statistic we can then use the estimator version of the usual profile log likelihood ratio with constraint terms q(ν), where

(θ̂, ν̂) = arg max_(θ,ν) ∑_e log [ r̂(x_e|θ, θ1; ν, ν1) q(ν)/q(ν1) ] .
The profile log likelihood ratio has two advantages: it is pivotal, i.e. its value and its distribution do not depend on the value of the nuisance parameters, and it has the asymptotic properties discussed in Sec. IV A. Similarly, we can train the score including nuisance parameters. If the constraints q(ν) limit the nuisance parameters to a relatively small region around some ν0, i.e. a range in which the shape of the likelihood function does not change significantly, the Sally and Sallino methods seem particularly appropriate. Finally, an adversarial component in the training procedure lets us directly train pivotal estimators r̂(x|θ0, θ1), i.e. estimators that do not depend on the value of the nuisance parameters [90]. Compared to learning the explicit dependence on ν, this can dramatically reduce the dimensionality of the parameter space as early as possible, and does not require manual profiling. However, the estimators will generally not converge to the profile likelihood ratio, so its asymptotic properties do not apply and limit setting requires the Neyman construction.

V. RESULTS
We now apply the analysis techniques to our example process of WBF Higgs production in the four-lepton decay mode. We first study the idealized setup discussed in Sec. II D 2, in which we can assess the techniques by comparing their predictions to the true likelihood ratio. In Sec. V B we then calculate limits in a more realistic setup.
A. Idealized setup

Quality of likelihood ratio estimators
The 1000 tested values of θ0 are weighted with a Gaussian prior π(θ0), normalized such that ∑_θ0 π(θ0) = 1. In addition we show the expected trimmed mean squared error, which truncates the top 5% and bottom 5% of events for each θ. This allows us to analyse the quality of the estimators for the bulk of the phase space without being dominated by a few outliers.

The best results come from parameterized estimators that combine either a classifier decision function or ratio regression with regression on the score: the Cascal and Rascal strategies provide very accurate estimates of the log likelihood ratio. Sally, parameterized Rolr, and parameterized Carl perform somewhat worse. For Carl and Rolr, parameterized estimators consistently perform better than the corresponding point-by-point versions. All these ML-based strategies significantly outperform the traditional one- or two-dimensional histograms and the Approximate Frequentist Computation.
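One plausible reading of the trimmed metric, truncating the 5% of events with the largest and the 5% with the smallest squared errors, can be sketched as follows; the exact trimming convention used for the tables may differ.

```python
import numpy as np

def trimmed_mse(log_r_true, log_r_estimated, trim_fraction=0.05):
    """Mean squared error on log r, discarding the trim_fraction of events with
    the largest and the trim_fraction with the smallest squared errors, so the
    metric reflects the bulk of phase space rather than a few outliers."""
    sq_err = np.sort((np.asarray(log_r_estimated) - np.asarray(log_r_true)) ** 2)
    n_trim = int(len(sq_err) * trim_fraction)
    return float(np.mean(sq_err[n_trim : len(sq_err) - n_trim]))
```

A pair of extreme outliers that would dominate the plain mean squared error is simply removed by the trimming.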
The morphing-aware versions of the parameterized estimators lead to a poor performance, comparable to or worse than the two-dimensional histogram approach. As anticipated in Sec. III F, the large weight factors and the sizable cancellations between them blow up small errors on the estimation of the individual basis estimators r̂_i(x) into large errors on the combined estimator.

Table IV: Performance of the different likelihood ratio estimation techniques in our example process. The metrics shown are the expected mean squared error on the log likelihood ratio with and without trimming, as defined in the text. Checkmarks in the last column denote estimators shown in the following figures. Here we only give results based on default settings, which are defined in Appendix A 2. An extended list of results that covers more estimators is given in Appendix A 3.
We find that probability calibration as discussed in Sec. III D 1 improves the results in almost all cases, in particular for the Carl method. An additional step of expectation calibration (see Sec. III D 2) after the probability calibration does not lead to a further improvement, and in fact often increases the variance of the estimator predictions. We therefore only use probability calibration for the results presented here.
The choice of the training sample is less critical, with nearly identical results between the baseline and random θ samples. For the Carl approach, shallow networks with two hidden layers perform better, while Rolr works best for three hidden layers and the score-based strategies benefit from a deeper network with five hidden layers. In Fig. 10 we show scatter plots between the true and estimated likelihood ratios for a fixed hypothesis θ 0 . The likelihood ratio estimate from histograms of observables is widely spread around the true likelihood ratio, reflecting the loss of information from ignoring most directions in the observable space. Carl performs clearly better. Rolr and Sally offer a further improvement. Again, the best results come from the Cascal and Rascal strategies, both giving predictions that are virtually one-to-one with the true likelihood ratio.
We go beyond a single benchmark point θ 0 in Fig. 11. This scatter plot compares true and estimated likelihood ratios for different values of θ 0 , taking the expectation value over x. We find that the Carl, Rolr, Cascal, and Rascal approaches converge to the correct likelihood ratio in this expectation value. For the Sally and Sallino techniques we find larger deviations, pointing towards the breakdown of the local model approximation. Much more obvious is the loss of information in the traditional histogram approach, which is clearly not asymptotically exact. The point-by-point, agnostic parameterized, and morphing-aware versions of the Carl strategy are compared in Fig. 12. As expected from Table IV, the parameterized strategy performs better than the point-by-point version, and both are clearly superior to the morphing-aware estimator.

Efficiency and speed
With infinite training data, many of the algorithms should converge to the true likelihood ratio. But generating training samples can be expensive, especially when a full detector simulation is used. An important question is therefore how much training data the different techniques require to perform well. In Fig. 13 we show the performance as a function of the training sample size.
The Sally approach performs very well even with very little data. Its precision stops improving eventually, showing the limitations of the local model approximation. For the other methods we find that the more information a technique uses, the less training data points it requires. The Rascal technique utilizes the most information from the simulator, leading to an exceptional performance with training samples of approximately 100 000 events. This is in contrast to the most general Carl method, which does not use any of the extra information from the simulator and requires a two orders of magnitude larger training sample for a comparable performance.
In Fig. 14 we show the evolution of the likelihood estimation error and the cross-entropy of the classification problem during the training of the parameterized estimators. For comparison, we also show the optimal metrics based on the true likelihood ratio, and the results of the two-dimensional histogram approach. Once again we see that either Cascal or Rascal leads to the best results. This also holds true for the cross-entropy, hinting that the techniques we use to measure continuous parameters might also improve the power of estimators in discrete classification problems. Note that the Carl approach is more prone to overfitting than the others, visible as a significant difference between the metrics evaluated on the training and validation samples.

Equally important to the training efficiency is the computation time taken up by evaluating the likelihood ratio estimators r̂(x_e|θ0, θ1). We compare example evaluation times in Table V. The traditional histogram approach takes the shortest time. But all tested algorithms are very fast: the likelihood ratio for fixed hypotheses (θ0, θ1) for 50 000 events {x_e} can always be estimated in around one second or less. The local score regression method is particularly efficient, since the estimator t̂(x|θ score) has to be evaluated only once to estimate the likelihood ratio for any value of θ0. Only the comparably fast step of density estimation has to be repeated for each tested value of θ0.

Figure 13: Performance of the techniques as a function of the training sample size. As metrics, we show the mean squared error (left) and the trimmed mean squared error (right) on log r(x|θ0, θ1), weighted with a Gaussian prior as discussed in the text. Note that we do not vary the size of the calibration data samples. The number of epochs is increased such that the number of epochs times the training sample size is constant; all other hyperparameters are kept constant. The Sally method works well even with very little data, but plateaus eventually due to the limitations of the local model approximation. The other algorithms learn faster the more information from the simulator is used.

Table V: Computation times of evaluating r̂(x|θ0, θ1) in the different algorithms. We distinguish between steps that have to be calculated once per x and those that have to be repeated for every evaluated value of θ0. These numbers are from one run of our algorithms with default settings on the NYU HPC cluster on machines equipped with Intel Xeon E5-2690v4 2.6 GHz CPUs and NVIDIA P40 GPUs with 24 GB RAM, using a batch of 50 000 events {x_e} and taking the mean over 1 017 values of θ0. The local score regression method and the traditional histogram method are particularly fast, but all techniques are many orders of magnitude faster to evaluate than the matrix element method or optimal observables.
So after investing some training time upfront, all the measurement strategies developed here can be evaluated on any events with very little computational cost and amortize quickly. While this is not the focus of our paper, note that this distinguishes our approaches from the Matrix Element Method and Optimal Observable techniques. These well-established methods require the computationally expensive evaluation of complicated numerical integrals for every evaluation of the likelihood ratio estimator.

Physics results
The most important results of an EFT measurement are observed and expected exclusion contours, either based on asymptotics or on toy experiments. In the asymptotic approach, the expected contours are determined just by the likelihood ratio evaluated on a large "Asimov" data set, as described in Sec. IV A. Figure 15 shows this expected log likelihood ratio in the SM after 36 events over a one-dimensional slice of the parameter space. In Fig. 16 we show the corresponding expected exclusion limits on the two Wilson coefficients. To estimate the robustness of the likelihood ratio estimators, each algorithm is run five times with different choices of the reference hypothesis; independent training, calibration, and evaluation samples; and independent random seeds during training. The lines show the median of the five replicas, while the shaded bands show the envelope. While this error band does not have a clear statistical interpretation, it does provide a diagnostic tool for the variance of the estimators.
A traditional histogram-based analysis of jet p T and ∆φ jj leads to overly conservative results. It is interesting to note that this simple analysis works reasonably well in the region of parameter space with f W > 0 and f W W > 0, which is exactly the part of parameter space where informative high-energy events interfere mostly constructively with the SM amplitude. In the f W < 0 region of parameter space, destructive interference dominates in the important regions of phase space with large momentum transfer. An extreme example is the "amplitude-through-zero" effect shown in the left panel of Fig. 2. Simple histograms with a rough binning generally lead to a poor estimation of the likelihood ratio in such complicated kinematic signatures.
We find that the new ML-based strategies allow us to place visibly tighter constraints on the Wilson coefficients than the doubly differential histogram. In particular the Carl + score and regression + score estimators lead to exclusion contours that are close to the contours based on the true likelihood ratio. In this analysis based on asymptotics, however, it is possible for the estimated contours to be slightly too tight, wrongly marking parameter regions as excluded at a given confidence level. This problem can be mitigated by profiling over systematic uncertainties assigned to the likelihood ratio estimates.
Exclusion limits based on the Neyman construction do not suffer from this issue: contours derived in this way might not be optimal, but they are never wrong. We generate toy experiments to estimate the distribution of the likelihood ratio with respect to the SM for individual events. Repeatedly convolving this single-event distribution with itself, we find the distribution of the likelihood ratio after 36 observed events.

Figure 14: Evolution of the metrics during the training of the parameterized estimators. Unlike Table IV and Fig. 13, these results are not weighted with the prior in Eq. (62); they also do not include calibration. Left: mean squared error of log r̂(x|θ0, θ1). Right: binary cross-entropy of the classification based on ŝ(x|θ0, θ1) between the numerator and denominator samples. The solid grey line shows the "optimal" performance based on the true likelihood ratio. The Cascal and Rascal techniques converge to a performance close to the theoretical optimum. The Carl approach (green), based on minimizing the cross-entropy, shows signs of overfitting. All machine-learning-based methods outperform traditional histograms (dashed orange).

Figure 16: Expected exclusion contours based on asymptotics at 68% CL (innermost lines), 95% CL, and 99.7% CL (outermost lines). We assume 36 observed events and the SM to be true. As test statistic, we use the profile likelihood ratio with respect to the maximum-likelihood estimator. For each estimator, we generate five sets of predictions with different reference hypotheses, independent data samples, and different random seeds. The lines show the median of this ensemble, the shaded error bands the envelope. The new techniques based on machine learning, in particular the Cascal and Rascal techniques, lead to expected exclusion contours very close to those based on the true likelihood ratio. An analysis of a doubly differential histogram leads to much weaker bounds.

Figure 18: Ratio of the estimated likelihood ratio r̂(x|θ0, θ1) to the joint likelihood ratio r(x, z|θ0, θ1), which is conditional on the parton-level momenta and other latent variables. As a benchmark hypothesis we use θ0 = (−0.5, −0.5)^T; the events are drawn according to the SM. The spread common to all methods shows the effect of the smearing on the likelihood ratio. The additional spread in the histogram, Carl, and Rolr methods is due to the poorer performance of these techniques.
The corresponding expected exclusion limits are shown in Fig. 17. Indeed, errors in the likelihood ratio estimation never lead to undercoverage, i.e. the exclusion of points that should not be excluded based on the true likelihood ratio. Again we find that histograms of kinematic observables only allow us to place rather weak bounds on the Wilson coefficients. Sally performs clearly better, with excellent performance close to the SM. Deviations from the optimal bounds become visible at the 2σ level, hinting at the breakdown of the local model approximation there. The best results come once more from the Cascal and Rascal methods. Both of these strategies yield exclusion bounds that are virtually indistinguishable from those based on the true likelihood ratio.
As a side note, a comparison of the expected contours based on asymptotics to those based on the Neyman construction shows the Neyman results to be tighter. This reflects the different test statistics used in the two figures: in the asymptotics case, we use the profile likelihood ratio with respect to the maximum likelihood estimator, which itself fluctuates around the true value of θ (which in our case is assumed to be the SM). In the Neyman construction we use the likelihood ratio with respect to the SM, leading to tighter contours if the true value is in fact close to the SM, and weaker constraints if it is very different.

B. Detector effects
We have now established that the measurement strategies work very well in an idealized setup, where we can compare them to the true likelihood ratio. In a next step, we turn to a setup with a rudimentary smearing function that models the effect of the parton shower and the detector response on the observables. In this setting, the true likelihood is intractable, so we can no longer use it as a baseline to validate the predictions. But we can still discuss the relative ordering of the exclusion contours predicted by the different estimators.

Figure 18 shows the relation of the true joint likelihood ratio r(x, z|θ0, θ1), which is conditional on the parton-level momenta z and other latent variables, to the estimated likelihood ratio r̂(x|θ0, θ1), which only depends on the observables x. We see that this relation is stochastically smeared out around one. Recall that in the idealized scenario the best estimators described the true likelihood ratio perfectly, as shown in Fig. 10. This strongly suggests that the spread visible here is not due to errors of the likelihood ratio estimators, but rather shows the difference between the joint and true likelihood ratios, as illustrated in Fig. 5.

Figure 19: Expected exclusion contours based on the Neyman construction with toy experiments at 68% CL, 95% CL, and 99.7% CL with smearing. We assume 36 observed events and the SM to be true. As test statistic, we use the likelihood ratio with respect to the SM. In the setup with smearing we cannot compare these results to the true likelihood contours. But since the Neyman construction is guaranteed to cover, these expected limits are correct. The new techniques, in particular Cascal (right, dashed red) and Rascal (right, dash-dotted orange), allow us to set much tighter bounds on the Wilson coefficients than a traditional histogram analysis (left, dotted orange).
In Fig. 19 we show the expected exclusion contours based on the Neyman construction, which guarantees statistically correct results. The conclusions from the idealized setting are confirmed: a measurement based on the likelihood ratio estimators leads to robust bounds that are clearly more powerful than those based on a histogram. Once again, the Cascal and Rascal algorithms lead to the strongest limits.

VI. CONCLUSIONS
We have developed and analysed a suite of new analysis techniques for measurements of continuous parameters in LHC experiments based on simulations and machine learning. Exploiting the structure of particle physics processes, they extract additional information from the event generators, and use this information to train precise estimators for likelihood ratios.
Our approach is designed for problems with large numbers of observables, where the likelihood function is not tractable and traditional methods based on individual kinematic variables often perform poorly. It scales well to high-dimensional parameter spaces such as that of effective field theories. The new methods do not require any approximations of the hard process, parton shower, or detector effects, and the likelihood ratio for any event and hypothesis pair can be evaluated in microseconds. These two properties set them apart from the Matrix Element Method or Optimal Observables, which rely on crude approximations of the shower and detector and require the evaluation of typically very expensive integrals.
Using Higgs production in weak boson fusion in the four-lepton mode as a specific example process, we have evaluated the performance of the different methods and compared them to a classical analysis of the jet momentum and azimuthal angle between the tagging jets. We find that the new algorithms provide very precise estimates of arbitrary likelihood ratios. Using them as test statistics allows us to impose significantly tighter constraints on the EFT coefficients than the traditional kinematic histograms.
Out of the several methods introduced and discussed in this paper, two stand out. The first, which we call Sally, is designed for parameter regions close to the Standard Model:
1. As training data, the algorithm requires a sample of fully simulated events, each accompanied by the corresponding joint score at the SM: the relative change of the parton-level likelihood of the momenta associated with this event under small changes of the theory parameters away from the SM. This can be calculated by evaluating the squared matrix element at the same phase-space points for different theory parameters. We can thus extract this quantity from Monte-Carlo generators such as MadGraph.
2. Regressing on this data, we train an estimator (for instance realized as a neural network) that takes as input an observation and returns the score at the SM. This function compresses the high-dimensional observable space into a vector with as many components as parameters of interest. If the parameter space is high-dimensional, this can be even further compressed into the scalar product between the score vector and the difference between two parameter points.
3. The estimated score (or the scalar product between score and parameter difference) can then be treated like any set of observables in a traditional analysis. We can fill histograms of this quantity for different hypotheses, and calculate likelihood ratios from them.
There are two key ideas that underlie this strategy. First, note that the training data contain only the joint score, which depends on the parton-level four-momenta of an event. But during the training the estimator converges to the actual score of the distribution of the observables, i.e. the relative change of the actual likelihood function under infinitesimal changes of the parameters. We have proven this powerful, yet surprisingly simple relation in this paper.
Second, close to the Standard Model (or any other reference parameter point), the score provides the sufficient statistics: it encapsulates all information on the local approximation of the statistical model. In other words, if the score is estimated well, the dimensionality reduction from high-dimensional observables into a low-dimensional vector does not lose any information on the parameters. The estimated score is a machine-learning version of the Optimal Observable idea, but requires neither approximations of the parton shower and detector nor numerically expensive integrals.
As a matter of fact, the dimensionality reduction can be taken one step further. We have introduced the Sallino technique that compresses the estimated score vector to a single scalar function, again without loss of power in the local approximation, and independent of the number of theory parameters.
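In formulas (our schematic restatement of the local approximation discussed above, with θ_ref the reference point, e.g. the SM), the score and the local model read

```latex
t(x|\theta_{\mathrm{ref}}) = \nabla_\theta \log p(x|\theta)\Big|_{\theta_{\mathrm{ref}}},
\qquad
\log r(x|\theta, \theta_{\mathrm{ref}}) \approx t(x|\theta_{\mathrm{ref}}) \cdot (\theta - \theta_{\mathrm{ref}}),
```

so that for a hypothesis pair (θ 0 , θ 1 ) in the neighborhood of θ_ref the single scalar h(x) = t(x|θ_ref) · (θ 0 − θ 1 ) carries the relevant information, independent of the dimensionality of θ.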
In our example process, these simple and robust analysis strategies work remarkably well, especially close to the Standard Model. Deviations appear at the 2σ level, but even there they allow for much stronger constraints than a traditional analysis of kinematic variables. They also require significantly less data to train than the other discussed methods. Since the Sallino method can compress any observation into a single number without losing much sensitivity, even for hundreds of theory parameters, this approach scales exceptionally well to high-dimensional parameter spaces, as in the case of the SMEFT.
The second algorithm we want to highlight here is the Rascal technique. Using even more information available from the simulator, it learns a parameterized likelihood ratio estimator: one function that takes both the observation and a theory parameter point as input and returns an estimate for the likelihood ratio between this point and a reference hypothesis given the observation. This estimator is constructed as follows:
1. Training this parameterized estimator requires data for many different values of the tested parameter point (the numerator in the likelihood ratio). For simplicity, the reference hypothesis (the denominator in the likelihood ratio) can be kept fixed. For each of these hypothesis pairs, event samples are generated according to the numerator and denominator hypotheses.
In addition, we extract the joint likelihood ratio from the simulator: essentially the squared matrix element according to the numerator theory parameters divided by the squared matrix element according to the denominator hypothesis, evaluated at the generated parton-level momenta. Again, we also need the joint score, i. e. the relative change of the parton-level likelihood function under infinitesimal changes of the theory parameters. Both quantities can be extracted from matrix element codes.

2. A neural network models the estimated likelihood ratio as a function of both the observables and the value of the theory parameters (of the numerator in the likelihood ratio). We can calculate the gradient of the network output with respect to the theory parameters and thus also the estimated score. The network is trained by minimizing the squared error of the likelihood ratio plus the squared error of the score, in both cases with respect to the joint quantities extracted from the simulator.
3. After the training phase, the likelihood ratio can optionally be calibrated, for instance through isotonic regression.
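Schematically (our shorthand, suppressing sampling details and using α for the relative weighting of the two terms), the loss minimized in step 2 is

```latex
L[\hat r] = \mathbb{E}\Big[
\big( \hat r(x|\theta_0, \theta_1) - r(x, z|\theta_0, \theta_1) \big)^2
+ \alpha \, \big| \nabla_{\theta_0} \log \hat r(x|\theta_0, \theta_1) - t(x, z|\theta_0) \big|^2
\Big],
```

where r(x, z|θ 0 , θ 1 ) and t(x, z|θ 0 ) are the joint quantities extracted from the simulator.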
This technique relies on a trick similar to the local score regression method: the likelihood ratio estimator learned during training converges to the true likelihood ratio, even though the joint ratio in the training data is conditional on the parton-level momenta. The Rascal method is among the best-performing of all analysed techniques. It requires significantly smaller training samples than all other approaches, with the exception of Sally and Sallino. Expected exclusion limits derived in this way are virtually indistinguishable from those based on the true likelihood ratio.
On top of these two approaches, we have developed, analysed, and compared several other methods. We refer the reader to the main part and the appendices of this document, where all these algorithms are discussed in depth.
All tools developed here are suitable for large-scale LHC analyses. On the software side, only a few modifications of existing tools are necessary. Most importantly, matrix-element generators should provide a user-friendly interface to calculate the squared matrix element for a given configuration of four-momenta and a given set of physics parameters. With such an interface, one could easily calculate the joint score and joint likelihood ratio data that are needed for the new algorithms. The training of the estimators is then straightforward, in particular for the Sally and Sallino methods. The limit setting follows established procedures, either based on the Neyman construction with toy experiments, or (since the tools provide direct estimates for the likelihood ratio) using asymptotic formulae.
While we have focussed on the example of effective field theory measurements, these techniques equally apply to other measurements of continuous parameters in collider experiments as well as to a large class of problems outside of particle physics [53]. Some of the techniques can also be applied to improve the training of machine-learning-based classifiers. Finally, while we restricted our analysis to frequentist confidence intervals, as is common in particle physics, the same ideas can be used in a Bayesian setting.
All in all, we have presented a range of new inference techniques based on machine learning, which exploit the structure of particle physics processes to augment training data. They scale well to large-scale LHC analyses with many observables and high-dimensional parameter spaces. They do not require any approximations of the hard process, parton shower, or detector effects, and the likelihood ratio can be evaluated in microseconds. In an example analysis, these new techniques have demonstrated the potential to substantially improve the precision and new physics reach of the LHC legacy results.
The distribution of the jet properties given the quark momenta is modeled with Gaussian smearing functions. Here N (x|µ, σ 2 ) is the Gaussian distribution with mean µ and variance σ 2 . The jet energy resolution parameters a i , b i , c, d i , and e i are based on the default settings of the jet transfer function in MadWeight [91]. The functions E 0 (p T , η, m) and p T 0 (E, η, m) refer to the energy and transverse momentum corresponding to an on-shell particle with mass m.

Model almanac
In Sec. III A we developed different estimators for the likelihood ratio, focussing on the key ideas over technical details. Here we fill in the gaps, explain all strategies in a self-contained way, and document the settings we use for our example process. To facilitate their comparison, we describe all models in terms of a "training" and an "evaluation" part, even if this language is not typically used e.g. for histograms.

a. Histograms

Idea: The traditional approach is based on histograms of one or at most a few kinematic observables v. Typical choices are the reconstructed energies, momenta, angles, or invariant masses of particles. Choosing the right set of observables for a given measurement problem is all but trivial, but many processes have been studied extensively in the literature. Once this choice is made, this strategy is simple, fast, and intuitive. We illustrate the information used by this approach in the top panels of Fig. 20.
Requirements: The histogram approach can be used in the general likelihood-free setting: it only requires a simulator that can generate samples {x} ∼ p(x|θ).
Structure: Histograms are most commonly used point by point in θ. If the problem has the morphing structure discussed in Sec. II C 2, they can also be applied in a morphing-aware parameterized way (we have not implemented this for our example process).
Training: After generating samples for both the numerator and denominator hypotheses, the values of the chosen kinematic variables v(x) are extracted, and binned into separate histograms for the two hypotheses.
Calibration: With sufficient training data, histograms should be well calibrated, so we do not experiment with an additional calibration stage.
Evaluation: To estimate the likelihood ratio between two hypotheses θ 0 and θ 1 for a given set of observables x, one has to extract the kinematic variables v(x) and look up the corresponding bin contents in the histograms for θ 0 and θ 1 . Assuming equal binning for both histograms, the likelihood ratio is simply estimated as the ratio of bin contents.
Parameters: The only parameters of this approach are the choices of kinematic observables and the histogram binning.
In our example process, we consider seven different variants:
• A one-dimensional histogram of the transverse momentum p T,j1 of the leading (higher-p T ) jet with 80 bins.
• A one-dimensional histogram of the absolute value of the azimuthal angle ∆φ jj between the two jets with 20 bins.
• A "coarse" two-dimensional histogram of these two variables with 10 bins in the p T,j1 direction and 5 bins along ∆φ jj .
• A "medium" two-dimensional histogram of these two variables with 20 bins in the p T,j1 direction and 10 bins along ∆φ jj .
• A "fine" two-dimensional histogram of these two variables with 30 bins in the p T,j1 direction and 15 bins along ∆φ jj .
• A "very fine" two-dimensional histogram of these two variables with 50 bins in the p T,j1 direction and 20 bins along ∆φ jj .
• An "asymmetric" two-dimensional histogram of these two variables with 50 bins in the p T,j1 direction and 5 bins along ∆φ jj .
For each pair (θ 0 , θ 1 ) and each observable, the bin edges are chosen such that the sum of the expected number of events according to θ 0 and the expected number of events according to θ 1 is the same in each bin.
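As a concrete illustration of the training and evaluation steps above, here is a minimal NumPy sketch. The equalized binning, the Gaussian toy samples, and all function names are our own simplifications, not part of the paper's setup:

```python
import numpy as np

def equalized_bin_edges(v0, v1, n_bins):
    """Bin edges chosen as quantiles of the pooled theta_0 + theta_1 sample,
    so the summed expected event count is the same in each bin."""
    pooled = np.concatenate([v0, v1])
    edges = np.quantile(pooled, np.linspace(0.0, 1.0, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch under- and overflow
    return edges

def estimate_ratio(v_obs, v0, v1, n_bins=20):
    """Likelihood ratio estimate: ratio of normalized bin contents."""
    edges = equalized_bin_edges(v0, v1, n_bins)
    n0, _ = np.histogram(v0, bins=edges)
    n1, _ = np.histogram(v1, bins=edges)
    ratios = (n0 / n0.sum()) / np.clip(n1 / n1.sum(), 1e-12, None)
    return ratios[np.digitize(v_obs, edges) - 1]

# Toy demo: two "hypotheses" that only shift the mean of one observable
rng = np.random.default_rng(0)
v0 = rng.normal(0.5, 1.0, 100_000)  # events generated from theta_0
v1 = rng.normal(0.0, 1.0, 100_000)  # events generated from theta_1
r_hat = estimate_ratio(np.array([1.5, -1.5]), v0, v1)
```

In this toy, the estimated ratio exceeds 1 where the numerator hypothesis is more likely and falls below 1 on the other side, as expected.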

b. Approximate Frequentist Computation ( Afc)
Idea: Approximate Bayesian Computation is a very common technique for likelihood-free inference in a Bayesian setup. In its simplest form it accepts samples with the probability given in Eq. (29). This amounts to an approximation of the likelihood function through kernel density estimation, which we can isolate from the Abc sampling mechanism and use in a frequentist setting. We call it Approximate Frequentist Computation (AFC) to stress the relation to Abc. Just as Abc or the histogram approach, it requires the choice of a summary statistics v(x), which in our example process we take to be a two-dimensional or five-dimensional subset of the kinematic variables.
Requirements: AFC can be used in the general likelihood-free setting: it only requires a simulator that can generate samples {x} ∼ p(x|θ).
Structure: We use AFC point by point in θ. If the problem has the morphing structure discussed in Sec. II C 2, it can also be applied in a morphing-aware parameterized way.
Training: For each event in the numerator and denominator training samples, the summary statistics v(x) are calculated and saved.
Calibration: AFC can be calibrated as any other technique on this list, but we left this for future work.
Evaluation: The summary statistics v(x) is extracted from the observation. For numerator and denominator hypothesis separately, the likelihood at this point is estimated with Eq. (30). The likelihood ratio estimate is then simply given by the ratio between the estimated numerator and denominator densities.
Parameters: Just as for histograms, the choice of the summary statistics is the most important parameter. The performance of AFC also crucially depends on the kernel and bandwidth ε: too small a bandwidth makes large training samples necessary, while too large a bandwidth leads to oversmoothing and a loss of information.
In our example process, we consider two different variants:
• A two-dimensional summary statistics space of the leading jet p T and ∆φ jj (see above). Both variables are rescaled to zero mean and unit variance.
• A five-dimensional summary statistics space of the leading jet p T , ∆φ jj , the dijet invariant mass m jj , the separation in pseudorapidity between the jets ∆η jj , and the invariant mass of the lighter (off-shell) reconstructed Z boson m Z2 . All variables are rescaled to zero mean and unit variance.
We use Gaussian kernels with bandwidths between 0.01 and 0.5.
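The evaluation step can be sketched as follows; this is our own minimal NumPy illustration with a Gaussian kernel and toy one-dimensional data, not the configuration used in the paper:

```python
import numpy as np

def kde_density(v_obs, v_sample, bandwidth):
    """Kernel density estimate with a Gaussian kernel of width `bandwidth`.
    v_obs: (n_obs, d) evaluation points; v_sample: (n_train, d) sample."""
    d2 = ((v_obs[:, None, :] - v_sample[None, :, :]) ** 2).sum(axis=-1)
    norm = (np.sqrt(2.0 * np.pi) * bandwidth) ** v_obs.shape[1]
    return np.exp(-0.5 * d2 / bandwidth**2).mean(axis=1) / norm

def afc_ratio(v_obs, v_num, v_den, bandwidth):
    """AFC estimate: ratio of KDE densities for numerator and denominator."""
    return kde_density(v_obs, v_num, bandwidth) / kde_density(v_obs, v_den, bandwidth)

# Toy demo with a one-dimensional summary statistic
rng = np.random.default_rng(0)
v_num = rng.normal(0.5, 1.0, (20_000, 1))  # summary stats of theta_0 events
v_den = rng.normal(0.0, 1.0, (20_000, 1))  # summary stats of theta_1 events
r_hat = afc_ratio(np.array([[1.0], [-1.0]]), v_num, v_den, bandwidth=0.2)
```

The bandwidth plays exactly the role discussed above: shrinking it reduces the smoothing bias at the price of larger Monte-Carlo fluctuations.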

c. Calibrated classifiers ( Carl)
Idea: Carl was developed in Ref. [34]. The authors showed that the likelihood ratio is invariant under any transformation of the data that is monotonic with the likelihood ratio. In practice, this means that we can train a classifier to discriminate between two samples generated from the numerator and denominator hypotheses and turn the classifier decision function ŝ(x) into an estimator r̂(x) for the likelihood ratio. This relation between ŝ(x) and r̂(x) can follow the ideal relation in Eqs. (18) and (46). But even if this relation does not hold, we can still extract a likelihood ratio estimator from the classifier output through probability calibration.
Requirements: Carl can be used in the general likelihood-free setting: it only requires a simulator that can generate samples {x} ∼ p(x|θ).
Structure: Carl can be used either point by point, in an agnostic parameterized version, or (if the morphing condition in Eq. (6) holds) in a morphing-aware version. Figure 8 illustrates the structure of the estimator in these three cases.
Training: A classifier with decision functionŝ(x|θ 0 , θ 1 ) is trained to discriminate between numerator (label y = 0) and denominator (label y = 1) samples by minimizing the binary cross-entropy given in Eq. (16) (other loss functions are possible, but we have not experimented with them).
In the point-by-point version, the inputs to the classifier are just the observables x, and the events in the numerator sample are generated according to one specific value θ 0 . In the parameterized versions of the estimator, the numerator training samples do not come from a single parameter θ 0 , but rather a combination of many different subsamples. In the agnostic parameterized setup, the value of θ 0 used in each event is then one of the inputs to the neural network. In the morphing-aware versions, it is used to calculate the weights w c (θ 0 ) that multiply the different component networks r̂ c (x), as visualized in the bottom panel of Fig. 8.
Calibration: In a next step, the classifier output is optionally calibrated as discussed in Sec. III D 1 using isotonic regression. The calibration curve is shown in the left panel of Fig. 21. We have also experimented with an additional step of expectation calibration, see Sec. III D 2.
Evaluation: For a given x (and in the parameterized versions θ 0 ), the classifier decision function ŝ(x|θ 0 , θ 1 ) is evaluated. This is turned into a likelihood ratio estimator with the relation given in Eq. (18), and optionally calibrated.
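To make the training and evaluation steps concrete, here is a deliberately minimal NumPy sketch in which the neural network is replaced by logistic regression on a single toy observable (our own illustration; the inversion r̂ = (1 − ŝ)/ŝ is the ideal classifier-to-ratio relation referred to in the text):

```python
import numpy as np

def train_classifier(x0, x1, lr=0.5, epochs=300):
    """Logistic-regression stand-in for the Carl classifier: numerator events
    carry label y=0, denominator events label y=1; the binary cross-entropy
    is minimized by full-batch gradient descent."""
    x = np.concatenate([x0, x1])
    y = np.concatenate([np.zeros(len(x0)), np.ones(len(x1))])
    w = b = 0.0
    for _ in range(epochs):
        s = 1.0 / (1.0 + np.exp(-(w * x + b)))  # decision function s_hat(x)
        w -= lr * np.mean((s - y) * x)           # cross-entropy gradient in w
        b -= lr * np.mean(s - y)                 # cross-entropy gradient in b
    return w, b

def carl_ratio(x, w, b):
    """Invert the classifier output into a likelihood ratio estimate,
    r_hat = (1 - s_hat) / s_hat."""
    s = 1.0 / (1.0 + np.exp(-(w * x + b)))
    return (1.0 - s) / s

# Toy demo: numerator and denominator hypotheses shift the observable's mean
rng = np.random.default_rng(0)
w, b = train_classifier(rng.normal(0.5, 1.0, 20_000), rng.normal(0.0, 1.0, 20_000))
r_hi, r_lo = carl_ratio(1.5, w, b), carl_ratio(-1.5, w, b)
```

For this Gaussian toy the optimal decision function is exactly a logistic function of x, so even this tiny model recovers the correct ordering of the likelihood ratio.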
Parameters: The parameters of this approach are the architecture of the neural network, i. e. the number of layers and elements, the activation function, the optimizer used for training, its parameters, and optionally regularization terms.
For our example process we consider fully connected neural networks with two ("shallow"), three, or five ("deep") layers of 100 neurons each and tanh activation functions.

d. Regression on the likelihood ratio ( Rolr)

Idea: Particle physics generators do not only provide sets of observables {x e }, but also the corresponding parton-level momenta {z e }. From matrix element codes such as MadGraph we can extract the squared matrix element |M| 2 (z|θ) for given parton-level momenta z and theory parameter points θ. This allows us to calculate the joint likelihood ratio r(x e , z e |θ 0 , θ 1 ) = p(z e |θ 0 ) / p(z e |θ 1 ) = |M| 2 (z e |θ 0 ) / |M| 2 (z e |θ 1 ) for any of the generated events.
In Sec. III B we have shown that regressing a function r̂(x) on the generated events {x e } and the corresponding joint likelihood ratios r(x e , z e |θ 0 , θ 1 ) will converge to the true likelihood ratio r(x|θ 0 , θ 1 ), provided that the events are sampled according to x e ∼ p(x|θ 1 ).

Requirements:
The Rolr technique requires a generator with access to the joint likelihood ratios r(x e , z e |θ 0 , θ 1 ). In the particle physics case, this means we have to be able to evaluate the squared matrix elements for given phase-space points and theory parameters.
Structure: Rolr can be used either point by point, in an agnostic parameterized version, or (if the morphing condition in Eq. (6) holds) in a morphing-aware version. Figure 8 illustrates the structure of the estimator in these three cases.
Training: The training phase is straightforward regression. It consists of minimizing the squared error loss between a flexible functionr(x|θ 0 , θ 1 ) (for instance a neural network) and the training data {x e , r(x e , z e |θ 0 , θ 1 )}, which was generated according to θ 1 .
In the point-by-point version, the inputs to the regressor are just the observables x, and the ratio is between two fixed hypotheses θ 0 and θ 1 . In the parameterized versions of the estimator, the ratios are based on various values θ 0 , while θ 1 is still kept fixed. In the agnostic parameterized setup, the value of θ 0 used in each event is then one of the inputs to the neural network. In the morphing-aware versions, it is used to calculate the weights w c (θ 0 ) that multiply the different component networks r̂ c (x), as visualized in the bottom panel of Fig. 8.
In all cases we can slightly improve the structure by adding samples generated according to θ 0 to the training samples, regressing on 1/r instead of r on these events. The full loss functional is given in Eq. (33).
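This training works because the minimizer of the squared error is the conditional expectation of the joint ratio given x, which equals the intractable ratio of observable-level densities. A small NumPy check of this identity in a toy model with a Gaussian "parton-level" distribution and Gaussian smearing (entirely our own construction, not the paper's simulator):

```python
import numpy as np

rng = np.random.default_rng(1)
theta0, sigma_smear = 0.5, 0.5

# Latent (parton-level) variable sampled from the denominator hypothesis
# theta_1 = 0, and a smeared observable x (stand-in for shower + detector)
z = rng.normal(0.0, 1.0, 400_000)
x = z + rng.normal(0.0, sigma_smear, z.size)

# The joint likelihood ratio depends only on z: the smearing p(x|z) cancels
# between numerator and denominator, like the matrix-element ratio in the text
r_joint = np.exp(theta0 * z - 0.5 * theta0**2)

# The regression minimizer is E[r_joint | x]; approximate it in a narrow window
x_star = 1.0
r_regressed = r_joint[np.abs(x - x_star) < 0.05].mean()

# Closed-form target, available only because this toy smearing is Gaussian:
# p(x|theta) = N(x; theta, 1 + sigma_smear^2)
r_true = np.exp((theta0 * x_star - 0.5 * theta0**2) / (1.0 + sigma_smear**2))
```

The window average reproduces the ratio of smeared densities even though every training label r_joint was computed from the latent z alone.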
Calibration: In a next step, the regressor output is optionally calibrated as discussed in Sec. III D 1 using isotonic regression. The calibration curve is shown in the middle panel of Fig. 21. We have also experimented with an additional step of expectation calibration, see Sec. III D 2.
Evaluation: For a given x (and in the parameterized versions θ 0 ), the regressorr(x|θ 0 , θ 1 ) is evaluated. The result is optionally calibrated.
Parameters: The parameters of this approach are the architecture of the neural network, i. e. the number of layers and elements, the activation function, the optimizer used for training, its parameters, and optionally regularization terms.
For our example process we consider fully connected neural networks with two ("shallow"), three, or five ("deep") layers of 100 neurons each and tanh activation functions.

e. Classification and score regression ( Cascal)

Idea: The parameterized Carl technique learns the full statistical model r̂(x|θ 0 , θ 1 ), including the dependence on θ 0 . If it is realized as a differentiable classifier (such as a neural network), we can calculate the gradient of r̂(x|θ 0 , θ 1 ) with respect to θ 0 , and thus the estimated score of this model. If the estimator is perfect, we expect this estimated score to minimize the squared error with respect to the joint score data available for the training data. This is based on the same argument as the local score regression technique, see Sec. III B for the proof.
We can turn this argument around and use the available score information during the training. To this end, we combine two terms into one loss function: the Carl-style cross-entropy and the squared error between the estimated score and the joint score of the training data. These two pieces contain complementary information: the Carl part captures the likelihood ratio for a fixed hypothesis pair (θ 0 , θ 1 ), while the score part describes the relative change of the likelihood ratio under changes in θ 0 .
Requirements: The Cascal technique requires a generator with access to the joint score t(x e , z e |θ 0 ). In the particle physics case, this means we have to be able to evaluate the squared matrix elements for given phase-space points and theory parameters.
Structure: Since the Cascal method relies on the extraction of the estimated score from the estimator, it can only be used for parameterized estimators, either in an agnostic or morphing-aware version. The middle and bottom panels of Fig. 8 illustrate the structure of the estimator in these two cases.
Training: A differentiable classifier with decision function ŝ(x|θ 0 , θ 1 ) is trained to discriminate between numerator (label y = 0) and denominator (label y = 1) samples, while the derived estimated score t̂(x|θ 0 ) is compared to the joint score on the training samples generated from y = 0. The loss function that is minimized is thus a combination of the Carl-style cross-entropy and the squared error on the score, weighted by a hyperparameter α. It is given in Eq. (35).
The numerator (y = 0) training samples do not come from a single parameter θ 0 , but rather a combination of many different subsamples. In the agnostic parameterized setup, the value of θ 0 used in each event is then one of the inputs to the neural network. In the morphing-aware versions, it is used to calculate the weights w c (θ 0 ) that multiply the different component networksr c (x), as visualized in the bottom panel of Fig. 8.
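Schematically (our shorthand; the precise expression with all sampling details is Eq. (35)), the combined loss has the form

```latex
L[\hat s] \;=\; L_{\mathrm{XE}}[\hat s]
\;+\; \alpha \, \mathbb{E}_{x, z \sim \theta_0}
\Big[ \big| \hat t(x|\theta_0) - t(x, z|\theta_0) \big|^2 \Big],
\qquad
\hat t(x|\theta_0) \;=\; \nabla_{\theta_0} \log \hat r(x|\theta_0, \theta_1),
```

where L_XE is the binary cross-entropy of Eq. (16) and r̂ is obtained from the decision function ŝ through the relation in Eq. (18).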
Calibration: In a next step, the classifier output is optionally calibrated as discussed in Sec. III D 1 using isotonic regression. The calibration curve is shown in the right panel of Fig. 21. We have also experimented with an additional step of expectation calibration, see Sec. III D 2.
Evaluation: For a given x and θ 0 , the classifier decision function ŝ(x|θ 0 , θ 1 ) is evaluated. This is turned into a likelihood ratio estimator with the relation given in Eq. (18), and optionally calibrated.

Parameters:
The key hyperparameter of this technique is the factor α that weights the two terms in the loss function. Additional parameters set the architecture of the neural network, i. e. the number of layers and elements, the activation function, the optimizer used for training, its parameters, and optionally regularization terms.
For our example process we consider fully connected neural networks with two ("shallow"), three, or five ("deep") layers of 100 neurons each and tanh activation functions.

f. Ratio and score regression ( Rascal)

Idea: The parameterized Rolr technique learns the full statistical model r̂(x|θ 0 , θ 1 ), including the dependence on θ 0 . If it is realized as a differentiable regressor, we can calculate the gradient of r̂(x|θ 0 , θ 1 ) with respect to θ 0 , and thus the estimated score of this model. If the estimator is perfect, we expect this estimated score to minimize the squared error with respect to the joint score data available for the training data.
We can turn this argument around and use the available likelihood ratio and score information during the training. To this end, we combine two terms into one loss function: the squared errors on the ratio and on the score. These two pieces contain complementary information: the ratio regression part captures the likelihood ratio for a fixed hypothesis pair (θ 0 , θ 1 ), while the score part describes the relative change of the likelihood ratio under changes in θ 0 .

Requirements:
The Rascal technique requires a generator with access to the joint likelihood ratio r(x e , z e |θ 0 , θ 1 ) and score t(x e , z e |θ 0 ). In the particle physics case, this means we have to be able to evaluate the squared matrix elements for given phase-space points and theory parameters.
Structure: Since the Rascal method relies on the extraction of the estimated score from the estimator, it can only be used for parameterized estimators, either in an agnostic or morphing-aware version. The middle and bottom panels of Fig. 8 illustrate the structure of the estimator in these two cases.
Training: An estimator r̂(x|θ 0 , θ 1 ) is trained through regression on the joint likelihood ratio, while the derived estimated score t̂(x|θ 0 ) is compared to the joint score on the training samples generated from y = 0. The loss function that is minimized is thus a combination of the squared error on the ratio and the squared error on the score, weighted by a hyperparameter α. It is given in Eq. (37).
The numerator (y = 0) training samples do not come from a single parameter θ 0 , but rather a combination of many different subsamples. In the agnostic parameterized setup, the value of θ 0 used in each event is then one of the inputs to the neural network. In the morphing-aware versions, it is used to calculate the weights w c (θ 0 ) that multiply the different component networksr c (x), as visualized in the bottom panel of Fig. 8.
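A self-contained NumPy sketch of this training loop for a one-dimensional toy model. Everything here — the Gaussian toy process, the quadratic-in-θ ansatz, the hyperparameters — is our own illustration, not the paper's neural network:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setup: p(x|theta) = N(x; theta, 1) with reference theta_1 = 0, so that
# log r(x|theta, 0) = theta*x - theta^2/2 and the joint score is t = x - theta
n = 5000
x = rng.normal(0.0, 1.0, n)            # events from the reference hypothesis
theta = rng.uniform(-0.5, 0.5, n)      # a different numerator theta per event
r_joint = np.exp(theta * x - 0.5 * theta**2)
t_joint = x - theta

# Parameterized model log r_hat(x|theta) = theta*(w1*x + b1) + theta^2*(w2*x + b2),
# rich enough to contain the truth (w1 = 1, b1 = 0, w2 = 0, b2 = -1/2)
w1 = b1 = w2 = b2 = 0.0
alpha, lr = 5.0, 0.01
for _ in range(3000):
    log_r = theta * (w1 * x + b1) + theta**2 * (w2 * x + b2)
    r_hat = np.exp(log_r)
    t_hat = (w1 * x + b1) + 2 * theta * (w2 * x + b2)  # d log r_hat / d theta
    # Rascal-style loss: squared error on the ratio + alpha * squared error
    # on the score; gradients are written out by hand for this tiny model
    dr = 2 * (r_hat - r_joint) * r_hat
    dt = 2 * alpha * (t_hat - t_joint)
    g_w1 = np.mean(dr * theta * x + dt * x)
    g_b1 = np.mean(dr * theta + dt)
    g_w2 = np.mean(dr * theta**2 * x + dt * 2 * theta * x)
    g_b2 = np.mean(dr * theta**2 + dt * 2 * theta)
    w1, b1, w2, b2 = w1 - lr * g_w1, b1 - lr * g_b1, w2 - lr * g_w2, b2 - lr * g_b2

# After training: estimated ratio at (x=1, theta=0.3) and score at (x=1, theta=0)
r_test = np.exp(0.3 * (w1 + b1) + 0.09 * (w2 + b2))
t_test = w1 + b1
```

Because the true ratio lies inside the model family, both loss terms share the same global minimum, and the trained parameters reproduce the closed-form ratio exp(0.3·1 − 0.045) and score x − θ = 1.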
Calibration: In a next step, the classifier output is optionally calibrated as discussed in Sec. III D 1 using isotonic regression. The calibration curve is shown in the right panel of Fig. 21. We have also experimented with an additional step of expectation calibration, see Sec. III D 2.

Parameters:
The key hyperparameter of this technique is the factor α that weighs the two terms in the loss function. Additional parameters set the architecture of the neural network, i. e. the number of layers and elements, the activation function, the optimizer used for training, its parameters, and optionally regularization terms.
For our example process we consider fully connected neural networks with two ("shallow"), three, or five ("deep") layers of 100 neurons each and tanh activation functions. They are trained with the Adam optimizer [83] over 50 epochs with early stopping and learning rate decay. Our default settings are given in Table VI. Experiments with different architectures, other activation functions, additional dropout layers, other optimizers, and different learning rate schedules yielded a worse performance.
g. Local score regression and density estimation ( Sally)

Idea: In Sec. III A 2 we introduced the score, the gradient of the log likelihood with respect to the theory parameters. The score evaluated at some reference parameter point is the sufficient statistics of the local approximation of the likelihood given in Eq. (15). In other words, we expect the score vector to be a set of "optimal observables" that includes all the information on the theory parameters, at least in the vicinity of the reference parameter point. If we can estimate the score from an observation, we can use it like any other set of observables. In particular, we can fill histograms of the score for any parameter point and thus estimate the likelihood ratio in score space.
To estimate the score, we again make use of the particle physics structure. Particle physics generators do not only provide sets of observables {x e }, but also the corresponding parton-level momenta {z e }. From matrix element codes such as MadGraph we can extract the squared matrix element |M| 2 (z|θ) for given parton-level momenta z and theory parameter points θ. This allows us to calculate the joint score t(x e , z e |θ 0 ) = ∇ θ log p(z e |θ), evaluated at θ = θ 0 , for any of the generated events. The derivatives in Eq. (A6) can always be evaluated numerically. If the process has the morphing structure of Eq. (6), one can alternatively calculate it from the morphing weights.
In Sec. III B we have shown that regressing a function t̂(x) on the generated events {x e } and the corresponding joint scores t(x e , z e |θ 0 ) will converge to the true score t(x|θ 0 ), provided that the events are sampled according to x e ∼ p(x|θ 0 ).
This technique is illustrated in the middle panels of Fig. 20. The middle panel of Fig. 21 shows the relation between the scalar product of estimated score and θ 0 − θ 1 and the estimated likelihood ratio.

Requirements:
The Sally technique requires a generator with access to the joint score t(x e , z e |θ 0 ).
In the particle physics case, this means we have to be able to evaluate the squared matrix elements for given phase-space points and theory parameters.
Structure: The technique consists of two separate steps: the score regression and the density estimation in the estimated score space. The score regression step is independent of the tested hypothesis and realized as a simple fully connected neural network. The subsequent density estimation is realized through multi-dimensional histograms, point by point in parameter space (if the morphing condition in Eq. (6) holds, a morphing-aware version is also possible).
Training: The first part of the training is regression on the score (evaluated at some reference hypothesis). It consists of minimizing the squared error loss between a flexible vector-valued function t̂(x|θ score ) (implemented for instance as a neural network) and the training data {x e , t(x e , z e |θ score )}, which was sampled according to θ score .
The second step is density estimation in the estimated score space. We only consider histograms, but other density estimation techniques are also possible. For each value of θ 0 or θ 1 that is tested, we generate samples of events, estimate the corresponding score vectors, and fill a multidimensional histogram of the estimated score.
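The score regression step can be illustrated with a NumPy toy in which a linear least-squares fit stands in for the neural network (the Gaussian "simulator" and smearing are our own assumptions for the sketch):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the simulator: parton-level z from the reference point
# theta = 0, observable x after Gaussian "shower + detector" smearing
z = rng.normal(0.0, 1.0, 200_000)
x = z + rng.normal(0.0, 0.5, z.size)

# Joint score at the reference for p(z|theta) = N(z; theta, 1):
# grad_theta log p(z|theta) evaluated at theta = 0 is simply z
t_joint = z

# Score regression with a linear model t_hat(x) = w*x + b (a neural network
# in the paper; closed-form least squares suffices for this Gaussian toy)
A = np.stack([x, np.ones_like(x)], axis=1)
(w, b), *_ = np.linalg.lstsq(A, t_joint, rcond=None)
```

Even though every training label t_joint depends on the latent z, the regressor converges to the score of the smeared model, ∇_θ log N(x; θ, 1 + 0.25)|_0 = x/1.25, i.e. w → 0.8 and b → 0 — a numerical check of the convergence property quoted above.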
Calibration: The density estimation step already calibrates the results, so we do not experiment with an additional calibration step.
Evaluation: For a given observation x, the score regressor t̂(x) is evaluated. For each tested (θ 0 , θ 1 ) pair, we then extract the corresponding bin contents from the numerator and denominator histograms, and calculate the estimated likelihood ratio with Eq. (38).
Parameters: Both the score regression part and the subsequent density estimation have parameters. The first and most important choice is the reference hypothesis θ score , at which the score is evaluated. For effective field theories the Standard Model is the natural choice, and we use it in our example process.
The score regression also depends on the hyperparameters of the neural network, i.e. the number of layers and units, the activation function, the optimizer used for training, its parameters, and optionally regularization terms. For our example process we consider fully connected neural networks with two ("shallow"), three, or five ("deep") layers of 100 neurons each and tanh activation functions. They are trained with the Adam optimizer [83] over 50 epochs with early stopping and learning rate decay. Our default settings are given in Table VI. Experiments with different architectures, other activation functions, additional dropout layers, other optimizers, and different learning rate schedules yielded worse performance.
The only parameter of the density estimation stage is the histogram binning. For our example process we consider two different variations:
• Density estimation with a "fixed" binning, where the bin axes are aligned with the score components. We use 40 bins for each of the two score components.
• Density estimation with a "dynamic" binning, in which the bin axes are aligned with the θ 0 − θ 1 direction and the orthogonal one. We use 80 bins along the ∆θ direction, which carries the relevant information in the local model approximation, and 10 along the orthogonal vector.
For each pair (θ 0 , θ 1 ) and each dimension, the bin edges are chosen such that the expected number of events according to θ 0 plus the expected number of events according to θ 1 is the same in each bin.
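For unweighted events, this equal-expected-count prescription reduces to taking quantiles of the pooled sample. A short numpy sketch (the Gaussian projections and the helper name are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def equal_population_edges(values_num, values_den, n_bins):
    """Bin edges chosen so that each bin holds the same expected number of
    events from theta_0 plus theta_1; with unweighted events this is simply
    the set of quantiles of the pooled sample."""
    pooled = np.concatenate([values_num, values_den])
    return np.quantile(pooled, np.linspace(0.0, 1.0, n_bins + 1))

# Hypothetical projections of the estimated score onto the delta-theta
# direction for the two hypotheses (illustrative numbers only).
proj_num = rng.normal(0.5, 1.0, size=10000)
proj_den = rng.normal(-0.5, 1.0, size=10000)
edges = equal_population_edges(proj_num, proj_den, n_bins=80)

# The pooled sample is now spread (almost) evenly over the 80 bins.
counts, _ = np.histogram(np.concatenate([proj_num, proj_den]), bins=edges)
```

For weighted event samples the same idea applies with weighted quantiles of the combined expected yields.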
h. Local score regression, compression to scalar, and density estimation (Sallino)

Idea: In the proximity of the Standard Model (or any other reference parameter point), likelihood ratios only depend on the scalar product between the score and the difference between the numerator and denominator parameter points. If we can estimate the score from an observation, we can calculate this scalar product ĥ(x|θ 0 , θ 1 ), defined in Eq. (39), and use it like any other observable. In particular, we can fill histograms of ĥ for any parameter point and thus estimate the likelihood ratio in ĥ space.
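The compression to a scalar is a single dot product per event and tested parameter pair; a sketch with purely illustrative numbers (neither the score values nor the parameter points come from the example process):

```python
import numpy as np

# Illustrative values only: an estimated 2-component score for one event and
# a tested pair of parameter points.
t_hat = np.array([0.8, -1.2])        # estimated score t_hat(x)
theta_0 = np.array([0.0, 0.0])       # reference point, e.g. the Standard Model
theta_1 = np.array([0.5, -0.3])      # tested alternative

# The Sallino summary is the scalar product with the parameter difference;
# one scalar per event, which can then be histogrammed like any observable.
h_hat = float(t_hat @ (theta_0 - theta_1))
```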
To estimate the score, we once again exploit the particle physics structure. Particle physics generators do not only provide sets of observables {x e }, but also the corresponding parton-level momenta {z e }. From matrix-element codes such as MadGraph we can extract the squared matrix element |M| 2 (z|θ) for given parton-level momenta z and theory parameter points θ. This allows us to calculate the joint score with Eq. (A6) for any of the generated events.
In Sec. III B we have shown that regressing a function t̂(x) on the generated events {x e } and the corresponding joint scores t(x e , z e |θ 0 ) will converge to the true score t(x|θ 0 ), provided that the events are sampled according to x e ∼ p(x|θ 0 ).

Requirements: The Sallino technique requires a generator with access to the joint score t(x e , z e |θ 0 ). In the particle physics case, this means we have to be able to evaluate the squared matrix elements for given phase-space points and theory parameters.
Structure: The technique consists of two separate steps: the score regression, and the density estimation in ĥ space. The score regression step is independent of the tested hypothesis and realized as a simple fully connected neural network. The subsequent density estimation in ĥ space is realized through one-dimensional histograms, point by point in parameter space (if the morphing condition in Eq. (6) holds, a morphing-aware version is also possible).
Training: The first part of the training is regression on the score (evaluated at some reference hypothesis). It consists of minimizing the squared error loss between a flexible vector-valued function t̂(x|θ score ) (implemented for instance as a neural network) and the training data {x e , t(x e , z e |θ score )}, which was sampled according to θ score .
The second step is density estimation in ĥ space. We only consider histograms, but other density estimation techniques are also possible. For each value of θ 0 or θ 1 that is tested, we generate samples of events, estimate the corresponding score vectors, calculate the scalar product in Eq. (39) to get ĥ(x|θ 0 , θ 1 ), and fill a one-dimensional histogram of this quantity.
Calibration: The density estimation step already calibrates the results, so we do not experiment with an additional calibration step.
Evaluation: For a given observation x, the score regressor t̂(x) is evaluated. For each tested (θ 0 , θ 1 ) pair, we multiply it with θ 0 − θ 1 to get ĥ(x|θ 0 , θ 1 ), extract the corresponding bin contents from the numerator and denominator histograms, and calculate the estimated likelihood ratio with Eq. (40).
Parameters: Both the score regression part and the subsequent density estimation have parameters. The first and most important choice is the reference hypothesis θ score , at which the score is evaluated. For effective field theories the Standard Model is the natural choice, and we use it in our example process.
The score regression also depends on the hyperparameters of the neural network, i.e. the number of layers and units, the activation function, the optimizer used for training, its parameters, and optionally regularization terms. For our example process we consider fully connected neural networks with two ("shallow"), three, or five ("deep") layers of 100 neurons each and tanh activation functions. They are trained with the Adam optimizer [83] over 50 epochs with early stopping and learning rate decay. Our default settings are given in Table VI.
Experiments with different architectures, other activation functions, additional dropout layers, other optimizers, and different learning rate schedules yielded a worse performance.
The only parameter of the density estimation stage is the histogram binning. We use 100 bins. For each pair (θ 0 , θ 1 ), the bin edges are chosen such that the expected number of events according to θ 0 plus the expected number of events according to θ 1 is the same in each bin.

Additional results
In Tables VII to XII we compare the performance of different versions of the likelihood ratio estimators. As metric we use the expected mean squared error on log r(x|θ 0 , θ 1 ) as well as a trimmed version, as defined in Sec. V A. The estimators are an extended list of those given in Table IV, adding variations with different hyperparameter choices and the results for uncalibrated ("raw") estimators. By default, we use neural networks with three hidden layers; the labels "shallow" and "deep" refer to two and five hidden layers, respectively. We highlight the versions of the estimators that were shown in the main part of this paper.
Because of the duality between density estimation and probabilistic classification (see Eqs. (18) and (46)), we can use all techniques to define classifiers. In Fig. 22 we show the ROC curves for two benchmark parameter points. Note how badly the two scenarios can be separated. This is not a shortcoming of the discrimination power of the classifiers, but due to the genuine overlap of the probability distributions, as can be seen from the identical ROC curve based on the true likelihood ratio.

Table X: Comparison of different versions of the Sally and Sallino techniques. The metrics shown are the expected mean squared error on the log likelihood ratio with and without trimming, as defined in the text. Checkmarks in the last two columns denote estimators included in Table IV and the figures in the main part of this paper, respectively.
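The classifier view of a likelihood ratio estimator can be summarized numerically by the area under the ROC curve. A numpy sketch with hypothetical, strongly overlapping log likelihood ratio samples (the Gaussian toy distributions are assumptions, chosen only to mimic the overlap, not the actual samples behind Fig. 22):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical log likelihood ratio values for events drawn from the two
# benchmark hypotheses; the strong overlap mimics the situation in Fig. 22.
log_r_h0 = rng.normal(0.1, 0.5, size=5000)    # events from theta_0
log_r_h1 = rng.normal(-0.1, 0.5, size=5000)   # events from theta_1

def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a random theta_0 event receives a larger score than a
    random theta_1 event."""
    n_p, n_n = len(scores_pos), len(scores_neg)
    ranks = np.concatenate([scores_pos, scores_neg]).argsort().argsort() + 1
    u = ranks[:n_p].sum() - n_p * (n_p + 1) / 2
    return u / (n_p * n_n)

# Well below perfect separation: the two hypotheses genuinely overlap, so
# even the optimal classifier based on the true ratio stays close to 0.5.
area = float(auc(log_r_h0, log_r_h1))
```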