Improving the Composition of Ultra High Energy Cosmic Rays with Ground Detector Data

We show that the maximum shower depth distributions of Ultra-High Energy Cosmic Rays (UHECRs), as measured by fluorescence telescopes, can be augmented by building a mapping to observables collected by surface detectors. The resulting statistical improvement of such an augmented dataset depends in a universal way on the strength of the correlation exhibited by the mapping. Building upon the publicly available data on "golden hybrid" events from the Pierre Auger Observatory, we project possible improvements in the inferred composition of UHECRs for a range of possible mappings with varying correlation strengths.


I. INTRODUCTION
Cosmic rays (CRs) with ultra-high energy, E ≥ 10^18 eV, are studied through the observation of the Extensive Air Showers (EAS) that they generate in the Earth's atmosphere. The Pierre Auger Observatory [1] employs a "hybrid" detection method: Surface Detectors (SD) measure electrons and muons propagating to the surface, while Fluorescence Detectors (FD) capture the isotropic light emitted by excited nitrogen and can track the development of the EAS. In particular, the latter can measure the atmospheric depth at which the number of charged particles is at its maximum, X_max, which is in turn a key ingredient to infer the mass composition of CRs.
While the hybrid measurement technique provides detailed insights for the EAS study, the runtime of the FDs is largely limited by background light noise. This results in only ∼10% of the total number of observed showers being reconstructed as "hybrid", i.e. with both FD and SD data, while the remainder of the dataset is measured with the SD only. For example, in the Pierre Auger 2021 Open Data [2] there are 22731 SD measurements of EAS, which we refer to as Non-Hybrid (NH) showers, and 3156 "brass hybrid" (BH) events, i.e. showers recorded simultaneously by the SD and the FD. Of these BH, 1602 are called "golden hybrids" (GH), with independent SD and FD reconstructions. The low statistics of BH, and especially GH, events compared to NH is currently a dominant limiting factor in the determination of the CR mass composition.
A possible solution is to infer X_max from ground data, exploiting GH events to build a map between SD and FD observables. Unfortunately, linear and simple nonlinear fits based solely on measured data, such as the Δ variable proposed in Ref. [3], exhibit only relatively low levels of predictive power. Recently, an alternative has been proposed in terms of a Deep Neural Network trained on simulated ground data for 4 different primaries to directly predict X_max [4]. Unfortunately, simulations currently cannot faithfully reproduce the observed ground data of Pierre Auger events [5]. This exposes any approach directly relying on such simulations to potentially large systematic errors. In addition, the chosen composition of the simulated events used to train the network can act as a confounding variable on the inferred X_max. This can in turn lead to biased estimates of the CR composition.
In this work we address these drawbacks and propose a general method to exploit any observed correlation between SD and FD data to effectively increase the statistics of X_max. After briefly reviewing possible maps in Section II, in Section III we show a rigorous way to combine the GH data with the inferred one, which leads to an enhancement of the dataset by an effective number of GH showers, N_eff = f × N_GH. The latter number monotonically increases with the level of correlation provided by the map. Finally, in Section IV we employ this method to improve the results on the full mass composition obtained in Ref. [6].

II. CORRELATING GROUND DETECTOR DATA WITH Xmax
For our purpose the i-th GH event in the dataset can be characterized by one FD measurement, x^(i) ≡ X_max^(i), and a set of m SD observables, {Q^(i)} ≡ (Q_1^(i), ..., Q_m^(i)). We can then consider functions of the SD observables, y({Q}), for which the GH set exhibits some correlation between x and y, as parametrized by

ρ(x, y) = Cov(x, y) / [σ(x) σ(y)] ,   (1)

where σ(x), σ(y) and Cov(x, y) are the standard deviations of x and y and their covariance, respectively. Possibly the simplest example of y is the weighted average

y = Σ_j a_j Q_j ,   (2)

where the weights a_j are fixed by maximizing ρ(x, y)².
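The two ingredients above, the weighted-average observable and the Pearson correlation it is scored by, can be sketched numerically as follows (a toy illustration on synthetic data; the array shapes, weights and numerical values are our own assumptions, not values from the Open Data):

```python
import numpy as np

def weighted_observable(Q, a):
    """Linear combination y = sum_j a_j Q_j of the m SD observables."""
    return Q @ a

def pearson_rho(x, y):
    """rho(x, y) = Cov(x, y) / (sigma(x) * sigma(y))."""
    return np.cov(x, y)[0, 1] / (np.std(x, ddof=1) * np.std(y, ddof=1))

# Toy GH dataset: m = 3 SD observables, each weakly correlated with X_max.
rng = np.random.default_rng(0)
N, m = 455, 3
x = rng.normal(750.0, 60.0, N)                        # X_max values [g/cm^2]
Q = np.outer(x, [0.01, -0.02, 0.005]) + rng.normal(0.0, 5.0, (N, m))

a = np.array([1.0, -1.0, 0.5])                        # trial weights a_j
y = weighted_observable(Q, a)
print(f"rho(x, y) = {pearson_rho(x, y):.2f}")
```

Maximizing `pearson_rho` over the weights `a` then yields the most informative linear combination of ground observables.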
For simplicity, we assume that the PDF of each variable is a normal distribution described by its measurement and the respective uncertainty. Namely, we have

P^(i)(x) = N(x; x^(i), δx^(i)) ,   (3)

and similarly for P^(i)(y). Note that the fit has to be performed separately for each subset of GH events (i.e. in a particular CR energy bin or angular window) for which one wants to infer the composition. This ensures an unbiased estimator, assuming the GH and NH events in the same subset are of the same composition and follow the same distribution of Q.
As a proof of concept, we use the set of observables¹

{Q} = ( θ, mean_s(T_s), std_s(T_s), max_s(T_s) ) ,   (4)

with their respective uncertainties. Here, θ is the zenith angle of the shower axis, while T_s = (t_end − t_start)_s is the duration of the signal in the s-th Cherenkov station hit by the shower, defined as the difference between the final and initial time bins. We take the mean, standard deviation and maximum values over the active stations for each event. For each shower, we generate multiple samples of the observables from their PDFs, Eq. (3). In this way, we can also compute the uncertainty associated with y^(i). Uncertainties on the bins t_start and t_end are not provided in the Open Data, hence we assume the uncertainty on both quantities to be ± one bin. These samples are then utilized to maximize ρ² through the Nelder-Mead method (any similar optimization algorithm would work). Finally, we employ K-fold cross-validation to check for overfitting. Following this procedure, we achieve a ρ = (30 ± 4)% correlation between x and y for GH events in the energy bin E/EeV ∈ [2, 5], with 455 GH showers. We chose this bin as the only one where GH and NH data overlap in the Pierre Auger Open Data set.
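The maximization step can be sketched as follows (hypothetical synthetic observables; scipy's Nelder-Mead implementation stands in for whatever optimizer is used in practice, and the K-fold check is only indicated in a comment):

```python
import numpy as np
from scipy.optimize import minimize

def neg_rho2(a, Q, x):
    """Minus the squared Pearson correlation of y = Q @ a with x."""
    y = Q @ a
    rho = np.cov(x, y)[0, 1] / (np.std(x, ddof=1) * np.std(y, ddof=1))
    return -rho**2

# Toy golden-hybrid sample with m = 4 synthetic SD observables.
rng = np.random.default_rng(1)
N, m = 455, 4
x = rng.normal(750.0, 60.0, N)
Q = np.outer(x, rng.normal(0.0, 0.02, m)) + rng.normal(0.0, 4.0, (N, m))

# Maximize rho^2 over the weights a_j with the Nelder-Mead simplex method.
# In a real analysis this fit would be K-folded to guard against overfitting.
res = minimize(neg_rho2, x0=np.ones(m), args=(Q, x), method="Nelder-Mead")
print(f"optimized rho = {np.sqrt(-res.fun):.2f}")
```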
In practice, more advanced approaches, such as nonlinear fits or neural networks, may be employed to obtain stronger correlations between y and x. As two explicit examples of such more complicated observables, we mention the Δ variable [3] and the Deep Neural Network trained on simulated ground data [4]. In the same energy bin, the former yields ρ = 0.26 ± 0.02², while the latter can reach ρ = 63%. We stress that for our purpose the strength of an observable is measured solely in terms of its correlation with x as established on a particular measured GH dataset {(x, y)}. In this way we avoid potential systematic biases associated with the direct inference of individual x^(i) from non-equivalent datasets or simulations.

¹ We have repeated this exercise with multiple combinations of available observables. The chosen set represents the one with the largest correlation achievable without overfitting the data.
² Uncertainties due to limited statistics and the above-stated systematics have in both cases been estimated using bootstrapping [7].

III. INFERENCE OF Xmax DISTRIBUTION
Following the previous Section, we can represent the GH dataset as a set of N pairs (x_j, y_j), with j = 1, ..., N, and we assume that there is some level of correlation between the two observables. The NH dataset is then represented by the M pairs (x̂_k, ŷ_k), with k = 1, ..., M, where M ≫ N. In the latter set the x̂_k entries, which are missing in the original data, can be inferred by exploiting the map built through the correlation of the {(x, y)} set, that is x̂ = x(ŷ).
Let P(x, y) be the joint probability distribution of the {(x, y)} set. For simplicity, we start by assuming that this distribution follows a Bivariate Normal (BN) model, and comment on the general case later. Each BN distribution is defined by the means μ_x, μ_y, the standard deviations σ_x, σ_y and the correlation ρ. The conditional distribution P(x|y) is then simply given by the normal distribution with mean μ_x + ρ (σ_x/σ_y)(y − μ_y) and standard deviation σ_x √(1 − ρ²). Therefore one can sample x ∼ P(x|y) following the formula

x(ŷ) = μ_x + ρ (σ_x/σ_y)(ŷ′ − μ_y) + σ_x √(1 − ρ²) ε′ ,   (5)

where ŷ′ = ŷ + δŷ ε accounts for the systematic uncertainty of ŷ, and ε, ε′ are independent random numbers drawn from a normal distribution, ε ∼ N(0, 1). In this way we obtain a set of inferred values, {x(ŷ_1), ..., x(ŷ_M)}.
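A minimal sketch of the conditional sampling of Eq. (5) follows; all numerical values (ŷ = 1.2, δŷ = 0.05, and the BN parameters) are purely illustrative assumptions:

```python
import numpy as np

def infer_xmax(y_hat, delta_y, mu_x, mu_y, sig_x, sig_y, rho, rng):
    """Draw x ~ P(x | y) under the bivariate-normal model of Eq. (5)."""
    y_prime = y_hat + delta_y * rng.standard_normal()        # systematic smearing
    mean = mu_x + rho * (sig_x / sig_y) * (y_prime - mu_y)   # conditional mean
    sigma = sig_x * np.sqrt(1.0 - rho**2)                    # conditional width
    return mean + sigma * rng.standard_normal()

# Illustrative parameters: mu_x = 750, mu_y = 1.0, sig_x = 60, sig_y = 0.5,
# rho = 0.6; repeated draws map out the inferred X_max distribution.
rng = np.random.default_rng(2)
samples = np.array([infer_xmax(1.2, 0.05, 750.0, 1.0, 60.0, 0.5, 0.6, rng)
                    for _ in range(10_000)])
print(f"inferred X_max: {samples.mean():.1f} +/- {samples.std():.1f}")
```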
Due to the finite numbers (N and M) of events, we need to account for statistical uncertainties in the inference via bootstrapping [7]. The latter method consists of inferring values x for each ŷ multiple times, each time starting with a different set of pairs {(x̃, ỹ)}, called the bootstrapped sample. The procedure is repeated B times, where each time the bootstrapped sample is obtained by sampling N pairs (x̃_j, ỹ_j) from the first set {(x, y)} with repetitions allowed.
Each pair in the ℓ-th bootstrapped sample is smeared according to its uncertainties,

x̃_{j,ℓ} = x_j + δx_j ε_{j,ℓ} ,   ỹ_{j,ℓ} = y_j + δy_j ε′_{j,ℓ} .

Here δx_j and δy_j represent the systematic uncertainties on x_j and y_j, while ε_{j,ℓ}, ε′_{j,ℓ} ∼ N(0, 1). Finally, the N-dimensional vector O_ℓ is drawn from the multinomial distribution with N events and N classes which are all equally probable, Mult(N, p = (1/N, ..., 1/N)). The values in this array, O_{j,ℓ}, count how many times each pair j is chosen in the ℓ-th bootstrapped sample; they thus satisfy the relation Σ_j O_{ℓ,j} = N. As a result of this procedure, we obtain at each step ℓ a set of N couples, X̃_ℓ = {(x̃_{j,ℓ}, ỹ_{j,ℓ})}. We can now estimate the ℓ-th joint distribution P_ℓ(x̃, ỹ) from the bootstrapped samples. Similarly, we define the smeared NH values

ỹ_{k,ℓ} = ŷ_k + δŷ_k ε_{k,ℓ} ,

where now the index k runs over the ŷ elements, k = 1, ..., M, and Õ_{k,ℓ} ∼ Mult(M, p = (1/M, ..., 1/M)). The ỹ_{k,ℓ} account for both statistical and systematic uncertainties.
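The multinomial bootstrap described here can be sketched as follows (toy data; the values of δx and δy are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
N, B = 455, 1000
x = rng.normal(750.0, 60.0, N)            # toy GH X_max measurements
y = 0.01 * x + rng.normal(0.0, 0.5, N)    # toy correlated SD observable
dx, dy = 20.0, 0.1                        # assumed systematic uncertainties

means = np.empty(B)
for ell in range(B):
    # O[j] counts how many times pair j enters the ell-th bootstrapped
    # sample; by construction O.sum() == N.
    O = rng.multinomial(N, np.full(N, 1.0 / N))
    idx = np.repeat(np.arange(N), O)                  # resampled pair indices
    x_t = x[idx] + dx * rng.standard_normal(N)        # x-tilde, smeared
    y_t = y[idx] + dy * rng.standard_normal(N)        # y-tilde, smeared
    means[ell] = x_t.mean()

# Spread of the bootstrapped means, feeding into Var(x-bar) below.
print(f"Var of bootstrapped means: {means.var():.2f}")
```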
For each bootstrapped sample we can now obtain a set of M pairs,

X̃_ℓ^inf = {(x(ỹ_{k,ℓ}), ỹ_{k,ℓ})} ,

where the first component of each pair is inferred by plugging ỹ_{k,ℓ} into Eq. (5). Finally, we can combine the two sets to obtain

X̃_ℓ^comb = X̃_ℓ ∪ X̃_ℓ^inf .

At this point, one can use the larger combined dataset to perform the desired analysis. As we obtain the x elements from the inference method described above, it is fair to ask what the gain in statistical power of the combined dataset is. To estimate it, we can compute the variance of the distribution of the means, Var(x̄). That is, we compute the mean value of x for each X̃_ℓ set, x̄_ℓ; the resulting distribution of all the x̄_ℓ is a Gaussian with variance Var(x̄). The ratio between the variances obtained with the initial dataset X̃_ℓ and with the combined one,

f = Var(x̄) / Var(x̄^comb) ,

can be interpreted as the effective increase of events in the combined dataset with respect to the original one, N_eff = f × N. This interpretation follows from the fact that the variance of the mean distribution is given by the variance of the dataset divided by the number of events. Namely, f = 1 means no additional information is contained in the inferred set X̃^inf. On the contrary, if f > 1 the effective statistical power of X̃^comb amounts to f × N total events. The functional dependence of f on the correlation ρ, for different values of M and N, is shown in Figs. 1 and 2. Points in these figures are obtained numerically, with the procedure described above, using B = 10^5 bootstrapped samples. Furthermore, we assumed that systematic uncertainties are small, which translates to ρ (σ_x/σ_y) δŷ ≪ σ_x √(1 − ρ²) in Eq. (5). In this scenario, the ratio f increases monotonically with ρ and does not depend on the numbers of events M and N separately, but only on their ratio, see Fig. 1. Larger M/N ratios then allow for larger values of f at the same level of correlation, as shown in Fig. 2.
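Under the stated assumptions (BN model, negligible systematics), the gain f can be estimated numerically with a toy model along the following lines; the function name and all parameter values here are our own choices:

```python
import numpy as np

def effective_gain(rho, N, M, B=2000, seed=0):
    """Toy estimate of f = Var(x-bar) / Var(x-bar_comb) for a bivariate
    normal with correlation rho and negligible systematic uncertainties."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    m_gh, m_comb = np.empty(B), np.empty(B)
    for ell in range(B):
        x, y = rng.multivariate_normal([0.0, 0.0], cov, N).T   # GH pairs
        y_nh = rng.standard_normal(M)                          # NH: y only
        # Infer x via P(x | y) with parameters ESTIMATED from the GH sample;
        # this estimation noise keeps f below its maximum (M + 1) / N.
        r = np.corrcoef(x, y)[0, 1]
        x_inf = (x.mean()
                 + r * x.std(ddof=1) / y.std(ddof=1) * (y_nh - y.mean())
                 + x.std(ddof=1) * np.sqrt(1.0 - r**2) * rng.standard_normal(M))
        m_gh[ell] = x.mean()
        m_comb[ell] = np.concatenate([x, x_inf]).mean()
    return m_gh.var() / m_comb.var()

# e.g. rho = 0.6 with M/N = 50 lands near the f ~ 1.5 quoted in the text
print(round(effective_gain(0.6, 300, 15000, B=500), 2))
```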
In the limit ρ → 1, f tends to its maximal value f = (M + 1)/N. In practice, ρ = 1 cannot be obtained with a finite number of events N due to statistical uncertainties, so the achievable maximum of f is smaller. In the general case, the joint distribution P(x, y) does not follow a BN distribution. One may then try to use Kernel Density Estimation (KDE) to obtain P(x, y); however, if the latter is computed inappropriately, we may introduce a bias in the inferred x values (bias-variance trade-off). For this reason we propose to transform the bootstrapped samples using a Probability Integral Transformation in 2D [8], which has the advantage of preserving the correlation between x and y.
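One simple way to realize such a transformation (our sketch, not necessarily the exact construction of Ref. [8]) maps each marginal to a standard normal via its empirical CDF, after which the BN machinery above applies; by construction the rank correlation between x and y is untouched:

```python
import numpy as np
from scipy.stats import norm, rankdata

def to_gaussian_marginals(x, y):
    """Map each marginal to a standard normal through its empirical CDF
    (a probability integral transform); ranks, and hence the rank
    correlation between x and y, are preserved exactly."""
    u = rankdata(x) / (len(x) + 1)       # empirical CDF values in (0, 1)
    v = rankdata(y) / (len(y) + 1)
    return norm.ppf(u), norm.ppf(v)

# Toy non-Gaussian joint sample with positive correlation.
rng = np.random.default_rng(6)
x = rng.exponential(1.0, 2000)
y = x + rng.exponential(1.0, 2000)
gx, gy = to_gaussian_marginals(x, y)     # Gaussianized marginals
```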
In the Auger Open Data, the NH and GH events in the energy interval E/EeV ∈ [2.5, 10] amount to M ∼ 15700 and N ∼ 300, respectively, i.e. a ratio M/N ∼ 50. For maps between y and x ≡ X_max constructed with linear or nonlinear fits (see the previous Section), the correlation is ρ ≃ 0.3, which corresponds to f ≈ 1.06. On the other hand, the Deep Neural Network presented in Ref. [4] can reach ρ = 63%, resulting in f = 1.4–1.5 for M/N = 10–50.

IV. PROJECTIONS OF UHECR COMPOSITION FROM NON-HYBRID EVENTS
Using the X_max distribution inferred from the combination of GH and NH events, we follow our previous analysis of the mass composition of UHECRs [6]. That is, we infer the posterior distribution of the composition w = (w_p, ..., w_Fe) using the moment decomposition of the measured and simulated X_max distributions in a fixed energy bin. However, due to the small number of events in the energy bin E/EeV ∈ [2.5, 5], we use here the larger energy bin E/EeV ∈ [0.65, 5] for the projections. We use the same set of shower simulations and the same inference procedure as in Ref. [6]; the improvements shown here come solely from considering the enhanced dataset. At each bootstrap step ℓ, we compute the moments of the X_max distribution for n = 2, 3, 4. We then calculate the mean of the distribution of moments, μ, and the covariance matrix, Σ. The effect of the enhanced dataset can be included here by considering the new covariance matrix Σ_eff = Σ/f. In Fig. 3 we show the composition obtained using a mixture model of 4 primaries, (p, He, N, Fe), with the EPOS hadronic model. The black line indicates the best fit, while the bars represent the allowed fractions at 95% CL. Different colors show the effect of including inferred values of X_max for different values of f: from the grey band, obtained for f = 2, that is by doubling the size of the GH dataset, down to the purple one, where f = 1.
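This step can be sketched as follows (toy Gaussian X_max samples stand in for the actual bootstrapped datasets, and the use of central moments is our assumption):

```python
import numpy as np

rng = np.random.default_rng(5)
B, n_events = 1000, 500
moments = np.empty((B, 3))
for ell in range(B):
    xmax = rng.normal(750.0, 60.0, n_events)   # one bootstrapped X_max set
    c = xmax - xmax.mean()
    moments[ell] = [np.mean(c**n) for n in (2, 3, 4)]   # moments, n = 2, 3, 4

mu = moments.mean(axis=0)                # mean of the moment distribution
Sigma = np.cov(moments, rowvar=False)    # covariance matrix of the moments
f = 1.5                                  # effective gain from Section III
Sigma_eff = Sigma / f                    # shrunk covariance entering the fit
print(mu)
```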
Similarly, in Fig. 4 we show the result for the full cumulative composition, that is, for each primary Z_0 we show the allowed fraction of heavier primaries, Z > Z_0. We again observe an improvement of the grey region with respect to the purple one.
We provide the same results for the hadronic models Sibyll, QGSJet01 and QGSJetII-04 in the Appendix.

V. CONCLUSIONS
In this Letter we presented a novel method to improve the statistical power of GH events by including X_max values of UHECRs inferred from SD observables. The effective number of events obtained is quantified by the variance ratio f, which strongly depends on the level of correlation between ground and fluorescence data. While simple linear or nonlinear fits can achieve at most ∼30% correlation, resulting in rather modest values of f, more advanced approaches can reach above 60% correlation [4], leading to f ≳ 1.5.
We then projected the CR composition inference with such enhanced datasets, as a function of the ratio f. The resulting projected improvements in the composition confidence levels for the Auger Open Data are shown in Figs. 3 and 4 and in the Appendix. Although these improvements may seem modest, this proof-of-concept study shows that they are possible and can be exploited by the experimental collaboration on their full dataset. Furthermore, it underlines the potential of probing the correlation of SD and FD data more deeply, where advances in artificial-intelligence tools can lead to interesting and powerful results.
Finally, we point out that the procedure of enhancing the X_max dataset by SD observables is very general and based on statistical arguments only. That is, any analysis based on correlated observables can make use of this method to augment a dataset containing measurements of both observables with a (preferably much larger) dataset of measurements of only one of them.

FIG. 1: Effective increase f as a function of correlation ρ, for three values of GH dataset size N , when fixing the ratio M/N = 50.

FIG. 3: Fractions of the primaries p, He, N, Fe shown as 2σ confidence intervals for various values of f and the corresponding ρ, for the hadronic model EPOS. The black solid line represents the most probable composition.

FIG. 6: Fractions of primaries shown as 2σ confidence intervals for various values of f and the corresponding ρ, for the hadronic models EPOS, Sibyll, QGSJetII-04 and QGSJet01 in the energy interval E/EeV ∈ [0.65, 5]. The black solid line represents the most probable composition.