Current Nuclear Data Needs for Applications



I. INTRODUCTION
Nuclear data are critical inputs for predictive modeling and simulations in various applied science and engineering disciplines. Nuclear power and associated fuel cycle operations, national security and non-proliferation applications, shielding studies, materials analysis, medical radioisotope production, diagnosis and radiotherapy, and space applications are only a handful of the applications that rely on accurate and precise nuclear data. In many cases, the nuclear data provide cross-cutting support to a number of different applications.
Extensive experimental campaigns to measure nuclear data were made from the 1950s to the 1980s. Since then, computational modeling and simulations of nuclear systems have undergone a period of rapid expansion. The computational power available for detailed modeling of physical systems has grown by several orders of magnitude. Consequently, the predictive power of simulations such as radiation transport codes is effectively limited by the fidelity of the input nuclear data. The limits of this predictive power have economic, safety, and security consequences that must be addressed. For example, safeguards and homeland security applications rely on hybrid methods of radiation detection and computational solutions of the inverse radiation transport problem. If modeling of these systems is limited by nuclear data, the ability to detect smuggled nuclear materials, for example, is also limited.

A. The Nuclear Data Pipeline
The nuclear data pipeline, shown in Fig. 1, is a term used to describe the many interconnected steps required to prepare nuclear measurement results for use in end-user applications. While this pipeline has been described in numerous ways, there are, in general, six essential steps: measurements, compilation, evaluation, processing, validation, and applications. Measurements are made, both for fundamental science and for specific user-related requests. Compilation involves collecting the data from new measurements and historical literature and inserting these data and related information from measurements into both bibliographic databases (NSR [1], CINDA [2]) and numerical databases (XUNDL [3], EXFOR [4]). The next step, evaluation, is critical to provide a recommended "best" value for all pieces of nuclear data by expertly combining new measurements with previous measurements and nuclear model predictions. Evaluated nuclear structure and decay data are inserted into the Evaluated Nuclear Structure Data File (ENSDF) [10] and disseminated online via NuDat [11] from the US National Nuclear Data Center (NNDC) [7] and LiveChart [12] from the IAEA Nuclear Data Services (IAEA-NDS) [9]. Evaluated reaction data are inserted into the Evaluated Nuclear Data File (ENDF) [5] and disseminated online by a variety of tools including Sigma [6] from the NNDC and ZVView [8] from the IAEA-NDS. Processing is the fourth step of the pipeline, wherein evaluated data sets are converted to formats required by specific end-user applications. In some cases, these processed data sets are distributed to the community, such as the Nuclear Wallet Cards [13] and the Medical Internal Radiation Dose (MIRD) database [14]. In other cases, evaluated files are processed and stored on local computers and serve as input files for end-user simulations. For example, NJOY [15] and other codes (e.g., NECP-Atlas [16]) are used to process the ENDF evaluated data file into the ACE format [17] for input in transport codes 
such as MCNP [18]. Validation, the next step in the pipeline for reaction databases, involves quantitative model comparisons [19][20][21][22] with independently measured values from benchmark-quality experiments, such as those for criticality safety [24], employing the newly processed evaluated data as input. Iterative adjustments are made to reaction evaluations on the basis of this validation process. Finally, the processed and validated nuclear data files are disseminated for use in applications. New applications or more stringent requirements for existing applications could require new data, starting the flow of the pipeline again.
The lengthy passage of data through the full pipeline, from new experimental measurements through evaluation, processing, and validation, requires expertise at each step. For nuclear structure data, all nuclides of a particular mass number are often evaluated simultaneously, because their levels are interconnected by beta decays. Such "mass chain" evaluations can take half a year to two years to complete, containing all properties and decays of ∼10^5 levels, and then up to another year or two for critical peer review and quality assurance checks. Additionally, many nuclides are evaluated individually. The average time between evaluations, currently approximately 7 years, is limited by the available evaluation workforce. Upon completion, evaluations are entered into the ENSDF database in a process of continual updates. For nuclear reaction data, the ENDF database [5] is organized into 15 sub-libraries (e.g., "Neutron" for neutron-induced reactions), and further subdivided into evaluations of each isotope where all reaction channels are simultaneously evaluated. In some cases, individual reaction channels (e.g., partial cross sections) with over ∼10^6 data points require months to years to complete, and a full evaluation for the nuclide in a sub-library can take significantly longer. Some of the processing and validation steps have recently been automated [5], and a new release of the full ENDF library, including all evaluations completed since the last release, is made approximately every 5 years.
Because the expertise of researchers is often limited to one or two sections of the pipeline, collaborations, often across different organizations and international borders, are essential to the operation and evolution of the pipeline. For this reason, and because of the importance of nuclear data across both basic and applied fields, numerous organizations have been formed to coordinate activities, increase communication, and launch collaborative efforts between evaluators. In the U.S., these include the US Nuclear Data Program [25] and the Cross Section Evaluation Working Group (CSEWG) [26]; abroad, these include the International Network of Nuclear Structure and Decay Data Evaluators [27], the International Network of Nuclear Reaction Data Centers [28], the International Nuclear Data Evaluator Network [29], and the OECD Nuclear Energy Agency (NEA) Working Party on International Nuclear Data Evaluation Cooperation (WPEC) [30]. These organizations have, for example, helped address differences between reaction data sets in the US (ENDF [5]), Japan (JENDL [31]), Europe (JEFF [32], TENDL [33]), Russia (BROND [34]), and China (CENDL [35]). Additionally, WPEC subgroups [30] and IAEA Nuclear Data Section Coordinated Research Projects (CRPs) [36] have worked to improve data evaluation techniques, data formats, and general and user-specific evaluated data sets.
It is critical that the evaluated databases evolve to accommodate new requirements of end-user applications [37]. The Workshop for Applied Nuclear Data Activities (WANDA) [38,39] is a series of meetings created to better communicate the constantly changing needs of end-user applications to nuclear data evaluators. The WANDA 2021 meeting featured six topical sessions selected collectively by the nuclear data producer and user communities. These six areas best reflect nuclear data topics that exhibit deficiencies or opportunities relevant for current and emerging applications, with cross-cutting themes that enable support of the data pipeline for multiple programs. In the remainder of this section, each of the six topics is briefly introduced. The following sections will discuss the highlights and outcomes of each topical session in more detail, with specific recommendations highlighting the most urgent nuclear data needs in each area.

Advanced Computing for Nuclear Data
Computing plays a critical role in applied nuclear data, ranging from execution of the high-fidelity physics models that form the backbone of data evaluations and experimental analysis and interpretation, to propagating uncertainties through a complex chain of heterogeneous codes, to processing large training datasets through supervised machine learning algorithms. Resources for these activities include hardware from clusters to supercomputers, scalable algorithms, and extensive efforts in coding, applied mathematics, and domain-specific applications. This topic covers recent computing developments and highlights the challenges of adapting complex, legacy, or mission-critical codes to the latest, and next, generation of rapidly evolving architectures. Machine learning methods for emulating computationally expensive physics models, validation, and uncertainty quantification are also discussed. Developments needed to realize the potential of quantum computing (QC) for nuclear data, far beyond the bounds of classical computing, were also presented.

Predictive Codes for Isotope Production
In situations and energies where well-characterized experimental data on cross sections or isotopic yields are unavailable, the isotope production community, as well as other users of these data, relies upon predictive codes to provide estimates of needed data. Unfortunately, accurate modeling of even moderately high-energy reactions is notoriously difficult. The lack of an acceptable predictive capability in modern reaction codes presents a cross-cutting need for the nuclear data community, as it impacts the casual user of these codes, the data evaluation pipeline, and applications such as isotope production, neutronics, shielding, and detection. With a broad range of applications and an impact on multiple programs, this topic is of great interest. This session focused on how to improve the predictive capabilities of these codes to benefit the breadth of the data community.

Expanded Benchmarks and Validation for Nuclear Data
Because much of nuclear science and engineering relies on predictive computational modeling and simulation, many areas of the community would benefit from the development of well-characterized and documented benchmarks for code validation. While critical assembly benchmarks are very useful for validating some aspects of nuclear data, a broader suite of benchmarks is needed to provide more complete validation of nuclear data and physics important for other applications. Many different applications can leverage the framework used by the criticality safety and reactor physics communities to develop the benchmarks needed to validate the nuclear data they depend on. New and historical experiments that could be turned into benchmarks to strengthen nuclear data validation in cross-cutting application areas were a major focus of this discussion.

Nuclear Data for Space Applications
The space radiation environment is a complex mix of photons, electrons, protons, and heavy ions with energies ranging from several eV to several TeV per nucleon. Characterizing interactions in the space environment, and the secondary radiation fields they create, is important in a number of areas critical for space research and exploration. For example, creating effective shielding for crew and electronics requires fundamental cross section data on high-energy heavy-ion interactions that produce complex secondary radiation fields. Similarly, the secondary neutrons and gamma rays produced by interactions of cosmic rays with the surfaces of planets, moons, and asteroids enable their chemical composition to be characterized through the use of nuclear spectroscopy. Converting measurements to elemental information requires knowledge of the relevant neutron inelastic and capture cross sections and gamma-decay intensities. As space agencies around the world prepare for human exploration beyond low-Earth orbit, there is renewed interest in fission power and radioisotope systems. These systems introduce an additional source of radiation that can impact instrument response and crew health. Nuclear data relevant to the performance of man-made radiation environments and their interaction with surrounding materials are necessary to understand their impacts on these missions.

Nuclear Data for Advanced Reactors and Security Applications
Nuclear data impact the design, efficiency, and operation of advanced reactors and security applications. With new advanced reactors and micro-reactors being designed using different fuels, coolants, and moderators than the current fleet, there is a potential need for improved nuclear data, including new differential and integral measurements, as well as new evaluations. Security applications are even more diverse, covering a large range of detectors, systems, and interactions. There is also a large overlap in the nuclear data needs of these two areas, especially for micro-reactors. The essential question to address in this topical area is where refined nuclear data can increase safety, reliability, and economic viability.

The Human Pipeline for Nuclear Data
Humans play key roles in every aspect of the nuclear data pipeline, from measurements through calculations, evaluations, validation, processing, and dissemination. Specialized training is needed to prepare researchers to work at each stage of the pipeline. However, because the importance and significance of nuclear data to the scientific community can be overlooked, it is important to engage the broader community to raise awareness and generate interest in workforce planning, in particular through university and laboratory engagement. While elements of the pipeline may benefit from automation, in particular through the high performance computing and machine learning approaches discussed in Sec. II, certain aspects of the pipeline will always require human intervention, either through direct input or oversight.

II. ADVANCED COMPUTING FOR NUCLEAR DATA
Computing plays a central role in the nuclear data pipeline, from the analysis of data collected through experiments to the production of evaluated data to the use of these data in applications. The collection and analysis of experimental data strongly leverage computing for data acquisition and to execute mathematical analyses including signal processing techniques, statistical methods, and much more. Evaluations rely on a set of theoretical models, implemented in nuclear physics codes, to simulate the structure, reactions, and decay of atomic nuclei. Nuclear data are then used by application-specific simulation codes, e.g., computer programs simulating the structure of a neutron star, the formation of elements in nucleosynthesis, critical assemblies, or reaction networks for active interrogation. Because of the inherent complexity of nuclear processes and the often multi-physics nature of nuclear data application codes, quantifying and propagating uncertainties of the data throughout the pipeline also play an essential role in the nuclear data community. Many of the statistical methods used for this uncertainty quantification (UQ) require significant computing resources.
Thanks to advances in computing and in our understanding of the nuclear many-body problem, nuclear theory has become ever more sophisticated, with descriptions of the structure and reactions of light nuclei [44,45], low-lying states in medium-mass nuclei [46,47], the mean field description of heavy nuclei [48], and improved theories of nuclear fission [49,50]. A broad range of fundamental nuclear theory problems, from neutrino physics to fission to neutron reactions, that are highly relevant to the nuclear data community were in fact identified as priority research directions requiring the development of exascale computers [51,52]. By integrating some of these theoretical developments into the nuclear data pipeline, there is a unique opportunity to increase the fidelity of evaluations. This approach anchors the calculation of nuclear observables on our best knowledge of nuclear forces and quantum many-body methods, thereby improving the underlying physical foundations of the data. However, such a task requires a long-term vision for code development to keep pace with hardware developments, robust software maintenance plans, and personnel with cross-cutting skills in software engineering and nuclear science. Revising legacy codes to fully exploit new features of the latest hardware architectures, especially GPU-based ones, often requires expert assistance and collaboration with computer scientists.
Similar challenges are encountered in the development of popular transport codes, e.g., MCNP [53] or TRIPOLI [54], that are used to simulate many nuclear systems including reactors, non-destructive assays, and isotope production. In contrast to nuclear physics models, the linear Boltzmann transport equation is well understood, so the primary computational challenges involve system geometry, numerical precision, or the need to calculate sensitivities to all integral quantities, all of which require substantial computational throughput. These observations also apply to computer programs implementing the reaction network simulations relevant for stockpile stewardship or nucleosynthesis, where the simulation uncertainties primarily arise from input nuclear physics uncertainties rather than the underlying thermodynamic conditions. The sensitivity of criticality calculations or astrophysics simulations to nuclear data inputs are examples of grand challenge problems that require leveraging high performance computing (HPC) techniques and resources.
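As a toy illustration of the Monte Carlo sampling at the heart of such transport codes (a minimal sketch only; the cross-section value and slab geometry are invented and bear no relation to MCNP or TRIPOLI internals), the following estimates uncollided neutron transmission through a one-dimensional slab and can be checked against the analytic exp(−Σx) attenuation law:

```python
import math
import random

def transmission_mc(sigma_t, thickness, n_samples=200_000, seed=42):
    """Monte Carlo estimate of uncollided transmission through a 1-D slab.

    sigma_t   : total macroscopic cross section (1/cm), illustrative value
    thickness : slab thickness (cm)

    Samples free-flight distances from the exponential distribution and
    counts histories that cross the slab without a collision.
    """
    rng = random.Random(seed)
    crossed = 0
    for _ in range(n_samples):
        # Distance to first collision: -ln(xi) / sigma_t
        distance = -math.log(rng.random()) / sigma_t
        if distance > thickness:
            crossed += 1
    return crossed / n_samples

# Analytic benchmark: the uncollided flux follows exp(-sigma_t * x).
sigma_t, thickness = 0.5, 2.0  # illustrative values
estimate = transmission_mc(sigma_t, thickness)
exact = math.exp(-sigma_t * thickness)
```

Real transport codes track full collision physics, geometry, and tallies, but the statistical structure, and hence the demand for computational throughput, is the same.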
In addition to nuclear theory, transport codes, and network simulations, artificial intelligence (AI) and machine learning (ML) are driving a significant expansion of the role of computing in nuclear data. AI/ML has already seen applications throughout the sciences in the areas of design, control, augmented simulations, science and math comprehension, generative models, inverse problems, multimodal learning, and decision making [55]. In the nuclear data pipeline, it has been used for knowledge extraction, automation, surrogate models, and uncertainty quantification [43], and its use is anticipated to grow rapidly for a number of reasons. First, AI/ML enables new approaches, often originating in other fields, to address longstanding problems in nuclear data. Second, new open-source software libraries are available that facilitate the use of AI/ML algorithms with both CPUs and GPUs. These Python-based software frameworks [56,57] include tools for classification, prediction, ML via deep, recurrent, and/or convolutional neural nets, and natural language processing. These libraries are not, however, completely plug-and-play solutions, and collaborations with AI/ML experts and statisticians are often needed to exploit their full potential for nuclear data applications. Third, there is intense interest among researchers, especially early-career ones, in applying AI/ML approaches to challenging data-intensive problems, providing an exceptional opportunity for AI/ML to serve as a recruiting gateway for the nuclear data field. These last two points are addressed further in Section VII.
Finally, simulation of quantum many-body systems, such as nuclear reactions, requires exponentially increasing classical computing resources as the number of particles increases. In principle, universal quantum computers can represent this exponentially growing state space with only a linearly growing number of qubits, with the upshot that a quantum computer with thousands of qubits could simulate some nuclear reactions not possible even on future exascale classical supercomputers [58]. Moreover, because quantum computers evolve states unitarily, they are ideal for simulating quantum real-time evolution, such as in nuclear interactions. Quantum supremacy, i.e., performing a calculation on a quantum computer that is impossible on a classical supercomputer, has been demonstrated, albeit on carefully selected problems that are currently largely uninteresting other than for their tractability on current quantum computing hardware [59,60]. It is thus relevant to determine the potential of QC in the particular area of nuclear data.
This section addresses the state of the art of advanced computing in three primary focus areas and the associated opportunities for the nuclear data community. Section II A provides an overview of current and emerging HPC technologies in the context of nuclear data needs and applications. Section II B addresses the ways in which AI/ML may be applied to advance capabilities at all stages of the nuclear data pipeline. Section II C explores the opportunities and limitations of quantum computing for nuclear data problems. In Sec. II D, a summary is provided along with recommendations for a path forward.

A. High-Fidelity Modeling and Simulation with High Performance Computing
With the increasing sophistication of modeling and simulation approaches and the expanding number and size of available datasets, capabilities to address nuclear data needs and applications are increasingly reliant upon powerful HPC tools for efficient execution. HPC methods may be applied to advance computational nuclear structure and reactions by increasing the performance of existing nuclear physics codes and enabling more elaborate theoretical modeling, including previously inaccessible complex multi-physics calculations [61]. In fundamental nuclear theory research, novel methods to perform ab initio calculations of nuclei, such as the coupled-cluster method [62] or the in-medium similarity renormalization group [63], have only become possible thanks to progress in HPC. New insights into the structure of neutron stars [64] or the formation of heavy elements in the universe [65][66][67] rely critically on complex simulations of nuclear properties on supercomputers [68,69]. Multidisciplinary collaborations involving applied mathematicians, computer scientists, and domain scientists are often key to enabling such progress [70]. The Scientific Discovery Through Advanced Computing (SciDAC) program [71] and the Fission In R-process Elements (FIRE) topical collaboration in nuclear theory [72] are examples of how to organize and support such multidisciplinary collaborations.
HPC can also play important roles in the verification of methods and codes and in the validation of commonly used approximations, by testing against more fundamental and predictive theories. Examples include ab initio calculations of thermonuclear reactions that can test the correctness of more phenomenological R-matrix fits [73], explanations of β-decay rate quenching with microscopic methods [74], or the quantum-mechanical simulation of quantities that are essential for simulating the de-excitation of fission fragments [75]. By providing robust extrapolations where data are not available, establishing useful trends (as a function of Z, A, spin, energy, etc.), or validating empirical laws and systematics, such fundamental simulations play an important role in the nuclear data pipeline.
The scope of HPC tools extends beyond large-scale nuclear physics computations. For example, HPC resources can be leveraged to simulate nuclear reaction processes directly in transport simulations. While such an integrated capability is not always needed (and should be avoided in favor of more rapid approximations when appropriate), the integration of nuclear physics models and transport codes opens the opportunity to implement more realistic physics that is required for some applications (e.g., detector response, unique nuclear signatures). This capability could mitigate the slowdown incurred by frequently accessing large nuclear datasets and could also serve as a baseline against which to estimate corrections when employing more rudimentary models in transport simulations.
Another area where HPC provides a major opportunity for nuclear data is UQ and uncertainty propagation for applications. There is some evidence that Bayesian statistics, for example, provides more flexible and realistic estimates of uncertainties compared with frequentist approaches [76]. However, the application of these methods relies directly on sampling the parameter space of the model. The number of samples can be extremely large for models with many parameters. In such cases, the absolute cost of running the model (in terms of CPU time, memory, I/O access, core-count, etc.) becomes critical. Recent examples from basic nuclear theory [77] show that code optimization capable of leveraging existing HPC resources can be key to generating sufficiently many samples. While the number of samples may not be sufficient to perform a full statistical analysis, they may be sufficient to build a realistic emulator of the physics model: a mathematical/computational model whose outputs are numerically equivalent to those of the physics model for a well-defined subspace of the parameter space; see Sec. II B. The propagation of nuclear data uncertainties from covariance matrices has been accomplished in some scenarios [78], but more precise simulations enabling the systematic quantification of uncertainties for both energy and non-energy applications are desired. The incorporation of cross-reaction and cross-isotope covariances across the nuclear chart would represent a grand challenge in this regard. Finally, one should verify whether mean values and covariance matrices, which implicitly assume linearity, are sufficient to truly describe nuclear data uncertainties.
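The covariance-based propagation described above can be sketched in a few lines. In this hypothetical example, the two-group cross-section means, the covariance matrix, and the linear response weights are all invented for illustration; real evaluated covariances span many energy groups, reactions, and isotopes:

```python
import numpy as np

# Hypothetical 2-group cross-section evaluation: mean values and a
# covariance matrix (all numbers invented for illustration only).
mean = np.array([1.20, 0.30])          # barns
cov = np.array([[4.0e-4, 1.5e-4],      # diagonal: variances
                [1.5e-4, 2.25e-4]])    # off-diagonal: correlated uncertainty

rng = np.random.default_rng(0)
samples = rng.multivariate_normal(mean, cov, size=50_000)

# Propagate through a toy response: a linear combination of the two
# cross sections standing in for an integral quantity (e.g., a reaction rate).
weights = np.array([0.7, 0.3])
response = samples @ weights

# For a strictly linear response, the propagated variance is w^T C w exactly,
# so the sampled variance should reproduce it to statistical precision.
analytic_var = weights @ cov @ weights
sampled_var = response.var()
```

For nonlinear responses, as produced by real transport or network simulations, no such closed form exists, which is precisely why large sample counts, and hence HPC resources or emulators, are needed.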
To fully harness the benefits of HPC technologies to advance the nuclear data pipeline, the nuclear data community must address key aspects and limitations of computational nuclear physics. First, focused efforts to improve the modeling of atomic nuclei are needed across the entire nuclear chart, a capability that is essential to our understanding of nuclear structure and properties. HPC resources can facilitate the execution of computations of atomic nuclei, but dedicated effort is required to enable high-throughput computing in an HPC environment [79]. Second, a cost-benefit analysis of computing architectures is needed, including hybrid architectures, to ensure focused investments in large-scale computing facilities and related technologies for state-of-the-art nuclear data computations. Third, collaborations with the computational community should be prioritized to ensure optimal use of HPC architectures. Success stories in the area of basic nuclear theory suggest that such cooperation has the potential to greatly advance the nuclear data pipeline, in part because physics models and codes are applied in areas that have yet to be experimentally probed.

B. Artificial Intelligence and Machine Learning
As AI/ML approaches become more prevalent and refined, promising new capabilities relevant to all stages of the nuclear data pipeline are emerging, with the potential to transform the compilation, evaluation, processing, and validation workflow. These include natural language processing (NLP) to search and assess nuclear science literature, physics-aware ML models to both guide evaluations and learn new parameterizations directly from the observables, and ML capabilities to guide experiment, theory, and evaluation. Some of the latest concepts and developments in these areas are briefly described below.
Container workflow solutions provide the opportunity to connect HPC, AI/ML, and cutting-edge software engineering to enable automatic updates of the Evaluated Nuclear Data File (ENDF) [80,81], a reaction library critical for basic and applied research. This approach, which would represent an overhaul of the decades-old workflow of library updates, is based upon the use of containers (lightweight virtualizations akin to virtual machines) that hold experimental results, reference parameter sets, theory codes, benchmark experiments, and evaluations of individual nuclides or reactions. When these containers are properly nested and interlinked, they can be treated as nodes in a Bayesian network. When any nodes in the network are updated (e.g., by the addition of new experimental data), Gaussian process regression can be used to update the network output, automatically yielding a new ENDF library [82].
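A minimal sketch of the Gaussian-process-regression step in such a workflow is shown below. The kernel, hyperparameters, and "measurement" values are all illustrative choices, not those of the actual ENDF update machinery; the point is only that adding a new data point and recomputing the posterior shrinks the uncertainty at that point automatically:

```python
import numpy as np

def rbf_kernel(x1, x2, length=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D points."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / length) ** 2)

def gp_update(x_train, y_train, x_query, noise=1e-3):
    """Gaussian-process posterior mean/std at x_query given (x_train, y_train)."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_query, x_train)
    Kss = rbf_kernel(x_query, x_query)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Invented "cross-section" data points; a new measurement arrives and the
# posterior is simply recomputed -- the automated-update idea in miniature.
x_old = np.array([0.5, 1.5, 2.5])
y_old = np.sin(x_old)                  # stand-in for measured values
x_new = np.append(x_old, 2.0)          # new experimental point at x = 2.0
y_new = np.append(y_old, np.sin(2.0))

query = np.array([2.0])
_, std_before = gp_update(x_old, y_old, query)
_, std_after = gp_update(x_new, y_new, query)
```

In the container workflow, each such recomputation would be triggered by an update to a node of the Bayesian network rather than run by hand.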
Physics-aware ML models represent another exciting development for the nuclear data community. One category of such models involves adding a component to the loss function of a neural network that arises from the deviation of a physics model prediction from the data [83]. In this way, the adjustments of the biases and weights of the underlying network are more grounded in physics. A complementary approach is to use ML simulations to "learn" underlying physics, such as predicting ground-state properties and excited-state energies by learning the features of a theoretical model [84] or understanding the physics behind high-energy particle collisions [85]. By using ML algorithms to help design experiments that address specific nuclear data gaps (e.g., criticality experiments [86]), ML can become more tightly interwoven into data activities. ML can similarly be interwoven with theory by learning discrepancies from existing models [87][88][89] or averaging model predictions [90]. ML can also guide evaluations in numerous ways, such as taking detailed experimental conditions into account (as in an evaluation of the 239Pu(n,f) reaction [91]) or by identifying data outliers (such as the problematic 19F neutron inelastic cross section in ENDF validation studies [92]). These uses of physics-aware approaches will continue to grow, banishing the stigma of ML as a physics-free, uninterpretable "black box." 
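The loss-function idea in the first category can be sketched as follows. The fitted functional form, the stand-in physics model, and the weighting are all hypothetical; in a real network, the physics term would be added to the training loss of the neural net rather than to a two-parameter fit:

```python
import numpy as np

def physics_model(x):
    """Stand-in physics prediction (e.g., a smooth cross-section trend)."""
    return np.exp(-x)

def physics_aware_loss(params, x, y_obs, weight=0.1):
    """Data misfit plus a penalty for deviating from the physics model.

    params : (a, b) of the fitted form a*exp(-b*x); names are illustrative.
    weight : relative strength of the physics-consistency term.
    """
    a, b = params
    y_fit = a * np.exp(-b * x)
    data_term = np.mean((y_fit - y_obs) ** 2)          # fit to observations
    physics_term = np.mean((y_fit - physics_model(x)) ** 2)  # physics prior
    return data_term + weight * physics_term

# Synthetic "observations": the physics trend plus small noise.
x = np.linspace(0.0, 3.0, 30)
rng = np.random.default_rng(1)
y_obs = np.exp(-x) + 0.02 * rng.standard_normal(x.size)

# A parameter set consistent with the physics model scores a lower loss
# than one that fits neither the data nor the physics.
loss_consistent = physics_aware_loss((1.0, 1.0), x, y_obs)
loss_inconsistent = physics_aware_loss((2.0, 0.2), x, y_obs)
```

Minimizing such a combined loss keeps the learned parameterization anchored to physics even in regions where the data are sparse or noisy.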
ML also has tremendous potential to emulate the input-to-output mapping done by computationally expensive application models like transport simulations. ML-based emulators or surrogate models can enable studies that would otherwise be computationally prohibitive. In particular, surrogate models are now starting to facilitate large-scale UQ studies whereby nuclear data uncertainties are propagated via ML methods (e.g., for advanced nuclear reactor studies [93]). Such uses are expected to expand significantly in the future, to the level where ML-enhanced UQ at the nuclear chart scale becomes accessible, benefiting research in basic nuclear science, including astrophysics and radioactive ion beam facilities, as well as in applications such as nuclear forensics.
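A bare-bones surrogate of this kind, with a polynomial standing in for the ML model and an invented smooth function standing in for the expensive simulation, might look like:

```python
import numpy as np

def expensive_model(x):
    """Stand-in for a costly transport calculation (illustrative function)."""
    return np.sin(2.0 * x) * np.exp(-0.3 * x)

# Train a cheap polynomial surrogate on a handful of "expensive" runs.
x_train = np.linspace(0.0, 3.0, 12)
y_train = expensive_model(x_train)
coeffs = np.polyfit(x_train, y_train, deg=8)
surrogate = np.poly1d(coeffs)

# The surrogate can then be evaluated millions of times for UQ studies at
# negligible cost; its accuracy should be checked on held-out points first.
x_test = np.linspace(0.1, 2.9, 200)
max_err = np.max(np.abs(surrogate(x_test) - expensive_model(x_test)))
```

Real surrogate models replace the polynomial with neural networks or Gaussian processes and operate in many input dimensions, but the workflow, train on few expensive runs, then evaluate cheaply at scale, is the same.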
AI/ML is also being applied to extract knowledge from published literature. Convolutional neural nets have been used with edge detection techniques to automate the extraction of data (tables, plots, numbers) from papers, reports, and other documents [94]. ML-enhanced textual analytics, or NLP, is widely used to process text. Such a capability can greatly enhance the U.S. Nuclear Data Program databases; for example, through extracting keywords from documents as needed for the Nuclear Science References bibliographic database [95]. Emerging NLP algorithms go beyond entity and phrase recognition to automate the extraction of meaning from documents, including distinguishing between synonyms and homonyms through semantic awareness. Such analyses enable automated generation of natural language answers to queries of published literature [96] as well as possible recommendations of theoretical and experimental investigations based on latent knowledge in the literature [97].
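A naive frequency-based version of the keyword-extraction task can be sketched in a few lines of Python. The stopword list and example abstract are invented, and production systems such as those cited above use far more sophisticated NLP models; this only illustrates the shape of the task:

```python
import re
from collections import Counter

# Minimal illustrative stopword list (real NLP pipelines use curated lists).
STOPWORDS = {"the", "of", "and", "a", "in", "for", "is", "are", "to", "with",
             "on", "by", "from", "this", "that", "was", "were", "be", "been"}

def extract_keywords(text, top_n=5):
    """Naive frequency-based keyword extraction: lowercase, tokenize,
    drop stopwords and very short tokens, and rank by count."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(top_n)]

# Invented abstract standing in for a paper to be indexed.
abstract = ("Neutron capture cross sections of fission products were measured. "
            "The capture cross sections support evaluation of neutron data "
            "for reactor and shielding applications.")
keywords = extract_keywords(abstract, top_n=3)
```

Semantic-aware NLP replaces raw counts with contextual embeddings, which is what allows the disambiguation of synonyms and homonyms mentioned above.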
While ML approaches are now an enabling technology for a wide range of nuclear data activities, their full potential cannot be realized until nuclear data formats are modernized and fully machine-readable. Examples include utilizing the Generalised Nuclear Database Structure (GNDS) format [98] for the ENDF library and replacing text-based comments in the ENSDF library with modern equivalents that provide suitable extraction of "metadata"; see for example the work by the WPEC Subgroup 50 of the Nuclear Energy Agency [99].
Finally, ML algorithm development cannot be carried out in a vacuum but should be embedded within both theoretical developments and experimental measurements. This means that theorists must strive not only to provide estimates of the uncertainties of their calculations, but also, where possible, functional relationships between the inputs and outputs of those calculations. Such relationships can be encoded in generic assumptions such as linearity, or encoded in a neural network, but they must be available for uncertainty propagation. A similar effort should be required from experimentalists: uncertainties on measurements are, of course, essential, as are estimates of the correlations between these uncertainties. These data form the backbone of many ML efforts.

C. Quantum Computing
The long-term potential impact of QC on nuclear data could be significant. Universal quantum computers exploit the entanglement between qubits to achieve the exponential state-space scaling that limits classical computers. In fact, "the simulation of highly entangled quantum matter is the natural arena where quantum computers seem to have a clear advantage over classical ones" [100]. Opportunities for impactful nuclear physics simulations on near-future, so-called Noisy Intermediate-Scale Quantum (NISQ) [100] computers are currently limited. Aside from short coherence times and high error rates, these systems have limited numbers of qubits with limited connectivity. The small maximum number of entangled qubits in particular generally limits the scale of computations to proof-of-principle demonstrations otherwise better solved on classical computers.
Two areas of short-term research investment with potential for high impact are identified. The first is to design and optimize the quantum circuits necessary to encode nuclear system Hamiltonians to perform nuclear physics simulations. Calculations run on current and near-future hardware must be optimized for resiliency to typical error sources. For example, because two-qubit gates are a dominant source of error in current systems, reducing several multi-qubit universal gates to a single custom operation can dramatically improve accuracy and enable more calculational steps, as was recently demonstrated in a calculation of the time propagation of two interacting neutrons [101]. This technique and others that reduce the circuit depth, i.e., the minimum calculation time, also increase robustness against limited coherence times [102]. The current state of these efforts is closely tied to specific quantum computing hardware, both in identifying error sources and in designing appropriately robust circuits. To achieve widespread benefits from these techniques, hardware-independent generalizations must be developed, analogous to compiler optimization of high-level programming languages as opposed to assembly code. Looking forward, there is an opportunity to co-design future QC capabilities, for example by standardizing hardware-independent implementations of custom gates commonly arising in nuclear physics calculations.
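The gate-fusion idea can be illustrated classically with plain matrix algebra (this is a pedagogical sketch, not tied to any real quantum hardware API): a SWAP operation decomposes into three CNOTs, so replacing that sequence by one fused custom gate cuts the two-qubit gate count, and hence the circuit depth, by a factor of three.

```python
import numpy as np

# Standard two-qubit gates as 4x4 unitaries in the computational basis.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Conjugating CNOT by Hadamards on both qubits swaps its control and target.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT_rev = np.kron(H, H) @ CNOT @ np.kron(H, H)

# Fuse the three-gate sequence CNOT * CNOT_rev * CNOT into one operation;
# a hardware "custom gate" implementing this matrix directly would replace
# three error-prone two-qubit gates with a single one.
fused = CNOT @ CNOT_rev @ CNOT

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
# The fused product reproduces SWAP and remains unitary.
```

On real devices the payoff is that each eliminated native two-qubit gate removes its error contribution and shortens the time the state must stay coherent.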
The point where QC transitions from a topic of research to a tool enabling new science is not well defined. To prioritize community research efforts, a series of grand challenge problems relevant to nuclear data should be identified. These challenges should be intractable on even exascale classical computers, and representative of, or enabling to, a broad field of related research. In the near future of noisy quantum computers, it is essential that the results of these calculations be verifiable. For example, Shor's algorithm allows prime factoring of large numbers impossible on a classical computer, with trivially verifiable results. In the context of nuclear data, verification likely means comparison to empirical measurements. Posing such grand challenge questions could also identify areas where more high-quality measurements (or data) are required for verification.

D. Summary and Recommendations
Porting existing software bases to advanced architectures, including GPU-enabled ones, should be a priority for code developers, whether for nuclear models, applications, or production codes. This upgrade could enable integration of at least some nuclear physics capabilities directly into transport codes. When experimental data are missing or inconsistent, such a capability could help generate data on the fly by simulating nuclear reaction processes, either with the actual physics model or an emulator of that model built with ML tools. This trading of memory (stored data) for flops (on-the-fly data generation) in application codes may soon be necessary to cope with future trends in supercomputer architectures. Increased computational capabilities, together with high-fidelity emulators of physics models, would also greatly facilitate the quantification and propagation of uncertainties throughout the nuclear data pipeline, which has been identified as an urgent priority.
There is a consensus that ML algorithms could aid the extraction of physics from nuclear data, thereby helping design experiments to address specific nuclear data gaps or identify critical modeling needs that could have the largest impact on evaluations. AI/ML also offers a unique opportunity to automate some tasks currently performed by humans, especially parsing and processing data from the literature. One could, for example, exploit NLP tools to extract data from tables and perform semantic analyses of research papers. To unleash the full potential of AI/ML for automation, we will need to ensure data are machine-readable throughout the nuclear data pipeline (from EXFOR to validation experiments and uncertainties). This could take the form of well-specified Application Programming Interfaces (APIs), ideally in a variety of programming languages to maximize portability and interoperability. Such APIs are key to developing fully containerized solutions for nuclear data evaluations.
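A machine-readable API for nuclear data need not be elaborate. The sketch below is hypothetical (the class and method names are illustrative, not any real library's interface) and assumes linear-linear interpolation between tabulated cross-section points, in the spirit of ENDF interpolation law 2:

```python
import bisect

# Hypothetical machine-readable access layer for a tabulated cross section.
# A well-specified API like this, mirrored across languages, would let
# evaluation, processing, and validation tools consume the same data
# without ad hoc text parsing.

class CrossSectionTable:
    def __init__(self, energies_eV, xs_barns):
        assert len(energies_eV) == len(xs_barns)
        self.e = list(energies_eV)    # ascending energy grid
        self.xs = list(xs_barns)

    def at(self, energy_eV):
        """Linear-linear interpolation; clamps outside the tabulated range."""
        i = bisect.bisect_left(self.e, energy_eV)
        if i == 0:
            return self.xs[0]
        if i == len(self.e):
            return self.xs[-1]
        e0, e1 = self.e[i - 1], self.e[i]
        x0, x1 = self.xs[i - 1], self.xs[i]
        return x0 + (x1 - x0) * (energy_eV - e0) / (e1 - e0)

# Illustrative, made-up numbers (not evaluated data).
table = CrossSectionTable([1.0e6, 2.0e6, 5.0e6], [2.0, 1.5, 0.9])
```

The value of standardizing such an interface is that a containerized evaluation pipeline can swap data sources without touching downstream code.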
In the longer term, progress in high-performance computing and the increased dissemination of ML techniques could pave the way to grand challenge problems such as uncertainty quantification at the scale of the entire chart of isotopes. Such grand challenges are extremely relevant for basic science research in areas such as astrophysics, especially with the ramping up of next-generation radioactive ion beam facilities like the Facility for Rare Isotope Beams (FRIB). Looking even further ahead, classical computing may soon hit its limits: QC may have the potential to revolutionize computing. While QC cannot now be a priority for the nuclear data community, it could be worthwhile to invest in small scoping or feasibility studies to ensure that this future technology will be useful.

III. PREDICTIVE CODES FOR ISOTOPE PRODUCTION
Radioisotopes, with unique nuclear properties and decay signatures, are broadly used in medicine, industry, and research. Large-scale production of radioisotopes in the 20th century was a monumental achievement, leading to life-altering therapeutic and diagnostic medicines, materials interrogation and characterization techniques, long-lived carbon-free power sources, and the discovery of new elements to push our understanding of the structure, properties, and behavior of atomic nuclei. Radioisotopes are produced through bombardment of a target material with a flux of particles or gamma rays to induce nuclear transmutations. Effective calculations of the reaction rates and isotope yields resulting from an irradiation are essential to experimental design, both to optimize the radioisotope production and to maintain the safety and radiological inventory of the target. The calculation of reaction rates and isotopic yields is performed through a combination of modeling and simulation, coupled with experimental validation and benchmarking.
There are extensive nuclear data needs for this work, in all portions of the nuclear data pipeline. Priorities include improving data for isotopes with established applications, developing energy-dependent cross sections for isotopes of emerging importance, and ensuring that gaps in available data and predictive capabilities are addressed. Of particular importance are high-energy (i.e., E_n ≳ 5 MeV) neutron-induced reaction cross sections in ENDF [103], including certain (n, p) reactions, as well as photonuclear reactions and proton- and deuteron-induced reactions up to 200 MeV.
The need for a robust predictive capability in modern reaction codes presents a cross-cutting need for the nuclear data community, as it impacts the casual user of these codes, the data evaluation pipeline, and application spaces such as isotope production, neutronics, shielding, and detection. The intent of this session was to act as a conversation between code developers and users to explore the modeling and simulation tools available for prediction of interaction rates and isotope yields, the data needed for effective use of these codes, and the needs for further validation. Addressing the gaps identified in this discussion will improve the predictive capabilities of these codes and benefit the field of isotope production as well as the breadth of the data and applications communities.

A. Prediction of Isotopic Yields
Predicting isotopic yields by modeling and simulation relies upon a wide range of computational tools, and may be categorized into a three-part process, each with its own set of predictive codes: 1. Estimating nuclear data for reaction channels: The first stage involves evaluating experimental data and theoretical models for reaction channels to produce an energy-dependent cross section for each reaction channel, as well as associated secondary-particle spectra.
2. Modeling particle transport to determine reaction rates: The second stage involves simulating transport of the particle or gamma-ray flux through the materials in the experiment to determine the effective interaction rates for each reaction channel.
3. Simulating irradiation to calculate the activation and depletion of materials: The third stage involves calculating the activation and depletion of the material over the duration of the experiment and beyond.
In the case where the interaction rate (stage 2) changes over the timeline of the experiment due to transmutation of the material, calculation of the yields becomes an iterative process between stages (3) and (2), with different modeling and simulation tools employed in each stage. However, one open problem related to all three of these categories is that, while current predictive tools may generally be able to reproduce nuclear data and observables for known isotopic reactions and routine production activities, they often lack even a reasonable predictive capability when applied to emerging isotope production pathways. While experimental data and measurements are always considered the gold standard, this lack of predictive power has created a situation where time, funding, and experimental capabilities must all be weighed before exploring any new production pathway. Without reliable predictive tools, new production pathways must be explored experimentally, requiring significant effort even to show that one proposed pathway is inferior to another. To improve this situation, the following sections describe the current state of the art and the available codes used in each stage of the predictive process, and identify current gaps in knowledge and capabilities.
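The iteration between stages (2) and (3) can be sketched schematically: the reaction rate depends on the current target inventory, which the irradiation step in turn depletes, so the two stages are advanced in alternating sub-steps. All numbers below are illustrative placeholders, not evaluated data, and product decay is neglected for simplicity:

```python
import math

def irradiate(n_target, flux, sigma, dt, n_steps):
    """Alternate rate evaluation (stage 2) and depletion (stage 3).

    flux in n/cm^2/s, sigma in cm^2, dt in s; returns final target and
    product inventories.
    """
    n_product = 0.0
    for _ in range(n_steps):
        # stage (2): fraction of targets reacting during this sub-step,
        # computed from the *current* inventory
        reacted = n_target * (1.0 - math.exp(-flux * sigma * dt))
        # stage (3): update the inventories before re-evaluating the rate
        n_target -= reacted
        n_product += reacted
    return n_target, n_product

n_t, n_p = irradiate(1.0e20, 1.0e15, 1.0e-24, 1.0e5, 10)
```

In a real workflow each sub-step would call a transport code for the rate and an activation/depletion code for the inventory update; the loop structure is the same.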

Determination of nuclear data for reaction channels
Because such data are used by the nuclear energy industry, nuclear data for neutron interactions near the region of stability are generally quite robust. The standard format for these data is that used in the ENDF library, which includes evaluations of neutron cross sections and distributions, photon production from neutron reactions, a limited amount of charged-particle production from neutron reactions, photo-atomic interaction data, thermal neutron scattering data, and radionuclide production and decay data, including fission products [103]. As reaction data beyond neutron-induced reactions are quite sparse in ENDF, further evaluated data for charged-particle and photon-induced reactions may be found in a number of application-specific databases coordinated by the International Atomic Energy Agency's Nuclear Data Section. However, due to both the time involved in nuclear data evaluation and the inherently application-specific nature of many of these databases, users also need on-demand access to unevaluated experimental nuclear data. This information is compiled in the EXFOR database [104], which contains cross sections, differential data, particle spectra, and other nuclear reaction quantities induced by neutron, charged-particle, and photon beams. Nearly 24,000 experimental works have been compiled in EXFOR, of which approximately 46% are (n, x) reactions (approximately 95% of which are for E_n < 14 MeV), 20% (p, x), 9% (d, x), and 6% (γ, x). While the data compiled in EXFOR represent a far broader swath of experimental nuclear data than the evaluated data contained in ENDF, there are still many reaction channels and residual products with limited or no available data. This is especially the case for the production of a number of radionuclides that are of critical importance to nuclear medicine and other communities. In situations and at energies where well-characterized cross section data are unavailable, the isotope production community, as well as other application users, relies upon predictive codes to provide estimates. Unfortunately, accurate modeling of even moderately high-energy reactions is challenging. The current suite of predictive reaction-modeling codes is only accurate to within approximately 20% for (p, x) and (n, x) reaction channels where a large body of experimental measurements currently exists. In cases where few data exist, these codes often exhibit discrepancies anywhere within a factor of 2-50.
Four codes - TALYS, EMPIRE, CoH3, and ALICE - fall into the first category of codes capable of predicting nuclear physics cross sections. The calculation of energy-dependent cross sections for residual nuclei is generally accomplished employing various nuclear statistical models. The two most common approaches are the Weisskopf-Ewing formalism [105], which accounts for conservation of energy, charge, and mass, and the Hauser-Feshbach formalism [106], which additionally accounts for angular momentum and parity.
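The distinction between the two formalisms can be made concrete. Schematically (these are standard textbook forms, not taken from any one of these codes), the Weisskopf-Ewing cross section for entrance channel $a$ and exit channel $b$ factorizes without angular momentum bookkeeping, while Hauser-Feshbach sums explicitly over compound-nucleus spin $J$ and parity $\pi$:

```latex
% Weisskopf-Ewing: decay branching independent of J and pi
\sigma_{ab}(E) \;=\; \sigma_a^{\mathrm{CN}}(E)\,
  \frac{\Gamma_b}{\sum_c \Gamma_c}

% Hauser-Feshbach: explicit conservation of angular momentum and parity
\sigma_{ab}(E) \;=\; \frac{\pi}{k_a^2} \sum_{J,\pi} g_J\,
  \frac{T_a^{J\pi}\, T_b^{J\pi}}{\sum_c T_c^{J\pi}}
```

Here $\sigma_a^{\mathrm{CN}}$ is the compound-nucleus formation cross section, $\Gamma$ the decay widths, $T$ the transmission coefficients, $g_J$ the spin statistical factor, and $k_a$ the entrance-channel wave number; width-fluctuation corrections are omitted for simplicity.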
The TALYS code, based on the Hauser-Feshbach statistical model, is employed for both fundamental nuclear physics research and applications. It is streamlined so that all important nuclear reactions are incorporated into one code scheme [107,108,132]. It currently covers incident neutrons, light ions (up to alpha particles), and photons, with energies up to 200 MeV (and, in some cases, up to 1 GeV). TALYS is used, along with a number of companion codes, to produce the TENDL reaction library [132], which includes (for incident neutrons) cross sections for total, elastic, non-elastic, capture, single- and multi-particle production, inelastic transitions to discrete levels and the continuum, fission, residual production, isomers, total particle production, angular distributions, double-differential emission spectra, gamma production, and (critical for isotopes) particle production yields. TALYS has many adjustable parameters, which are optimized for the TENDL library using an extensive validation process. The predictive power of TALYS is numerically established for incident neutrons (above several keV), with charged-particle reactions to follow. Efficient access to all experimental data is essential to improve this code. Validation data for tuning multiple pre-equilibrium and level density models are needed to improve predictive power, specifically through nuclide-by-nuclide TALYS parameter adjustment. Quality experimental data are essential for making these adjustments.
The EMPIRE-3.2 code, which also uses the Hauser-Feshbach statistical model, provides predictions for incident energies up to 150 MeV and projectiles up to alpha particles, in addition to neutrons, photons, and heavy ions [109]. It provides reaction cross sections, residual production cross sections, angular distributions, spectra, and angle-energy distributions of reaction products. Nuclear data needed to improve the predictive capability of EMPIRE include data for tuning level density models, information on pre-equilibrium emission at energies greater than 30 MeV, reliable theoretical models for going off the line of stability, and experimental data to calibrate phenomenological input parameters.
CoH3, the Coupled-Channels and Hauser-Feshbach code, employs a statistical model for compound nuclear reactions. This code can calculate nuclear reactions for incident neutrons above 1 keV and targets of masses A > 20 [110-113]. It provides complete information on nuclear reactions, including reaction cross sections as well as energy and angular distributions of secondary particles. The nuclear data needs of this code include information on pre-equilibrium particle emission because, though exciton models work when phenomenological parameters are well tuned, crude approximations are always involved. Ongoing development of quantum mechanical models has the potential for large improvements in this area. Another identified need is information on nuclear level densities, as these are the most important quantities for predicting unknown isotope production cross sections and can carry large uncertainties for high-energy reactions. Specifically, experimental data on nuclides with masses close to the target reactions of interest are essential.
ALICE is a Monte Carlo code using the Weisskopf-Ewing evaporation and Geometry Dependent Hybrid (GDH) pre-compound decay models [114,115]. Required inputs include the mass and charge of the target and projectile as well as the projectile energy. In order to improve the predictive capability of this (more simplistic) code, benchmarking of the nuclear level density models near shell closures would be valuable for recommending best choices as a function of shell proximity and for indicating areas where more data may be needed. It is also recommended that recent codes based on the Hauser-Feshbach formulation be used, both because of their improved physics and because these newer codes are actively maintained.

Modeling particle transport to determine reaction rates
MCNP, LISE++, and FLUKA are three codes that fall into the second category of predictive tools: transport codes with some predictive physics models that employ imported data libraries.
MCNP6 is a continuous-energy Monte Carlo radiation transport code that can be used for neutron, proton, photon, electron, or coupled neutron/proton/photon/electron transport [116,117]. It has internal activation and depletion capabilities for some applications and can be coupled externally to provide this capability for other applications. MCNP's internal physics models are optimized for reactions at MeV energies. Improvements currently being implemented or planned for future work revolve around the modularization of the code components, as this will facilitate improved testing and correctness of the code, easier maintainability, and future ease of feature development and integration. The event record, currently in the form of a history file, will be deprecated in favor of a PTRAC-based capability. There are ongoing developments, specifically code improvements related to charged-particle transport, with data and physics model updates as necessary. To improve the predictive capabilities of MCNP6, validation is needed in the form of benchmark experiments and models that integrate collision physics data and models as well as residual nuclide production/depletion calculations.
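The analog Monte Carlo method such transport codes build on can be illustrated with a drastically simplified sketch: a mono-energetic beam normally incident on a purely absorbing slab, for which the transmitted fraction should approach the analytic attenuation. The function name and parameters are illustrative, not MCNP's API:

```python
import math
import random

# Toy analog Monte Carlo transport: sample each neutron's distance to
# first collision from an exponential with total macroscopic cross
# section sigma_t; neutrons whose sampled flight exceeds the slab
# thickness are tallied as transmitted.

def transmission(sigma_t, thickness, n_histories, seed=12345):
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_histories):
        # 1 - U lies in (0, 1], which avoids log(0)
        d = -math.log(1.0 - rng.random()) / sigma_t
        if d > thickness:
            transmitted += 1
    return transmitted / n_histories

est = transmission(0.5, 2.0, 200_000)   # sigma_t per cm, thickness in cm
exact = math.exp(-0.5 * 2.0)            # analytic uncollided transmission
```

Real codes add scattering, energy dependence from the data libraries, secondary-particle production, and variance reduction, but the history-by-history sampling structure is the same.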
LISE++ is a code that predicts intensities and purities of rare isotope beams for the planning of future experiments with in-flight separators [118,119]. This capability is essential for tuning rare isotope beams, where results can be quickly compared with online data. The code is applicable to low-, medium-, and high-energy facilities, including fragment and recoil separators with electrostatic and/or magnetic selection. It relies strongly on databases for ionization energies, experimental production cross sections, compound materials, and fission barriers. The LISE++ internal physics models are optimized for reactions at MeV energies. In order to improve its predictive capabilities, a wide range of nuclear data on exotic isotopes is needed, especially an isomeric state database, production cross sections, and information on fission barriers and fragment momentum distributions. Additionally, detailed information on the excitation energy of fissile nuclei after abrasion is needed.
FLUKA is a general-purpose tool for calculation of particle transport and interactions with matter [120]. It is capable of computing excitation functions from thermal energies to multi-GeV energies. It also has a built-in capability for evolution and buildup of induced activity, with up to five different decay channels per isotope. FLUKA's internal physics models are optimized for reactions at GeV energies. In order to improve the predictive capability of this code, reliable experimental data are needed for low-energy neutron transport, charged-particle reactions, and nuclear reactions. In addition, nuclear structure data are essential, particularly when populating residual nuclei near the drip lines, where mass, level, spin, parity, and decay data for exotic isotopes are important.

Simulation of irradiation to calculate the activation and depletion of materials
Four codes in the third category, activation and depletion codes, are FISPIN, ORIGEN (as used in HFIRCON), CINDER (covered tangentially in the MCNP6 discussion), and ISOTOPIA.
FISPIN is a standard code used in the UK over the last 60 years to calculate the composition and evolution of irradiated nuclear fuel and related waste streams [121]. FISPIN11, in development for approximately four years, is a complete rewrite of the FISPIN solution method to include nuclear reaction data for accelerators. The code makes several assumptions, including thin targets and neutron-only sources, and is being pursued as a means of handling accelerator-based neutron energy spectra. Quality nuclear data are essential to improve its predictive capability, as models are no longer limited by computational capabilities but by the uncertainties and covariances in nuclear data. Decay data and neutron transmutation cross sections are of specific interest.
ORIGEN is a generalized activation and depletion code packaged as part of the SCALE code suite [122]. ORIGEN solves the system of ordinary differential equations that describe nuclide generation, depletion, and decay of all nuclides in the system, as well as computing the alpha, beta, neutron, and gamma emission spectra during decay. HFIRCON is a multi-cycle neutronics and depletion analysis toolkit to automate many irradiation calculations at the High Flux Isotope Reactor (HFIR). It is used for materials testing, isotope production, and target and core design [123]. HFIRCON couples an enhanced version of MCNP5 to ORIGEN with ADVANTG variance reduction [124-127]. MCNP5 transport utilizes ENDF/B-VII.0 and ENDF/B-VII.1 cross sections supplemented with gamma production data from JEFF-3.1.2 [128,129], JENDL-4.0u [130], CENDL-3.1 [131], and TENDL-2013 [132]. Depletion calculations use SCALE-ORIGEN data. In order to improve its predictive capability, reaction cross sections are needed for isotopes that are not currently in the ENDF or JEFF libraries, for example ¹⁸⁷W and ¹⁸⁸W. A full evaluation with scattering and secondary-particle production is not needed for ORIGEN in this application space. Gamma production data are also vital for these calculations, as predicted local heat generation rates are often significantly off.
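The simplest member of the ODE family ORIGEN solves has a closed-form answer: the classic Bateman solution for a two-member decay chain A → B → (stable). A minimal sketch with illustrative decay constants (not evaluated data):

```python
import math

# Bateman solution for dN_A/dt = -lam_a*N_A and
# dN_B/dt = lam_a*N_A - lam_b*N_B, with N_B(0) = 0. ORIGEN solves the
# general coupled system of such equations for every nuclide, including
# neutron-induced generation and destruction terms.

def bateman_two_member(n_a0, lam_a, lam_b, t):
    """Return (N_A(t), N_B(t)); assumes lam_a != lam_b."""
    n_a = n_a0 * math.exp(-lam_a * t)
    n_b = (n_a0 * lam_a / (lam_b - lam_a)
           * (math.exp(-lam_a * t) - math.exp(-lam_b * t)))
    return n_a, n_b
```

For a full nuclide field the analytic forms become unwieldy, which is why production codes integrate the coupled system numerically (e.g., via matrix-exponential methods).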
CINDER is an activation and depletion code that can be used for both neutrons and protons [133]. Discussion of planned MCNP6 development indicated that CINDER will be made a callable library for use in coupled calculations in MCNP6 and other codes. The current version of MCNP6 does include an embedded version of CINDER'90, which can be used for k-eigenvalue calculations only. Currently, MCNP6 can be coupled to CINDER as well as ORIGEN and FISPACT.
ISOTOPIA [134] is a code that predicts medical isotope production with charged-particle accelerators. The computational engine behind the IAEA Medical Isotope Browser [135], it uses cross sections from the IAEA medical isotopes library [136] for 150 reactions, combined with TENDL-2017 for all other reactions. Once the parameters for a production run are entered in the web browser, the buildup and depletion curves for the isotopes of interest (or all products) are plotted. As with all activation and depletion codes, the reliability of the predictions depends strongly on the input reaction cross sections. Thus, improved cross sections, as well as more reaction channels, will be of significant benefit for this easy-to-use package. Extensions of ISOTOPIA for reactor and photonuclear production of medical isotopes are in progress.
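In the simplest thin-target, constant-flux case, the buildup curves such codes plot follow the standard saturation law: activity approaches the production rate R and reaches half of saturation after one half-life of irradiation. A sketch with illustrative numbers (not IAEA library data):

```python
import math

# Thin-target activation estimate: constant production rate R competing
# with decay constant lam gives N(t) = (R/lam) * (1 - exp(-lam*t)), so
# the activity A(t) = lam * N(t) saturates at R for t >> 1/lam.

def activity(R, lam, t):
    """Product activity (decays/s) for N(0) = 0 under constant production R."""
    return R * (1.0 - math.exp(-lam * t))

half_life = 6.0 * 3600.0             # hypothetical ~6 h product
lam = math.log(2.0) / half_life      # decay constant (1/s)
```

This diminishing return is why irradiations much longer than a few product half-lives buy little extra yield, and why accurate input cross sections (which set R) dominate the prediction quality.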

C. Recommendations
A strong and validated predictive code for reaction data is the single highest priority need for the isotope production community and presents a cross-cutting need for the entire nuclear data community, as many other applications rely upon these same codes.
All codes need a larger body of well-characterized experimental data to help tune and benchmark their capabilities. However, tuning to measured data only creates local improvement for those measured reaction channels [137]. In particular, as limited isotope production activity takes place at GeV energies, the need for GeV-scale predictive codes may be considered a much lower priority than the urgent need for validated codes up to 200 MeV. To improve the predictive capabilities of all these codes, the consensus is that: global fits are needed, requiring experiments that report all possible measured reaction channels for a given target and beam interaction; improved level density and pre-equilibrium models are needed for global rather than local (single reaction channel) improvements; and the community needs to design a set of integral isotope production benchmarks for validation, similar to those developed by the nuclear criticality community. While the required uncertainties in these data will reasonably vary with the application, accuracies ≤ 10% have been considered an acceptable target for the isotope community, as facilities often lack the time or capability to make iterative runs to meet production goals. As part of these global fits, evaluators need calibration points along the way, not necessarily only for the reactions of interest, as measurements of the competing channels will be valuable for placing constraints on calculations, especially at high energies and when fission barriers come into play.
There are several overarching observations and corresponding recommendations for isotope production. First, large-scale measurement campaigns for reaction data need to be continued, as these provide quality data for use in local model improvements. Stacked-target experiments give much information on many channels across a wide range of energies, but this is just one class of experiments needed. Rather than only reporting production cross sections, other reaction observables are also needed, including stable isotope production cross sections and secondary-particle spectra. Historically, stable isotope production is often neglected in isotope production measurements. However, since these data are attainable, they are valuable and provide another strong set of constraints on code performance. Measuring stable isotopes will, however, require measurement techniques other than decay spectroscopy. These may include chemical and physical methods (such as ICP-MS and other chromatographic techniques), as well as prompt gamma spectroscopy, which can give detailed information on angular momentum and level densities. Secondary-particle spectra are additionally useful from a physics modeling perspective, but can be challenging to measure. Measured as a function of angle, these spectra offer the ability to partially constrain level densities as well as contributions from compound and pre-equilibrium reaction mechanisms. For these reasons, establishing capabilities for their measurement would make significant contributions to model improvements. It is worth noting that obtaining these data at higher energies is a particular experimental challenge.
Second, the organization of measurement campaigns should be improved. Historically, many nuclear data measurement campaigns have been designed to measure production rates for particular reaction channels of interest to the isotope production community. However, a working group of both theorists and experimentalists could more efficiently identify which viable measurements would have the biggest impact on improving the predictive capabilities of codes.
Third, nuclear structure data are needed for tuning the level density and pre-equilibrium models used in all production yield codes. Because the nuclear astrophysics community has established detector arrays and analysis codes for such measurements, collaborative partnering with them would be an efficient way to measure these data without establishing independent capabilities. Similarly, pre-equilibrium models could be improved by the development of quantum mechanical models for pre-equilibrium particle emission, rather than the phenomenological models currently employed. Collaboration with the advanced computing for nuclear data community could prove a beneficial partnership, as high performance computing resources would likely be required.
Fourth, evaluations of charged particle-induced reactions are needed. Currently, the isotope production community uses a combination of modeling codes and EXFOR when production data are needed to guide activities. With the exception of beam monitor reactions and a selected set of reactions for production of therapeutic or diagnostic isotopes, there is no ongoing effort to evaluate charged-particle production data, and many of the other production modalities employed lack proper evaluations as well. The isotope production community needs an evaluated database for the production data currently being measured and used. Predictive codes will play integral roles in these evaluations, so their capabilities are needed here as well. It is recommended that a charged-particle evaluation subcommittee be added to the Cross Section Evaluation Working Group (CSEWG) in order to keep a sustained focus on this effort. While the charged-particle aspect is not necessarily unique to this evaluation work, the high-energy modeling required is. Compared to the majority of reaction evaluations, which focus on neutrons below 14 MeV, the reaction mechanisms and pre-equilibrium processes at these higher energies place unique and challenging constraints on models. The intent of such an evaluated database for isotope production is to function similarly to ENDF: a standardized resource that supports all the codes and applications. If predictive capabilities can be improved globally, it may be possible to reduce the number of unique, specific experimental measurements that must be performed every time a new idea or reaction is conceived.
Fifth, in conjunction with this evaluation effort, the isotope production community needs to design a set of integral benchmarks for validation of predictive codes, similar to those developed by the nuclear criticality community.
With the exception of the quantum mechanical pre-equilibrium modeling, these are the highest outstanding needs for our community. As this area grows from year to year, the nuclear data workforce will need to be expanded to compile and evaluate new measurements for isotope production.

IV. EXPANDED BENCHMARKS & VALIDATION FOR NUCLEAR DATA
Nuclear applications that use computational models built on underlying nuclear data would benefit from the development of well-characterized and documented experimental benchmarks, both critical assemblies (configurations of nuclear material measured at the point of a self-sustaining nuclear chain reaction) and other classes of integral experiments (experiments that test multiple nuclear data types at once). While critical assembly benchmarks are very useful for validating nuclear data, a broader suite of benchmarks is needed to provide more complete validation of the nuclear data and physics important for a broad range of applications. Critical assembly benchmarks provide a measure of system criticality known as the effective multiplication factor, k_eff, which is the ratio of the number of neutrons in one generation to the number of neutrons in the previous generation. Many different applications can leverage the framework used by the criticality safety and reactor physics communities to develop the additional benchmarks needed to validate the nuclear data they depend on. This session explored new and historical experiments that could be turned into benchmarks to strengthen nuclear data validation in cross-cutting application areas.
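The k_eff definition can be illustrated with a toy generation-by-generation tally (the counts below are synthetic, for illustration only, not from any benchmark):

```python
# k_eff is the ratio of the neutron population in one generation to that
# in the previous generation; Monte Carlo criticality codes tally it
# generation by generation and average over many cycles.

generations = [10000, 9980, 9995, 9970, 10010, 9985]   # synthetic counts

ratios = [later / earlier
          for earlier, later in zip(generations, generations[1:])]
k_eff = sum(ratios) / len(ratios)   # simple generation-averaged estimate
# a value near 1.0 indicates a near-critical configuration
```

Production codes use more careful cycle-skipping and variance estimates, but the underlying generation-ratio definition is the same.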

A. Importance of Benchmark Models
Benchmarks are models of well-characterized experiments for which experimental uncertainties and the biases and uncertainties of any geometry and material simplifications have been assessed. In order to improve their accessibility to users, they should be well documented and provide sample input and calculation results. Benchmarks are then used to validate that the analytical methods used to model a particular application adequately represent reality. Ideally, they should provide an integral test of the evaluated nuclear data, data processing codes, and transport codes used to model the application. They can be designed either to test multiple data (isotopes, reactions, energies) at once or, in some cases, to be particularly sensitive to one piece of data (for example, a thermal neutron scattering law). When used properly, benchmarks are an essential part of the validation process for evaluated nuclear data and provide applications the feedback needed to improve the data. Examples of benchmarks used in nuclear data validation can be found within documentation for ENDF; one specific example is shown in Fig. 2, which shows the χ² improvement in calculated k_eff for critical benchmarks for the ENDF/B-VIII.0 [138] nuclear data library compared to ENDF/B-VII.1 [139].
FIG. 2: An example from the Collaborative International Evaluation Library Organization (CIELO) project which shows validation using ICSBEP criticality benchmarks. This shows that overall the ENDF/B-VIII.0 nuclear data perform better than ENDF/B-VII.1 for ICSBEP benchmarks (from Ref. [140]).
Validation is often understood to come at the end of the nuclear data pipeline, but it is actually fundamental to ensuring the proper functioning of all parts of the pipeline and providing confidence in the predictive power of application models. Validation benchmarks specific to an application area can provide a way to systematically prioritize nuclear data needs and determine where funding is needed along the nuclear data pipeline.
The U.S. Department of Energy's Nuclear Criticality Safety Program (NCSP) funds research and technology relevant to Nuclear Criticality Safety (NCS) and can be considered a model of holistic nuclear data investment driven by validation data. An early focus of the NCSP was ensuring that an adequate suite of integral benchmarks was available for nuclear data and code validation, and the NCSP has been the main US contributor to the International Criticality Safety Benchmark Evaluation Project (ICSBEP) handbook [24] for thirty years. Validation testing against real experiments highlighted problems in underlying nuclear data, data processing, and codes. Therefore, the NCSP actively funds the nuclear data pipeline to ensure that subcritical predictions are correct, and uses validation needs as a driver and prioritization tool. The NCSP directly funds improvements to multiple radiation transport codes, which is important for code-to-code validation. The program is among the main sources of funding of US nuclear data evaluators (particularly resonance and thermal scattering evaluators), provides funding to the US National Nuclear Data Center (NNDC) at Brookhaven National Laboratory, and maintains its own Nuclear Data Advisory Group to prioritize funding of NCS data needs. The NCSP funds and directs integral experiment research at the National Criticality Experiments Research Center (NCERC) and Sandia National Laboratories and produces validation benchmarks for nuclear data and NCS, including critical and subcritical benchmarks. The NCSP can serve as a model for other programs that rely on code predictions to accomplish their missions.
As a direct result of the benchmarking efforts of the NCSP and the international criticality safety community, critical experiments have come to dominate the current nuclear data validation scheme for all applications. Data analysis of the output of criticality benchmarks is also simple, as it is one number, k_eff, but that one number is subject to a fortuitous cancellation of errors in the underlying nuclear data. Calculations of sensitivities to this one parameter are also straightforward compared to sensitivities for other types of experiments, and many codes exist to calculate these sensitivities. However, critical assembly benchmarks do not adequately test data for all applications, including gamma emission, scattering data, and the time history of fission. Validation using other types of integral or semi-integral experimental measurements could provide a wider test of nuclear data and code predictions. The goal of an adequate validation should be to have overlapping coverage from multiple different kinds of benchmarks, analogous to sensor fusion for a self-driving car. Cameras, LIght Detection And Ranging (LIDAR), and RAdio Detection And Ranging (RADAR) signals combine such that the car can be safely driven in all scenarios. Similarly, it is important to test all the ways codes can employ nuclear data with multiple types of experiments, which will ultimately constrain the potential solutions and eliminate the hidden problem of fortuitous cancellation of errors.

B. Past and Present Benchmarking Efforts for Nuclear Data Validation
The best-known compilations of integral experiment benchmarks are international efforts coordinated and maintained by the Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development. The ICSBEP [24] is the oldest and most trusted NEA compilation and contains criticality, shielding, fundamental physics, and subcritical benchmarks, although the majority of the included benchmarks are critical experiments. The other three NEA-managed compilations are the International Reactor Physics Experiment Project (IRPhEP) [141], the Shielding Integral Benchmark Archive and Database (SINBAD) [142], and the Spent Fuel Composition (SFCOMPO) [143] databases.
A few improvements to these benchmark compilations are suggested for nuclear data testing: addressing the lack of experimental correlations in the ICSBEP Handbook (only approximately 2% of benchmarks have documented experimental correlations); improving the usability, uncertainty analysis, and trustworthiness of the other experimental data resources (SINBAD, SFCOMPO); and incorporating legacy experiments that underpinned past validation campaigns (e.g., STEK [144]). Additionally, the expectations for benchmark quality (such as uncertainty analysis and acceptance of modeling simplifications) have evolved over time, and it would be appropriate to reevaluate some of the earlier benchmarks and bring them up to modern standards.
Other sources of historical integral data include the Cross Section Evaluation Working Group (CSEWG) Benchmark Book [145], last updated in 1991, a research reactor database compiled by the IAEA [146], as well as a selection of electronic citations from the United States' Office of Scientific and Technical Information (OSTI) [147][148][149]. While there are many existing experiments in these resources that could be useful for validation in other application areas, they are currently underutilized for validation. One of the main reasons is that these experiments are not necessarily evaluated as benchmarks and might have no uncertainty analysis at all beyond the experimentally reported uncertainty. Additionally, models of the experiments with modern codes may not exist, few tools exist to easily use these results for validation, and few tools exist to assess cross section sensitivities in the measured parameters. These compilations could provide an excellent starting point to find experiments that could be evaluated as validation benchmarks useful for multiple application areas.

C. Experimental Measurements that Could Become Benchmarks
In addition to historical experiments, there are many experimental measurements that, if adequately vetted and documented, could become benchmarks, including quasi-integral experiments (experiments that are highly sensitive to a particular reaction but might provide data as a function of time, energy, angle, etc.). This work uses the terms semi-integral, quasi-differential, and quasi-integral interchangeably. The following section describes examples of these types of experiments, but is in no way an exhaustive list.

Quasi-integral Experiments
Neutron-induced neutron emission experiments are highly sensitive to neutron scattering and can be used to capture angular dependence information. In these experiments, a well-collimated pulsed neutron beam hits a thick sample of interest, and detectors surrounding the sample detect neutrons which have undergone scattering or result from fission (in the case of fissionable materials). These experiments are usually conducted using neutron beams at time-of-flight facilities, and the neutrons are detected as a function of their time-of-flight. Rensselaer Polytechnic Institute (RPI) has conducted these experiments using incident neutron energies from 1 keV to 20 MeV with a carbon sample as a reference to assist with data interpretation for many different materials [150][151][152][153][154][155][156]. A picture of the RPI experimental set-up is shown in Fig. 3. The 238U experiment was used to inform the physics and the ENDF/B-VIII.0 evaluation of 238U [157,158]. Comparing the experimental results with detailed time-dependent simulations of the experiments can provide information for nuclear data evaluations, but a detailed model of the experimental set-up could be completed to provide integral validation as well.
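Since each detected neutron is tagged by its flight time over a known path, the incident energy follows from simple kinematics, E = ½ m (L/t)². A minimal non-relativistic sketch of that conversion (the function name and example values are illustrative and not taken from the RPI analysis):

```python
def tof_to_energy_mev(flight_path_m, tof_ns):
    """Classical kinematics: E = 0.5 * m_n * c^2 * beta^2, with
    beta = (L / t) / c.  Adequate at the few-MeV scale, where
    relativistic corrections remain small."""
    M_N_C2_MEV = 939.565        # neutron rest energy, MeV
    C_M_PER_NS = 0.299792458    # speed of light, m/ns
    beta = (flight_path_m / tof_ns) / C_M_PER_NS
    return 0.5 * M_N_C2_MEV * beta ** 2

# A neutron covering a hypothetical 25 m flight path in 1000 ns
# is a few-MeV neutron.
energy = tof_to_energy_mev(25.0, 1000.0)
```

Longer flight times at a fixed path length correspond to slower, lower-energy neutrons, which is how a single pulsed beam maps out an entire energy range.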
A slightly different type of neutron-induced neutron emission experiment that could provide excellent integral data is the Pulsed-Neutron Die-Away (PNDA) experiment. PNDA measurement techniques were used to characterize thermal neutron diffusion properties in water in a study by Nassar and Murphy [159]. As shown in Fig. 4, a deuterium/tritium neutron generator was used to provide a pulsed source of 14 MeV neutrons incident upon spherical Pyrex flasks of water of various radii at room temperature. Large-radius spheres have low geometric buckling and are relatively insensitive to thermal scattering, allowing validation of the absorption cross sections employed. Small-radius spheres have high geometric buckling and are very sensitive to the integral and differential thermal neutron scattering cross sections employed. After establishing thermal and spatial equilibrium, the neutron flux was measured over time with a BF3 detector immersed in the water. The apparatus was surrounded by a cadmium-shielded box to minimize room return. Fundamental-mode time-decay eigenvalues were calculated from the recorded count history. The Nassar and Murphy experiment could be evaluated as an ICSBEP Fundamental Physics Experiment, with the experimental set-up modeled in a radiation transport code to predict the neutron die-away, and additional experiments of this type would provide needed tests for thermal scattering laws. An example of this is the use of PNDA experiments to validate the ENDF/B-VIII.0 hexagonal ice TSL evaluation [160].
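Extracting a fundamental-mode time-decay eigenvalue from a recorded count history amounts, in the simplest case, to fitting a single exponential once higher modes have died out and background has been subtracted. A hedged sketch of that step (the function name, the assumed single-mode form, and the synthetic data are ours, not Nassar and Murphy's):

```python
import math

def fit_die_away(times_us, counts):
    """Least-squares fit of ln(counts) = ln(A) - alpha * t, returning
    the fundamental-mode decay constant alpha (1/us).  Assumes the
    background has been subtracted and only one mode remains."""
    n = len(times_us)
    xs, ys = times_us, [math.log(c) for c in counts]
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    return -slope

# Synthetic die-away curve: N(t) = 5000 * exp(-0.08 * t)
ts = [float(t) for t in range(0, 200, 5)]
cs = [5000.0 * math.exp(-0.08 * t) for t in ts]
alpha = fit_die_away(ts, cs)   # recovers the decay constant 0.08
```

Comparing a decay constant fitted this way from measurement against the one predicted by a transport-code model of the sphere is the essence of using PNDA data as a validation benchmark.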
Instead of detecting neutrons reacting with a target, a similar type of quasi-integral experiment that could become a benchmark measures gammas from inelastic scattering reactions. An example of these types of measurements is documented in the "Baghdad Atlas", a database of flux-averaged inelastic scattering gamma intensities measured at the Al Tuwaitha research facility outside of Baghdad in the 1970s [161] that has since been digitized and updated to reflect the current Evaluated Nuclear Structure Data File (ENSDF) [10] structure values [162]. The database contains gammas from 105 different samples, of which 76 are natural abundance and 29 are isotopically enriched. Each gamma is presented as a flux-weighted intensity, relative to the 56Fe 847 keV gamma, allowing for the conversion to flux-weighted cross sections. This database is unusual in its broad coverage of elements across the periodic table, including many isotopes that do not have many differential measurements. The measurements were done consistently, with the same flux, detector, and experimental setup. The detector used was a single Ge(Li) detector placed 90 degrees from the beamline. Unfortunately, the flux was not well-characterized, and the uncertainties on many data points are quite high compared to conventional benchmark uncertainties. However, as many samples measured as part of the Baghdad Atlas have no other differential data measurements, these measurements can indicate where large discrepancies exist in evaluated inelastic scattering cross sections. More benchmark experiments should be performed that are similar to the Baghdad Atlas in purpose, but with improved technology and characterization and with fluxes similar to the application flux.
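Because every line in the atlas was recorded in the same flux with the same detector, the conversion from a relative intensity to a flux-weighted cross section is, at its core, a scaling by an adopted reference cross section and the atom ratio of the two samples. The sketch below assumes upstream corrections (attenuation, detector efficiency versus energy) have already been applied; the function name and numbers are illustrative, not taken from the atlas.

```python
def relative_to_cross_section(i_rel, sigma_ref_mb, atoms_ref, atoms_sample):
    """Convert a gamma intensity measured relative to the 56Fe 847 keV
    reference line into a flux-weighted production cross section.
    i_rel: sample line intensity / reference line intensity (same flux).
    sigma_ref_mb: adopted flux-averaged cross section of the reference
                  line in millibarns (an assumed input here).
    atoms_*: number of atoms in the beam for reference and sample."""
    return i_rel * sigma_ref_mb * atoms_ref / atoms_sample

# A line half as intense as the reference, from a sample with half as
# many atoms in the beam, implies the same flux-weighted cross section.
sigma_mb = relative_to_cross_section(0.5, 1000.0, atoms_ref=2.0,
                                     atoms_sample=1.0)
```

This scaling also shows why the poorly characterized flux matters: any error in the adopted reference cross section propagates multiplicatively into every derived value.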
The study of neutron and gamma-ray emissions from fission fragments applies to several application areas. These emissions are signatures for the detection and characterization of nuclear materials. To perform experiments in this area, the University of Michigan recently developed the Fission Sphere (FS-3), an array of forty organic stilbene detectors operated in time-coincidence [163,164]. The FS-3 is used to measure the prompt emissions of neutrons and gamma rays from 252Cf spontaneous fission. These new data will be used to validate physics-based prediction codes, including CGMF [165] and FREYA [166], and will be useful in future ENDF and ENSDF evaluations. The first experiments using FS-3 and a 252Cf spontaneous fission source recently took place. These measurements provide useful information on the correlations among energy, multiplicity, and angles of emitted particles.

Subcritical Experiments
Neutron multiplicity counting (NMC) is important for several application areas, including nonproliferation, criticality safety, and in-core reactor monitoring. NMC accumulates the frequency distribution of coincident neutron counts observed during a coincidence gate that is typically several hundred microseconds to a few milliseconds wide, depending on the neutron lifetime of the subcritical system and the time constant of the neutron multiplicity counter. For multiplying systems (i.e., those containing fissile or fissionable materials), the measured NMC distribution is broader than a Poisson distribution with the same mean because the "bursts" of coincident neutrons measured by the multiplicity counter are correlated across multiple generations of fission chain reactions sustained in the system. In general, as neutron multiplication increases (i.e., as fission chain reactions grow longer), the NMC distribution broadens further. Furthermore, the higher moments (e.g., the variance, skewness, kurtosis, etc.) of the NMC distribution are more sensitive than the first moment (i.e., the mean neutron count rate) to changes in nuclear cross sections (fission, capture, and scattering) and other parameters (the probability distribution of the number of neutrons emitted during fission, etc.). A great deal of NMC and other neutron noise research has been performed in recent years due to improved hardware and simulation capabilities [167][168][169][170][171][172][173][174][175][176][177].
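The broader-than-Poisson character of a measured gate distribution can be quantified directly from its low-order moments: the excess variance ratio (variance/mean − 1, often called the Feynman-Y statistic) is zero for a purely Poisson source and grows with multiplication. A minimal sketch, using an illustrative histogram rather than real counter data:

```python
def nmc_moments(histogram):
    """First two moments of a neutron multiplicity counting distribution.
    histogram[n] = number of gates in which exactly n neutrons were
    counted.  Returns (mean, variance, feynman_Y), where
    Y = variance/mean - 1 is zero for a Poisson (non-multiplying)
    source and positive for a multiplying system."""
    total = sum(histogram)
    mean = sum(n * c for n, c in enumerate(histogram)) / total
    var = sum(c * (n - mean) ** 2 for n, c in enumerate(histogram)) / total
    return mean, var, var / mean - 1.0

# Illustrative heavy-tailed gate histogram, as a multiplying sample
# might produce: many empty gates plus occasional correlated bursts.
mean, var, Y = nmc_moments([50, 25, 12, 7, 4, 2])   # Y > 0: super-Poisson
```

Sensitivity of the higher moments, as opposed to just the mean, is exactly what the adjoint methods discussed below aim to compute efficiently.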
NMC measurements have not previously been used for nuclear data evaluation because there was no computationally efficient method to estimate the sensitivity of the higher moments to energy-dependent cross sections and other transport parameters. Recently, North Carolina State University (NCSU) developed a new adjoint-based first-order sensitivity analysis for higher-order NMC moments [178,179].
Other neutron noise methods can also be useful for nuclear data validation. A system based on stilbene organic scintillators (Oscar) has been developed by the University of Michigan. Oscar, shown in Fig. 5, is capable of pulse-shape discrimination and digital acquisition and has been shown to yield accurate estimates of k_eff for several subcritical SNM configurations [180][181][182][183][184].

D. Validation Needs from Application Areas
Not all application areas use the same specific nuclear data for computational predictions. A nuclear reaction data library contains hundreds or thousands of individual isotopes, each with multiple reaction cross sections and related data over many decades of energy. Ideally, the specific data used to predict an application observable should be identified and tested against an experimental benchmark measurement, which will help highlight data areas for improvement. The following section describes integral needs for several application areas to allow adequate testing of relevant data.

FIG. 5: An example of subcritical neutron noise measurements utilizing both 3He and organic scintillator detectors measuring a sphere of Pu [183].

Capture Gamma Benchmarks Needed for Multiple Application Areas
Despite being relevant to many application areas, the production of secondary gammas due to neutron capture is often overlooked. This type of data is needed for shielding design and analysis, but it is also important for reactor simulations to correctly model energy deposition due to gamma production (gamma heating) [185]. Additionally, gamma emission from active neutron interrogation provides a physical mechanism for unambiguously assessing the isotopic composition of an object (i.e., material identification), which is invaluable for nonproliferation studies. An additional example of nuclear data issues involving secondary gammas has been previously illustrated for oil exploration applications [186].
As an example of validation demonstrating a shortfall in data, researchers at the European Spallation Source (ESS) found that some important high-energy gammas produced by neutron capture in nickel were missing from ENDF/B-VIII.0, although they were present in ENDF/B-VII.1. The application of interest to the ESS is shielding around a neutron scattering instrument that uses a neutron supermirror, primarily comprised of layers of nickel and titanium, at the end of a cold neutron beamline. The shielding design around this beamline and the scattering instrument can be dominated by gammas produced by neutron capture, especially in the layers of the neutron supermirror. The important missing capture gammas [187] have energies of 7.819 and 8.998 MeV.
The U.S. NNSA/DOE Office of Defense Nuclear Nonproliferation funded a study that produced a prioritized list of elements relevant to nonproliferation applications that require improved reaction cross sections. The major driving interest is secondary gamma emission from active neutron interrogation. This prioritized list comprises elements that make up structural and shielding materials, controlled or dangerous substances, and detector materials. While not all isotopes of elements on this list have known issues with reaction data in ENDF/B-VIII.0, there is a need to review identified existing gamma production cross section data for validity, assess any unvalidated existing data for acceptability, correct erroneous data, and fill in missing cross section data. Additionally, there should be a concerted effort to reconcile discrete gamma-ray energies, multipolarities, branching ratios, and primary/secondary gamma-ray spectral data between the ENDF/B-VIII.0 and ENSDF libraries.
Benchmark experiments that primarily test radiative capture (n,γ) and inelastic scattering (n,n'γ) reaction data would be the most useful for these varied applications. Additional consideration should be given to the usability of the resulting benchmarks, as benchmarks that measure integral quantities like dose can take more computational time to run and do not provide specific information about gamma emission as a function of energy. Measurements of gamma spectra would be ideal.

Benchmarking Needs for Advanced Reactors
The wide variety of advanced nuclear reactor concepts being considered also brings additional nuclear data needs. These include fission product yield and decay data to more accurately predict isotopic inventories. More precise data are also needed to predict source terms and shielding requirements, including prompt neutrons and gammas from fission, gamma emissions from fission products, material activation and decay, and neutron and gamma attenuation. Improvements to thermal neutron scattering laws for many moderators (YHx, FLiBe, reactor-grade graphite, etc.) would also be desirable. HALEU (High-Assay Low-Enriched Uranium) integral experiments are needed for validation. It would also be highly desirable for material damage cross sections to be evaluated and disseminated in the manner of ENDF. Critical experiments performed to support the design and development of these advanced nuclear reactor concepts should be benchmarked to drive improvements in the nuclear data relevant to these applications.
Engineering mock-up critical experiments have historically been used to support the validation of nuclear reactor designs. One recent example is the use of the VENUS-F zero-power reactor [188] to support the reactor physics design of the Multi-purpose hYbrid Research Reactor for High-tech Applications (MYRRHA) facility being designed at the Belgian Nuclear Research Centre (SCK CEN) in Mol, Belgium [189]. MYRRHA has been conceived to operate in subcritical or critical mode, as an Accelerator Driven System (ADS) or as a fast reactor cooled by lead-bismuth eutectic, respectively.
To validate the nuclear data and codes for the MYRRHA design, several core configurations with four different compositions of fuel assemblies were studied in VENUS-F, shown in Fig. 6. This core combines metallic uranium fuel (30 wt.% enrichment) with aluminum oxide (simulating oxide fuel) and includes lead and bismuth as coolant simulators. Global parameters (k_eff, β_eff, and Λ_eff) and local parameters (spectral indices, axial and radial fission rate distributions, and differential control rod worth) were measured. These experiments could provide valuable data to support Pb-Bi-cooled fast reactor, ADS, and shielding applications if they were turned into accessible benchmarks. Similar benchmarking efforts for other advanced reactor concepts would provide the necessary data to check computational models, nuclear data, and assumptions.
FIG. 6: The VENUS-F Zero Power Reactor [188], which was used to generate benchmark data for the design of the MYRRHA accelerator driven system.

E. Sensitivity-based Nuclear Data Validation
Another barrier to wider use of benchmarks to inform nuclear data is that some integral experiments require non-trivial and computationally intensive analysis that can only be performed by a select set of experts using specialized software. The utility of these benchmarks could be vastly improved by using sensitivity coefficients (response functions) to provide near-instantaneous nuclear data feedback. With energy- and reaction-dependent sensitivity profiles, data evaluators could quickly and easily predict the effect of a cross section change on benchmark performance. Sensitivity methods for k_eff are the most advanced (due to considerable investment from the NCSP), but response functions for other benchmark values (calculated spectra, reactor physics observables, burnup, subcritical variables, etc.) would increase the usage of these complicated benchmarks by the nuclear data community and would assist in designing new experiments to have maximum impact on applications. Development of platforms for automated testing, both using traditional calculations and sensitivity feedback, is also important to data feedback. Example efforts in this area include ADVANCE (BNL) [190], NDaST (NEA) [191], as well as the recently developed CRATER (LANL) tool.
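The way sensitivity profiles deliver near-instantaneous feedback can be sketched with first-order perturbation theory: the predicted change in a benchmark response is the sum, over energy groups, of each group's sensitivity coefficient times the proposed relative cross section change. The group structure and numerical values below are purely illustrative.

```python
def predict_keff_change(sensitivities, rel_changes):
    """First-order perturbation estimate of a benchmark response:
    dk/k ~= sum_g S_g * (dsigma_g / sigma_g), where S_g is the
    energy-group sensitivity coefficient of k_eff to the cross
    section in group g."""
    return sum(s * d for s, d in zip(sensitivities, rel_changes))

# Hypothetical 3-group sensitivity profile for a capture cross section
# (k_eff decreases when capture increases, so S_g < 0) and a proposed
# +1% / +2% evaluation change in the two upper groups.
S = [-0.02, -0.05, -0.10]
dsig = [0.0, 0.01, 0.02]
dk_over_k = predict_keff_change(S, dsig)   # about -0.0025, i.e. -250 pcm
```

An evaluator can run this arithmetic in milliseconds for hundreds of benchmarks, instead of re-running full transport calculations for every trial cross section adjustment.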
Sensitivity methods can be especially powerful at finding nuclear data issues when coupled with a machine learning (ML) algorithm. LANL recently used machine learning [192] to find issues in nuclear data using the LLNL pulsed sphere experiments [193][194][195], shown in Fig. 7. Pulsed spheres exist for many distinct materials containing, by careful choice, only a few isotopes. This allows one to draw specific conclusions on how well nuclear data of specific isotopes perform when simulating pulsed-sphere neutron leakage spectra. The pulsed spheres have distinctly different sensitivities to nuclear data than critical assemblies. For instance, they are distinctly more sensitive to angular distributions than critical assemblies. In addition, ratios of sensitivities to fission-source term observables differ compared to critical assemblies. These differences allow for disentanglement of the effects of spectra, fission cross sections, and multiplicities when both critical assemblies and pulsed-sphere neutron-leakage spectra are used for nuclear data validation with ML algorithms [192,196].
FIG. 7: Setup of the LLNL pulsed spheres experiment [193]. These measurements have been used recently in modern validation efforts utilizing machine learning.

F. Summary of Nuclear Data Benchmarks
Accurate prediction of nuclear systems requires adequate testing of the codes and underlying nuclear data against real experiments. The current data validation scheme has known deficiencies that impact applications and their associated calculations. As a first step, data users should ensure their application areas have benchmarks to validate the nuclear data used in their simulations. If there are areas without adequate benchmarks, a next step could be to examine the historical benchmark collections and experiments to see if relevant ones exist. If not, it is recommended that new experiments be performed, evaluated, and promulgated as benchmarks. For maximum utility in providing feedback to the nuclear data community, sensitivity methods for benchmark observables beyond k_eff should be developed, and sensitivity profiles to cross sections calculated, which allows for efficient data testing without reliance on application experts.

V. NUCLEAR DATA FOR SPACE APPLICATIONS
As humanity works to extend its technological reach deeper and more resolutely into space, the sophistication of the missions and equipment being launched has also been accelerating. In turn, the engineering and scientific needs to support those missions have continued to grow, and nuclear data is no exception. From anticipating the effects of the cosmic rays that move freely in the vacuum of space to sending human-made radiation sources into space in support of missions, nuclear data and models generated mostly for terrestrial uses are being applied to space applications ever more widely. To that end, this session aimed to gather all the prominent users of nuclear data for space-related technology for the first time to summarize their work as well as the data needs they either have now or anticipate having in the future. These topics included protection/shielding from space radiation, planetary nuclear spectroscopy, space reactors, planetary defense, and detecting nuclear detonations in space.
As the impact of nuclear data on applications is recognized by a growing number of programs, it is important to examine the many cross-cutting nuclear data needs for the space mission. Enhancing outreach to relevant programs will enable more comprehensive discussions and collaboration among interagency partners. Future WANDA sessions related to space needs should: seek to build awareness of space applications in the nuclear data community; carve out a permanent place in WANDA for discussing their needs; document critical data gaps, especially those affecting multiple applications; and suggest steps to meet those data needs. Though only a starting point, the remainder of this section includes a brief introduction to each space-based research topic, the pertinent nuclear data, and what improvements would be most useful for that aspect of the field.

A. Space Radiation Protection
The radiation environment in space poses unique risks to humans and electronics, necessitating an understanding of the interactions of galactic cosmic rays (GCR), solar energetic particles (SEP), and trapped Van Allen belt radiation. The range of particle energies, species, and materials involved in those interactions is vast: energies from keV per nucleon up to several tens of TeV per nucleon; ion species spanning the naturally occurring isotopes of the periodic table; and materials composed of elements that also span the periodic table [197][198][199]. The effort to understand those interactions includes measurements in space [200][201][202][203][204][205][206], measurements at particle accelerators [207], and modeling [208].
The free-space radiation environment is generally well understood [197]. Except for cases where instruments and electronics are exposed to the free-space environment, the radiation environment for most operations in space will be composed of the particles and energies present after the primary radiation field has passed through varying thicknesses of the materials that make up spacecraft and habitats. In shielded environments, the radiation environment is composed of primary, free-space ions that have slowed down due to electromagnetic interactions (stopping power), and a secondary radiation field created by nuclear interactions of primary ions with shielding materials. The secondary radiation field is complex and also includes particles not present in the free-space environment, such as neutrons. The calculated yields of secondary light ions (p, deuterons, tritons, 3He, 4He, and n) have been predicted to contribute 50% of the dose equivalent behind 5 g/cm^2 of Al and 80% of the dose equivalent behind 30 g/cm^2 of Al [209]. The calculated secondary light ion yields are also responsible for most of the differences seen between the various codes [210] behind shielding thicknesses greater than 5-10 g/cm^2 and are the largest source of uncertainty in those calculations (see Fig. 8). As such, the secondary radiation field created by nuclear interactions within spacecraft, habitat, and other materials requires an accurate quantification of the electrons, protons, heavy charged particles, and neutrons that make up that field.
Radiation transport models, both Monte Carlo and deterministic, are the primary tools used for mission design and prediction of crew doses and electronic effects in space. Experimental nuclear data are needed for verification of code predictions, improvements in the physics models used in those codes, and reduction of the uncertainties in their predictions. A review of the double-differential and total reaction cross sections important to the understanding of GCR and SEP transport was conducted [207,211], and key gaps in the experimental data have been identified. For GCR transport, He-induced inclusive double-differential light ion (p, 2H, 3H, 3He, 4He, n) cross sections at beam energies from 0.1 up to several GeV per nucleon and on targets of H, C, O, Al, and Fe have been identified as a critical need, as well as total reaction cross sections for most GCR ion species and targets at beam energies above 1.5 GeV per nucleon. In some cases, such as Fe + O, no total reaction cross section data exist. Secondary particle production includes hadronic and electromagnetic particle showers, which spread dose geometrically as well as impact the depth of particle penetration through some material thickness. Angular dependence in production cross sections is a critical need for understanding showers. These data needs for the planetary spectroscopy community are similar to the needs of the isotope production and medical physics communities.

FIG. 8: Predicted dose equivalent rates from neutrons and ions behind varying thicknesses of aluminum using several transport models. (From [210].)

B. Planetary Nuclear Spectroscopy

Background
Planetary nuclear spectroscopy is an established subfield of planetary science in which measurements of gamma-ray and neutron emissions from planetary surfaces are used to characterize the chemical composition of the surface. First proposed as a means of characterizing the hydrogen [212] and major-element composition [213] of the Moon, the technique has now been applied to a wide variety of planetary objects. To date, nuclear spectroscopy experiments have been carried out from orbit around the Moon [214][215][216][217], Mars [218,219], Mercury [220][221][222], and the asteroids 433 Eros [223], 4 Vesta [224], and 1 Ceres [225]. Although less common, in situ experiments by landed spacecraft have also been carried out on Venus [226], asteroid 433 Eros [227], and Mars [228]. Missions are currently planned for asteroid 16 Psyche [229], the Martian moon Phobos [230], and Saturn's moon Titan [231].
Most planetary nuclear spectroscopy experiments rely on galactic cosmic rays to stimulate neutron and gamma-ray emissions from planetary surfaces, as shown in Fig. 9. In this scenario, high-energy primary cosmic-ray particles (>30 MeV), primarily protons, initiate nuclear spallation reactions to depths of a few meters in the surface. Spallation neutrons can escape the surface, and the energy-dependent shape of the neutron spectrum provides constraints on the bulk composition and hydrogen content of the surface. Moreover, the neutrons interact with subsurface materials and stimulate gamma-ray emission via inelastic scattering and neutron radiative capture reactions. The resulting gamma rays provide element-diagnostic measurements of the surface composition to depths of tens of centimeters. NASA's upcoming Dragonfly mission to Titan will use a D-T neutron generator to stimulate gamma-ray emission from the surface. However, the underlying nuclear reactions of interest are neutron inelastic scattering and radiative capture.

FIG. 9: Schematic of cosmic ray interactions with planetary surfaces. Rendering by Veronica Chen [232].

Current Status of Nuclear Data
Although a number of benchmark experiments have been conducted [233,234], the wide variety of processes that are important for nuclear spectroscopy experiments means that data analysis efforts require intensive radiation transport simulations that rely on cross section libraries to provide the knowledge of the physics processes of interest. Relevant processes include:
1. Spallation cross sections for protons and alpha particles, on a wide variety of materials, from energies of a few tens of MeV to hundreds of GeV.
2. Neutron elastic scattering cross sections from energies of ∼ 50 MeV to thermal (∼ 0.2 eV), also for elements with concentrations of ∼ 0.1 wt% or higher.
3. Gamma-ray production cross sections for neutron inelastic scattering.
4. Gamma-ray production cross sections for neutron radiative capture.
In the case of items 3 and 4, both primary (e.g., (n, γ)) and secondary cross sections for gamma-ray production are relevant, as both contribute to the final measured gamma-ray environment. While exact detection limits vary based on the nature of the gamma-ray detectors, spacecraft orbit, and measurement time, gamma-ray spectroscopic investigations are typically sensitive to elements with > 0.1 wt% concentrations. For known planetary materials, this can include H, C, O, N, Na, Mg, Al, Si, P, S, Cl, Ca, Ti, Cr, Mn, Fe, Co, and Ni. Currently, uncertainties on the neutron interaction cross sections are the dominant source of systematic uncertainty. Planetary geochemists require measurements with less than 1% uncertainty, while 5-25% uncertainties are currently the best that can be achieved.

Nuclear Data Needs
The highest priority nuclear data need for planetary nuclear spectroscopy is (n, n′γ) cross sections for H, C, O, N, Na, Mg, Al, Si, P, S, Cl, Ca, Ti, Cr, Mn, Fe, Co, and Ni, from threshold (∼ 0.1 to ∼ 1 MeV) to ∼ 50 MeV, with less than 5% uncertainty. This overlaps with data needs from safeguards and stewardship applications, where neutrons are used for non-destructive characterization of nuclear waste materials, and from homeland security applications. The data must be provided to the community via cross section libraries, e.g. ENDF (US Evaluated Nuclear Data File) and JENDL (Japanese Evaluated Nuclear Data Library), that are compatible with the GEANT4 [235] and MCNP6 [236] transport codes, which are widely used by the planetary nuclear spectroscopy community. Comparisons of laboratory-measured gamma-ray production via neutron inelastic scattering to predictions based on ENDF/B-VI, ENDF/B-VII, and ENDF/B-VIII reveal a significant degradation in the accuracy of the secondary gamma-ray energy distributions since the release of ENDF/B-VI [237]. Additionally, cross sections for secondary gamma generation are also affected.
Nuclear spectroscopic investigations also require knowledge of spallation cross sections from energies of a few tens of MeV to hundreds of GeV in typical rock-forming elements. The number of neutrons released in a spallation reaction is particularly important. Because of the wide variety of elements and energies in question, benchmarking experiments are particularly valuable [238] for guiding the choice of physics models in GEANT4 and MCNP6. This data need overlaps with the needs of the radiation shielding and isotope production communities.
Another important data need is (n, γ) cross sections. While these are generally known with better precision than the prior two examples [239], unexpectedly high cross sections are currently being identified [240], and elements with high capture cross sections can be relevant for planetary nuclear spectroscopy measurements even if the element is present at ∼ ppm concentrations and thus not directly detectable via nuclear spectroscopy measurements [241].

C. Space Reactors
With the U.S. returning to the Moon this decade (Fig. 10), along with crewed missions to Mars later this century, NASA has resumed looking at nuclear options for propulsion, surface, and on-board power. Past efforts in nuclear thermal propulsion (Project Rover), nuclear electric propulsion (Project Prometheus), and surface power (Kilopower [242], KRUSTY [243]) have been conducted and form the basis of current research efforts. In addition to the existing reactor designs from those projects, new reactor designs (gas, liquid, and solid) and fuels are being explored for space applications. One critical aspect of reactors that will be used in space is the need for autonomous control, a need that places additional emphasis on uncertainty quantification of the nuclear data used in the design of these systems. The data needs for many of the advanced reactor concepts for terrestrial use are very similar to the needs for space reactor development, such as:
1. Fission product inventories, with accurate data for individual and cumulative yields;
2. Secondary radiation generation and deposition;
3. Cross sections needed for the assessment of irradiation damage that are not currently available in the ENDF libraries;
4. Reduction of uncertainties on fast neutron reaction cross sections on uranium isotopes.
Though space and advanced terrestrial reactors share many common nuclear data interests, space reactors have unique size constraints and design criteria, and will operate in an entirely different radiation environment than their Earth-bound counterparts. These data needs address several areas of reactor development for space applications, including accident-tolerant fuel forms, material effects under conditions of high temperature, shielding, and reliability.

D. Planetary Defense
Planetary defense is a field of research devoted solely to the purpose of preparing for a scenario where a near-Earth object, such as an asteroid, could potentially collide with the Earth. Though an asteroid impact similar to what caused the extinction of the dinosaurs is an extremely low-probability event, there are many other smaller asteroids that pose a threat and could cause extensive damage; a recent example is the 20 meter asteroid that exploded over Chelyabinsk, Russia in 2013. It is estimated that there are about 130,000 near-Earth asteroids that are greater than 100 m in diameter, and only ∼ 20% have been accounted for and their orbits characterized [245].

FIG. 10: Illustration of a conceptual fission surface power system on the Moon which may potentially be used for the upcoming Artemis Mission [244].
In the event that the Earth did need defending from an asteroid impact, the preferred mitigation mission would be a kinetic impactor, which is both the simplest and currently the most developed option in terms of technology [246]. However, in the event that a kinetic impactor would be insufficient to prevent an asteroid impact, either because the asteroid is not in the correct size range or there is not enough time for the asteroid's orbit to be deflected, sending a spacecraft carrying a nuclear device to intercept the asteroid is an alternate option. A nuclear mitigation mission could be utilized in two different ways, depending on the need. Upon detonation, the device would emit mostly x-rays and neutrons that would heat up and vaporize the illuminated surface of the asteroid, causing material to expand and be ejected. If the intended mission was to deflect the asteroid, the ejected material would impart a push of momentum to the asteroid in the opposite direction, keeping the bulk intact and altering the orbit enough to miss the Earth. If the intended mission was to disrupt the asteroid, the x-rays and neutrons would cause a shock wave to penetrate through the entire asteroid, breaking it into many small, fast-moving fragments that would miss Earth by a large margin or vaporize in the atmosphere.

Simulations with Nuclear Data and Uncertainties
Correctly simulating the energy deposition from the device's radiation and the subsequent ejecta while designing a mitigation mission would be essential to its success. Such simulations would require accurate cross sections of all interactions and reactions for neutrons at the energies around the output of a nuclear device for the elements that make up asteroids. Though the output neutrons have a variety of energies, the most probable energies are 14.1 MeV (from the 2 H + 3 H fusion reaction), 2.45 MeV (from the 2 H + 2 H fusion reaction), and ∼ 1 MeV (peak value of the Watt distribution for the 235 U fission spectrum) [247]. Asteroids are roughly composed of various stone-like materials such as silicates, hydrocarbons, metals such as iron or nickel, and potentially some ice, depending on the asteroid's type [248]. Those compounds predominantly include the elements H, C, O, Mg, Si, S, Ca, Fe, and Ni, though others are possible (see Sec. V B). Chondrites and other meteorite samples can be used to provide insight into variations in initial particle (including photon) interactions and energy deposition with such astronomical bodies.
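The quoted ∼ 1 MeV fission-source energy can be checked with a quick evaluation of the Watt spectrum shape, W(E) ∝ exp(−E/a) sinh(√(bE)). The sketch below locates its most probable energy; the parameters a = 0.988 MeV and b = 2.249 MeV⁻¹ are commonly tabulated values for thermal-neutron-induced fission of 235 U and are an assumption here, not taken from this report.

```python
import numpy as np

def watt_pdf(E, a=0.988, b=2.249):
    """Unnormalized Watt fission spectrum W(E) ~ exp(-E/a) * sinh(sqrt(b*E)).
    E in MeV.  a [MeV] and b [1/MeV] are commonly tabulated parameters for
    thermal-neutron-induced fission of 235U (an assumption here)."""
    return np.exp(-E / a) * np.sinh(np.sqrt(b * E))

# Locate the most probable emission energy on a fine grid.
E = np.linspace(1e-3, 15.0, 20000)
E_peak = E[np.argmax(watt_pdf(E))]
print(f"most probable fission-neutron energy: {E_peak:.2f} MeV")
```

With these parameters the peak falls near 0.7-0.8 MeV, consistent at the order-of-magnitude level with the ∼ 1 MeV figure quoted above.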
Currently, the most efficient way to simulate the nuclear deflection/disruption of an asteroid is to first generate an energy deposition function from the radiation (such as in Fig. 11), which in the case of neutrons would utilize Monte Carlo transport codes such as MCNP [236] or Mercury [250]. The energy deposition function could then be used to initialize a standard hydrodynamics code (which includes damage models) that would calculate the asteroid's reaction to the energy deposited from the radiation over longer time scales [249,251]. The most recent versions of MCNP and Mercury get their neutron cross section data from the ENDF/B-VII.1 library and the Evaluated Nuclear Data Library (ENDL), respectively. An example of the type of nuclear cross sections used to calculate the deposition in Fig. 11 can be seen in Fig. 12.
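To illustrate the first step of that workflow, the sketch below builds the simplest possible stand-in for a transport-code result: a one-dimensional energy-deposition profile from plain exponential attenuation, of the kind a hydrodynamics code would be seeded with. The fluence and macroscopic cross section values are invented for illustration; a real profile would come from MCNP or Mercury with evaluated cross sections.

```python
import numpy as np

# Illustrative stand-in for a transport-code result: 1-D energy deposition
# from simple exponential attenuation.  A real profile would come from MCNP
# or Mercury; the fluence and macroscopic cross section here are invented.
def deposition_profile(x_cm, fluence, sigma):
    """Energy deposited per cm at depth x for an attenuating planar source."""
    return fluence * sigma * np.exp(-sigma * x_cm)

x = np.linspace(0.0, 50.0, 501)   # depth into the surface [cm]
sigma = 0.2                       # assumed macroscopic cross section [1/cm]
fluence = 1.0e3                   # assumed incident energy fluence [J/cm^2]
dose = deposition_profile(x, fluence, sigma)

# Sanity check: integrating over depth recovers nearly all incident energy.
deposited = float(np.sum(0.5 * (dose[1:] + dose[:-1]) * np.diff(x)))
print(f"fraction deposited within 50 cm: {deposited / fluence:.3f}")
```

The shape of such a profile, and hence the momentum imparted to the ejecta, depends directly on the neutron cross sections of the surface material, which is why the cross section uncertainties matter here.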
In part because the choice of a nuclear mitigation mission will likely be made after locating an incoming asteroid with little warning time, the properties of the asteroid itself will contribute the largest uncertainties when formulating the mission. Key characteristics such as the material composition, structure, rotation, and even the mass/size will likely be poorly constrained before a launch if minimal data on the asteroid has been collected. Even if a full reconnaissance mission to the asteroid has been achieved beforehand and most properties are well characterized, simply changing which portion of the asteroid is illuminated by the device can still present uncertainty. Creating a full picture of the sensitivities and uncertainties associated with the asteroid properties for a nuclear mitigation mission is an active work in progress for the members of the planetary defense community. However, many of the properties listed above will likely contribute greater uncertainty than the ∼ 25% arising from the nuclear data models. Even so, the data needs of planetary defense overlap significantly with the needs of planetary spectroscopy, which requires less than 5% uncertainty for neutron-induced cross sections in the energy range of interest. It is also likely that the asteroid surface compositions resulting from measurement efforts by those in planetary spectroscopy will inform the material characteristics for mitigation mission simulations, providing a twofold benefit from more precise cross sections.

Space-Based Nuclear Detonation Detection
Another application of nuclear data that is highly relevant to national and global security is the employment of satellites to detect nuclear weapon detonations on Earth, in the atmosphere, or in space. This continuous monitoring serves to verify that the countries party to the Limited Test Ban Treaty of 1963 and, later on, the Threshold Test Ban Treaty of 1974 are in compliance. This particular area represents a key nuclear data interest for the Defense Threat Reduction Agency (DTRA), which funds research for the purpose of countering weapons of mass destruction, as well as AFTAC, which hosts the USNDS treaty-monitoring mission. There are currently two different space-based platforms that the detection systems occupy: the Space and Atmospheric Burst Reporting System (SABRS) and systems that ride along with Global Positioning System (GPS) satellites in medium Earth orbit.
Depending on where the detonation occurred, the emissions that can be picked up will vary. If the detonation was in the air or on the Earth's surface, then the x-ray output from the resulting hot plasma of the nuclear detonation expands the air in a hot enough regime to create optical light. In addition, the prompt gammas emitted from the nuclear reactions free some electrons, which rotate in the Earth's magnetic field and emanate pulses in the radiofrequency domain. If the detonation happens at high altitude or in space, then all of the x-rays, gamma rays (prompt and delayed), and neutrons can travel freely to the space-based detectors. If the detonation happens somewhere in the upper atmosphere, the resulting signals will likely feature some radiation from both categories, depending on where it happened.
The applicable energy and time domains for detecting the gamma rays and neutrons from a detonation via satellite cover a fairly large range. The gamma-ray energies range from ∼ 100 keV to ∼ 8 MeV. The prompt gammas arrive at early times (100 ns to 1 ms), whereas delayed gammas can arrive at up to 100 s. Neutrons are emitted with energies between ∼ 1 and ∼ 20 MeV and arrive roughly within the same time frame as the delayed gamma rays [247].
The early-time delayed gamma rays that arrive within 100 µs to 100 ms and result from short-lived isomeric decays have significant uncertainties associated with their energies and half-lives. In particular, production estimates from 235 U, 238 U, and 239 Pu are important for calculating predicted fluxes of delayed gammas. There are also significant uncertainties on fission product yields (FPYs). There is a need for FPY data at more incident neutron energies and for more precise decay half-lives for isotopes with half-lives shorter than ∼ 0.5 s. Some experiments have been completed and others are underway with the hope of eventually measuring FPYs at decay times of order 1 s [253][254][255]. In the case of a nuclear detonation in air, knowing the neutron cross sections with elements in the air, such as H, O, N, and C, may also be important for understanding the light output of the detonation.
In general, implementing an approach that better quantifies uncertainty (which is required for these studies) is of great interest. Two techniques under consideration are using the uncertainties reported in ENDF directly and sampling the half-life and energy uncertainties via Monte Carlo methods.
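The second technique can be sketched in a few lines: draw the uncertain decay parameter from its reported distribution many times and look at the spread of the derived quantity. The half-life and its uncertainty below are invented for illustration, not evaluated data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical short-lived isomer: half-life 0.50 +/- 0.05 s.  These numbers
# are invented for illustration; they are not evaluated data.
t_half, dt_half = 0.50, 0.05   # seconds
t = 2.0                        # observation time [s]

# Sample the half-life from its reported (assumed Gaussian) uncertainty and
# propagate each draw to the surviving fraction exp(-ln2 * t / t_half).
samples = rng.normal(t_half, dt_half, size=100_000)
samples = samples[samples > 0.0]            # discard unphysical draws
frac = np.exp(-np.log(2.0) * t / samples)

print(f"surviving fraction at t = {t} s: {frac.mean():.4f} +/- {frac.std():.4f}")
```

Because the surviving fraction is a nonlinear function of the half-life, the Monte Carlo spread can differ from a simple linear error propagation, which is one motivation for sampling.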

E. Summary of Space-Based Needs
The range of nuclear data users whose work is based in space is a varied one. While the largest research areas are represented here, it is likely that some research areas within the field were left out. As this marks the first time the nuclear data community has extensively explored space applications at WANDA, there is still abundant need for further discussion. In the meantime, some key overlaps have already been noted.

Recommendations and Cross-Cutting Nuclear Data Needs
For the purposes of space radiation protection, He-induced inclusive double-differential light ion (p, 2 H, 3 H, 3 He, 4 He, neutron) cross sections at beam energies from 0.1 to several GeV per nucleon on targets of H, C, O, Al and Fe, as well as total reaction cross sections for most GCR ion species on targets at beam energies above 1.5 GeV per nucleon, are critical needs. These nuclear data weaknesses overlap with those of the isotope production and medical physics communities, as well as the planetary spectroscopy community, which requires spallation cross sections at energies ranging from ∼ 10 MeV to hundreds of GeV in elements that form planetary surfaces.
The planetary spectroscopy community also needs precise (n, n′γ) cross sections for rock-forming elements between ∼ 0.1 and ∼ 50 MeV with less than 5% uncertainty. Though that is a significant request for the experimenters that generate nuclear data, these cross sections are also needed in safeguards and stewardship applications, homeland security applications, and, most notably for this report, planetary defense.
Among the remaining research areas, there is less overlap between the space-application users. The needs of the space reactor community will in many cases follow the needs of the nuclear energy and advanced reactor communities (see Sec. VI). The nuclear data needs from the satellite-based nuclear detonation detection community overlap with many applications in their need for improved fission product yields and fission product decay data.

VI. NUCLEAR DATA FOR ADVANCED REACTORS AND SECURITY APPLICATIONS
A great diversity of advanced reactor designs was presented at WANDA 2021, spanning neutron spectra (thermal or fast), moderating materials, coolants, fuels, cladding, and structural components. Most importantly, the advanced reactor designs proposed today differ significantly from the majority of nuclear reactors that have been operating for the last half century, and thus they also differ in their nuclear data needs. Specific reactions and isotopes have been identified for advanced reactors and security applications in this workshop and will be summarized below. It would be advantageous if a centralized database of nuclear data needs for the US nuclear industry could be created, similar to but more specialized than the NEA OECD High Priority Request List (HPRL) [256].
The diverse nuclear data needs and the naturally competitive economic nature of advanced reactor companies make it difficult for national funding agencies to establish a completely prioritized nuclear data needs list in support of advanced reactor development in the US. Challenges include: combining disparate nuclear data needs for different reactor types in an equitable manner; adding considerations of cost-benefit analyses; and weighing the need for missing data, such as damage cross sections or thermal scattering uncertainty data, against the need to improve existing data.
This section on advanced reactor and security applications first addresses nuclear data needs for advanced reactor development in the US as discussed at WANDA 2021. Next, covariance data and uncertainty quantification are discussed in a broader sense, as a common requirement across all applications. Then, improvements of the ENDF/B-VIII.0 nuclear data library for advanced reactors are discussed in comparison to the preceding ENDF/B-VII.1. Finally, we conclude with a summary of ideas to address competing nuclear data needs among advanced reactor design, security applications, isotope production, criticality safety, and nuclear physics.

A. Summary of Specific Advanced Reactor Nuclear Data Needs

In the case of advanced reactor design, accurate reaction rate calculations are necessary for many of the materials in the core in order to determine power distributions, the reactivity worth of control mechanisms, shutdown margins, and the sign and magnitude of different dynamic feedback coefficients, such as Doppler and void reactivity coefficients [257]. These calculations use a substantial fraction of the nuclear data library content, far beyond what is present in the criticality benchmark experiments traditionally used to test the evaluated nuclear data libraries. Especially when considering reactor operation, with many advanced reactors achieving high fuel utilization and building up considerable fission product inventories, previous nuclear data library validation efforts may be missing many relevant cases. Therefore, individual and cumulative fission product yields may be of increased importance, as they play a central role in many transients, decay heat, and severe accident source terms. Evaluation of reactor kinetics parameters is also necessary to accurately predict the performance of designs under normal and accident conditions.
Secondary radiation generation and deposition is also important for predictive modeling and simulation of advanced reactor performance. These data include prompt neutrons and gammas from fission, gamma emissions from fission product decay, neutron capture and gamma emission data, material activation and decay, neutron and gamma attenuation, and energy deposition in all materials. Secondary radiation generation and deposition data are primarily required for advanced reactor studies, as is irradiation damage cross section information for a wide range of materials. Because damage cross sections are specialized and outside the scope of a general library like ENDF, it would be beneficial to create a dedicated library for them so that reactor designers can assess material lifetimes under actual operating conditions, which will most likely not be duplicated in a prototype system.
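For context on what a damage cross section feeds, a standard ingredient is the Norgett-Robinson-Torrens (NRT) model, which converts the damage energy of a recoil into a number of atomic displacements. The sketch below implements the textbook NRT formula; the 40 eV displacement threshold is a commonly assumed value (e.g. for iron) and is an assumption here, not a quantity from this report.

```python
def nrt_displacements(damage_energy_eV, threshold_eV=40.0):
    """Norgett-Robinson-Torrens (NRT) estimate of the number of atomic
    displacements produced by a recoil of a given damage energy.  The 40 eV
    displacement threshold is a commonly assumed value (e.g. for iron)."""
    if damage_energy_eV < threshold_eV:
        return 0.0                                   # no stable displacement
    if damage_energy_eV < 2.0 * threshold_eV / 0.8:
        return 1.0                                   # single Frenkel pair
    return 0.8 * damage_energy_eV / (2.0 * threshold_eV)

# A 100 keV damage-energy cascade:
print(nrt_displacements(1.0e5))
```

A dpa cross section then folds estimates like this over the recoil spectrum produced by each neutron reaction, which is why it depends on the underlying evaluated data.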
Thermal scattering law data are also important for reactor designs operating with a thermal spectrum. One of the challenges regarding thermal scattering law nuclear data is the abundance of compounds that can be used in a nuclear reactor. At kinetic energies above 10 eV, neutron-induced reactions can safely be approximated (for reactor applications) as collisions with an unbound nucleus, and only "free-atom" nuclide-specific cross sections are needed. However, for neutron energies below 10 eV, the molecular binding forces on the target atom play a significant role in the collision kinematics and can have a measurable effect on the predicted reactor behavior. The thermal scattering law data introduce additional data sets for specific nuclides in each moderating compound. This introduces modeling choices and code complexity, such as how to handle isotopes without scattering law data that build up during irradiation in moderating compounds, or the selection of a "nearest" thermal scatterer when the one that is needed does not exist in the nuclear data library, instead of resorting to the "free-atom" treatment, which is most likely a worse approximation. Yet another challenge that has been brought up by the community of nuclear data users is that certain thermal scattering law evaluations appear to give good predictive performance only when the nuclear data for the other materials in the system come from the same nuclear data library. Combining new thermal scattering law evaluations with nuclear data from older libraries does not provide consistent results. This implies error cancellation within an evaluation or a specific campaign, such as the Collaborative International Evaluated Library Organization (CIELO) Pilot Project [258], with continuation of those principles in the International Nuclear Data Evaluation Network (INDEN) [29].
The effective "free-atom" neutron cross section at any temperature can be calculated by Doppler broadening the cross section at 0 K or easily interpolated between effective cross sections at neighboring temperatures. Thermal neutron scattering data, however, do not have this luxury: they must be generated for each temperature used in the calculation. This presents a particular challenge for thermal nuclear propulsion systems, which can operate at temperatures exceeding 3000 K. Determining a reliable method for interpolating and extrapolating thermal scattering law nuclear data in the temperature domain is an open question in the field.
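For the free-atom case, the "easy interpolation" mentioned above can be as simple as interpolating tabulated effective cross sections linearly in √T, sketched below with invented values. The open question is that no comparably simple and reliable scheme is established for thermal scattering law data.

```python
import numpy as np

def interp_sqrtT(T, T_grid, xs_grid):
    """Interpolate tabulated effective cross sections between neighboring
    temperatures, linear in sqrt(T) -- one plausible scheme for free-atom
    data.  No comparably simple scheme is established for thermal
    scattering law data."""
    return np.interp(np.sqrt(T), np.sqrt(np.asarray(T_grid)),
                     np.asarray(xs_grid))

# Effective cross sections at two library temperatures (invented values):
T_grid = [293.6, 600.0]    # K
xs_grid = [20.0, 18.5]     # barns

print(f"{interp_sqrtT(450.0, T_grid, xs_grid):.3f} barns at 450 K")
```

The scheme reproduces the tabulated values exactly at the grid temperatures and varies smoothly between them, which is the minimum one would demand of any temperature-interpolation rule.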
New nuclear data evaluations are needed for advanced moderator and reflector materials which are being proposed for use in combination with High-Assay Low-Enriched Uranium (HALEU) fuel (enrichment between 5% and 20%). Yttrium hydride is of particular interest, highlighting the progressive nuclear data needs of the advanced reactor community in two ways: first, it is a material which has not been widely used in the past; second, nuclear data in a different neutron energy range will be important.
The advanced reactor community needs not only new nuclear data but also their associated uncertainties. Nuclear data sensitivities and uncertainties are actively being used to inform where extra margins may need to be added for design safety [259][260][261]. As an example, an estimation of Kairos Power's Fluoride-salt-cooled High-Temperature Reactor (KP-FHR) coolant temperature reactivity coefficient as a function of the design parameters carbon-to-heavy-metal ratio (C/HM) and fuel kernel diameter is shown in Fig. 13. Using nuclear data uncertainty propagation, a 1200 pcm (1-sigma) uncertainty in the system eigenvalue and a 30% (1-sigma) uncertainty in the coolant reactivity coefficient due to 7 Li(n, γ) were found. Ideally, design parameters would be selected to have a small, negative coolant temperature reactivity coefficient. However, Kairos Power does not depend on the nuclear data for a final design: a prototype reactor, HERMES, will be used to inform this and many other aspects of the final design.
In Molten Chloride Fast Reactors (MCFR), nuclear data-induced uncertainties of 900-1700 pcm in the k-eigenvalue have been reported, with uncertainties arising from both 239 Pu and 35 Cl(n, p).
A particularly important uncertainty arises from angular distributions. Currently, the uncertainty on the angular distribution from elastic scattering is reported for only a small number of isotopes in ENDF/B-VIII.0. This is a concern because scattering angular distributions are known to have a significant impact on the criticality of small nuclear systems relying on a reflector, such as the MCFR design, which utilizes an MgO reflector/moderator. While the reflector material has a known impact on the criticality of that reactor design, there is no uncertainty information in ENDF/B-VIII.0 on the 24 Mg elastic scattering angular distribution, and thus this effect is unaccounted for in uncertainty studies. Current mechanisms for systematic propagation of nuclear data uncertainties treat missing/unreported uncertainties as zero, exactly as if those quantities were perfectly known. This is not a conservative approach from the perspective of safety. Furthermore, if an uncertainty is not reported, it usually means that the given quantity has not been investigated thoroughly and a large uncertainty may be possible.
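The consequence of a missing covariance entry can be seen directly in the first-order ("sandwich") propagation formula, var(k) = S C Sᵀ: a zero block in C contributes nothing, no matter how sensitive the system is to that parameter. The sensitivities and uncertainties below are invented for illustration.

```python
import numpy as np

# First-order ("sandwich") uncertainty propagation: var(k) = S @ C @ S,
# with S the vector of k-eff sensitivities to two nuclear data parameters.
# All numbers are invented for illustration.
S = np.array([0.15, 0.30])            # relative sensitivities dk/k per dx/x

# Parameter 2 (say, an elastic scattering angular distribution) has no
# covariance entry, which propagation tools treat as exactly zero:
C_missing = np.diag([0.02**2, 0.0])
# The same library if parameter 2 carried a plausible 10% uncertainty:
C_complete = np.diag([0.02**2, 0.10**2])

for label, C in [("missing", C_missing), ("complete", C_complete)]:
    sigma_k = np.sqrt(S @ C @ S)
    print(f"{label:9s}: {1.0e5 * sigma_k:.0f} pcm")
```

In this toy case the "complete" library yields an order-of-magnitude larger propagated uncertainty, showing how a missing entry silently masquerades as perfect knowledge.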
Beyond the need for nuclear data uncertainty, there are also specific needs for integral experiments for nuclear data validation to support advanced reactor development. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) contains on the order of 5000 critical and subcritical integral experiments, with a select few benchmarks used for nuclear data validation. Currently, there is a complete lack of criticality benchmarks for nitride fuels in thermal reactors. Nitrogen scattering cross sections for 14 N and 15 N in the thermal range have little experimental justification. Dedicated experiments may be necessary to provide integral reaction rate measurements in specific advanced reactor neutron spectra and at elevated temperatures.
Computational modeling and simulation of nuclear security around advanced reactor design has its own nuclear data requirements. Nuclear security applications based on anti-neutrino physics require accurate fission product yields and beta decay chains. Fission product detection in Molten Salt Reactors (MSRs) requires more accurate nuclear data for the following isotopes: 95 Nb, 103 Ru, 106 Rh, 106 Ru, 125,126,127 Sb, 129m,132 Te, 131,132,134 I, and 138 Xe. Further, improved fission yield data are specifically needed for 233 U, 232 U, 232 Th, and 233 Pa for thorium MSRs. Lastly, gamma-ray and x-ray data must generally be improved to support safeguards applications for MSRs. Targeted needs are gamma-ray and x-ray energies, branching fractions, x-ray line widths, (γ, n) neutron energy spectra, mass attenuation coefficients (for gamma attenuation and neutron self-shielding), and activation product yields. Fig. 14 shows the uncertainty contribution from nuclear data alongside other uncertain parameters such as detector statistics and the efficiency model. A recent paper identified uncertainty in branching ratios as a key contributor and performed additional measurements to achieve a factor of 2 to 3 reduction in 5 key branching ratio uncertainties [262].

B. Covariance Data and Uncertainty Quantification
"Covariance data" here refers to all uncertainty data and correlations, which have traditionally taken the form of covariance matrices, approximating the joint probability density function of the entire set of nuclear data as a normal distribution. Covariance data are important at the design stage for predicting the uncertainty in nuclear reactors due to (estimated) errors in the nuclear data. During the prototype stage, as measurements of system behavior become available, these data become less important. Nonetheless, sensitivity and uncertainty tools and nuclear data uncertainty propagation are now widely used to understand this possible source of uncertainty; however, there is some concern that covariance data are not predictive enough. For example, the biases observed when comparing calculations to critical experiments (0.1-0.5%) are in general much lower than the results of nuclear data uncertainty propagation (0.5-1.5%).

FIG. 14: Uncertainty analysis for signatures used in non-destructive assay of MSRs [262].
One of the fundamental challenges of employing covariance data is that the current ENDF/B format cannot represent and store certain types of covariance or correlation information, such as correlations between fission product yields and decay data. The newly developed GNDS format strives to allow all possible covariance data to be stored. However, work remains to be done to ensure that all potential sources of uncertainty can be represented, stored, read out, and used in the new format.
Another challenging area for covariance data evaluation is the difficulty of validation. Since covariance information in evaluated nuclear data represents a degree of certainty in the reported mean values, it is not a physically measurable quantity. Therefore, validation of covariance data is not possible in the strict sense. There is a need for robust (ideally open-source) covariance verification, checking, and adjustment codes which can be used across all applications.
It is technically possible to generate application-specific covariance matrices which are calibrated to a set of measurements. The clear advantage of this process is the gain in predictive power [263]. The disadvantages are the potential for misuse, such as application outside the original intention, and the inability of these application-specific "corrections" to feed back into the fundamental data. A further downside is the potential for conflicting adjustments based on different application bases. It is the strong opinion of the community of nuclear data producers and users that it is the responsibility of nuclear data evaluators to declare which integral experiments have been used in the evaluation process and how those experiments were used, whether systematically or non-systematically. Such declarations will help ensure that those experiments are not used in the code validation process.
There are significant gaps in the covariance data library. Missing or unreported covariance data are simply neglected in systematic uncertainty propagation methodologies. Regrettably, this is mathematically equivalent to having perfect knowledge of the quantities, since missing covariance data result in zero propagated uncertainty attributed to that source. The missing covariance data and correlations with the most immediate impact on advanced nuclear reactor modeling are thermal scattering law covariance data, angular distribution covariance data, and correlations between independent fission yields and decay data. Furthermore, there are gaps in code capabilities to systematically propagate the impact of some existing and future covariance data. For example, sensitivity coefficients for thermal scattering law data may not be calculated in MCNP, Serpent, or SCALE.
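The effect of a missing covariance block can be seen in a small sketch (numbers invented for illustration): zeroing an unreported variance silently removes that source from the propagated total, exactly as though the quantity were known perfectly.

```python
import numpy as np

# Two data sources with sensitivities S; the second source has no
# evaluated covariance, so its entry in the matrix is effectively zero.
# All numbers are invented for illustration only.
S = np.array([0.2, 0.4])              # sensitivities (dk/k per dsigma/sigma)
C_full = np.diag([0.03**2, 0.05**2])  # both uncertainties evaluated
C_gap = np.diag([0.03**2, 0.0])       # second source missing/unreported

unc_full = np.sqrt(S @ C_full @ S)    # sandwich rule with full covariance
unc_gap = np.sqrt(S @ C_gap @ S)      # gap shrinks the propagated uncertainty
print(unc_full, unc_gap)
```

Here the more sensitive data source is the one with missing covariance, so the propagated uncertainty is understated by a factor of several, with no warning to the user.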

C. Incorporating New Nuclear Data Libraries into Advanced Reactor Analysis
Although the ENDF/B-VIII.0 nuclear data library was released in February 2018, adoption of this evaluation has been slow among some nuclear data users. Issues have been uncovered in the validation of light water reactor depletion simulations, with a reactivity bias that increases as a function of burnup. Thus, while simulations of fresh fuel may match measurement very well, depleted fuel can have a significant bias in reactivity (700 pcm) [264]. In the HTR-10 high temperature graphite reactor benchmark, ENDF/B-VII.1 showed a 500 pcm error compared to experiment, while ENDF/B-VIII.0 showed an 800 pcm error [265]. The nuclear data evaluation community maintains that ENDF/B-VIII.0 performs better than ENDF/B-VII.1 in validation on the set of criticality safety related benchmarks in the ICSBEP, and it could be that we simply need more non-criticality validation cases to reduce the potential for performance regression.
ENDF/B-VIII.0 also includes various thermal scattering law libraries for graphite at different porosities. While this is a large step forward, it also requires knowing the correct graphite porosity in order to properly simulate the results. For example, with HTR-10, simply swapping one porosity for another leads to a 665 pcm difference in reactivity [265]. An unexpected trend with 16O has also been found: as the energy corresponding to the Average Lethargy of Fission (EALF) increased, reactivity decreased compared to ENDF/B-VII.1. A similar trend was found with Plutonium-Solution-Thermal (PST) benchmarks. The set of PST benchmarks had a positive bias in reactivity as EALF increased, suggesting an issue in the plutonium evaluation above thermal neutron energies [266].
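For reference, the pcm (per cent mille) differences quoted in this section follow from the standard reactivity definition rho = (k - 1)/k, with 1 pcm = 1e-5 in rho. A minimal sketch, with invented k-eff values:

```python
# Reactivity in pcm (per cent mille): rho = (k - 1)/k, 1 pcm = 1e-5.
# The k-eff values used below are invented for illustration only.

def reactivity(k: float) -> float:
    """Reactivity rho = (k - 1) / k."""
    return (k - 1.0) / k

def delta_rho_pcm(k1: float, k2: float) -> float:
    """Reactivity difference between two k-eff results, in pcm."""
    return (reactivity(k1) - reactivity(k2)) * 1e5

# e.g. the same benchmark simulated with two different nuclear data libraries
print(delta_rho_pcm(1.0050, 0.9985))
```

A few hundred pcm corresponds to a few tenths of a percent in k-eff, which is comparable to the target accuracy of criticality benchmarks and thus large enough to matter for licensing margins.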
It is recommended that, during the nuclear data evaluation validation process, proposed nuclear data evaluations be compared against a wider subset of benchmarks and benchmark-like data, particularly where there can be large impacts on advanced reactors.

D. Metrics for Nuclear Data Need Prioritization
The nuclear data needs for advanced reactors are primarily driven by material choices, which include coolants such as FLiBe and molten chloride salts; moderators such as graphite and yttrium hydride; control materials; advanced fuels like uranium nitride; and cladding materials like silicon carbide. In the case of nuclear security for reactors, reducing uncertainties in gamma-ray and x-ray energies, branching fractions, and x-ray line widths for nondestructive isotopic analysis of important isotopic ratios is key to enabling a robust, economic safeguards and security approach to advanced reactors and nuclear fuel cycle facilities. In view of this diverse set of needs, including many not discussed here, it is very important to develop an effective procedure for identifying and prioritizing needs.
One way to accomplish this is driven by the requirements for accurately predicting reactor behavior during steady-state and transient operation via sensitivity analysis (SA) and uncertainty quantification (UQ), in the context of regulatory requirements set by the U.S. Nuclear Regulatory Commission (NRC). This SA/UQ is highly dependent on the quality of covariance data for uncertainty propagation. The identification of isotopic data of significance for the prediction of key quantities of interest, such as core reactivity, decay heat, power distribution, and source term, should be used as a basis for prioritization of the needs.
Short-term versus long-term prioritization of the nuclear data needs is also necessary. Deployment time is critical for advanced reactors, and most vendors will adjust their margins and move on with their system deployment plans if their nuclear data needs cannot be fulfilled on a short timeline. The nuclear data pipeline from need, experiment, and modeling to evaluation, validation, and verification of the library is too long to effectively support advanced reactor deployment in 4-8 years. If improvements are needed, a plan for long-term impact is necessary, because a long-term, committed effort is required to significantly accelerate the nuclear data pipeline. This must be balanced with short-term, targeted R&D investments. Detailed feedback from technology developers is needed for nuclear data evaluators to be effective.
An alternate approach to prioritizing nuclear data needs is to consider only those needs that are critical to the deployment of an advanced reactor system. This report has highlighted some specific isotopes that are key to the deployment of particular advanced reactor technologies: elastic scattering off 24Mg for MCRE, thermal scattering data for graphite and FLiBe, and cross sections for 19F, 9Be, 6Li, and 7Li for FHR. However, it is unclear if any nuclear data need reaches this level of being critical to deployment, i.e., a reactor cannot be built without it. In many cases, targeted experiments or reactor prototypes will fill any gaps. Additional design margin (e.g., increased fuel enrichment) and shorter core lifetime (e.g., when material damage rates are not well characterized) are possible outcomes with a potentially large cost, but one driven entirely by the specifics of a design.
An intriguing possibility would be for the NRC to become involved in the prioritization of nuclear data needs, using license applications to identify significant data gaps based primarily on safety concerns. However, such a development could have detrimental impacts on reactor cost and on the length of license reviews.
Based on these considerations, the top nuclear data priorities to support the deployment of advanced reactors in the US, as well as the development of nuclear security for advanced reactor applications, are to: 1) address missing data and any artifacts discovered by the community in ENDF/B-VIII.1; 2) improve evaluations with large uncertainties that are relevant for any currently considered design, with the expectation that data may come from new experiments and/or reactor prototypes rather than new differential measurements; 3) improve the generality of the evaluated data files to represent correlations of data; 4) improve the verification and validation processes used in the development of the next ENDF/B-VIII.1 release to include more cases representative of advanced reactors; and 5) continue to develop and improve methodologies for evaluation of all sources of uncertainty (not just nuclear data) and the associated cost/benefit of refinement.

VII. THE HUMAN PIPELINE FOR NUCLEAR DATA
Researchers play a key role along the entire pipeline, contributing effort, improving the links between pipeline components, and evolving the pipeline to meet the needs of the nuclear data community. The focus of the WANDA workshops has been on needs associated with each of the pipeline components shown in Fig. 1. In a session at WANDA 2021, the human element involved in each component of the pipeline was considered. The discussion centered on three topics: one involved summaries of ongoing activities to further develop and expand the nuclear workforce, a second focused on new possibilities for nuclear data workforce expansion, and the third discussed the evolution of the skillset of the nuclear data workforce, especially with regard to automation.

Outreach
A variety of nuclear science outreach activities exist to engage and educate the general public, students of all age levels, and researchers both within and external to the nuclear science community. These activities are coordinated by universities, national laboratories, and university-national laboratory collaborations, and include, for example, public events or displays that expose the general population to nuclear science concepts and provide a general overview of our field. As an example, Michigan State University provides a range of successful programs to engage members of the public beyond the nuclear science community [267][268][269][270][271]. Opportunities for younger students also exist through university and broader collaborations [272][273][274]. Teach-the-teacher programs provide schoolteachers with knowledge and materials for introducing nuclear science at pre-college levels [275,276]. Such activities provide an early introduction to nuclear science and may influence a student's selection of undergraduate or graduate-level coursework.
It is critical to note that these outreach activities are focused on nuclear physics rather than on the specialized field of nuclear data; they serve to expand the nuclear data workforce only indirectly. However, some nuclear data-specific outreach activities also exist. The Exotic Beam Summer School [277], an annual event for graduate students, recently added a new component focusing specifically on nuclear data. The nuclear data community leads the FRIB Working Group on Nuclear Data [278], which organizes a working group session on nuclear data at the annual Low Energy Community Meeting [279].

Internships and Research Opportunities
Students have a range of opportunities to take part in nuclear physics research through undergraduate and graduate research programs and conferences [280,281]. Similar to the aforementioned outreach activities, these internships and research opportunities are primarily directed at nuclear physics in general rather than nuclear data in particular. Some US Nuclear Data Program (USNDP) centers bring interns into their programs, which has already proven beneficial for expanding the nuclear data workforce. It would further benefit the nuclear data community to greatly expand such opportunities specifically targeted at nuclear data.

B. New Possibilities for Workforce Expansion
Some initial recommendations related to expanding the nuclear data workforce are outlined here. Further exploration of these possibilities is needed to establish a robust nuclear data workforce for the future.

Catalog Pipeline Skills
To most effectively expand the workforce along the nuclear data pipeline, it would be beneficial to enumerate the necessary and valuable skills for successful participation in each technical component of the pipeline. Such a catalog of recommended skills could help target recruiting efforts as well as assist current researchers in developing additional skills to enhance their career path and prepare them for future activities. This information will also help mentors guide junior researchers.
In such a catalog, technical skillsets beyond physics knowledge should be included; for example, strong computational skills are needed across the nuclear data pipeline. Furthermore, it is essential to identify situations where the expertise in a particular nuclear data topic is in danger of being lost completely. One example is Thermal Neutron Scattering Law (TNSL) evaluation work, where the capability was nearly lost a few years ago. At WANDA 2021, a newly-developed and very successful university-based program for TNSL evaluations at North Carolina State University was highlighted. While similar efforts may not fit all evaluation needs, university involvement is critical for training the next generation of nuclear data workers. Finally, recruiting additional data evaluators and nuclear theorists was identified as a pressing need.
By comparing the current nuclear data workforce skill distribution with a catalog of required skills, roles requiring increased activity may be identified, and short- and long-term plans to fill those roles can be developed. In parallel, it would be advantageous to define minimum skill requirements, identify qualified mentors, and suggest possible career paths. It is essential to identify and highlight different career paths for students entering the pipeline and share these potential opportunities with students, faculty, and career counselors. It must be emphasized that while technical nuclear data efforts comprise the pipeline shown in Fig. 1, the human element often moves freely between different activities. There is no specific entrance or pre-determined career path in nuclear data, and a wide range of skillsets are needed.

Expanded Outreach and Internships
The importance of increasing the exposure of nuclear science activities to undergraduate academic institutions (community colleges, colleges, and universities) without nuclear science programs was discussed as a way of broadening the range of students entering the pipeline and nuclear-related fields. Establishing formal outreach programs may augment the many volunteer-based outreach activities that are challenging to sustain. Broader outreach activities from the nuclear data community to institutions with nuclear science programs are also essential to show students and junior researchers how the data they produce are used. This could be accomplished through increased participation in summer schools or via a dedicated lecture series on nuclear data-related topics. Another possibility is the creation of a Nuclear Data Outreach Position to serve as a liaison between the nuclear data community and the broader nuclear physics community. The nuclear data community would also significantly benefit from adding nuclear data-specific components to many existing nuclear physics outreach activities. Finally, while some USNDP data centers do bring in interns, establishing a national program for data-specific research internships could make a significant impact on the number of young researchers who pursue careers along the nuclear data pipeline.

Mentoring Support
Strong mentors are needed at all career levels. Mentor relationships are key for introducing new researchers to nuclear science activities and sustaining their involvement, as well as for preserving the expert knowledge of the aging nuclear data workforce. In the case of junior researchers, mentorship extends beyond academic support to include career advice, confidence building, networking, and communication skills. It is critical to provide additional support for mentors, both in terms of funding, such as summer salary for faculty and time for national laboratory researchers, and in terms of resources. These resources include readily-available information about the nuclear data pipeline and career opportunities for mentees. Resources and opportunities for potential mentors to financially support students should also be made available.
Connections and support pathways between mentors should be encouraged. Special sessions at national-level meetings may help to share resources and advice. Cross-institutional academic and financial support for mentors would benefit junior researchers working within the nuclear data pipeline. Given the wide range of skills needed, expertise may not be readily available at the mentee's home institution. This may be particularly true for evaluation efforts, where knowledge is required from many different subfields. Therefore, collaboration with laboratory partners can be beneficial for mentors.

Increase Diversity, Equity, and Inclusion
Improving Diversity, Equity, and Inclusion (DEI) in the workplace is an important practice now pursued in all employment categories. The nuclear data field should follow such practices by building improved connections to underrepresented and marginalized groups, and subsequently work actively to strengthen and sustain these connections. The nuclear data that enter the pipeline originate from facilities throughout the world, and thus opportunities for non-US citizens to participate in the US nuclear data community would increase the diversity and expertise in the field. Outreach activities to these groups are often initiated and run by volunteers, and it can be challenging to sustain them in the long term as the group infrastructure and leadership change. In addition, while funding opportunities may exist for new partnerships with underrepresented institutions or individual researchers, ongoing collaborations must also be supported to establish firm connections.
In addition to canonical DEI efforts, it is imperative to create an environment that is welcoming to those outside of nuclear science. For example, the nuclear data pipeline relies on a range of individual expertise, with many talented computer scientists, software engineers, and statisticians playing key roles. By building a strong collaborative community with shared objectives, the field can be strengthened.
Flexibility is required to allow individual researchers to enter and exit pipeline activities at any stage. Such researchers include those with experience outside of nuclear science, as described above, but also those within nuclear-related fields. An open and accessible community, where individuals feel free to explore other options, especially those that may ultimately benefit the overall pipeline, must be maintained. Specifically, because many researchers enter nuclear data with previous experience in different fields, it is important that they maintain those prior connections. Establishing positions where researchers can commit some fraction of their effort to research in their field of expertise would benefit both the individual and the nuclear data community.
Finally, DEI efforts can be aided by making extensive information about the nuclear data pipeline readily accessible, including details of the skill sets needed to contribute, especially those skills beyond nuclear-related fields.

C. Evolving the Pipeline
Advances in computational tools, containerization, and machine learning algorithms have opened the door to automating significant portions of the nuclear data pipeline. Automation will transform, rather than reduce, the nuclear data workforce. New technologies can eliminate or minimize many rote or tedious activities required of a nuclear data evaluator, allowing more time to focus on the physics, interpretation, and quality of evaluations. Continuous integration and deployment software now automates portions of the pipeline used to generate revisions of ENDF, serving also to increase the quality of each evaluation [313]. Machine learning techniques have the potential to decisively augment an evaluator's interpretation by finding trends in large, complex datasets that are impossible for humans to discern [314,315]. While computational advances have lowered the human workload in processing and verifying the nuclear data libraries, the complexity and volume of nuclear data in those libraries have vastly increased. The human component remains vital for interpreting the results, casting existing data in forms that algorithms can meaningfully interpret, and improving the physics. The role of humans in nuclear data activities will likely change considerably in response to the changing characteristics of the pipeline and the promise of automation. The community should prepare for, and embrace, these new developments, which will provide improved nuclear data for a wide variety of applications.

D. Summary of the Pipeline
At WANDA 2021, an initial discussion about the human role in the nuclear data pipeline took place. This conversation should continue in the future. From this initial dialog, a number of needs related to humans supporting nuclear data activities were identified. It is important to increase community outreach activities specific to nuclear data to ensure a diverse and creative workforce, and to provide additional support to mentors, who are often the main support for researchers new to pipeline activities. A list of required and valuable skillsets for each element within the nuclear data pipeline must be cataloged for researchers entering the field, and potential career paths should be defined. An inclusive, collaborative environment should be established, allowing researchers from a range of backgrounds to successfully contribute and strengthen the pipeline.

VIII. SUMMARY
A brief summary of the needs and recommendations for the topics discussed above is reiterated below. These needs, identified through close interactions at WANDA 2021 among nuclear data users, producers, and funding managers across multiple programs, provide a clear picture of the cross-cutting nuclear data research priorities in the US.

A. Advanced Computing for Nuclear Data
Nuclear data are fundamentally tied to computation. Accurate nuclear data enable predictive computation for nuclear science and engineering. Computational hardware has been rapidly developing through advanced architectures, including GPU-enabled architectures, presenting a unique opportunity to significantly improve the predictive power of nuclear modeling methods with current nuclear databases by allowing more complex calculations to be carried out. While quantum computing (QC) is not currently a feasible reality for large computations, it may have the potential to revolutionize computing in the future and warrants some scoping studies to ensure that this future technology will be useful for nuclear data applications.
Advances in computer hardware allow nuclear physics models to be integrated directly into transport codes without a significant run-time penalty. This will be most impactful for applications where experimental data are missing or inconsistent. Furthermore, advances in computer hardware and architectures, along with the rapid development of Artificial Intelligence and Machine Learning (AI/ML), also enable the development of nuclear physics emulators which can be integrated into transport codes to accelerate data flow through the pipeline and in end-user applications. AI/ML methods, much of whose progress has been enabled by advances in computational power, can be integrated at multiple stages of the nuclear data pipeline. To harness the full potential of AI/ML methods, APIs are needed along the pipeline, starting from a machine-readable format for experimental data, like the EXFOR database. Further, certain parts of the pipeline can be automated, such as with AI/ML algorithms that parse and process data from journals and scientific reports. ML algorithms could aid in extracting physics from nuclear data, thereby helping design experiments to address specific nuclear data gaps or identify critical modeling needs that make the largest impact on evaluations. Increased computational capabilities, together with high-fidelity emulators of physics models, would also greatly facilitate the quantification and propagation of uncertainties in nuclear data.
In the longer term, progress in high-performance computing and increased employment of ML techniques could pave the way to grand challenge problems such as partial (or full) automation of the nuclear data pipeline and uncertainty quantification over the entire table of isotopes.

B. Predictive Codes for Isotope Production
A robust, validated predictive code for reaction data is the single highest priority need for the isotope production community. This is a cross-cutting need for the entire nuclear data community, as many other applications require this same capability.
While large-scale measurement campaigns for individual reaction data should be continued, there is a great need to improve the nuclear data in all reaction channels for given beam-target interactions. Although more easily attainable, stable isotope production data have often been neglected in measurements. However, these data are extremely valuable, as they provide important constraints on code performance. Stable isotope measurements may include chemical and physical methods (such as ICP-MS and other chromatographic techniques), as well as the use of prompt gamma spectroscopy, which can give a more general view of isotopic angular momentum and level densities. Furthermore, secondary particle spectra have the potential to partially constrain level densities and separate contributions from compound nucleus and pre-equilibrium emission as a function of angle. While such measurements are more difficult, establishing the capability for them could significantly improve nuclear modeling capabilities.
Nuclear structure data are also needed for tuning level density and pre-equilibrium models. This need is cross-cutting with the astrophysics community, which has already established detectors and analysis techniques for these measurements. Similarly, pre-equilibrium models could be improved by the development of quantum mechanical models of pre-equilibrium emission, rather than the phenomenological models currently employed.
Furthermore, a charged-particle evaluation subcommittee was recommended as an addition to the Cross Section Evaluation Working Group (CSEWG) to keep a sustained focus on this critical, but inadequately resourced, effort. An evaluated database for isotope production would function as a standardized resource supporting all codes and applications, similar to the role that ENDF plays for neutron-induced reactions.
Finally, the isotope production community needs to design a set of integral benchmarks for validation of predictive codes similar to those available for criticality calculations.

C. Expanded Benchmarks and Validation for Nuclear Data
Accurate prediction of nuclear systems requires adequate testing of the codes and underlying nuclear data against real experiments. One of the biggest challenges for validating nuclear data against benchmark experiments is the unequal coverage of benchmarks for different applications. There is a wide variety of well-documented benchmark experiments covering different aspects of criticality, as well as a collection of benchmark experiments for reactor physics. However, as discussed in Sec. VI, the reactor benchmarks are incomplete, especially considering the wide variety of reactor designs and the quantities that impact them. The collected documentation of shielding and transmission benchmarks also lags behind criticality benchmarks in quality. There are very few benchmarks to support other applications, such as predictive codes for isotope production.
The existing benchmarks used in the validation component of the nuclear data pipeline are heavily weighted toward a subset of criticality benchmarks. It is recommended that each user community carefully study current experiments and historical records for benchmark-quality data pertinent to their particular application. It is important to recognize that each user community should develop its own set of benchmarks that are sensitive to the reactions and energy regions of interest for a particular application. These application-specific benchmarks then need to become part of the validation process so that general-purpose nuclear data libraries such as ENDF can have the greatest utility for the entire nuclear data applications community.
Additionally, sensitivity methods should be developed for benchmark observables beyond keff for both new and current benchmarks. These methods should then be used to produce and archive sensitivity profiles with respect to calculated cross sections. A library of sensitivity profiles for a wide range of benchmark experiments will allow fast and efficient data testing by nuclear data producers without relying on specific applications.
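As a sketch of how an archived profile enables fast data testing: first-order perturbation theory predicts the shift in a benchmark's keff from a proposed cross-section revision as the group-wise product of sensitivity and relative change. The three-group numbers below are invented for illustration:

```python
import numpy as np

# First-order data testing with an archived sensitivity profile:
# dk/k ~= sum_g S_g * (dsigma_g / sigma_g).  The three-group profile
# and the proposed evaluation change are invented for illustration.
S_profile = np.array([0.02, 0.10, 0.05])    # dk/k per dsigma/sigma, per group
rel_change = np.array([0.00, -0.01, 0.03])  # proposed relative change per group

dk_over_k = float(np.sum(S_profile * rel_change))
print(f"predicted reactivity shift: {dk_over_k * 1e5:.1f} pcm")
```

Because this is a single dot product per benchmark, an evaluator can screen a proposed change against thousands of archived profiles in seconds, without rerunning any transport calculations.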

D. Nuclear Data for Space Applications
The nuclear data needs for space applications overlap strongly with those of isotope production, medical physics, safeguards, stewardship, homeland security, and terrestrial-based nuclear reactors. While a wide variety of nuclear data users are involved in space-based research, the nuclear data needs for radiation protection and planetary spectroscopy were highlighted here.
Some of these needs are already being addressed by ongoing work in support of terrestrial applications. For example, the needs for space reactor development will, in many cases, follow those for terrestrial reactors, and many of the needs for satellite-based nuclear detonation detection are being addressed by ongoing fission product yield research.
However, there are a number of specific, critical needs for space radiation protection, such as He-induced inclusive double differential light ion (p, n, 2H, 3H, 3He, 4He) cross sections for beams from 0.1 to several GeV per nucleon on targets of H, C, O, Al, and Fe. Additionally, total reaction cross sections for most galactic cosmic ray ion species on targets at beam energies above 1.5 GeV per nucleon are needed. Finally, studies in planetary spectroscopy require precise (n, n'γ) cross sections for rock-forming elements between ~0.1 and ~50 MeV with less than 5% uncertainty.

E. Nuclear Data for Advanced Reactors and Security Applications
The nuclear data needs for advanced reactors are quite varied, based on the large number of different, competing designs in the US. Nuclear data needs are driven by the materials chosen for each design and the sensitivity of a wide range of performance and safety characteristics beyond criticality. Quantities of interest include core reactivity, decay heat, power distribution, and source terms. These diverse characteristics test a wide range of nuclear data that has not previously been rigorously validated. Furthermore, studies propagating the uncertainties of current nuclear data libraries result in large model uncertainties, forcing nuclear reactor designers to implement additional engineering safety margins.
The prioritization of short-term versus long-term needs is also necessary in reactor design. Deployment time is critical for advanced reactors because most reactor designers will adjust their margins and continue with system deployment if their data needs cannot be fulfilled on a short timeline. The flow of data through the pipeline from need, experiment, and modeling to evaluation, validation, and library release is too long to effectively support advanced reactor deployment in 4-8 years. A plan for long-term impact is important if improvements are needed, because long-term, committed effort is necessary to significantly accelerate the nuclear data pipeline. This effort must be balanced with short-term, targeted investments.
In the case of nuclear security for reactors, reducing uncertainties in gamma-ray and x-ray energies, branching fractions, and x-ray line widths for nondestructive isotopic analysis on important isotopic ratios is key to enabling a robust, economic safeguards and security approach to advanced reactors and nuclear fuel cycle facilities.
The top nuclear data priorities to support the deployment of advanced reactors in the US, as well as the development of nuclear security for advanced reactor applications, are five-fold: address missing data and any artifacts discovered by the community in ENDF/B-VIII.1; improve evaluations with large uncertainties that are relevant for currently-considered designs, with the expectation that data may come from new experiments and/or reactor prototypes rather than new differential measurements; improve the general applicability of the evaluated data files to include correlations; improve the verification and validation processes used in the development of the next ENDF/B release to include more cases representative of advanced reactors; and continue to develop and improve methodologies for uncertainty evaluation, not only of nuclear data, and the associated costs/benefits of refinement.

F. The Human Pipeline for Nuclear Data
Several needs related to human support of nuclear data activities were identified. It is important to increase community outreach activities to ensure an expanded, sustainable, diverse, and creative workforce. It is also critical to provide additional support to mentors who take on the essential responsibility of training the next generation of nuclear data researchers.
The workflow within the pipeline must be specified and critically assessed, and a minimum list of desired skill sets and potential career paths should be defined. This is also an intersection point with the introduction of artificial intelligence and automation into the nuclear data pipeline: the community must consider which roles can and should be automated in the coming years. Furthermore, recruiting must expand to include researchers with the skill set to automate the pipeline.
Lastly, an inclusive, collaborative environment is also necessary. Nuclear data producers support a wide range of applications. Scientists and engineers from multiple different disciplines work synergistically to produce more accurate nuclear data. Contributions from researchers from a range of backgrounds will ensure that the nuclear data pipeline can best address the needs of the community.

G. Summary of Cross-cutting Needs
The WANDA series facilitates discussion of the future direction of nuclear data research in the US. A number of persistent themes recur across several of the six topics covered here.
Nuclear data are inextricably tied to computational modeling and simulations. Modeling and simulations need to be both precise and accurate to have meaningful impact on the programs they support. Computational accuracy can be improved both by better measurements and by integration of physics models. First, more accurate experimental nuclear measurements are necessary to supply the beginning of the nuclear data pipeline; after evaluation, processing, and validation, these data will be incorporated into application codes. While measurement needs for specific isotopes and reactions have been discussed for each area above, much of the focus has been on replacing historical, low-fidelity evaluated nuclear data. Gamma-ray production data, charged-particle reaction data, and comprehensive measurements of all reaction channels are emphasized. Second, by fully integrating nuclear physics models in application codes rather than relying on tables or single-valued data, predictive modeling and simulations can achieve increased accuracy. The rapid expansion in computational power is now enabling this exciting possibility.
The precision of predictions from modeling and simulation is also a cross-cutting topic of great importance.
The quality of the evaluated uncertainties in the current nuclear data libraries generally lags behind the quality of evaluations of the mean quantities. Many of the evaluated quantities are missing covariance data altogether. Furthermore, methodologies for uncertainty propagation are not currently implemented in the computational toolboxes of many applications (e.g., nuclear engineering), even though uncertainty quantification is sought by most nuclear data users.
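When covariance data are available, the standard first-order ("sandwich") rule propagates them through sensitivity coefficients to a response such as k-eff. The sketch below uses illustrative placeholder sensitivities and a placeholder relative covariance matrix, not evaluated values:

```python
import numpy as np

# Hypothetical k-eff sensitivity coefficients (dk/k per d_sigma/sigma)
# for three reaction channels, and an assumed relative covariance matrix
S = np.array([0.35, -0.12, 0.08])
cov = np.array([[4.0e-4, 1.0e-4, 0.0],
                [1.0e-4, 9.0e-4, 0.0],
                [0.0,    0.0,    2.5e-3]])

# First-order "sandwich" propagation: (var k)/k^2 = S^T C S
var_k = S @ cov @ S
dk = np.sqrt(var_k)
print(f"relative k-eff uncertainty: {dk:.3%}")
```

The off-diagonal terms are exactly what is lost when covariance data are missing from an evaluation; setting them to zero (or guessing diagonal uncertainties) can substantially misstate the propagated uncertainty.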
All predictive modeling and simulation codes in nuclear science and engineering should be validated on benchmark-quality integral experiments. This practice validates the combination of the particular modeling code with its nuclear data inputs. The difficulty lies in developing a wide representation of applications to produce a comprehensive set of benchmark experiments for code and data validation. A further call to action to the entire community is to ensure that all benchmark experiments are considered in the validation and testing of updated nuclear data libraries.
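A common way to summarize such validation is through calculated-over-experimental (C/E) ratios and a reduced chi-square over a benchmark suite. The numbers below are hypothetical, chosen only to show the bookkeeping:

```python
import numpy as np

# Hypothetical benchmark suite: calculated k-eff (C), experimental (E),
# and combined 1-sigma uncertainty for each case
calc = np.array([1.0012, 0.9987, 1.0045])
expt = np.array([1.0000, 1.0000, 1.0000])
unc  = np.array([0.0010, 0.0012, 0.0015])

ce = calc / expt                                        # C/E ratios
chi2_red = np.sum(((calc - expt) / unc) ** 2) / len(calc)
print("C/E:", ce, " reduced chi-square:", round(chi2_red, 2))
```

A reduced chi-square well above one, as in this contrived example, would indicate that the code-plus-library combination is inconsistent with the benchmarks at their stated uncertainties, flagging either the data, the model, or the benchmark evaluation for scrutiny.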
The last recurring theme is automation. The rapid development of AI/ML in recent years has shown great potential for use in the nuclear data pipeline. Natural language processing technology has the potential to automate the early, compilation stage of the pipeline. Nuclear physics emulators can accelerate on-the-fly computation of nuclear physics models. Machine learning and outlier detection technology can be used in validation. Integration of these technologies presents new challenges and opportunities for the human staffing of the nuclear data pipeline. The opportunity to connect multiple automated segments of the pipeline is a grand challenge for nuclear data, leading to greater reliability and reproducibility. Ultimately, automation has the potential to significantly accelerate the response time of the data community and their databases to the needs of the users.
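One simple example of outlier detection that could be applied to compiled data is a modified z-score based on the median absolute deviation, which is robust to the outliers it is hunting. The data values and threshold below are illustrative, not drawn from any database:

```python
import numpy as np

def flag_outliers(values, threshold=3.5):
    """Flag points whose modified z-score exceeds the threshold.

    Uses the median absolute deviation (MAD) so that the scale
    estimate itself is not corrupted by the outliers.
    """
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    mad = np.median(np.abs(values - med))
    mod_z = 0.6745 * (values - med) / mad
    return np.abs(mod_z) > threshold

# Hypothetical compiled cross-section values (barns) with one bad entry
data = [2.01, 1.98, 2.03, 2.00, 5.40, 1.99]
print(flag_outliers(data))  # only the 5.40 entry is flagged
```

In a real pipeline such flags would prompt human review of the suspect compilation entry rather than automatic rejection.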

FIG. 1: Schematic of a linear nuclear data pipeline, showing how its technical components contribute to the production of data libraries used by applications. Given the multi-disciplinary nature of nuclear data, some activities may involve more than one component.

FIG. 3: Example setup of a time-of-flight neutron scattering experiment using organic scintillators at the RPI LINAC [155].

FIG. 11: Energy deposition from a 50 kt yield neutron source visualized in an 80 cm SiO2 asteroid using MCNP. The color scale corresponds to the factor by which the material was heated above the melt threshold. Dark blue indicates the material was unmelted. (From Ref. [249].)