Panagiotis Spentzouris and Jim Kowalkowski, head and deputy head of the Accelerator and Detector Simulation and Support Department, wrote this column.
Collecting data and performing a typical physics analysis in today’s high-energy physics experiments requires a wide range of computational tools and specialized algorithms. The goal of an analysis is to measure fundamental physical quantities and compare them to theoretical models. To get there, raw event data must be filtered, stored and then processed through many computational steps.
The first step in the transformation of raw experimental data into physics discoveries is data acquisition, where we read out millions of detector electronics channels, then filter and process the data to construct a detector “event.” It is here that we push the limits of real-time computing hardware, network configurations and data filtering to produce the highest-quality data sets achievable. This area requires sophisticated low- and high-level software, including operating system modifications, custom drivers, physics reconstruction software and graphical user interfaces for data quality and control.
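The event-construction step described above can be illustrated with a toy sketch. The types and names below are hypothetical, not artdaq's actual interfaces: fragments arriving from many readout boards are grouped by event number, and an event is declared complete once every board has contributed.

```cpp
#include <map>
#include <utility>
#include <vector>

// Toy event builder (hypothetical types, not artdaq's API): group fragments
// from many readout boards by event number until all boards have reported.
struct Fragment {
    int eventNumber;                // which trigger/event the data belongs to
    int boardId;                    // which readout board produced it
    std::vector<short> adcSamples;  // digitized detector signals
};

class EventBuilder {
public:
    explicit EventBuilder(int nBoards) : nBoards_(nBoards) {}

    // Returns the complete set of fragments once the last board has
    // reported for an event; returns an empty vector otherwise.
    std::vector<Fragment> add(Fragment f) {
        int evt = f.eventNumber;
        auto& frags = pending_[evt];
        frags.push_back(std::move(f));
        if (static_cast<int>(frags.size()) == nBoards_) {
            std::vector<Fragment> done = std::move(frags);
            pending_.erase(evt);
            return done;
        }
        return {};
    }

private:
    int nBoards_;                                   // boards read out per event
    std::map<int, std::vector<Fragment>> pending_;  // event number -> fragments
};
```

For example, with `EventBuilder builder(2);`, the call `builder.add({7, 0, {12, 13}})` returns an empty vector (event 7 is incomplete), and the subsequent `builder.add({7, 1, {9}})` returns both fragments of event 7. A real system must additionally handle out-of-order arrival, timeouts for lost fragments and parallel streams, which is where the hardware and networking limits mentioned above come into play.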
Our division has been heavily involved in the design and development of data acquisition and event filtering systems for current and future experiments. We have an ever-growing toolkit of reusable pieces: “artdaq” for collecting events and managing data paths; “art” for complex data processing, filtering and formatting tasks; and a large set of tools for state management, problem reporting and messaging.
Since data processing involves many steps that are not specific to a particular experiment, you can find components of our toolkit deployed by NOvA, MicroBooNE, DarkSide-50 and soon Mu2e. The elements of the toolkit are designed to accommodate these experiments’ important current and future needs, permitting continuous, high-rate readout of detector signals.
Changes in high-performance commodity computing and networking (including multi- and many-core improvements) have led to important changes in how our tools can operate. Through close collaboration among real-time system developers, computer science researchers and experiment scientists, we continuously adapt and improve our toolkit features to best use the latest technologies.
The latest improvements, for example, are already visible in the differences between the NOvA and DarkSide-50 implementations. NOvA has a farm of more than 180 computers handling the data flow from more than 190 custom data concentrator modules over conventional 1-Gb/s Ethernet. DarkSide-50 has a system of fewer than 10 commodity computers whose data processing bandwidth can exceed 1 GB/s using off-the-shelf high-performance networking. Our toolkit evolved quickly and can now handle such data rates using almost an order of magnitude fewer computing resources!
The most mature part of our toolkit is the event processing framework “art.” This is a third-generation framework based on the expertise and successful tools developed and acquired by Fermilab computer scientists for the Tevatron Run II, MiniBooNE and CMS experiments. A key feature of the new framework is its packaging. Our “art” package is delivered to experiments and maintained as an external product, or library, and online training is available. New physics algorithms and analysis modules can be introduced without any framework recompilation. The algorithms can be used in the context of event filtering, reconstruction from data and simulation, and analysis. Its underlying packaging comes with all the tools that an experiment needs, from event generators to Geant4 to ROOT to a standard version of the GCC compiler.
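The “no framework recompilation” property typically rests on a plugin pattern: modules register themselves under a string name, and the framework instantiates whatever the run-time configuration asks for. The following is a self-contained sketch of that idea only; it is not art's actual API (art modules derive from framework base classes and are registered via a macro), and all names here are illustrative.

```cpp
#include <functional>
#include <iostream>
#include <map>
#include <memory>
#include <string>

// Sketch of the plugin idea that lets new modules be added without
// recompiling the framework: each module registers a factory under a
// string name, and the framework creates modules by the names listed
// in a run-time configuration file.
struct Module {
    virtual ~Module() = default;
    virtual void processEvent(int eventNumber) = 0;
};

using Factory = std::function<std::unique_ptr<Module>()>;

// Name-to-factory registry, populated as module libraries are loaded.
std::map<std::string, Factory>& registry() {
    static std::map<std::string, Factory> r;
    return r;
}

// One static Registrar per module plays the role of a registration macro.
struct Registrar {
    Registrar(const std::string& name, Factory f) {
        registry()[name] = std::move(f);
    }
};

// An experimenter's module, compiled and linked separately from the core.
struct HitFilter : Module {
    void processEvent(int eventNumber) override {
        std::cout << "HitFilter saw event " << eventNumber << "\n";
    }
};
static Registrar hitFilterReg("HitFilter",
                              [] { return std::make_unique<HitFilter>(); });

// Framework side: create whatever module the configuration names.
std::unique_ptr<Module> makeModule(const std::string& name) {
    return registry().at(name)();
}
```

Calling `makeModule("HitFilter")->processEvent(1);` prints `HitFilter saw event 1`. Because the framework side only ever sees the `Module` interface and a name string, a new algorithm ships as a separately built library plus a configuration entry, with no change to the framework itself.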
Our developers work hard to ensure that “art” uses the latest standards and programming techniques, including C++11 and Intel Threading Building Blocks. In addition to serving as the offline software framework of NOvA, DarkSide-50 and Mu2e, “art” is used as the base of LArSoft, a common set of physics tools developed by experimenters working within the liquid-argon detector program. Other Intensity Frontier experiments, including ArgoNeuT, MicroBooNE and the upcoming LBNE, use this shared resource.