Fermilab feature

DUNE prepares for data onslaught

The international Deep Underground Neutrino Experiment, hosted by Fermilab, will be one of the most ambitious attempts ever made at understanding some of the most fundamental questions about our universe. Currently under construction at the Sanford Underground Research Facility in South Dakota, DUNE will provide a massive target for neutrinos. When it’s operational, DUNE will comprise around 70,000 tons of liquid argon — more than enough to fill a dozen Olympic-sized swimming pools — contained in cryogenic tanks nearly a mile underground.

Neutrinos are ubiquitous. They were formed in the first seconds after the Big Bang, even before atoms could form, and they are constantly being produced by nuclear reactions in stars. When massive stars explode and become supernovae, the vast majority of the energy given off in the blast is released as a burst of neutrinos.

In the laboratory, scientists use particle accelerators to make neutrinos. In DUNE’s case, Fermilab accelerators will generate the world’s most powerful high-energy neutrino beam, aiming it at the DUNE neutrino detector 800 miles (1,300 kilometers) away in South Dakota.

When any of these neutrinos — star-born or terrestrial — strikes one of the argon atoms in the DUNE detector, a cascade of particles results. Every time this happens, billions of detector readings are generated that must be saved and analyzed by collaborators around the world. The resulting torrent of data will be immense. So, while construction continues in South Dakota, scientists around the world are hard at work developing the computing infrastructure necessary to handle the massive volumes of data the experiment will produce.

The goal of the DUNE Computing Consortium is to establish a global computing network that can handle the massive data dumps DUNE will produce by distributing them across the grid. Photo: Reidar Hahn, Fermilab

The first step is ensuring that DUNE is connected to Fermilab with the kind of bandwidth that can carry tens of gigabits of data per second, said Liz Sexton-Kennedy, Fermilab’s chief information officer. As with other aspects of the collaboration, it requires “a well-integrated partnership,” she said. Each neutrino collision in the detector will produce an array of information to be analyzed.

“When there’s a quantum interaction at the center of the detector, that event is physically separate from the next one that happens,” Sexton-Kennedy said. “And those two events can be processed in parallel. So, there has to be something that creates more independence in the computing workflow that can split up the work.”

Sharing the load

One way to approach this challenge is by distributing the workflow around the world. Mike Kirby of Fermilab and Andrew McNab of the University of Manchester in the UK are the technical leads of the DUNE Computing Consortium, a collective effort by members of the DUNE collaboration and computing experts at partner institutions. Their goal is to establish a global computing network that can handle the massive data dumps DUNE will produce by distributing them across the grid.

“We’re trying to work out a roadmap for DUNE computing in the next 20 years that can do two things,” Kirby said. “One is an event data model,” which means figuring out how to handle the data the detector produces when a neutrino collision occurs, “and the second is coming up with a computing model that can use the conglomerations of computing resources around the world that are being contributed by different institutions, universities and national labs.”

It’s no small task. The consortium includes dozens of institutions, and the challenge is ensuring the computers and servers at each are orchestrated together so that everyone on the project can carry out their analyses of the data. A basic challenge, for example, is making sure a computer in Switzerland or Brazil recognizes a login from a computer at Fermilab.

Coordinating computing resources across a distributed grid has been done before, most notably by the Worldwide LHC Computing Grid, which federates the United States’ Open Science Grid and others around the world. But this is the first time an experiment at this scale led by Fermilab has used this distributed approach.

“Much of the Worldwide LHC Computing Grid design assumes data originates at CERN and that meetings will default to CERN, but as DUNE now has an associate membership of WLCG things are evolving,” said Andrew McNab, DUNE’s international technical lead for computing. “One of the first steps was hosting the monthly WLCG Grid Deployment Board town hall at Fermilab last September, and DUNE computing people are increasingly participating in WLCG’s task forces and working groups.”

“We’re trying to build on a lot of the infrastructure and software that’s already been developed in conjunction with those two efforts and extend it a little bit for our specific needs,” Kirby said. “It’s a great challenge to coordinate all of the computing around the world. In some sense, we’re kind of blazing a new trail, but in many ways, we are very much reliant on a lot of the tools that were already developed.”

Coordinating computing resources across a distributed grid has been done before — but this is the first time an experiment at this scale led by Fermilab has used this approach.

Supernovae signals

Another challenge is that DUNE has to organize the data it collects differently from particle accelerator physics experiments.

“For us, a typical neutrino event from the accelerator beam is going to generate something on the order of six gigabytes of data,” Kirby said. “But if we get a supernova neutrino alert,” in which a neutrino burst from a supernova arrives, signaling the cosmic explosion before light from it arrives at Earth, “a single supernova burst record could be as much as 100 terabytes of data.”

One terabyte equals one trillion bytes, an amount of data equal to about 330 hours of Netflix movies. That volume of data, produced in a matter of seconds, is a huge challenge because of the computer processing time needed to handle it. DUNE researchers must begin recording data as soon as a neutrino alert is triggered, and it adds up quickly. But it will also offer an opportunity to learn about neutrino interactions that take place inside supernovae while they're exploding.
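A quick back-of-envelope calculation shows the scale. Using the article's own figures — a burst record of up to 100 terabytes, collected over the roughly 100-second far-detector readout that Kirby describes later — the average rate works out to about a terabyte per second:

```python
# Illustrative arithmetic only, using round numbers quoted in the article.
TB = 10**12                     # one terabyte = one trillion bytes
burst_bytes = 100 * TB          # a single supernova burst record (~100 TB)
readout_seconds = 100           # continuous far-detector readout window

avg_rate = burst_bytes / readout_seconds   # bytes per second
print(f"average rate: {avg_rate / 10**9:.0f} GB/s")  # ~1,000 GB/s, i.e. 1 TB/s
```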

McNab said DUNE's computing requirements are also slightly different because each of the events it will capture is typically 100 times larger than those of LHC experiments such as ATLAS or CMS.

“So, the computers need more memory — not 100 times more, because we can be clever about how we use it, but we’re pushing the envelope certainly,” McNab said. “And that’s before we even start talking about the huge events if we see a supernova.”


When a neutrino strikes one of the argon atoms in the DUNE detector, a cascade of particles results. Every time this happens, billions of detector readings are generated that must be saved and analyzed by collaborators around the world.

Georgia Karagiorgi, a physicist at Columbia University who leads data selection efforts for the DUNE Data Acquisition Consortium, said a nearby supernova will generate up to thousands of interactions in the DUNE detector.

“That will allow us to answer questions we have about supernova dynamics and about the properties of neutrinos themselves,” she said.

To do so, DUNE scientists will have to combine data on the timing of neutrino arrival, their abundance and what kinds of neutrinos are present.

“If neutrinos have weird, new types of interactions as they’re propagating through the supernova during the explosion, we might expect modifications to the energy distribution of those neutrinos as a function of time” as they are picked up by the detector, Karagiorgi said. “That goes hand-in-hand with very detailed, and also quite computationally intensive, simulations, with different theoretical assumptions going into them, to actually be able to extract our science. We need both the theoretical simulations and the actual data to make progress.”

Gathering that data is a huge endeavor. When a supernova event occurs, “we read out our far-detector modules for about 100 seconds continuously,” Kirby said.

Because the scientists don’t know when a supernova will happen, they have to start collecting data as soon as an alert occurs and could be waiting for 30 seconds or longer for the neutrino burst to conclude. All the while, data could be piling up.

To prevent too much buildup, Kirby said, the experiment will use an approach called a circular buffer, in which memory that doesn’t include neutrino hits is reused, not unlike rewinding and recording over the tape in a video cassette.
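The idea fits in a few lines of Python. This toy ring buffer, built on the standard library's `deque` (and in no way DUNE's actual data acquisition code), keeps only the most recent chunks of data, silently overwriting the oldest, just like recording over a tape:

```python
from collections import deque

class CircularBuffer:
    """Fixed-capacity buffer: once full, each new write overwrites the
    oldest entry, like rewinding and recording over a video cassette."""

    def __init__(self, capacity):
        self._buf = deque(maxlen=capacity)

    def write(self, chunk):
        # deque with maxlen evicts the oldest item automatically when full.
        self._buf.append(chunk)

    def snapshot(self):
        # Contents currently retained, oldest to newest.
        return list(self._buf)

buf = CircularBuffer(capacity=3)
for chunk in ["t0", "t1", "t2", "t3", "t4"]:
    buf.write(chunk)
print(buf.snapshot())  # ['t2', 't3', 't4'] -- t0 and t1 were overwritten
```

In the real system the retained window would be triggered and flushed to permanent storage when a neutrino alert arrives; here the point is simply that memory use stays bounded no matter how long data keeps streaming in.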

McNab said the supernova aspect of DUNE is also opening up new opportunities for computing collaboration.

"I'm a particle physicist by training, and one of my favorite aspects of working on this project is the way it connects to other scientific disciplines, particularly astronomy," he said. In the UK, particle physics and astronomy computing are collectively providing support for DUNE, the Vera C. Rubin Observatory Legacy Survey of Space and Time, and the Square Kilometre Array radio telescopes on the same computers. "And then we have the science aspect that, if we do see a supernova, then we will hopefully be viewing it with multiple wavelengths using these different instruments. DUNE provides an excellent pathfinder for the computing, because we already have real data coming from DUNE's prototype detectors that needs to be processed."

Kirby said that the computing effort is leading to exciting new developments in applications on novel architectures, artificial intelligence and machine learning on diverse computer platforms.

“In the past, we’ve focused on doing all of our data processing and analysis on CPUs and standard Intel and PC processors,” he said. “But with the rise of GPUs [graphics processing units] and other computing hardware accelerators such as FPGAs [field-programmable gate arrays] and ASICs [application-specific integrated circuits], software has been written specifically for those accelerators. That really has changed what’s possible in terms of event identification algorithms.”

These technologies are already in use in the on-site data acquisition system, reducing the terabytes per second generated by the detectors to the gigabytes per second transferred offline. The remaining challenge for offline computing is figuring out how to centrally manage these applications across the entire collaboration and get answers back from distributed centers across the grid.

“How do we stitch all of that together to make a cohesive computing model that gets us to physics as fast as possible?” Kirby said. “That’s a really incredible challenge.”

This work is supported by the Department of Energy Office of Science.

Fermilab is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.


Hard to believe you can play pool with neutrinos, but certain neutrino interaction events are closer to the game than you think.

In these charged-current quasielastic interactions — let’s call them CCQE interactions for short — a neutrino strikes a particle in an atom’s nucleus — a proton or a neutron. Two particles emerge from the collision. One is a muon, a heavier cousin of the electron. The other is either a proton (if the stationary particle is a neutron) or a neutron (if the stationary particle is a proton).

These quasielastic interactions are like the collisions between balls in a game of pool: You can deduce the energy of the incoming neutrino by measuring the direction and energy of only one of the outgoing particles, provided you know the types of all four particles involved in the interaction and the original direction of the neutrino.

CCQE interactions are an important interaction mode of neutrinos in current and future neutrino oscillation experiments, such as the international Deep Underground Neutrino Experiment, hosted by Fermilab.

They are similar to the elastic interactions every pool player knows except in one important way: The weak nuclear force allows the particles to change from one kind into another, hence the “quasielastic” name. In this subatomic pool game, the cue ball (neutrino) strikes a stationary red ball (proton), which emerges from the collision as an orange ball (neutron).
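For readers curious about the kinematics behind the pool analogy, the textbook quasielastic reconstruction can be sketched in a few lines of Python. This is an illustrative calculation only — it assumes a free neutron at rest and neglects nuclear binding energy, and it is not the experiments' actual reconstruction code:

```python
import math

# Particle masses in GeV (PDG values, rounded)
M_N = 0.93957   # neutron
M_P = 0.93827   # proton
M_MU = 0.10566  # muon

def qe_neutrino_energy(e_mu, cos_theta):
    """Reconstructed neutrino energy for nu_mu + n -> mu + p, assuming the
    struck neutron is free and at rest (binding energy neglected).
    e_mu: total muon energy in GeV; cos_theta: muon angle w.r.t. the beam."""
    p_mu = math.sqrt(e_mu**2 - M_MU**2)  # muon momentum from its energy
    num = M_P**2 - M_N**2 - M_MU**2 + 2 * M_N * e_mu
    den = 2 * (M_N - e_mu + p_mu * cos_theta)
    return num / den

def q_squared(e_nu, e_mu, cos_theta):
    """Squared four-momentum transfer Q^2, from the muon kinematics alone."""
    p_mu = math.sqrt(e_mu**2 - M_MU**2)
    return 2 * e_nu * (e_mu - p_mu * cos_theta) - M_MU**2

# A perfectly forward 1-GeV muon: the neutrino energy comes out close to
# the muon energy, and Q^2 is close to zero, as expected for no deflection.
e_nu = qe_neutrino_energy(e_mu=1.0, cos_theta=1.0)
print(round(e_nu, 3))
print(round(q_squared(e_nu, 1.0, 1.0), 4))
```

The same `q_squared` quantity is what MINERvA bins its measurements in, as described below; in real nuclei, binding energy and nucleon motion shift these simple free-nucleon values.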

This display shows a CCQE-like event reconstructed in the MINERvA detector. Image: MINERvA

Since most modern neutrino experiments use targets made of heavy nuclei ranging from carbon to argon, nuclear effects and correlations between the neutrons and protons inside the nucleus can cause significant changes in the observed interaction rates and modifications to the estimated neutrino energy.

At MINERvA, scientists identify the CCQE interactions by a long muon track left in the particle detector and potentially one or more proton tracks. However, this experimental signature can sometimes be produced by non-CCQE interactions due to nuclear effects inside the target nucleus. Similarly, nuclear effects can also modify the final-state particles to make a CCQE event look like a non-CCQE event and vice versa.

Since nuclear effects can make it challenging to identify a true CCQE event, MINERvA reports measurements based on the properties of the final-state particles only and calls them CCQE-like events (since they will have contributions from both true CCQE and non-CCQE events). A CCQE-like event is one that has at least one outgoing muon, any number of protons or neutrons, and no mesons as final-state particles. (Mesons, like protons and neutrons, are made of quarks. Protons and neutrons contain three quarks; mesons contain a quark and an antiquark.)

MINERvA has measured the likelihood of CCQE-like neutrino interactions using Fermilab’s medium-energy neutrino beam, with the neutrino flux peaking at 6 GeV. Compared to MINERvA’s earlier measurements, which were conducted with a low-energy beam (3 GeV peak neutrino flux), this measurement has the advantage of a broader energy reach and much larger statistics: 1,318,540 CCQE-like events compared to 109,275 events in earlier low-energy runs.

MINERvA made these CCQE interaction probability measurements as a function of the square of the momentum transferred by the neutrino to the nucleus, which scientists denote as Q2. The plot shows discrepancies between the data and most predictions in low-Q2 and high-Q2 regions. By comparing MINERvA’s measurement with various models, scientists can refine them and better explain the physics inside the nuclear environment.

This plot shows the ratio of the cross section as a function of Q2 for data and various predictions with respect to one commonly used interaction model. Image: MINERvA

MINERvA has also made more detailed measurements of the probability of neutrino interaction based on the outgoing muon’s momentum. They take into account the muon’s momentum both in the direction of the incoming neutrino’s trajectory and in the direction perpendicular to its trajectory. This work helps current and future neutrino experiments understand their own data over a wide range of muon kinematics.

Mateus Carneiro, formerly of the Brazilian Center for Research in Physics and Oregon State University and now at Brookhaven National Laboratory, and Dan Ruterbories of the University of Rochester were the main drivers of this analysis. The results were published in Physical Review Letters.

Amit Bashyal is an Oregon State University scientist on the MINERvA experiment.


This work is supported by the DOE Office of Science, National Science Foundation, Coordination for the Improvement of Higher Education Personnel in Brazil, Brazilian National Council for Scientific and Technological Development, Mexican National Council of Science and Technology, Basal Project in Chile, Chilean National Commission for Scientific and Technological Research, Chilean National Fund for Scientific and Technological Development, Peruvian National Council for Science, Technology and Technological Innovation, Research Management Directorate at the Pontifical Catholic University of Peru, National University of Engineering in Peru, Polish National Science Center and UK Science and Technology Facilities Council.

Fermi National Accelerator Laboratory is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit science.energy.gov.

The first baby bison of the season was born on April 28. Photo: Paul Hackelberg, Fermilab

On April 28, baby bison season officially began. The first calf of the year was born in the late morning, and mother and baby are doing well.

Fermilab is expecting between 12 and 14 new calves this spring.

Fermilab’s first director, Robert Wilson, established the bison herd in 1969 as a symbol of the history of the Midwestern prairie and the laboratory’s pioneering research at the frontiers of particle physics.

And thanks to the science of genetic testing, Fermilab has confirmed that the laboratory's herd shows no evidence of cattle gene mixing. Farmers during the early settlement era would breed bison with cattle in an attempt to create tamer bison or hardier cattle.

A herd of bison is a natural fit for a laboratory surrounded by nature. Fermilab hosts nearly 1,000 acres of reconstructed tallgrass prairie, as well as remnant oak savannas, marshes and forests.

To learn more about Fermilab’s bison herd, please visit the section on wildlife at Fermilab on our website.

The entire Fermilab site in Batavia is closed to the general public at this time, so visits to view the bison are not currently possible. Updates will be posted on the Visit Fermilab webpage. Learn more about Fermilab’s science and people by following Fermilab’s social media pages @Fermilab.

The Fermilab site has been designated a National Environmental Research Park by the U.S. Department of Energy. The lab’s environmental stewardship efforts are supported by the Department of Energy Office of Science as well as Fermilab Natural Areas.

Fermilab is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit science.energy.gov.

In March, Fermilab Chief Information Officer Liz Sexton-Kennedy got in touch with Frank Würthwein, executive director of the Open Science Grid, to find out whether Fermilab could work with the Open Science Grid to contribute computing power to COVID-19 research.

It turned out that Würthwein was thinking along similar lines.

“Then he really ran with it,” Sexton-Kennedy said.

Shortly thereafter, members of the Open Science Grid — a network of organizations that provides computing services for science research of all stripes — were contributing tens of thousands of core-hours to COVID-19 projects. That number quickly grew to more than a million core-hours.

As part of the unified response, scientists and engineers at the Department of Energy’s Fermilab spurred themselves into high gear, preparing computing clusters — sets of connected computers — as COVID-19-research machines.

As of April 27, Fermilab has contributed a total of 1.8 million core-hours to the pandemic-fighting effort carried out by projects such as Folding@home, which is simulating how viral proteins fold to help scientists design better therapeutics. Brookhaven National Laboratory, another DOE national lab and OSG member, has contributed 2.9 million core-hours.

As a member of the Open Science Grid, Fermilab is dedicating a number of computing clusters for COVID-19 research. Photo: Reidar Hahn, Fermilab

“Understanding how the COVID-19 virus proteins fold — it’s all electrochemistry. Basically, it’s physics in the end that they’re simulating,” Sexton-Kennedy said. Like particle physics, “it’s computationally complicated science that takes lots of computing resources to figure out.”

For over a decade, the high-throughput computing capabilities of both Fermilab and Brookhaven have met the needs of experiments at the Large Hadron Collider, an enormous particle physics research facility in Europe, analyzing data from billions of particle collisions.

During the times that select Fermilab and Brookhaven computers get a break from particle collision analysis, they're free to crunch data outside particle physics. That's where the Open Science Grid comes in. Among other tasks, the OSG evaluates research proposals to determine which are a good fit for its networks. In offering its resources to COVID-19 proposals, it provided the kind of vetting that the labs wouldn't have been able to perform on their own. All the computational work is handled remotely.

Providing the world’s scientists with powerful computing capacity fuels research that would otherwise not be possible. Recognizing this, the U.S. Department of Energy supports Fermilab and Brookhaven science programs for the use and development of the OSG, and the National Science Foundation funds universities that partner with the OSG.

Brookhaven National Laboratory is contributing significant computing capacity to COVID-19 research through the Open Science Grid. Photo: Brookhaven National Laboratory

This effort is part of a larger COVID-19 High Performance Computing Consortium designed to provide access to the world’s most powerful high-performance computing resources in support of COVID-19 research. Consortium members include industrial partners, academic leaders, and agencies within the federal government, including the Department of Energy and many of its national laboratories. OSG prioritizes jobs to meet the needs of the consortium first.

“To maximize the chances that important COVID-19 research ideas reach us, we joined multiple national and international calls for computing requests: the COVID-19 High Performance Computing Consortium, a call with EGI for Advanced Computing Research in Europe, and the Worldwide LHC Computing Grid COVID-19 task force. We are also coordinating with all of the National Science Foundation software institutes,” said Würthwein, who is also a physics professor at the University of California, San Diego, and the lead for high-throughput computing at the university’s San Diego Supercomputer Center.

The computing power at Fermilab, Brookhaven and other OSG institutions can be used to model how virus proteins interact with receptors on cells in the human respiratory system. Simulations run on these systems can also sort through billions of potential drug molecules to narrow the search for drugs that might interfere with those interactions or disrupt the function of a virus protein in other ways.

Recent developments in machine learning accelerate this discovery process by improving the selection of drug leads. Modeling drug-protein interactions can be broken down into a large number of independent calculations that can be farmed out to many computer cores. This characteristic makes the simulations well suited to the high-throughput capabilities of Fermilab, Brookhaven and the OSG infrastructure.

“This is a core competency that we can bring to the fight against this pandemic,” said Eric Lançon, director of Brookhaven’s Scientific Data and Computing Center and a member of the Open Science Grid executive team.

“I’m proud of our staff for being able to respond and think about what we can do usefully,” Sexton-Kennedy said. “We have had plans to create an institutional cluster for over a year, and the people involved got an extra bit of enthusiasm thinking that, ‘Hey, one of the first uses for this facility could be COVID research.'”

Learn more about how Open Science Grid is helping fight the COVID-19 pandemic.

This work is supported by the Department of Energy Office of Science and the National Science Foundation.

Fermi National Accelerator Laboratory and Brookhaven National Laboratory are supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit science.energy.gov.

David Ibbett is Fermilab’s 2020 guest composer. Photo: Tom Nichol, Fermilab

In David Ibbett’s latest musical offering as Fermilab guest composer, soprano Beth Sterling sings of the subtle neutrino: “You should be massless … you should be changeless.” Yet experiments have shown that neutrinos have mass and continually change form as they move through time and space.

With “Particle of Doubt,” scored for soprano, violin, viola, cello, piano and electronics, Ibbett creates a plaintive ode to the neutrino’s mysteries.

A four-minute video of the performance as well as the composer’s commentary (which opens with a cameo by Ibbett’s new baby) is now available online. It is a trailer for a larger piece that is planned to premiere at Fermilab in 2021.

In his commentary, Ibbett explains why he was drawn to the ubiquitous and strange particles.

“They don’t quite fit,” Ibbett said. “They have mass, but they shouldn’t, according to the Standard Model, and this raises all sorts of questions and opportunities to take our physics understanding further.”

The video features scenes from an animation of the international Deep Underground Neutrino Experiment, hosted by Fermilab. The groundbreaking experiment is the inspiration for the lyrics.

“Particle of Doubt” features what Ibbett calls a “sonification” of neutrino oscillation, the phenomenon in which a neutrino morphs between its various types. He mapped the probability waves of neutrino transformation to the three string melodies.

With its modern musicality, "Particle of Doubt" underscores that neutrinos are neither massless nor changeless – nor voiceless.

“We’re thrilled to be working with David as Fermilab’s first guest composer,” said Janet MacKay-Galbraith, head of the Fermilab Arts and Lecture Series. “His musical creativity, intellectual curiosity and passion for physics perfectly express the synchronicities between the arts and science. We can’t wait to hear and see what he comes up with next in this year-long endeavor.”

This work is supported by Fermi Research Alliance LLC.

Fermilab is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit science.energy.gov.