Every proton collision at the Large Hadron Collider is different, but only a few are special. The special collisions generate particles in unusual patterns — possible manifestations of new, rule-breaking physics — or help fill in our incomplete picture of the universe.
Finding these collisions is harder than the proverbial search for the needle in the haystack. But game-changing help is on the way. Fermilab scientists and other collaborators successfully tested a prototype machine-learning technology that speeds up processing by 30 to 175 times compared to traditional methods.
Confronting 40 million collisions every second, scientists at the LHC use powerful, nimble computers to pluck the gems — whether it’s a Higgs particle or hints of dark matter — from the vast static of ordinary collisions.
Rifling through simulated LHC collision data, the machine learning technology successfully learned to identify a particular post-collision pattern — a particular spray of particles flying through a detector — as it flipped through an astonishing 600 images per second. Traditional methods process less than one image per second.
The technology could even be offered as a service on external computers. Using this offloading model would allow researchers to analyze more data more quickly and leave more LHC computing space available to do other work.
It is a promising glimpse into how machine learning services are supporting a field in which already enormous amounts of data are only going to get bigger.

Particles emerging from proton collisions at CERN’s Large Hadron Collider travel through this stories-high, many-layered instrument, the CMS detector. In 2026, the LHC will produce 20 times the data it does currently, and CMS is undergoing upgrades to read and process the data deluge. Photo: Maximilien Brice, CERN
The challenge: more data, more computing power
Researchers are currently upgrading the LHC to smash protons at five times its current rate. By 2026, the 17-mile circular underground machine at the European laboratory CERN will produce 20 times more data than it does now.
CMS is one of the particle detectors at the Large Hadron Collider, and CMS collaborators are in the midst of some upgrades of their own, enabling the intricate, stories-high instrument to take more sophisticated pictures of the LHC’s particle collisions. Fermilab is the lead U.S. laboratory for the CMS experiment.
If LHC scientists wanted to save all the raw collision data they’d collect in a year from the High-Luminosity LHC, they’d have to find a way to store about 1 exabyte (roughly a million terabyte-sized personal external hard drives), of which only a sliver may unveil new phenomena. LHC computers are programmed to select this tiny fraction, making split-second decisions about which data is valuable enough to be sent downstream for further study.
Currently, the LHC’s computing system keeps roughly one in every 100,000 particle events. But today’s storage protocols won’t be able to keep up with the future data flood, which will accumulate over decades of data taking. And the higher-resolution pictures captured by the upgraded CMS detector won’t make the job any easier. It all translates into a need for more than 10 times the computing resources the LHC has now.
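For a sense of scale, the selection arithmetic is simple. Here is a back-of-the-envelope sketch in Python that uses only the rates quoted above:

```python
# Back-of-the-envelope trigger arithmetic using the figures quoted above.

collisions_per_second = 40_000_000   # roughly 40 million particle events per second
kept_fraction = 1 / 100_000          # about one in every 100,000 events is kept

kept_per_second = collisions_per_second * kept_fraction
print(f"Events kept per second: {kept_per_second:.0f}")  # -> 400
```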
The recent prototype test suggests that, with advances in machine learning and computing hardware, researchers will be able to winnow the data emerging from the upcoming High-Luminosity LHC when it comes online.
“The hope here is that you can do very sophisticated things with machine learning and also do them faster,” said Nhan Tran, a Fermilab scientist on the CMS experiment and one of the leads on the recent test. “This is important, since our data will get more and more complex with upgraded detectors and busier collision environments.”

Particle physicists are exploring the use of computers with machine learning capabilities for processing images of particle collisions at CMS, teaching them to rapidly identify various collision patterns. Image: Eamonn Maguire/Antarctic Design
Machine learning to the rescue: the inference difference
Machine learning in particle physics isn’t new. Physicists use machine learning for every stage of data processing in a collider experiment.
But with machine learning technology that can chew through LHC data up to 175 times faster than traditional methods, particle physicists are taking a game-changing step in collision computation.
The rapid rates are thanks to cleverly engineered hardware in the platform, Microsoft’s Azure ML, which speeds up a process called inference.
To understand inference, consider an algorithm that’s been trained to recognize the image of a motorcycle: The object has two wheels and two handles that are attached to a larger metal body. The algorithm is smart enough to know that a wheelbarrow, which has similar attributes, is not a motorcycle. As the system scans new images of other two-wheeled, two-handled objects, it predicts — or infers — which are motorcycles. And as the algorithm’s prediction errors are corrected, it becomes pretty deft at identifying them. A billion scans later, it’s on its inference game.
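As a toy illustration of that train-then-infer split, a minimal sketch in Python with scikit-learn might look like the following. This is not the actual LHC software; the two features are invented stand-ins for image attributes like "wheels" and "handles."

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy two-feature data: class 0 = wheelbarrow-like, class 1 = motorcycle-like.
X_train = rng.normal(loc=[[0.0, 0.0]] * 500 + [[2.0, 2.0]] * 500, scale=0.8)
y_train = np.array([0] * 500 + [1] * 500)

# Training: the model's prediction errors are corrected against known labels.
model = LogisticRegression().fit(X_train, y_train)

# Inference: the trained model labels objects it has never seen before.
X_new = rng.normal(loc=[2.0, 2.0], scale=0.8, size=(5, 2))
print(model.predict(X_new))   # mostly 1s: "those look like motorcycles"
```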
Most machine learning platforms are built to understand how to classify images, but not physics-specific images. Physicists have to teach them the physics part, such as recognizing tracks created by the Higgs boson or searching for hints of dark matter.
Researchers at Fermilab, CERN, MIT, the University of Washington and other collaborating institutions trained Azure ML to identify pictures of top quarks — a short-lived elementary particle about 180 times heavier than a proton — in simulated CMS data. Specifically, Azure ML was tasked with spotting images of top quark jets, clouds of particles pulled out of the vacuum by a single top quark zinging away from the collision.
“We sent it the images, training it on physics data,” said Fermilab scientist Burt Holzman, a lead on the project. “And it exhibited state-of-the-art performance. It was very fast. That means we can pipeline a large number of these things. In general, these techniques are pretty good.”
One of the techniques behind inference acceleration is to combine traditional processors with specialized ones, a marriage known as heterogeneous computing architecture.
Different platforms use different architectures. The traditional processors are CPUs (central processing units). The best known specialized processors are GPUs (graphics processing units) and FPGAs (field programmable gate arrays). Azure ML combines CPUs and FPGAs.
“The reason that these processes need to be accelerated is that these are big computations. You’re talking about 25 billion operations,” Tran said. “Fitting that onto an FPGA, mapping that on, and doing it in a reasonable amount of time is a real achievement.”
And it’s starting to be offered as a service. The test marked the first demonstration of how this kind of heterogeneous, as-a-service architecture can be used for fundamental physics.

Data from particle physics experiments are stored on computing farms like this one, the Grid Computing Center at Fermilab. Outside organizations offer their computing farms as a service to particle physics experiments, making more space available on the experiments’ servers. Photo: Reidar Hahn
At your service
In the computing world, using something “as a service” has a specific meaning. An outside organization provides resources — machine learning or hardware — as a service, and users — scientists — draw on those resources when needed. It’s similar to how your video streaming company provides hours of binge-watching TV as a service. You don’t need to own your own DVDs and DVD player. You use their library and interface instead.
Data from the Large Hadron Collider is typically stored and processed on computer servers at CERN and partner institutions such as Fermilab. With machine learning offered up as easily as any other web service might be, intensive computations can be carried out anywhere the service is offered — including off site. This bolsters the labs’ capabilities with additional computing power and resources while sparing them from having to furnish their own servers.
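Schematically, the offloading pattern looks something like the sketch below: serialize a detector image, send it to a remote inference endpoint and read back the model's verdict. The URL, JSON schema and response format here are hypothetical placeholders, not the actual Azure ML interface.

```python
import numpy as np
import requests

ENDPOINT = "https://inference-farm.example.com/score"  # hypothetical service URL

def classify_event(image: np.ndarray) -> dict:
    """Offload one event image to the remote service and return its prediction."""
    payload = {"image": image.tolist()}  # JSON-serializable pixel values
    response = requests.post(ENDPOINT, json=payload, timeout=1.0)
    response.raise_for_status()
    return response.json()               # e.g. {"top_quark_jet_score": 0.97}
```

The local workflow keeps running while the heavy computation happens elsewhere, which is the whole point of the as-a-service model.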
“The idea of doing accelerated computing has been around decades, but the traditional model was to buy a computer cluster with GPUs and install it locally at the lab,” Holzman said. “The idea of offloading the work to a farm off site with specialized hardware, providing machine learning as a service — that worked as advertised.”
The Azure ML farm is in Virginia. It takes only 100 milliseconds for computers at Fermilab near Chicago, Illinois, to send an image of a particle event to the Azure cloud, have it processed and receive the result. That’s a 2,500-kilometer, data-dense trip in the blink of an eye.
“The plumbing that goes with all of that is another achievement,” Tran said. “The concept of abstracting that data as a thing you just send somewhere else, and it just comes back, was the most pleasantly surprising thing about this project. We don’t have to replace everything in our own computing center with a whole bunch of new stuff. We keep all of it, send the hard computations off and get it to come back later.”
Scientists look forward to scaling the technology to tackle other big-data challenges at the LHC. They also plan to test other platforms, such as Amazon AWS, Google Cloud and IBM Cloud, as they explore what else can be accomplished through machine learning, which has seen rapid evolution over the past few years.
“The models that were state-of-the-art for 2015 are standard today,” Tran said.
As a tool, machine learning continues to give particle physics new ways of glimpsing the universe. It’s also impressive in its own right.
“That we can take something that’s trained to discriminate between pictures of animals and people, do some modest amount of computation, and have it tell me the difference between a top quark jet and background?” Holzman said. “That’s something that blows my mind.”
This work is supported by the DOE Office of Science.
Fermilab’s newest particle accelerator is small but mighty. The Integrable Optics Test Accelerator, designed to be versatile and flexible, is enabling researchers to push the frontiers of accelerator science.
Instead of smashing beams together to study subatomic particles like most high-energy physics research accelerators, IOTA is dedicated to exploring and improving the particle beams themselves.
IOTA researchers say they are excited by the observation of single-electron beams near the speed of light and the first results on decreasing beam instabilities. They are eager to use their single-electron technique to probe aspects of quantum science and see future breakthroughs in accelerator science.
“The scientists who designed the accelerator are also the scientists that use it,” said Vladimir Shiltsev, a Fermilab distinguished scientist and one of the founders of IOTA. “It’s an opportunity to get great insight into the physics of beams at relatively small cost.”

Scientists using the 40-meter-circumference Integrable Optics Test Accelerator saw their first results from IOTA this summer. Photo: Giulio Stancari
Versatility is the mother of innovation
In the Fermilab Accelerator Science and Technology facility, a particle accelerator delivers intense bursts of electrons that are then stored in IOTA’s 40-meter-circumference ring, where they circulate about 7.5 million times every second at near the speed of light. The system’s design enables a small team to adjust or exchange components in the beamline to perform a variety of experiments on the frontier of accelerator science.
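Those figures are self-consistent, as a quick check shows: the implied beam speed comes out at essentially the speed of light, with the tiny excess being rounding in the quoted numbers.

```python
# Sanity check on the figures above: 7.5 million laps per second around a
# 40-meter ring implies 40 m x 7.5e6 /s = 3.0e8 m/s, matching the speed of
# light (2.998e8 m/s) to within the rounding of the quoted numbers.

circumference_m = 40.0
revolutions_per_second = 7.5e6

speed_m_per_s = circumference_m * revolutions_per_second
print(f"Implied beam speed: {speed_m_per_s:.2e} m/s")   # ~3.0e8 m/s, essentially c
```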
“This machine was designed with a lot of flexibility in mind,” said Fermilab scientist Alexander Valishev, head of the team that developed and constructed IOTA.
Consider the accelerator magnets, which are responsible for the size and shape of the particle beam’s profile. At IOTA, every magnet is powered separately so that researchers can reconfigure the machine for completely different experiments in a few minutes. At other accelerator facilities, a comparable change could require a lengthy shutdown of weeks or months.
At accelerator facilities that serve large communities of researchers, the focus is typically on maximizing running time and maintaining well-understood, established beam parameters. In contrast, the IOTA team expects the accelerator to be routinely shut down, reconfigured and restarted. Its technical and operational flexibility makes it easier for outside teams to use IOTA to conduct their own experiments, exploring a variety of topics at the frontier of accelerator and beam physics.
IOTA’s versatility has already attracted groups from Lawrence Berkeley National Laboratory; Northern Illinois University; SLAC National Accelerator Laboratory; University of California, Berkeley; University of Chicago and other institutions. Not only are they conducting exciting science, but early-career researchers are also receiving valuable practical training in accelerator and beam science that can be challenging to come by.
“If you wanted to have a comparable scientific program at a more traditional facility, it would be very difficult, if not prohibitive. Typically, those facilities are designed for a narrow range of research, aren’t easily modified and require nearly continuous operation,” said Fermilab scientist Jonathan Jarvis, who works on IOTA. “But here at IOTA, we are a purpose-built facility for frontier topics in accelerator research and development, and we have those flexibilities by design.”

Fermilab scientist Alexander Valishev inspects the specially designed nonlinear insert that produces the nonlinear magnetic fields for IOTA experiments. Photo: Giulio Stancari
First results: Testing IOTA’s IO
As part of the only dedicated ring-based accelerator R&D facility for high-intensity beam physics in the United States, IOTA is designed to develop technologies to increase the number of particles in a beam without increasing the beam’s size and thus the size and cost of the accelerator. Since all particles in the beam have an identical charge, they electrically repel each other, and as more particles are packed into the beam, it can become unstable. Particles may behave chaotically and escape. It takes expertise and innovative technology to tame a dense particle beam.
To that end, IOTA researchers are investigating a novel technique called nonlinear integrable optics. The technique uses specially designed sets of magnets configured to prevent beam instabilities significantly better than the magnet configurations used over the past 50 years.
To test the nonlinear integrable optics technique, IOTA researchers deliberately produced instability in the beam. They then measured how difficult it was to provoke unstable behavior in IOTA’s electron beam both with and without the influence of the magnetic fields.
The technique was a winner: Scientists observed that these specialized magnets significantly decreased the instability.
During the next run of the system, the team plans to more rigorously study this effect.
“The first result is merely a demonstration,” Valishev said. “But I think it’s already a big accomplishment.”

IOTA’s nonlinear magnets help prevent instabilities in high-intensity particle beams. Photo: Giulio Stancari
Watching a single electron near the speed of light
In a first for Fermilab, the researchers have also observed the circulation of a single electron.
The IOTA beam, when injected into the storage ring, can contain about a billion electrons. As the beam circulates, electrons tend to escape the beam due to collisions with one another or with stray gas molecules in the beam pipe. So if you want to see an electron fly solo around the ring, it is just a matter of waiting.
The real trick is being able to observe the last electron left “standing.”
The fast-moving electrons emit visible light as they travel along the curves of the ring. This light is synchrotron radiation, which is emitted when charged particles moving near the speed of light change direction. The light provides researchers with information about the beam, including how many electrons are in it.
IOTA researchers used the synchrotron radiation to observe the loss of electrons, one by one, until they finally witnessed a solitary electron.
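The counting idea behind that measurement can be sketched in a few lines. Everything here, from the per-electron light level to the intensity readings, is an invented stand-in rather than IOTA data:

```python
# Illustrative sketch (not the experiment's actual analysis code): if each
# circulating electron contributes roughly the same amount of synchrotron
# light, the measured intensity drops in equal steps as electrons are lost,
# and dividing by the single-electron level counts the survivors.
import numpy as np

light_per_electron = 1.0   # assumed single-electron signal, arbitrary units
intensity = np.array([5.02, 4.01, 3.99, 2.97, 1.98, 1.01, 0.99])  # mock readings

electron_counts = np.rint(intensity / light_per_electron).astype(int)
print(electron_counts)   # [5 4 4 3 2 1 1] -- ends with a single electron "standing"
```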

This plot illustrates the decrease in the amount of measured synchrotron light every time an electron was knocked out of the particle beam.
On their next round, rather than play the waiting game to get down to a beam of one electron, the team tried a faster, more deliberate approach: They devised a way to inject single electrons into IOTA on demand. It worked, and the team reliably observed lone particles traveling around the ring.
The wait was over.
This feat is more than just a novel curiosity. The ability to store and observe a single electron, or even a very small number of electrons moving around at high speeds, creates opportunities to probe interesting quantum science.
“Everything we do is rather macroscopic, so you wouldn’t think of any of this facility, let alone a 40-meter ring, as a quantum instrument,” Jarvis said. “But we’ve got this situation where there’s an individual particle circulating in the ring at nearly the speed of light, and it gives us fascinating opportunities to do something that is very quantum in nature.”
For instance, in its upcoming run, IOTA will become the first facility in the world with the ability to precisely redirect synchrotron light back on the particle that generated it.
This capability opens the door to a wide variety of fundamental quantum experiments and will also enable Fermilab scientists to attempt the world’s first demonstration of a powerful technique called optical stochastic beam cooling. Generally, beam cooling methods sap accelerated particles of their chaotic or frenetic motion. Optical stochastic cooling is expected to be thousands of times stronger than the current state of the art and is a perfect example of the high-impact returns that IOTA is targeting.
Accelerating into the future: proton beams, electron lenses and more
IOTA is currently set up to circulate electrons, and this work sets the stage for future, more challenging experiments with protons.
The high-energy electron beam naturally shrinks to a smaller size due to synchrotron radiation, which makes it a well-behaved system for IOTA researchers to confirm important parts of beam physics theories.
In contrast to IOTA’s electron beam, the proton beams in its forthcoming experiments will circulate at low velocity, be significantly larger and be strongly affected by the repulsive forces between beam particles. Research into the behavior of such proton beams will be integral to understanding how nonlinear integrable optics can be effectively applied in the high-power accelerators of the future.
And with both electrons and protons in the mix, scientists can also advance to another exciting phase in IOTA’s research program: electron lenses. Electron lenses are yet another technique that researchers are investigating in their quest to create stable particle beams. The technique uses the negative charge of electrons to counteract the mutual repulsion of the positively charged protons, pulling the protons into a compact, stable beam. The electron lens will also allow IOTA scientists to demonstrate the nonlinear integrable optics concept using special charge distributions rather than the specialized nonlinear magnets.
With its breadth of unique capabilities, IOTA and its team are ready for several years of exciting research.
“Frontier science requires frontier research and development, and at IOTA, we are focused on realizing those major innovations that could invigorate accelerator-based high-energy physics for the next several decades,” Jarvis said.
This work is supported by the Department of Energy Office of Science.
About 10 years ago, the world’s most powerful X-ray laser — the Linac Coherent Light Source — made its debut at SLAC National Accelerator Laboratory. Now its successor, LCLS-II, a revolutionary X-ray laser in a class of its own, is under construction at SLAC, with support from four other DOE national laboratories.
Researchers in biology, chemistry and physics will use LCLS-II to probe fundamental pieces of matter, creating 3-D movies of complex molecules in action, making LCLS-II a powerful, versatile instrument at the forefront of discovery.
The project is coming together thanks largely to a crucial advance in the fields of particle and nuclear physics: superconducting accelerator technology. DOE’s Fermilab and Thomas Jefferson National Accelerator Facility are building the superconducting modules necessary for the accelerator upgrade for LCLS-II.

SLAC National Accelerator Laboratory is upgrading its Linac Coherent Light Source, an X-ray laser, to be a more powerful tool for science. Both Fermilab and Thomas Jefferson National Accelerator Facility are contributing to the machine’s superconducting accelerator, seen here in the left part of the diagram. Image: SLAC
A powerful tool for discovery
Inside SLAC’s linear particle accelerator today, bursts of electrons are accelerated to energies that allow LCLS to fire off 120 X-ray pulses per second. These pulses last for quadrillionths of a second – a time scale known as a femtosecond – providing scientists with a flipbook-like look at molecular processes.
“Over time, you can build up a molecular movie of how different systems evolve,” said SLAC scientist Mike Dunne, director of LCLS. “That’s proven to be quite remarkable, but it also has a number of limitations. That’s where LCLS-II comes in.”
Using state-of-the-art particle accelerator technology, LCLS-II will provide a staggering million pulses per second. The advance will provide a more detailed look into how chemical, material and biological systems evolve on a time scale in which chemical bonds are made and broken.
To really understand the difference, imagine you’re an alien visiting Earth. If you took one image a day of a city, you would notice roads and the cars that drive on them, but you couldn’t tell the speed of the cars or where they go. Taking a snapshot every few seconds instead would give you a highly detailed picture of how cars flow along the roads and would reveal phenomena like traffic jams. LCLS-II will provide this kind of step-change information for chemical, biological and material processes.
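In concrete terms, the jump from 120 pulses per second to a million shrinks the gap between successive snapshots by roughly four orders of magnitude, as a two-line check on the quoted rates shows:

```python
# The quoted repetition rates translate into very different gaps between frames.
lcls_pulses_per_s = 120          # LCLS today
lcls2_pulses_per_s = 1_000_000   # LCLS-II

print(f"LCLS:    one pulse every {1e3 / lcls_pulses_per_s:.1f} milliseconds")
print(f"LCLS-II: one pulse every {1e6 / lcls2_pulses_per_s:.0f} microsecond")
```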
To reach this level of detail, SLAC needs to implement technology developed for particle physics – superconducting acceleration cavities – to power the LCLS-II X-ray free-electron laser, or XFEL.

This is an illustration of the electron accelerator of SLAC’s LCLS-II X-ray laser. The first third of the copper accelerator will be replaced with a superconducting one. The red tubes represent cryomodules, which are provided by Fermilab and Jefferson Lab. Image: SLAC
Accelerating science
Cavities are structures that impart energy to particle beams, accelerating the particles within them. LCLS-II, like modern particle accelerators, will take advantage of superconducting radio-frequency cavity technology, also called SRF technology. When cooled to 2 Kelvin, superconducting cavities allow electricity to flow freely, without any resistance. Like reducing the friction between a heavy object and the ground, less electrical resistance saves energy, allowing accelerators to reach higher power for less cost.
“The SRF technology is the enabling step for LCLS-II’s million pulses per second,” Dunne said. “Jefferson Lab and Fermilab have been developing this technology for years. The core expertise to make LCLS-II possible lives at these labs.”
Fermilab modified a cryomodule design from the German laboratory DESY and developed special cavity preparation techniques, drawing record-setting performance from the cavities and cryomodules that will be used for LCLS-II.
The cylinder-shaped cryomodules, about a meter in diameter, act as specialized containers for housing the cavities. Inside, ultracold liquid helium continuously flows around the cavities to ensure they maintain the unwavering 2 Kelvin essential for superconductivity. Lined up end to end, 37 cryomodules will power the LCLS-II XFEL.

Thirty-seven cryomodules lined end to end — half from Fermilab and half from Jefferson Lab — will make up the bulk of the LCLS-II accelerator. Photo: Reidar Hahn
Fermilab and Jefferson Lab share the responsibility for fabricating, testing and delivering the cryomodules to SLAC. Together, the two labs will build all the cryomodules that will house the cavities: Fermilab will provide 19, and Jefferson Lab will provide the other 18. The largest of these cylinders reach 12 meters (40 feet) in length, about the length of a school bus. Each lab will also send a few spares to SLAC.
The cavities and their cryomodules represent breakthroughs in SRF technology, providing high-energy beams far more efficiently than previously possible. Researchers have improved SRF cavities to achieve record gradients, a measure of how quickly a beam can achieve a certain energy. The cavities also recently achieved an unprecedented result in their energy efficiency, doubling the previous state-of-the-art design while reducing cost.
The scientists and engineers were meticulous in developing LCLS-II’s accelerator components. For example, while creating the cryomodules and cavities, Fermilab used earthquake-detection equipment to identify whether the vibrations affecting the cavities’ performance were internal or external in origin. Once they determined the cause, they changed the configuration of the liquid-helium pipes to reduce those vibrations.

Each cryomodule houses a string of acceleration cavities like this one. Cavities propel the particles as the particles move through them. At LCLS-II, electrons will charge through one cavity after another, picking up energy as they go. Pictured here is a 1.3-gigahertz cavity. Photo: Reidar Hahn
Fermilab and Jefferson Lab will also send scientists and engineers to assist SLAC when LCLS-II first powers up the cryomodules.
Jefferson Lab is also providing the design and procurement of the cryogenic refrigeration plants that supply the liquid helium that cools the SRF cavities to 2 Kelvin, while Fermilab is providing the design and procurement of components for the cryogenic distribution systems that move the liquid helium from these plants to the cryomodules. Berkeley Lab and Argonne National Laboratory are also contributing components for LCLS-II, including the source that provides the electron beam and the magnets that force the beam into the wave-like motion that creates the X-ray light. Cornell University supported the R&D for LCLS-II cavity prototypes and helped process the cavities.
“We’re all in this together,” said Rich Stanek, LCLS-II Fermilab senior team lead. “This close collaboration of national laboratories bodes well for future projects. It has benefits over and above the project itself.”
Those benefits have made LCLS-II one of the top-priority projects for DOE’s Office of Science, and they extend beyond the interests of the partner laboratories. LCLS-II is expected to build on its progenitor, diving even deeper into fields ranging from biology and chemistry to materials science and astrophysics.
Opening up, diving deep
Eric Isaacs, the president of the Carnegie Institution for Science and chair of the SLAC Scientific Policy Committee, has already reviewed a number of proposals for LCLS-II.
“There are any number of processes that occur on very short time scales,” said Isaacs, a condensed matter physicist by training. “And LCLS-II opens up whole new areas of the sciences to study.”
One line of research will use the X-ray laser to probe material under conditions similar to those at the very center of our planet and gain insight into how Earth formed. Astrophysicists could then adapt that information for their search for life on exoplanets.
With LCLS-II, scientists will be able to study photosynthesis at a deeper level than ever before. The hope is that humans will one day be able to reverse engineer photosynthesis and harness a new biological tool for generating energy.
One of the ways LCLS-II will advance research in biology is by mapping proteins and enzymes in conditions resembling their normal environments. This deeper understanding will pave the way for scientists to create better drugs.
Scientists also intend to use LCLS-II to research superconductors, bringing the machine’s use of accelerator technology full circle. Current superconductors are limited by their need for specific, very low temperatures. By understanding the quantum phenomenon of superconductivity at the atomic scale, researchers might be able to create a room-temperature superconductor.
“Particle and nuclear physics have developed the superconducting technologies and capabilities that LCLS-II will use,” Isaacs said. “These advancements will enable LCLS-II to look at some of the most important questions across many branches of science.”
As with any major advancement, the true transformative power of LCLS-II will be revealed once its X-rays illuminate a sample for the first time. LCLS-II is planned to start up in 2021.
This work is supported by the DOE Office of Science program for Basic Energy Sciences.
Fermilab scientist Erik Ramberg and the Fermilab Archives present a new exhibit, “The Response to Relativity,” as part of their series on the history of physics in print. The exhibit can be viewed in the glass display case in the Fermilab Art Gallery through the end of September. The gallery is open Monday through Friday, 8 a.m. to 4:30 p.m. It is located on the second floor of Fermilab’s main office building, Wilson Hall.
The exhibit showcases pieces on special and general relativity: publications in English and German by Albert Einstein, other scientific publications, reproductions of newspaper articles that show the public reaction to the discovery, and a book on relativity intended for a general audience.
1905 was a “Wunderjahr” (miracle year) for Albert Einstein. He published groundbreaking work on his discovery of special relativity and on his famous equation E = mc² (a system’s energy is equal to its mass times the square of the speed of light). A copy of “Ist die Trägheit eines Körpers von seinem Energieinhalt abhängig?” (“Does the Inertia of a Body Depend Upon its Energy Content?”) is on display, along with copies of “Über die spezielle und die allgemeine Relativitätstheorie,” translated into English as “Relativity: The Special and the General Theory.”
Headlines from The New York Times and a diagram from The Illustrated London News, both published in 1919, demonstrate how the public began to take an interest in Einstein’s theory. In later years, many books explaining general relativity for general audiences would be published, further fixing Einstein’s theory in popular consciousness. The example on display shows pages from Relativity for the Million illustrating some of the theory’s predictions.
This exhibit was designed by Valerie Higgins, Karin Kemp and Erik Ramberg. Exhibited items are from Erik Ramberg’s collection.

Learn about the public’s reaction in the early 20th century to the discovery of relativity. Come to Fermilab’s latest science history exhibit, located in the second-floor art gallery of Wilson Hall. Photo: Valerie Higgins

Fermilab scientist Nhan Tran is developing computer systems to cope with the increasing amounts of data that particle colliders produce. Photo: Reidar Hahn
The field of particle physics produces colossal amounts of data as it pushes the frontiers of human understanding. Fermilab scientist Nhan Tran is working to make sure researchers have the innovative techniques and technology they need to handle the data deluge and to select the details that are likely to reveal exciting new physics phenomena.
And now he has received a prestigious award to advance this work. The Department of Energy Early Career Research Award will provide Tran with $2.5 million over five years to lead efforts to apply artificial intelligence to push the frontiers of both physics and technology.
Tran will use a type of artificial intelligence called deep learning to expand the capabilities of particle collider research. Deep learning uses a complex artificial neural network, inspired by biological brains, to allow a computer to learn how to perform a complex task, such as identifying an object in a picture, without being explicitly programmed for that task. Each of the many layers of artificial neurons in an artificial neural network can modify the data before passing it on to the next piece of the network. The program adjusts how these changes occur based on example data provided to it, configuring itself for the desired task.
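A bare-bones sketch of that layered structure, in NumPy with made-up sizes and untrained random weights, might look like this:

```python
# A minimal sketch of a layered network. Each layer transforms its input and
# hands the result to the next. Illustrative only: the weights here are random,
# and a real network would adjust them through training rather than leave them fixed.
import numpy as np

rng = np.random.default_rng(42)

def layer(x, n_out):
    """One fully connected layer of artificial neurons with a ReLU nonlinearity."""
    w = rng.normal(size=(x.shape[-1], n_out)) * 0.1
    b = np.zeros(n_out)
    return np.maximum(x @ w + b, 0.0)   # each layer modifies the data it passes on

x = rng.normal(size=(1, 16))    # stand-in for the features of one collision event
h = layer(x, 32)                # first hidden layer
h = layer(h, 32)                # second hidden layer
score = layer(h, 1)             # output, e.g. "how signal-like is this event?"
print(score)
```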
“As we get more and more data, computing is going to be a real issue,” Tran said. “Using new types of computing hardware with deep learning algorithms is a promising way to overcome this issue.”
Tran is applying his expertise to expand the impact of experiments at the Large Hadron Collider, the 17-mile-circumference particle collider at CERN in Switzerland, in two major ways. He will use the award funds to develop systems that will improve how computers handle the LHC data flood. He will also further develop a technique he pioneered for identifying Higgs bosons in the LHC data; the particle’s discovery in 2012 led to a Nobel Prize.
At the LHC, beams of protons collide to produce other particles, including Higgs bosons, and these post-collision particles carry a wide range of momenta. Tran is interested in using deep learning to identify Higgs bosons with high momentum and study how they decay into other particles. Studying Higgs bosons under these new conditions opens the door to discovering new physics.
“Observing these specific events was considered to be impossible at hadron colliders,” said Fermilab senior scientist Anadi Canepa, head of the CMS Department in which Tran works. “And then Nhan and collaborators came up with this new technique, and he’s extracting much more than we thought we could from the data we collect.”
His work on the Higgs boson will be helped by the other thrust of his award-supported work: developing computer systems to cope with the increasing amounts of data that particle colliders produce. Instead of just improving general-purpose computing technologies, Tran plans to develop systems that pair traditional processors with hardware specialized for performing specific types of computations efficiently, exceeding what traditional computer technology can do for those tasks.
“Nhan is a very clear thinker with a keen taste for where to focus his efforts,” said Fermilab scientist Gabriel Perdue, who leads artificial intelligence efforts at Fermilab. “He knows better than anyone that what you choose to work on is more important than any other decision you make, and his ability to figure out the most important problems and tackle them directly really sets him apart. He really understands the point of highest leverage for AI in high-energy physics and is making a huge difference right at that point.”
Tran was introduced to the world of advanced computer technology through his postdoctoral physics research on trigger systems, which select which collision data to keep from among the hundreds of millions of particle collisions generated every second at the LHC. He and researchers in industry are now exploring how the high-tech electronics he worked on could be used to improve computers.
Tran plans to couple more efficient hardware developed by tech companies, such as Microsoft and Xilinx, with Fermilab’s existing computer infrastructure to meet future demands.
“People in the computer industry are always interested to hear about our problems, because we present them with very challenging data sets in terms of their complexity, their size and our own unique requirements,” Tran said. “We generally present them with problems that can push the boundaries of their own technology.”
Tran plans to use the award to purchase new computer hardware, to hire a postdoctoral researcher for the projects and to enable experts at Fermilab to determine how to apply novel computing systems on a large scale.
“Nhan is internationally recognized as an innovator who completes his ideas,” Canepa said. “If he has an idea, that idea then becomes a reality. He has demonstrated that in challenging situations, and every time, he succeeded. He also chooses the right ideas. He really sees what are the main challenges — what is pulling us back from extracting the most information out of our precious data — and he identifies what’s the most relevant and impactful improvement and goes after that.”
Tran says Fermilab is a great environment for his work. He appreciates the excellent postdoctoral researchers who come to the lab and the ability to just walk up or down a floor and consult with experts, especially on computing topics in which he is not formally trained.
“It’s an honor to receive this award,” Tran said. “And it’s really exciting to explore new techniques in deep learning and push it as far as we can go, both on the physics side and the technical side.”
This work is supported by the Department of Energy Office of Science.

