Fermilab feature

Under pressure: balloons for particle acceleration

Balloons can help make a space perfect for a party. Now they can also help accelerate particles to near the speed of light.

The innovative use of balloons provides a new, patented way for engineers to shape the metal heart of particle accelerators.

Many particle accelerators use structures called cavities, which provide the kick needed to accelerate particles to higher and higher energies as the particles barrel through one after the other. Situated deep inside an accelerator and cooled by a shell containing liquid helium, cavities have to be just the right shape and size to boost particles to the desired energies. Even small differences in the shape of these metal chambers make large differences in the electric fields that are generated inside the cavities to push particles to greater speeds.
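The sensitivity to shape can be illustrated with the textbook idealization of a single cell as a "pillbox" cavity, whose fundamental TM010 resonant frequency is set entirely by its radius. This sketch (an idealized model with a made-up 10-centimeter radius, not any actual Fermilab cavity) shows that a radius error of just a tenth of a millimeter detunes the cell by about a megahertz:

```python
import math

C = 299_792_458.0        # speed of light, m/s
J0_FIRST_ZERO = 2.40483  # first zero of the Bessel function J0

def tm010_freq(radius_m: float) -> float:
    """Resonant frequency (Hz) of an ideal pillbox cavity's TM010 mode."""
    return J0_FIRST_ZERO * C / (2 * math.pi * radius_m)

f_nominal = tm010_freq(0.100)  # hypothetical 10 cm cell radius
f_off = tm010_freq(0.1001)     # same cell, radius off by only 0.1 mm
print(f"{f_nominal / 1e9:.3f} GHz, detuned by {(f_nominal - f_off) / 1e6:.2f} MHz")
```

A fractional radius error translates one-to-one into a fractional frequency error, which is why each cell must be tuned individually.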

Faced with one particular cavity that was too misshapen to use and inaccessible because of its metal shell, Fermilab engineers Mohamed Hassan and Donato Passarelli had an idea: What if you could reshape a cavity without removing the surrounding shell? They went to work, developing an innovative process called balloon tuning.

“I hope balloon tuning is an example for the accelerator community — that we should think out of the box and not always stick with the standard and common technique,” said Passarelli.

The patented balloon tuning process is a new option in the suite of techniques used to prepare cavities before they’re installed in an accelerator.

Fermilab engineers Mohamed Hassan, left, and Donato Passarelli stand near an accelerator cavity and the patented balloons used to tune, or reshape, the cavity from the inside. Photo: Reidar Hahn

Most acceleration cavities are a series of round, hollow cells that look like a giant strand of metal beads. Before any cavity is installed, it is carefully tested and tuned using an automated machine that grasps the edges of each cell to make small, precise adjustments: a little push here, a little stretch there. The process continues until the cavity is adjusted so that, once the cavity is up and running inside an accelerator, it’s in the shape to produce the perfect electric field to propel charged particles.

But before most cavities can be installed, they must also be fitted with a metal jacket so the cavity can be cooled to extremely low temperatures with liquid helium. After that, the only easy way to apply forces to the cells is to push or pull on the ends of the cavity, rather than targeting each cell individually. If a cavity becomes misshapen during or after the process of putting the jacket on, the traditional tuning method can’t be applied without cutting the metal jacket off — a laborious, time-consuming task.

Hassan and Passarelli started contemplating this challenge after an old test cavity deformed during a pressure test.

“After the pressure test, I was determined to find a way to fix this cavity and thought, ‘Why not access it from the inside, which is accessible even with a jacket?’” Hassan said.

The need to apply the force inside the cavities without scratching the inner surface or introducing unacceptable levels of contamination led them to using specially designed balloons made of rubberized nylon.

A pump fills each balloon with air until it applies about two bars of pressure — a little less than what’s recommended for standard car tires. This isn’t enough pressure to reshape a cavity cell on its own, but that pressure can be used to influence which cell deforms when forces are applied to the ends of a cavity at room temperature. Balloons let you single out a particular cell, either stretching or squeezing it.
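For a feel for that number, here is the unit conversion (the tire pressure is a typical recommendation, assumed for comparison, not a figure from the article):

```python
PSI_PER_BAR = 14.5038  # 1 bar ≈ 14.5 pounds per square inch

balloon_psi = 2.0 * PSI_PER_BAR  # the balloons' ~2 bar working pressure
typical_tire_psi = 33.0          # common passenger-car recommendation (assumed)

print(f"balloon: {balloon_psi:.0f} psi vs. tire: {typical_tire_psi:.0f} psi")
```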

If a particular cell needs to be stretched, a balloon inflated inside it provides an extra nudge for it to expand as the flanges are pulled apart. If, instead, a cell needs to be squeezed, a series of balloons can support all the other cells as the two ends are pushed together.

To stretch one cell of an accelerator cavity, a balloon is placed inside it and inflated. Image: Diana Brandonisio

To squeeze a particular cavity cell, balloons are placed inside the cells surrounding it. The balloons support these cells, resulting in the unoccupied cell being reshaped as forces are applied to each end of the cavity. Image: Diana Brandonisio

The engineers and their team demonstrated the concept by tuning an unjacketed cavity. Then they turned their attention to the misshapen cavity that had inspired them to develop the process. They succeeded in returning it to usable condition.

“Balloon tuning will be a nice additional tool for cavity production that can save quite a bit of money and time,” Hassan said.

High-performing cavities are crucial components in Fermilab’s upcoming PIP-II accelerator and SLAC National Accelerator Laboratory’s LCLS-II X-ray laser, and they are a major part of a current Fermilab project to extend the time that a qubit can maintain information.

The balloon-tuning technique was recently patented, speeding through the patent office in record time for Fermilab, said Aaron Sauers, the lab’s patent and licensing executive.

“Mohamed and Donato developed a truly beautiful method and apparatus to tune dressed cavities,” Sauers said. “I was excited to file the patent application on their invention.”

Hassan and Passarelli see automated balloon tuning as a possibility, which could make it as convenient to use as the current method is for unjacketed cavities. The technique may also find applications in other fields that use similar cavities.

“The hope is that people looking at this idea will get inspired and either adapt or use this technique in their own application,” Passarelli said.

The U.S. Department of Energy has awarded researchers at its Fermi National Accelerator Laboratory more than $3.5 million to boost research in the fast-emerging field of Quantum Information Science.

“Few pursuits have the revolutionary potential that quantum science presents,” said Fermilab Chief Research Officer Joe Lykken. “Fermilab’s expertise in quantum physics and cryogenic engineering is world-class, and combined with our experience in conventional computing and networks, we can advance quantum science in directions that not many other places can.”

As part of a number of grants to national laboratories and universities offered through its Quantum Information Science-Enabled Discovery (QuantISED) program, DOE’s recent round of funding to Fermilab covers three initiatives related to quantum science. It also funds Fermilab’s participation in a fourth initiative led by Argonne National Laboratory.

The DOE QuantISED grants will fund initiatives related to quantum computing. These include simulations of advanced quantum devices, which will in turn improve quantum computing simulations, and the development of novel electronics to work with large arrays of ultracold qubits.

For a half-century, Fermilab researchers have closely studied the quantum realm and provided the computational and engineering capabilities needed to zoom in on nature at its most fundamental level. The projects announced by the Department of Energy will build on those capabilities, pushing quantum science and technology forward and leading to new discoveries that will enhance our picture of the universe at its smallest scale.

“Fermilab is well-versed in engineering, algorithmic development and recruiting massive computational resources to explore quantum-scale phenomena,” said Fermilab Head of Quantum Science Panagiotis Spentzouris. “Now we’re wrangling those competencies and capabilities to advance quantum science in many areas, and in a way that only a leading physics laboratory could.”


The Fermilab-led initiatives funded through these DOE QuantISED grants are:

Large-scale simulations of quantum systems on high-performance computing with analytics for high-energy physics algorithms
Lead principal investigator: Adam Lyon, Fermilab

The large-scale simulation of quantum computers has plenty in common with simulations in high-energy physics: Both must sweep over a large number of variables. Both organize their inputs and outputs similarly. And in both cases, the simulation has to be analyzed and consolidated into results. Fermilab scientists, in collaboration with scientists at Argonne National Laboratory, will use tools from high-energy physics to produce and analyze simulations using high-performance computers at the Argonne Leadership Computing Facility. Specifically, they will simulate the operation of a qubit device that uses superconducting cavities (which are also used as components in particle accelerators) to maintain quantum information over a relatively long time. Their results will determine the device’s impact on high-energy physics algorithms using an Argonne-developed quantum simulator.

Partner institution: Argonne National Laboratory

Research technology for quantum information systems
Lead principal investigator: Gustavo Cancelo, Fermilab

One of the main challenges in quantum information science is designing an architecture that solves problems of massive interconnection, massive data processing and heat load. The electronics must be able to operate and interface with other electronics operating both at 4 kelvins and at near absolute zero. Fermilab scientists and engineers are designing novel electronic circuits as well as massive control and readout electronics to be compatible with quantum devices, such as sensors and qubits. These circuits will enable many applications in the quantum information science field.

Partner institutions: Argonne National Laboratory, Massachusetts Institute of Technology, University of Chicago

MAGIS-100 – co-led by Stanford University and Fermilab
Lead Fermilab principal investigator: Rob Plunkett

Fermilab will host a new experiment to test quantum mechanics on macroscopic scales of space and time. Scientists on the MAGIS-100 experiment will drop clouds of ultracold atoms down a 100-meter-long vacuum pipe on the Fermilab site and use a stable laser to create an atom interferometer, which will look for dark matter made of ultralightweight particles. They will also advance a technique for gravitational-wave detection at relatively low frequencies.

This is a joint venture under the collaboration leadership of Stanford University Professor Jason Hogan, who is funded by grant GBMF7945 from the Gordon and Betty Moore Foundation. Rob Plunkett of Fermilab serves as the project manager.

Other participating institutions: Northern Illinois University, Northwestern University, Stanford University, Johns Hopkins University, University of Liverpool


Fermilab was also funded to participate in another initiative led by Argonne National Laboratory:

Quantum sensors for wide-band axion dark matter detection
Lead principal investigator: Peter Barry, Argonne

Researchers are searching high and low for dark matter, the mysterious substance that makes up a quarter of our universe. One theory proposes that it could be made of particles called axions, which would signal their presence by converting into particles of light, called photons. Fermilab researchers are part of a team developing specialized detectors that look for photons in the terahertz range — at frequencies just below the infrared. The development of these detectors will widen the range of frequencies where axions may be discovered. To bring the faint signals to the fore, the team is using supersensitive quantum amplifiers.

Other participating institutions: National Institute of Standards and Technology, University of Colorado

The CMS experiment at CERN’s Large Hadron Collider has achieved yet another significant milestone in its already storied history as a leader in the field of high-energy experimental particle physics. The U.S. contingent of the CMS collaboration, known as USCMS and managed by Fermilab, has been granted the Department of Energy’s final Critical Decision 4 (CD-4) approval for its multiyear Phase 1 Detector Upgrade program, formally signifying the completion of the project after having met every stated goal — on time and under budget.

“Getting CD-4 approval is a tremendous vote of confidence for the many people involved in CMS,” said Fermilab scientist Steve Nahn, U.S. project manager for the CMS detector upgrade. “The LHC is the best tool we have for further explication of the particle nature of the universe, and there are still mysteries to solve, so we have to have the best apparatus we can to continue the exploration.”

The CMS experiment is a generation-spanning effort to build, operate and upgrade a particle-detecting behemoth that observes its protean prey in a large but cramped cavern 300 feet beneath the French countryside. CMS is one of four large experiments situated along the LHC accelerator complex, operated by CERN in Geneva, Switzerland. The LHC is a 17-mile ring of magnets that accelerates two beams of protons in opposite directions, each to more than 99.999999% of the speed of light, and forces them to collide at the centers of CMS and the LHC’s other experiments: ALICE, LHCb and ATLAS.

Fermilab scientists Nadja Strobbe and Jim Hirschauer test chips for the CMS detector upgrades. Photo: Reidar Hahn

The main goal of CMS (and the other LHC experiments) is to keep track of which particles emerge from the rapture of pure energy created from the collisions in order to search for new particles and phenomena. In catching sight of such new phenomena, scientists aim to answer some of the most fundamental questions we have about how the universe works.

The global CMS collaboration comprises more than 5,000 professionals — including roughly 1,000 students — from over 200 institutes and universities across more than 50 countries. This international team collaborates to design, build, commission and operate the CMS detector, whose data is then distributed to dedicated centers in 40 nations for analysis. And analysis is their raison d’etre. By sussing out patterns in the data, CMS scientists search for previously unseen or unconfirmed phenomena and measure the properties of elementary particles that make up the universe with greater precision. To date, CMS has published over 900 papers.

The USCMS collaboration is the single largest national group in CMS, involving 51 American universities and institutions in 24 states and Puerto Rico, over 400 Ph.D. physicists, and more than 200 graduate students and other professionals. USCMS has played a primary role in much of the CMS experiment’s original design and construction, including a wide network of eight CMS computing centers located across the United States, and in the experiment’s data analysis. USCMS is supported by the U.S. Department of Energy and the National Science Foundation and has played an integral role in the success of the CMS collaboration as a whole from its founding.

The CMS experiment, the LHC and the other LHC experiments became operational in 2009 (17 years after the CMS letter of intent), beginning a 10-year data-taking period referred to as Phase 1.

Phase 1 was divided into four major epochs, alternating two periods of data-taking with two periods of maintenance and upgrade operations. The two data-taking periods are referred to as Run 1 (2009-2013) and Run 2 (2015-2018). It was during Run 1 (in 2012) that the CMS and ATLAS collaborations jointly announced they each had observed the long-predicted Higgs boson, resulting in a Nobel Prize awarded a year later to scientists Peter Higgs and François Englert and providing a further testament to the strength of the Standard Model of particle physics, the theory within which the Higgs boson was first hypothesized in 1964.

“That prize was a historic triumph of every individual, institution and nation involved with the LHC project, not only validating the Higgs conjecture, a cornerstone of the Standard Model, but also giving science a new particle to use as a tool for further exploration,” Nahn said. “This discovery and every milestone CMS has achieved since then is encouragement to continue working toward further discovery. That goes for our latest approval milestone.”

Fermilab scientist Maral Alyari and Stephanie Timpone conduct CMS pixel detector work. Photo: Reidar Hahn

During the entirety of Phase 1, the wizard-like LHC particle accelerator experts were continually ramping up the collision energy and intensity, or in particle physics parlance, the luminosity of the LHC beam. The CMS technical team was charged with fulfilling the Phase 1 Upgrade plan, a series of hardware upgrades to the detector that allowed it to fully profit from the gains the LHC team was providing.

While the LHC accelerator folks were prepping to push 20 times as many particles through the experiments per second, the experiments were busy upgrading their systems to handle this major influx of particles and the resulting data. This meant updating many of the readout electronics with faster and more capable brains to manage and process the data produced by CMS.

With support from the Department of Energy’s Office of Science and the National Science Foundation, USCMS implemented $40 million worth of these strategic upgrades on time and under budget.

With these upgrades complete, the CMS detector is now ready for LHC Run 3, which will run from 2021 to 2023, and the collaboration is starting this new stage of data taking on a solid foundation.

Still, USCMS isn’t taking a break: The collaboration is already gearing up for its next, even more ambitious set of upgrades, planned for installation after Run 3. This USCMS upgrade phase will prepare the detector for an even higher luminosity, resulting in a data set 10 times greater than what the LHC provides currently.

Every advance in the CMS detector ensures that it will support the experiment through 2038, when the LHC is planned to complete its final run.

“For the last decade, we’ve worked to improve and enhance the CMS detector to squeeze everything we can out of the LHC’s collisions,” Nahn said. “We’re prepared to do the same for the next two decades to come.”

It was a three-hour nighttime road trip that capped off a journey begun seven years ago.

From about 12:30 to 3 a.m. on Friday, Aug. 16, the first major superconducting section of a particle accelerator that will power the biggest neutrino experiment in the world made its way along a series of Chicagoland roadways at a deliberate 10 miles per hour.

Hauled on a special carrier created just for its 25-mile journey, at 3:07 a.m. the nine-ton structure pulled into its permanent home at the Department of Energy’s Fermilab. It arrived from nearby Argonne National Laboratory, also a DOE national laboratory.

The high-tech component is the first completed cryomodule for the PIP-II particle accelerator, a powerful machine that will become the heart of Fermilab’s accelerator complex. The accelerator will generate high-power beams of protons, which will in turn produce the world’s most powerful neutrino beam, for the international, Fermilab-hosted Deep Underground Neutrino Experiment. It will also provide for the long-term future of the Fermilab research program.

PIP-II is the first particle accelerator project in the United States with significant international contribution, with cavities and cryomodules built in France, India, Italy, the United Kingdom and the United States.

The cryomodule effort at Argonne began in 2012. Scientists and engineers at Argonne led its design, working with a Fermilab team. The Argonne group also built the cryomodule, tested its subcomponents and assembled it, evolving a design used in one of Argonne’s particle accelerators.

And now it’s arrived.

“There is a profound significance in the arrival of the first PIP-II cryomodule: it ushers in a new era for the Fermilab accelerator complex, the era of superconducting radio-frequency acceleration,” said Fermilab PIP-II Project Director Lia Merminga.

The first cryomodule of the PIP-II superconducting linear accelerator is lifted off the truck that delivered it from Argonne National Laboratory to Fermilab on Aug. 16. Photo: Reidar Hahn

The PIP-II accelerator blueprint

A cryomodule is the major unit of a particle accelerator. Like the cars of a train, cryomodules are hitched together end-to-end. The PIP-II linear accelerator will comprise 23 of them, adding up to a roughly 200-meter, near-light-speed runway for powerful protons.

Very powerful protons. The new accelerator will enable a 1.2-megawatt proton beam for the lab’s experiments. That’s 60% more power than the lab’s current accelerator chain can provide.
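Those two figures also pin down the implied power of today's accelerator complex; a quick back-of-the-envelope check (simple arithmetic, not an official specification):

```python
upgraded_kw = 1200  # PIP-II era beam power: 1.2 megawatts
boost = 0.60        # "60% more power" than the current complex provides

current_kw = upgraded_kw / (1 + boost)
print(round(current_kw))  # 750, i.e. roughly 750 kilowatts today
```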

And it’s put together one cryomodule at a time. Each houses a string of superconducting acceleration cavities. These shiny metal tubes impart energy to the beam, and they too are placed end-to-end. As the proton beam shoots through one cavity after the next, it picks up energy, thanks to the electromagnetic fields inside the cavities, propelling the beam forward.

By the time the beam exits the final cavity of the last PIP-II cryomodule, it will have gained 800 million electronvolts of energy and will be traveling at 84% of the speed of light.
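Those two numbers are consistent with special relativity. A short check using the proton's rest energy (about 938.3 MeV):

```python
import math

PROTON_REST_MEV = 938.272  # proton rest energy in MeV

def proton_beta(kinetic_mev: float) -> float:
    """Speed as a fraction of c for a proton of given kinetic energy."""
    gamma = 1.0 + kinetic_mev / PROTON_REST_MEV  # total / rest energy
    return math.sqrt(1.0 - 1.0 / gamma**2)

print(f"{proton_beta(800.0):.0%} of the speed of light")  # 84% of the speed of light
```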

Then it’s really off to the races: After the beam leaves the PIP-II linac, it will continue down any of a number of paths, charging through Fermilab’s accelerators and eventually smashing into a block of material. The resulting shower of particles will be sorted and routed to various experiments, where scientists study these morsels of matter to better understand how our universe operates at its most fundamental level.

The 60% boost in PIP-II power — with the potential to increase power into the multimegawatt range at a later time — will provide more particles for scientists to study, accelerating the path to discovery.

The PIP-II accelerator is expected to be integrated into the Fermilab accelerator complex in 2026.

This architectural rendering shows the buildings that will house the new PIP-II accelerators. Credit: Fermilab

Riding the half-wave

The Argonne-designed PIP-II cryomodule contains eight accelerating cavities that look like big balloon bow ties. They’re a special type, called half-wave resonators. (“Half-wave,” because the profile of the electromagnetic field inside it resembles half of a standing wave.)

The half-wave resonator cryomodule will be first in the line of 23 and the only one of its kind at PIP-II.

The job of the half-wave resonator cryomodule is to get the beam going almost as soon as it comes out of the gate, taking it from 2 to 10 million electronvolts. Each cryomodule after that takes its turn ramping up the beam to its final energy of 800 million electronvolts.
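A back-of-the-envelope estimate with the proton's rest energy shows how slowly the beam still moves at this first stage, which is why cavities optimized for this regime are described as "low-beta" (a sketch, not a PIP-II design calculation):

```python
import math

PROTON_REST_MEV = 938.272  # proton rest energy in MeV

def beta(kinetic_mev: float) -> float:
    """Proton speed as a fraction of c at a given kinetic energy."""
    gamma = 1.0 + kinetic_mev / PROTON_REST_MEV
    return math.sqrt(1.0 - 1.0 / gamma**2)

# Entering at 2 MeV and leaving at 10 MeV, the protons are far from light speed.
print(f"in: {beta(2.0):.1%} of c, out: {beta(10.0):.1%} of c")
```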

Its design is based on those used in Argonne’s ATLAS particle accelerator, which accelerates heavy ions for nuclear physics research.

The PIP-II version features a few improvements. For one, the cavity performance is top-notch, thanks to advances in acceleration technology. The cavities are made of superconducting niobium. Refinements over the past decade in both niobium treatment and cavity manufacture have made it possible for PIP-II cavities to kick the beam to higher energies over shorter distances compared to ATLAS and other comparable cavities. They’re also more energy-efficient.

“We’re proud of the cavities we’ve built and their performance,” said Argonne physicist Zack Conway, who led the effort to build the cavities. “They’re truly world-leading.”

The cryomodule keeps the cavities at a cool 2 kelvins, or minus 270 degrees Celsius. Niobium superconducts at 9.2 K, but its performance soars at 2 K. Advanced cryogenics (the “cryo” in cryomodule) ensure that the PIP-II cavities maintain their chill temperature.

The result is a high-performance vehicle for beam.

“It’s been good to collaborate with one of our sister labs,” said Fermilab scientist Joe Ozelis, who oversees the cryomodule project. “This model of collaborative effort with our partners is key to the continued future success of PIP-II. It’s gratifying to now know that it can indeed work.”

Scientists and engineers at Argonne led the design of these eight accelerator cavities, of a type called half-wave resonators, for the PIP-II accelerator. The Argonne team worked with Fermilab in the design. Photo: Argonne National Laboratory

Time to test

The recently arrived cryomodule has a way to go before it will be permanently installed as part of the PIP-II linear accelerator. For the next several months, Fermilab’s PIP-II group will perform a series of tests to make sure it meets specifications. Then, next year, a Fermilab group will test it with beam, putting the cryomodule through its paces.

“The first of anything in a project like this is always exciting, but there’s more to this for me personally,” said Genfa Wu, Fermilab physicist and a PIP-II SRF and cryogenics system manager. “This is the first low-beta superconducting cryomodule I’ll get to test in my professional experience.”

It’s also an initial run-through for the PIP-II cryomodule collaboration more generally. Twenty-two cryomodules are yet to be built and tested at Fermilab, of which 15 will arrive from outside the United States, including one prototype.

“PIP-II is an international collaboration,” Wu said. “We’re actively working with our international partners to make sure all the cryomodules work together.”

Partners in global science

PIP-II’s internationality reflects the biggest experiment it will power, the Deep Underground Neutrino Experiment, supported by the Long-Baseline Neutrino Facility at Fermilab. The flagship science project aims to unlock the mysteries of neutrinos, subtle particles that may carry the imprint of the universe’s beginnings.

Protons from the PIP-II beam will produce a beam of neutrinos, which will be sent 800 miles straight through Earth’s crust from Fermilab to particle detectors located a mile underground at the Sanford Underground Research Facility in South Dakota. DUNE scientists will study how the neutrinos change over that long distance. Their findings aim to tell us why we live in a universe dominated by matter.

More than 1,000 scientists from dozens of countries participate in LBNF/DUNE, which will start in the mid-2020s. It’s a global project with ambitious research goals to match. And four of the LBNF/DUNE international partners also contribute to PIP-II. For the United States, the international nature of the PIP-II project is a new way of building large accelerator projects.

“The half-wave resonator cryomodule is a stellar example of how DOE labs work together to execute major projects that involve technological aptitude that no single lab has by itself,” Merminga said. “By leveraging Argonne’s experience in half-wave resonator technology, Fermilab is taking a major step in realizing its future while paving the road for even more collaboration. Exactly the same principle applies to our international partnerships, making PIP-II a very powerful new paradigm for future accelerator projects.”

And in some ways, it is all starting to come together when a truck with a huge, high-tech metal container rolls down a street in the middle of the night.

“The collaboration between the two labs has been very smooth, from design through fabrication,” Conway said. “That’s been wonderful.”

It pays dividends in other dimensions, too.

“We’ve learned so much from this for future collaborations, and those lessons are going to be vital for the linac project as a whole,” Ozelis said. “This is more than institutional. It’s a human endeavor as well.”

This work is supported by the Department of Energy Office of Science.

Every proton collision at the Large Hadron Collider is different, but only a few are special. The special collisions generate particles in unusual patterns — possible manifestations of new, rule-breaking physics — or help fill in our incomplete picture of the universe.

Finding these collisions is harder than the proverbial search for the needle in the haystack. But game-changing help is on the way. Fermilab scientists and other collaborators successfully tested a prototype machine-learning technology that speeds up processing by 30 to 175 times compared to traditional methods.

Confronting 40 million collisions every second, scientists at the LHC use powerful, nimble computers to pluck the gems — whether it’s a Higgs particle or hints of dark matter — from the vast static of ordinary collisions.

Rifling through simulated LHC collision data, the machine learning technology successfully learned to identify a particular postcollision pattern — a particular spray of particles flying through a detector — as it flipped through an astonishing 600 images per second. Traditional methods process less than one image per second.

The technology could even be offered as a service on external computers. Using this offloading model would allow researchers to analyze more data more quickly and leave more LHC computing space available to do other work.

It is a promising glimpse into how machine learning services are supporting a field in which already enormous amounts of data are only going to get bigger.

Particles emerging from proton collisions at CERN’s Large Hadron Collider travel through this stories-high, many-layered instrument, the CMS detector. In 2026, the LHC will produce 20 times the data it does today, and CMS is undergoing upgrades to read and process the data deluge. Photo: Maximilien Brice, CERN

The challenge: more data, more computing power

Researchers are currently upgrading the LHC to smash protons at five times its current rate. By 2026, the 17-mile circular underground machine at the European laboratory CERN will produce 20 times more data than it does now.

CMS is one of the particle detectors at the Large Hadron Collider, and CMS collaborators are in the midst of some upgrades of their own, enabling the intricate, stories-high instrument to take more sophisticated pictures of the LHC’s particle collisions. Fermilab is the lead U.S. laboratory for the CMS experiment.

If LHC scientists wanted to save all the raw collision data they’d collect in a year from the High-Luminosity LHC, they’d have to find a way to store about 1 exabyte (roughly a million terabyte-sized personal external hard drives), of which only a sliver may unveil new phenomena. LHC computers are programmed to select this tiny fraction, making split-second decisions about which data is valuable enough to be sent downstream for further study.

Currently, the LHC’s computing system keeps roughly one in every 100,000 particle events. But current storage protocols won’t be able to keep up with the future data flood, which will accumulate over decades of data taking. And the higher-resolution pictures captured by the upgraded CMS detector won’t make the job any easier. It all translates into a need for more than 10 times the computing resources than the LHC has now.
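The scale of that selection follows directly from the 40 million collisions per second mentioned earlier; simple integer arithmetic on the article's figures:

```python
collisions_per_second = 40_000_000  # the LHC's 40 million collisions per second
keep_one_in = 100_000               # roughly one event in every 100,000 is kept

kept_per_second = collisions_per_second // keep_one_in
print(kept_per_second)  # 400 events per second survive the cut
```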

The recent prototype test shows that, with advances in machine learning and computing hardware, researchers expect to be able to winnow the data emerging from the upcoming High-Luminosity LHC when it comes online.

“The hope here is that you can do very sophisticated things with machine learning and also do them faster,” said Nhan Tran, a Fermilab scientist on the CMS experiment and one of the leads on the recent test. “This is important, since our data will get more and more complex with upgraded detectors and busier collision environments.”

Particle physicists are exploring the use of computers with machine learning capabilities for processing images of particle collisions at CMS, teaching them to rapidly identify various collision patterns. Image: Eamonn Maguire/Antarctic Design

Machine learning to the rescue: the inference difference

Machine learning in particle physics isn’t new. Physicists use machine learning for every stage of data processing in a collider experiment.

But with machine learning technology that can chew through LHC data up to 175 times faster than traditional methods, particle physicists are ascending a game-changing step on the collision-computation course.

The rapid rates are thanks to cleverly engineered hardware in the platform, Microsoft’s Azure ML, which speeds up a process called inference.

To understand inference, consider an algorithm that’s been trained to recognize the image of a motorcycle: The object has two wheels and two handles that are attached to a larger metal body. The algorithm is smart enough to know that a wheelbarrow, which has similar attributes, is not a motorcycle. As the system scans new images of other two-wheeled, two-handled objects, it predicts — or infers — which are motorcycles. And as the algorithm’s prediction errors are corrected, it becomes pretty deft at identifying them. A billion scans later, it’s on its inference game.
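The training-versus-inference split can be made concrete with a deliberately tiny sketch. Nothing below reflects the actual CMS model; the features, labels and nearest-centroid rule are invented for illustration. “Training” computes class averages once; “inference” is the cheap step repeated on every new example:

```python
# Toy illustration of training vs. inference (not the CMS model).

def train(examples):
    # examples: {label: [(wheel_diameter_m, top_speed_kmh), ...]}
    # "Training" here just averages the features of each class.
    centroids = {}
    for label, rows in examples.items():
        n = len(rows)
        centroids[label] = tuple(sum(col) / n for col in zip(*rows))
    return centroids

def infer(centroids, features):
    # "Inference": pick the label whose average is closest to the new example.
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist)

model = train({
    "motorcycle": [(0.6, 180.0), (0.7, 200.0)],
    "wheelbarrow": [(0.3, 5.0), (0.4, 6.0)],
})
print(infer(model, (0.65, 190.0)))  # → motorcycle
```

The expensive part, training, happens once; inference then runs billions of times, which is why hardware that accelerates that step pays off so dramatically.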

Most machine learning platforms are built to classify everyday images, not physics-specific ones. Physicists have to teach them the physics part, such as recognizing tracks created by the Higgs boson or searching for hints of dark matter.

Researchers at Fermilab, CERN, MIT, the University of Washington and other collaborators trained Azure ML to identify pictures of top quarks — a short-lived elementary particle that is about 180 times heavier than a proton — from simulated CMS data. Specifically, Azure was to look for images of top quark jets, clouds of particles pulled out of the vacuum by a single top quark zinging away from the collision.

“We sent it the images, training it on physics data,” said Fermilab scientist Burt Holzman, a lead on the project. “And it exhibited state-of-the-art performance. It was very fast. That means we can pipeline a large number of these things. In general, these techniques are pretty good.”

One of the techniques behind inference acceleration is to combine traditional processors with specialized ones, a marriage known as heterogeneous computing architecture.

Different platforms use different architectures. The traditional processors are CPUs (central processing units). The best known specialized processors are GPUs (graphics processing units) and FPGAs (field programmable gate arrays). Azure ML combines CPUs and FPGAs.

“The reason that these processes need to be accelerated is that these are big computations. You’re talking about 25 billion operations,” Tran said. “Fitting that onto an FPGA, mapping that on, and doing it in a reasonable amount of time is a real achievement.”

And it’s starting to be offered as a service, too. The test was the first demonstration that this kind of heterogeneous, as-a-service architecture can be used for fundamental physics.

Data from particle physics experiments are stored on computing farms like this one, the Grid Computing Center at Fermilab. Outside organizations offer their computing farms as a service to particle physics experiments, making more space available on the experiments’ servers. Photo: Reidar Hahn

At your service

In the computing world, using something “as a service” has a specific meaning. An outside organization provides resources — machine learning or hardware — as a service, and users — scientists — draw on those resources when needed. It’s similar to how your video streaming company provides hours of binge-watching TV as a service. You don’t need to own your own DVDs and DVD player. You use their library and interface instead.

Data from the Large Hadron Collider is typically stored and processed on computer servers at CERN and partner institutions such as Fermilab. With machine learning offered up as easily as any other web service might be, intensive computations can be carried out anywhere the service is offered — including off site. This bolsters the labs’ capabilities with additional computing power and resources while sparing them from having to furnish their own servers.

“The idea of doing accelerated computing has been around decades, but the traditional model was to buy a computer cluster with GPUs and install it locally at the lab,” Holzman said. “The idea of offloading the work to a farm off site with specialized hardware, providing machine learning as a service — that worked as advertised.”

The Azure ML farm is in Virginia. It takes only 100 milliseconds for computers at Fermilab near Chicago, Illinois, to send an image of a particle event to the Azure cloud, process it, and return it. That’s a 2,500-kilometer, data-dense trip in the blink of an eye.

“The plumbing that goes with all of that is another achievement,” Tran said. “The concept of abstracting that data as a thing you just send somewhere else, and it just comes back, was the most pleasantly surprising thing about this project. We don’t have to replace everything in our own computing center with a whole bunch of new stuff. We keep all of it, send the hard computations off and get it to come back later.”
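The offload pattern Tran describes, sending the hard computation away and getting the answer back, can be sketched as follows. This is a local stand-in, not the real pipeline: `remote_inference` is a hypothetical stub that sleeps to mimic the roughly 100-millisecond round trip, where the production system would post the event to the remote Azure ML farm:

```python
# Sketch of the "as a service" offload pattern with a local stub
# standing in for the off-site machine learning farm.
from concurrent.futures import ThreadPoolExecutor
import time

def remote_inference(event):
    # Hypothetical stand-in for a network call to the remote service.
    time.sleep(0.1)  # mimic the ~100 ms round trip described above
    return {"event": event, "label": "top-quark jet"}

with ThreadPoolExecutor(max_workers=8) as pool:
    # Pipeline many events at once: the round-trip latencies overlap,
    # so throughput stays high even though each call takes ~100 ms.
    results = list(pool.map(remote_inference, range(8)))

print(len(results))  # → 8
```

Because the calls run concurrently, eight events come back in roughly the time of one round trip, which is the sense in which a large number of these requests can be pipelined.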

Scientists look forward to scaling the technology to tackle other big-data challenges at the LHC. They also plan to test other platforms, such as Amazon AWS, Google Cloud and IBM Cloud, as they explore what else can be accomplished through machine learning, which has seen rapid evolution over the past few years.

“The models that were state-of-the-art for 2015 are standard today,” Tran said.

As a tool, machine learning continues to give particle physics new ways of glimpsing the universe. It’s also impressive in its own right.

“That we can take something that’s trained to discriminate between pictures of animals and people, do some modest amount of computation, and have it tell me the difference between a top quark jet and background?” Holzman said. “That’s something that blows my mind.”

This work is supported by the DOE Office of Science.

Fermilab’s newest particle accelerator is small but mighty. The Integrable Optics Test Accelerator, designed to be versatile and flexible, is enabling researchers to push the frontiers of accelerator science.

Instead of smashing beams together to study subatomic particles like most high-energy physics research accelerators, IOTA is dedicated to exploring and improving the particle beams themselves.

IOTA researchers say they are excited by the observation of single-electron beams near the speed of light and the first results on decreasing beam instabilities. They are eager to use their single-electron technique to probe aspects of quantum science and see future breakthroughs in accelerator science.

“The scientists who designed the accelerator are also the scientists that use it,” said Vladimir Shiltsev, a Fermilab distinguished scientist and one of the founders of IOTA. “It’s an opportunity to get great insight into the physics of beams at relatively small cost.”

Scientists using the 40-meter-circumference Integrable Optics Test Accelerator saw their first results from IOTA this summer. Photo: Giulio Stancari

Versatility is the mother of innovation

In the Fermilab Accelerator Science and Technology facility, a particle accelerator delivers intense bursts of electrons that are then stored in IOTA’s 40-meter-circumference ring, where they circulate about 7.5 million times every second at near the speed of light. The system’s design enables a small team to adjust or exchange components in the beamline to perform a variety of experiments on the frontier of accelerator science.
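The revolution rate quoted above follows directly from the ring size: a particle moving at essentially the speed of light covers the 40-meter circumference about 7.5 million times per second.

```python
# Consistency check on the quoted revolution rate: an electron at
# essentially the speed of light completes c / circumference laps
# per second around the IOTA ring.
SPEED_OF_LIGHT = 299_792_458.0  # m/s
CIRCUMFERENCE = 40.0            # m, IOTA ring

revolutions_per_second = SPEED_OF_LIGHT / CIRCUMFERENCE
print(f"{revolutions_per_second:.2e}")  # ≈ 7.5 million per second
```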

“This machine was designed with a lot of flexibility in mind,” said Fermilab scientist Alexander Valishev, head of the team that developed and constructed IOTA.

Consider the accelerator magnets, which are responsible for the size and shape of the particle beam’s profile. At IOTA, every magnet is powered separately so that researchers can reconfigure the machine for completely different experiments in a few minutes. At other accelerator facilities, a comparable change could require a lengthy shutdown of weeks or months.

For accelerator facilities that serve outside users, the focus is typically on maximizing running time and maintaining well-understood, established beam parameters. In contrast, the IOTA team expects the accelerator to be routinely shut down, reconfigured and restarted. Its technical and operational flexibilities make it easier for outside teams to use IOTA to conduct their own experiments, exploring a variety of topics at the frontier of accelerator and beam physics.

IOTA’s versatility has already attracted groups from Lawrence Berkeley National Laboratory; Northern Illinois University; SLAC National Accelerator Laboratory; University of California, Berkeley; University of Chicago and other institutions. Not only are they conducting exciting science, but early-career researchers are also receiving valuable practical training in accelerator and beam science that can be challenging to come by.

“If you wanted to have a comparable scientific program at a more traditional facility, it would be very difficult, if not prohibitive. Typically, those facilities are designed for a narrow range of research, aren’t easily modified and require nearly continuous operation,” said Fermilab scientist Jonathan Jarvis, who works on IOTA. “But here at IOTA, we are a purpose-built facility for frontier topics in accelerator research and development, and we have those flexibilities by design.”

Fermilab scientist Alexander Valishev inspects the specially designed nonlinear insert that produces the nonlinear magnetic fields for IOTA experiments. Photo: Giulio Stancari

First results: Testing IOTA’s IO

As part of the only dedicated ring-based accelerator R&D facility for high-intensity beam physics in the United States, IOTA is designed to develop technologies to increase the number of particles in a beam without increasing the beam’s size and thus the size and cost of the accelerator. Since all particles in the beam have an identical charge, they electrically repel each other, and as more particles are packed into the beam, it can become unstable. Particles may behave chaotically and escape. It takes expertise and innovative technology to tame a dense particle beam.

To that end, IOTA researchers are investigating a novel technique called nonlinear integrable optics. The technique uses specially designed sets of magnets configured to prevent beam instabilities far more effectively than the magnet configurations used over the past 50 years.

To test the nonlinear integrable optics technique, IOTA researchers deliberately produced instability in the beam. They then measured how difficult it was to provoke unstable behavior in IOTA’s electron beam both with and without the influence of the magnetic fields.

The technique was a winner: Scientists observed that these specialized magnets significantly decreased the instability.

During the next run of the system, the team plans to more rigorously study this effect.

“The first result is merely a demonstration,” Valishev said. “But I think it’s already a big accomplishment.”

IOTA’s nonlinear magnets help prevent instabilities in high-intensity particle beams. Photo: Giulio Stancari

Watching a single electron near the speed of light

In a first for Fermilab, the researchers have also observed the circulation of a single electron.

The IOTA beam, when injected into the storage ring, can contain about a billion electrons. As the beam circulates, electrons tend to escape the beam due to collisions with one another or with stray gas molecules in the beam pipe. So if you want to see an electron fly solo around the ring, it is just a matter of waiting.

The real trick is being able to observe the last electron left “standing.”

The fast-moving electrons emit visible light as they travel along the curves of the ring. This light is synchrotron radiation, which is emitted when charged particles moving near the speed of light change direction. The light provides researchers with information about the beam, including how many electrons are in it.

IOTA researchers used the synchrotron radiation to observe the loss of electrons, one by one, until they finally witnessed a solitary electron.

This plot illustrates the decrease in the amount of measured synchrotron light every time an electron was knocked out of the particle beam.
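The counting idea behind that plot can be sketched with a toy calculation. Assuming, purely for illustration, that each electron contributes one unit of light (the real calibration is more involved), the number of electrons left in the ring is the measured intensity divided by the per-electron yield, and each loss shows up as a downward step:

```python
# Toy version of counting electrons from synchrotron light.
PER_ELECTRON_LIGHT = 1.0  # arbitrary units, hypothetical calibration

# Simulated light readings as electrons are lost one by one.
readings = [5.0, 5.0, 4.0, 3.0, 3.0, 2.0, 1.0, 1.0]
electron_counts = [round(r / PER_ELECTRON_LIGHT) for r in readings]

# Each downward step in the count marks the loss of one electron.
losses = sum(1 for a, b in zip(electron_counts, electron_counts[1:]) if b < a)
print(electron_counts[-1], losses)  # final count 1, after 4 step-down losses
```

The stepwise structure is what makes the measurement so clean: the light doesn’t fade smoothly but drops in discrete increments, one electron at a time, down to the last one.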

On their next round, rather than play the waiting game to get down to a beam of one electron, the team tried a faster, more deliberate approach: They devised a way to inject single electrons into IOTA on demand. It worked, reliably producing lone particles traveling around the ring.

The wait was over.

This feat is more than just a novel curiosity. The ability to store and observe a single electron, or even a very small number of electrons moving around at high speeds, creates opportunities to probe interesting quantum science.

“Everything we do is rather macroscopic, so you wouldn’t think of any of this facility, let alone a 40-meter ring, as a quantum instrument,” Jarvis said. “But we’ve got this situation where there’s an individual particle circulating in the ring at nearly the speed of light, and it gives us fascinating opportunities to do something that is very quantum in nature.”

For instance, in its upcoming run, IOTA will become the first facility in the world with the ability to precisely redirect synchrotron light back on the particle that generated it.

This capability opens the door to a wide variety of fundamental quantum experiments and will also enable Fermilab scientists to attempt the world’s first demonstration of a powerful technique called optical stochastic beam cooling. Generally, beam cooling methods sap accelerated particles of their chaotic or frenetic motion. Optical stochastic cooling is expected to be thousands of times stronger than the current state of the art and is a perfect example of the high-impact returns that IOTA is targeting.

“We’ve got this situation where there’s an individual particle circulating in the ring at nearly the speed of light, and it gives us fascinating opportunities to do something that is very quantum in nature.”

Accelerating into the future: proton beams, electron lenses and more

IOTA is currently set up to circulate electrons, and this work sets the stage for future, more challenging experiments with protons.

The high-energy electron beam naturally shrinks to a smaller size due to synchrotron radiation, which makes it a well-behaved system for IOTA researchers to confirm important parts of beam physics theories.

In contrast to IOTA’s electron beam, the proton beams in its forthcoming experiments will circulate at lower velocity, be significantly larger and be strongly affected by the repulsive forces between beam particles. Research into the behavior of such proton beams will be integral to understanding how nonlinear integrable optics can be effectively applied in the high-power accelerators of the future.

And with both electrons and protons in the mix, scientists can also advance to another exciting phase in IOTA’s research program: electron lenses. Electron lenses are yet another technique that researchers are investigating in their quest to create stable particle beams. This technique uses the negative charge of electrons to oppose the positive charges of protons to pull the protons into a compact, stable beam. The electron lens will also allow IOTA scientists to demonstrate the nonlinear integrable optics concept using special charge distributions rather than the specialized nonlinear magnets.

With this breadth of unique capabilities, IOTA and its team are ready for several years of exciting research.

“Frontier science requires frontier research and development, and at IOTA, we are focused on realizing those major innovations that could invigorate accelerator-based high-energy physics for the next several decades,” Jarvis said.

This work is supported by the Department of Energy Office of Science.