The long-awaited first results from the Muon g-2 experiment at the U.S. Department of Energy’s Fermi National Accelerator Laboratory show fundamental particles called muons behaving in a way that is not predicted by scientists’ best theory, the Standard Model of particle physics. This landmark result, made with unprecedented precision, confirms a discrepancy that has been gnawing at researchers for decades.
The strong evidence that muons deviate from the Standard Model calculation might hint at exciting new physics. Muons act as a window into the subatomic world and could be interacting with yet undiscovered particles or forces.
“Today is an extraordinary day, long awaited not only by us but by the whole international physics community,” said Graziano Venanzoni, co-spokesperson of the Muon g-2 experiment and physicist at the Italian National Institute for Nuclear Physics. “A large amount of credit goes to our young researchers who, with their talent, ideas and enthusiasm, have allowed us to achieve this incredible result.”

First results from the Muon g-2 experiment at Fermilab have strengthened evidence of new physics. The centerpiece of the experiment is a 50-foot-diameter superconducting magnetic storage ring, which sits in its detector hall amidst electronics racks, the muon beamline, and other equipment. This impressive experiment operates at negative 450 degrees Fahrenheit and studies the precession (or wobble) of muons as they travel through the magnetic field. Photo: Reidar Hahn, Fermilab
A muon is about 200 times as massive as its cousin, the electron. Muons occur naturally when cosmic rays strike Earth’s atmosphere, and particle accelerators at Fermilab can produce them in large numbers. Like electrons, muons act as if they have a tiny internal magnet. In a strong magnetic field, the direction of the muon’s magnet precesses, or wobbles, much like the axis of a spinning top or gyroscope. The strength of the internal magnet determines the rate at which the muon precesses in an external magnetic field and is described by a number that physicists call the g-factor. This number can be calculated with ultra-high precision.
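For a sense of scale, here is a minimal back-of-the-envelope sketch in Python, not collaboration code: it converts the measured g-factor (quoted later in this article) into the muon's anomalous precession rate using ω_a = a_μ (e/m) B, a relation that holds at the experiment's so-called "magic" muon momentum, with 1.45 tesla taken as the storage ring's nominal field strength.

```python
import math

e = 1.602176634e-19      # elementary charge (C)
m_mu = 1.883531627e-28   # muon mass (kg)
B = 1.45                 # nominal storage-ring field (T); assumed typical value

g = 2.00233184122        # experimental world-average g-factor quoted below
a_mu = (g - 2) / 2       # anomalous magnetic moment

omega_a = a_mu * (e / m_mu) * B          # anomalous precession rate (rad/s)
print(f"a_mu      = {a_mu:.11f}")
print(f"frequency = {omega_a / (2 * math.pi) / 1e3:.0f} kHz")  # roughly 229 kHz
```

In other words, the muon's spin direction drifts relative to its flight direction a few hundred thousand times per second, and it is tiny shifts in that rate that the experiment measures.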
As the muons circulate in the Muon g-2 magnet, they also interact with a quantum foam of subatomic particles popping in and out of existence. Interactions with these short-lived particles affect the value of the g-factor, causing the muons’ precession to speed up or slow down very slightly. The Standard Model predicts this so-called anomalous magnetic moment extremely precisely. But if the quantum foam contains additional forces or particles not accounted for by the Standard Model, that would tweak the muon g-factor further.
“This quantity we measure reflects the interactions of the muon with everything else in the universe. But when the theorists calculate the same quantity, using all of the known forces and particles in the Standard Model, we don’t get the same answer,” said Renee Fatemi, a physicist at the University of Kentucky and the simulations manager for the Muon g-2 experiment. “This is strong evidence that the muon is sensitive to something that is not in our best theory.”
The predecessor experiment at DOE’s Brookhaven National Laboratory, which concluded in 2001, offered hints that the muon’s behavior disagreed with the Standard Model. The new measurement from the Muon g-2 experiment at Fermilab strongly agrees with the value found at Brookhaven and diverges from theory with the most precise measurement to date.

The first result from the Muon g-2 experiment at Fermilab confirms the result from the experiment performed at Brookhaven National Lab two decades ago. Together, the two results show strong evidence that muons diverge from the Standard Model prediction. Image: Ryan Postel, Fermilab/Muon g-2 collaboration
The accepted theoretical values for the muon are:
g-factor: 2.00233183620(86)
anomalous magnetic moment: 0.00116591810(43)
[uncertainty in parentheses]
The new experimental world-average results announced by the Muon g-2 collaboration today are:
g-factor: 2.00233184122(82)
anomalous magnetic moment: 0.00116592061(41)
The combined results from Fermilab and Brookhaven show a difference with theory at a significance of 4.2 sigma, a little shy of the 5 sigma (or standard deviations) that scientists require to claim a discovery but still compelling evidence of new physics. The chance that the results are a statistical fluctuation is about 1 in 40,000.
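The 4.2-sigma figure can be reproduced directly from the numbers above. A quick Python sketch, not the collaboration's full statistical analysis, divides the experiment-theory difference in the anomalous magnetic moment by the two uncertainties added in quadrature:

```python
import math

a_exp, sig_exp = 0.00116592061, 0.00000000041  # world-average experiment
a_thy, sig_thy = 0.00116591810, 0.00000000043  # Standard Model prediction

diff = a_exp - a_thy                     # about 2.51e-9
sigma = math.hypot(sig_exp, sig_thy)     # uncertainties added in quadrature
print(f"difference   = {diff:.2e}")
print(f"significance = {diff / sigma:.1f} sigma")  # ~4.2
```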
The Fermilab experiment reuses the main component from the Brookhaven experiment, a 50-foot-diameter superconducting magnetic storage ring. In 2013, it was transported 3,200 miles by land and sea from Long Island to the Chicago suburbs, where scientists could take advantage of Fermilab’s particle accelerator and produce the most intense beam of muons in the United States. Over the next four years, researchers assembled the experiment; tuned and calibrated an incredibly uniform magnetic field; developed new techniques, instrumentation, and simulations; and thoroughly tested the entire system.

Thousands of people welcomed the Muon g-2 magnet to Fermilab in 2013. Data from the experiment’s first run has yielded a result with unprecedented precision. Data from four additional experimental runs will reveal the muon’s behavior in even more detail. Photo: Reidar Hahn, Fermilab
The Muon g-2 experiment sends a beam of muons into the storage ring, where they circulate thousands of times at nearly the speed of light. Detectors lining the ring allow scientists to determine how fast the muons are precessing.
In its first year of operation, in 2018, the Fermilab experiment collected more data than all prior muon g-factor experiments combined. With more than 200 scientists from 35 institutions in seven countries, the Muon g-2 collaboration has now finished analyzing the motion of more than 8 billion muons from that first run.
“After the 20 years that have passed since the Brookhaven experiment ended, it is so gratifying to finally be resolving this mystery,” said Fermilab scientist Chris Polly, who is a co-spokesperson for the current experiment and was a lead graduate student on the Brookhaven experiment.
Data analysis on the second and third runs of the experiment is under way, the fourth run is ongoing, and a fifth run is planned. Combining the results from all five runs will give scientists an even more precise measurement of the muon’s wobble, revealing with greater certainty whether new physics is hiding within the quantum foam.
“So far we have analyzed less than 6% of the data that the experiment will eventually collect. Although these first results are telling us that there is an intriguing difference with the Standard Model, we will learn much more in the next couple of years,” Polly said.
“Pinning down the subtle behavior of muons is a remarkable achievement that will guide the search for physics beyond the Standard Model for years to come,” said Fermilab Deputy Director of Research Joe Lykken. “This is an exciting time for particle physics research, and Fermilab is at the forefront.”
A press conference discussing the Muon g-2 experiment’s first results will be held at noon US Central Time on April 7. Reporters should contact media@fnal.gov for connection information. More images of the Muon g-2 experiment are available in the Muon g-2 gallery. More information about the experiment is available at the Muon g-2 website.
More ways to engage: Take a virtual 360 tour of the Muon g-2 experiment or watch a guided video tour. View the full Muon g-2 video playlist. Register for a free virtual public lecture on April 17 that will explain the new Muon g-2 results. Print your own “Marvelous Muon” poster.
The Muon g-2 experiment is supported by the Department of Energy (US); National Science Foundation (US); Istituto Nazionale di Fisica Nucleare (Italy); Science and Technology Facilities Council (UK); Royal Society (UK); European Union’s Horizon 2020; National Natural Science Foundation of China; MSIP, NRF and IBS-R017-D1 (Republic of Korea); and German Research Foundation (DFG).
Fermilab is America’s premier national laboratory for particle physics research. A U.S. Department of Energy Office of Science laboratory, Fermilab is located near Chicago, Illinois, and operated under contract by the Fermi Research Alliance LLC. Visit Fermilab’s website at https://www.fnal.gov and follow us on Twitter @Fermilab.
The DOE Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://science.energy.gov.
Quantum computing promises to harness the strange properties of quantum mechanics in machines that will outperform even the most powerful supercomputers of today. But the full extent of their applications, it turns out, isn’t yet clear.
To fully realize the potential of quantum computing, scientists must start with the basics: developing step-by-step procedures, or algorithms, for quantum computers to perform simple tasks, like the factoring of a number. These simple algorithms can then be used as building blocks for more complicated calculations.
Prasanth Shyamsundar, a postdoctoral research associate at the Department of Energy’s Fermilab Quantum Institute, has done just that. In a preprint paper released in February, he announced two new algorithms that build upon existing work in the field to further diversify the types of problems quantum computers can solve.
“There are specific tasks that can be done faster using quantum computers, and I’m interested in understanding what those are,” Shyamsundar said. “These new algorithms perform generic tasks, and I am hoping they will inspire people to design even more algorithms around them.”
Shyamsundar’s quantum algorithms, in particular, are useful when searching for a specific entry in an unsorted collection of data. Consider a toy example: Suppose we have a stack of 100 vinyl records, and we task a computer with finding the one jazz album in the stack.
Classically, a computer would need to examine each individual record and make a yes-or-no decision about whether it is the album we are searching for, based on a given set of search criteria.
“You have a query, and the computer gives you an output,” Shyamsundar said. “In this case, the query is: Does this record satisfy my set of criteria? And the output is yes or no.”
Finding the record in question could take only a few queries if it is near the top of the stack, or closer to 100 queries if the record is near the bottom. On average, a classical computer would locate the correct record with 50 queries, or half the total number in the stack.
A quantum computer, on the other hand, would locate the jazz album much faster. This is because it has the ability to analyze all of the records at once, using a quantum effect called superposition.
With this property, the number of queries needed to locate the jazz album is only about 10, the square root of the number of records in the stack. This phenomenon is known as quantum speedup and is a result of the unique way quantum computers store information.
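The scaling is easy to check for a 100-record stack. In this short Python sketch, the π/4 prefactor in the last line is the standard iteration count for the Grover-style search described in the next section:

```python
import math

N = 100                                   # records in the stack
classical_avg = N / 2                     # linear scan: 50 queries on average
sqrt_scale = math.sqrt(N)                 # quantum square-root scaling: 10
grover_iters = math.floor(math.pi / 4 * math.sqrt(N))  # optimal rounds: 7

print(classical_avg, sqrt_scale, grover_iters)  # 50.0 10.0 7
```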
The quantum advantage
Classical computers use units of storage called bits to save and analyze data. A bit can be assigned one of two values: 0 or 1.
The quantum version of this is called a qubit. Qubits can be either 0 or 1 as well, but unlike their classical counterparts, they can also be a combination of both values at the same time. This is known as superposition, and allows quantum computers to assess multiple records, or states, simultaneously.

Qubits can be in a superposition of 0 and 1, while classical bits can be only one or the other. Image: Jerald Pinson
“If a single qubit can be in a superposition of 0 and 1, that means two qubits can be in a superposition of four possible states,” Shyamsundar said. The number of accessible states grows exponentially with the number of qubits used.
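The bookkeeping can be simulated with plain NumPy (this sketch only mimics a quantum computer; it does not speed anything up): n qubits are described by 2^n complex amplitudes, and a uniform superposition weights every basis state equally.

```python
import numpy as np

n_qubits = 2
dim = 2 ** n_qubits                    # accessible states grow exponentially: 4

state = np.full(dim, 1 / np.sqrt(dim)) # uniform superposition of |00>..|11>
probs = np.abs(state) ** 2             # Born rule: each outcome has prob 0.25
print(dim, probs)                      # 4 [0.25 0.25 0.25 0.25]
```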
Seems powerful, right? It’s a huge advantage when approaching problems that require extensive computing power. The downside, however, is that superpositions are probabilistic in nature — meaning they won’t yield definite outputs about the individual states themselves.
Think of it like a coin flip. When in the air, the state of the coin is indeterminate; it has a 50% probability of landing on either heads or tails. Only when the coin reaches the ground does it settle into a value that can be determined precisely.
Quantum superpositions work in a similar way. They’re a combination of individual states, each with its own probability of showing up when measured.
But the process of measuring won’t necessarily collapse the superposition into the value we are looking for. That depends on the probability associated with the correct state.
“If we create a superposition of records and measure it, we’re not necessarily going to get the right answer,” Shyamsundar said. “It’s just going to give us one of the records.”
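A small simulated example makes the point. In this hypothetical four-record superposition, the jazz album carries a 64% probability, so repeated measurements usually, but not always, return it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-record superposition: record 2 is the jazz album, with
# amplitude 0.8 (probability 0.64); the other records share the rest.
amplitudes = np.array([np.sqrt(0.12), np.sqrt(0.12), 0.8, np.sqrt(0.12)])
probs = amplitudes ** 2
probs /= probs.sum()                        # guard against rounding error

draws = rng.choice(4, size=10, p=probs)     # ten simulated measurements
print(draws)                                # mostly 2s, but not always
```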
To fully capitalize on the speedup quantum computers provide, then, scientists must somehow be able to extract the correct record they are looking for. If they cannot, the advantage over classical computers is lost.
Amplifying the probabilities of correct states
Luckily, scientists developed an algorithm nearly 25 years ago that performs a series of operations on a superposition to amplify the probabilities of certain individual states and suppress others, depending on a given set of search criteria. That means when it comes time to measure, the superposition will most likely collapse into the state they are searching for.
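That algorithm is Grover's quantum search, published in 1996, and its amplitude-amplification step is simple enough to simulate classically. In this NumPy sketch (the stack size and the album's position are illustrative choices), an oracle flips the sign of the marked state and a diffusion step reflects every amplitude about the mean, driving the marked state's probability toward 1:

```python
import numpy as np

N = 100
jazz = 42                                  # hypothetical position of the album

state = np.full(N, 1 / np.sqrt(N))         # uniform superposition over records
iterations = int(np.pi / 4 * np.sqrt(N))   # 7 rounds for N = 100

for _ in range(iterations):
    state[jazz] *= -1                      # oracle: sign-flip the "good" state
    state = 2 * state.mean() - state       # diffusion: reflect about the mean

print(f"P(jazz) = {state[jazz] ** 2:.3f}")  # ~0.995 after 7 rounds
```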
But the limitation of this algorithm is that it can be applied only to Boolean situations, or ones that can be queried with a yes or no output, like searching for a jazz album in a stack of several records.

A quantum computer can amplify the probabilities of certain individual records and suppress others, as indicated by the size and color of the disks in the output superposition. Standard techniques are able to assess only Boolean scenarios, or ones that can be answered with a yes or no output. Illustration: Prasanth Shyamsundar
Scenarios with non-Boolean outputs present a challenge. Music genres aren’t precisely defined, so a better approach to the jazz record problem might be to ask the computer to rate the albums by how “jazzy” they are. This could look like assigning each record a score on a scale from 1 to 10.

New amplification algorithms expand the utility of quantum computers to handle non-Boolean scenarios, allowing for an extended range of values to characterize individual records, such as the scores assigned to each disk in the output superposition above. Illustration: Prasanth Shyamsundar
Previously, scientists would have to convert non-Boolean problems such as this into ones with Boolean outputs.
“You’d set a threshold and say any state below this threshold is bad, and any state above this threshold is good,” Shyamsundar said. In our jazz record example, that would be the equivalent of saying anything rated 5 or below isn’t jazz, while anything rated above 5 is.
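In code, that workaround amounts to collapsing scores into labels, discarding everything except which side of the threshold each record falls on (the scores below are invented for illustration):

```python
scores = {"record_a": 3, "record_b": 7, "record_c": 5, "record_d": 9}
threshold = 5

# Boolean conversion: keep only which side of the threshold each score is on.
labels = {name: score > threshold for name, score in scores.items()}
print(labels)  # record_c's score of 5 lands on the "not jazz" side
```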
But Shyamsundar has extended this computation such that a Boolean conversion is no longer necessary. He calls this new technique the non-Boolean quantum amplitude amplification algorithm.
“If a problem requires a yes-or-no answer, the new algorithm is identical to the previous one,” Shyamsundar said. “But this now becomes open to more tasks; there are a lot of problems that can be solved more naturally in terms of a score rather than a yes-or-no output.”
A second algorithm introduced in the paper, dubbed the quantum mean estimation algorithm, allows scientists to estimate the average rating of all the records. In other words, it can assess how “jazzy” the stack is as a whole.
Both algorithms do away with having to reduce scenarios into computations with only two types of output, and instead allow for a range of outputs to more accurately characterize information with a quantum speedup over classical computing methods.
Procedures like these may seem primitive and abstract, but they build an essential foundation for more complex and useful tasks in the quantum future. Within physics, the newly introduced algorithms may eventually allow scientists to reach target sensitivities faster in certain experiments. Shyamsundar is also planning to leverage these algorithms for use in quantum machine learning.
And outside the realm of science? The possibilities are yet to be discovered.
“We’re still in the early days of quantum computing,” Shyamsundar said, noting that curiosity often drives innovation. “These algorithms are going to have an impact on how we use quantum computers in the future.”
This work is supported by the Department of Energy’s Office of Science Office of High Energy Physics QuantISED program.
Supersymmetry is a theory that predicts that all known particles have as-yet-undiscovered partner particles, or superpartners. These superpartners, according to this theory, could be produced in proton-proton collisions at the Large Hadron Collider at the European Laboratory for Particle Physics, CERN. Scientists have been actively exploring the nature of supersymmetry for over 40 years, in part because it can help explain the mass of the Higgs boson, currently measured to be about 125 GeV, roughly 133 times the mass of the proton. Searches for new partner particles with the CMS and ATLAS experiments at the LHC most often look for a large imbalance of energy resulting from supersymmetric particles that escape without detection. So far, these searches have found no sign of supersymmetry.
One team working on these searches comprises U.S. CMS physicists from Fermilab and associated universities collaborating under the umbrella of the Fermilab LHC Physics Center, also known as the LPC. This team is the first to perform a new kind of search for “stealthy” supersymmetry that does not result in an obvious signature of large energy imbalance. Instead, the LPC team looks for collisions that result in an unusually large number of particles in the CMS detector. The team uses a machine-learning technique in which two neural networks, pitted against each other, discover subtle differences between signal and background events. Those differences are then used to pick out, from the billion collisions per second that the LHC produces, the few interesting collisions that may come from stealthy supersymmetry.
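For readers curious what two networks pitted against each other can look like, here is a generic toy sketch of adversarial training in PyTorch. It is not the LPC team's model (their actual technique is described in the physics briefing mentioned below); it only illustrates the broad pattern, in which a discriminator learns to separate real samples from a generator's fakes while each network improves by exploiting the other's weaknesses.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "real" data: 1-D samples from a Gaussian centered at 3.
def real_batch(n):
    return torch.randn(n, 1) + 3.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = real_batch(64)
    fake = generator(torch.randn(64, 8))

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator call its fakes real.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# The generated distribution should drift toward the real one (mean near 3).
print(generator(torch.randn(1000, 8)).mean().item())
```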
CMS recently published a physics briefing describing this analysis. In it, you can read more about stealthy supersymmetry, the importance of dueling neural networks and what the LPC team has learned from the data.
Jim Hirschauer is a Fermilab scientist.
Fermilab is supported by the Office of Science of the U.S. Department of Energy. For more information, please visit energy.gov/science.