About 10 years ago, the world’s most powerful X-ray laser — the Linac Coherent Light Source — made its debut at SLAC National Accelerator Laboratory. Now its successor, LCLS-II, an X-ray laser in a class of its own, is under construction at SLAC, with support from four other DOE national laboratories.
Researchers in biology, chemistry and physics will use LCLS-II to probe fundamental pieces of matter and create 3-D movies of complex molecules in action, making it a powerful, versatile instrument at the forefront of discovery.
The project is coming together thanks largely to a crucial advance in the fields of particle and nuclear physics: superconducting accelerator technology. DOE’s Fermilab and Thomas Jefferson National Accelerator Facility are building the superconducting modules necessary for the accelerator upgrade for LCLS-II.

SLAC National Accelerator Laboratory is upgrading its Linac Coherent Light Source, an X-ray laser, to be a more powerful tool for science. Both Fermilab and Thomas Jefferson National Accelerator Facility are contributing to the machine’s superconducting accelerator, seen here in the left part of the diagram. Image: SLAC
A powerful tool for discovery
Inside SLAC’s linear particle accelerator today, bursts of electrons are accelerated to energies that allow LCLS to fire off 120 X-ray pulses per second. Each pulse lasts just quadrillionths of a second – a time scale measured in femtoseconds – providing scientists with a flipbook-like look at molecular processes.
“Over time, you can build up a molecular movie of how different systems evolve,” said SLAC scientist Mike Dunne, director of LCLS. “That’s proven to be quite remarkable, but it also has a number of limitations. That’s where LCLS-II comes in.”
Using state-of-the-art particle accelerator technology, LCLS-II will provide a staggering million pulses per second – roughly 8,000 times the current rate. The advance will provide a more detailed look into how chemical, material and biological systems evolve on the time scale on which chemical bonds are made and broken.
To understand the difference, imagine you’re an alien visiting Earth. If you take one image of a city each day, you will notice the roads and the cars that drive on them, but you can’t tell how fast the cars move or where they go. Taking a snapshot every few seconds, however, would give you a highly detailed picture of how cars flow along the roads and would reveal phenomena like traffic jams. LCLS-II will provide this kind of step change in information about chemical, biological and material processes.
To reach this level of detail, SLAC needs to implement technology developed for particle physics – superconducting acceleration cavities – to power the LCLS-II X-ray free-electron laser, or XFEL.

This is an illustration of the electron accelerator of SLAC’s LCLS-II X-ray laser. The first third of the copper accelerator will be replaced with a superconducting one. The red tubes represent cryomodules, which are provided by Fermilab and Jefferson Lab. Image: SLAC
Accelerating science
Cavities are structures that impart energy to particle beams, accelerating the particles within them. Like modern particle accelerators, LCLS-II will take advantage of superconducting radio-frequency cavity technology, also called SRF technology. When cooled to 2 Kelvin, superconducting cavities allow electric current to flow without any resistance. Just as reducing friction makes it easier to slide a heavy object across the ground, eliminating electrical resistance saves energy, allowing accelerators to reach higher power at lower cost.
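The energy savings come from resistive losses that vanish in the superconducting state. As a reminder of the textbook relation, not an LCLS-II calculation, the power dissipated as heat by a current $I$ flowing against a resistance $R$ is

$$P_{\mathrm{loss}} = I^2 R,$$

so as the resistance of the cavity walls drops toward zero, so does the power wasted as heat.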
“The SRF technology is the enabling step for LCLS-II’s million pulses per second,” Dunne said. “Jefferson Lab and Fermilab have been developing this technology for years. The core expertise to make LCLS-II possible lives at these labs.”
Fermilab modified a cryomodule design from DESY, in Germany, and specially prepared the cavities to draw record-setting performance from the cavities and cryomodules that will be used for LCLS-II.
The cylinder-shaped cryomodules, about a meter in diameter, act as specialized containers for housing the cavities. Inside, ultracold liquid helium continuously flows around the cavities to ensure they maintain the unwavering 2 Kelvin essential for superconductivity. Lined up end to end, 37 cryomodules will power the LCLS-II XFEL.

Thirty-seven cryomodules lined end to end — half from Fermilab and half from Jefferson Lab — will make up the bulk of the LCLS-II accelerator. Photo: Reidar Hahn
Fermilab and Jefferson Lab share the responsibility for fabricating, testing and delivering the cryomodules to SLAC. Together, the two labs will build all the cryomodules that will house the cavities: Fermilab will provide 19, and Jefferson Lab will provide the other 18. The largest of these cylinders are 12 meters (40 feet) long, about the length of a school bus. Each lab will also send a few spares to SLAC.
The cavities and their cryomodules represent breakthroughs in SRF technology, providing high-energy beams far more efficiently than previously possible. Researchers have improved SRF cavities to achieve record gradients, a measure of how much energy a cavity imparts to the beam over a given distance. The cavities also recently achieved an unprecedented result in their energy efficiency, doubling that of the previous state-of-the-art design while reducing cost.
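To put a rough number on what a gradient means (the figure here is purely illustrative, not the LCLS-II specification), an electron passing through a cavity with an accelerating gradient of 16 megavolts per meter gains

$$\Delta E = e \times 16\ \mathrm{MV/m} \times 1\ \mathrm{m} = 16\ \mathrm{MeV}$$

for every meter of cavity it traverses, so a higher gradient delivers the same beam energy in a shorter accelerator.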
The scientists and engineers were meticulous in developing LCLS-II’s accelerator components. For example, while creating the cryomodules and cavities, Fermilab used earthquake-detecting equipment to determine whether the vibrations affecting the cavities’ performance came from inside or outside the cryomodules. Once they found the cause, they changed the configuration of the liquid-helium pipes to reduce those vibrations.

Each cryomodule houses a string of acceleration cavities like this one. Cavities propel the particles as the particles move through them. At LCLS-II, electrons will charge through one cavity after another, picking up energy as they go. Pictured here is a 1.3-gigahertz cavity. Photo: Reidar Hahn
Fermilab and Jefferson Lab will also send scientists and engineers to assist SLAC when LCLS-II first powers up the cryomodules.
Jefferson Lab is also providing the design and procurement of the cryogenic refrigeration plants that supply the liquid helium needed to cool the SRF cavities to 2 Kelvin, while Fermilab is providing the design and procurement of components for the cryogenic distribution systems, which move the liquid helium from these plants to the cryomodules. Berkeley Lab and Argonne National Laboratory are also contributing components for LCLS-II, including the source that provides the electron beam and the magnets that force the beam into the wave-like motion that creates the X-ray light. Cornell University supported the R&D for LCLS-II cavity prototypes and helped process the cavities.
“We’re all in this together,” said Rich Stanek, LCLS-II Fermilab senior team lead. “This close collaboration of national laboratories bodes well for future projects. It has benefits over and above the project itself.”
Those benefits have made LCLS-II one of the top priority projects for DOE’s Office of Science, and they extend beyond the interests of the partner laboratories. LCLS-II is expected to build on its progenitor, diving even deeper into fields ranging from biology and chemistry to materials science and astrophysics.
Opening up, diving deep
Eric Isaacs, the president of the Carnegie Institution for Science and chair of the SLAC Scientific Policy Committee, has already reviewed a number of proposals for LCLS-II.
“There are any number of processes that occur on very short time scales,” said Isaacs, a condensed matter physicist by training. “And LCLS-II opens up whole new areas of the sciences to study.”
One proposed experiment will use the X-ray laser to probe material under conditions similar to those at the very center of our planet and gain insight into how Earth formed. Astrophysicists could then adapt that information in their search for life on exoplanets.
With LCLS-II, scientists will be able to study photosynthesis at a deeper level than ever before. The hope is that humans will one day be able to reverse engineer photosynthesis and harness a new biological tool for generating energy.
One of the ways LCLS-II will advance research in biology is by mapping proteins and enzymes in conditions resembling their normal environments. This deeper understanding will pave the way for scientists to create better drugs.
Scientists also intend to use LCLS-II to research superconductors, bringing the machine’s use of accelerator technology full circle. Current superconductors are limited by their need for specific, low temperatures. By understanding the atomic phenomenon of superconductivity, researchers might be able to create a room-temperature superconductor.
“Particle and nuclear physics have developed the superconducting technologies and capabilities that LCLS-II will use,” Isaacs said. “These advancements will enable LCLS-II to look at some of the most important questions across many branches of science.”
As with any major advancement, the true transformative power of LCLS-II will be revealed once its X-rays illuminate a sample for the first time. LCLS-II is planned to start up in 2021.
This work is supported by the DOE Office of Science program for Basic Energy Sciences.
Fermilab scientist Erik Ramberg and the Fermilab Archives present a new exhibit, “The Response to Relativity,” as part of their series on the history of physics in print. The exhibit can be viewed in the glass display case in the Fermilab Art Gallery through the end of September. The gallery is open Monday through Friday, 8 a.m. to 4:30 p.m. It is located on the second floor of Fermilab’s main office building, Wilson Hall.
The exhibit showcases pieces on special and general relativity: publications in English and German by Albert Einstein, other scientific publications, reproductions of newspaper articles that show the public reaction to the discovery, and a book on relativity intended for a general audience.
1905 was a “Wunderjahr” (miracle year) for Albert Einstein. He published groundbreaking work on his discovery of special relativity and on his famous equation: E = mc² (a system’s energy is equal to its mass times the square of the speed of light). A copy of “Ist die Trägheit eines Körpers von seinem Energieinhalt abhängig?” (“Does the Inertia of a Body Depend Upon its Energy Content?”) is on display, along with copies of Über die spezielle und die allgemeine Relativitätstheorie, translated into English as Relativity: The Special and the General Theory.
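As a back-of-the-envelope illustration of what the equation says (the one-gram figure is chosen arbitrarily for the example), converting a single gram of mass entirely into energy would release

$$E = mc^2 = (10^{-3}\ \mathrm{kg}) \times (3\times 10^{8}\ \mathrm{m/s})^2 = 9\times 10^{13}\ \mathrm{J},$$

roughly a full day’s output of a gigawatt-scale power plant.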
Headlines from The New York Times and a diagram from The Illustrated London News, both published in 1919, demonstrate how the public started to show an interest in Einstein’s theory. In later years, many books explaining general relativity for general audiences were published, further fixing Einstein’s theory in the popular consciousness. The example on display shows pages from Relativity for the Million illustrating some of the theory’s predictions.
This exhibit was designed by Valerie Higgins, Karin Kemp and Erik Ramberg. Exhibited items are from Erik Ramberg’s collection.

Learn about the reaction from the public in the early 20th century to the discovery of relativity. Come to Fermilab’s latest science history exhibit, located on Wilson Hall’s second-floor art gallery. Photo: Valerie Higgins

Fermilab scientist Nhan Tran is developing computer systems to cope with the increasing amounts of data that particle colliders produce. Photo: Reidar Hahn
The field of particle physics produces colossal amounts of data as it pushes the frontiers of human understanding. Fermilab scientist Nhan Tran is working to make sure researchers have the innovative techniques and technology they need to handle the data deluge and to select the details that are likely to reveal exciting new physics phenomena.
And now he has received a prestigious award to advance this work. The Department of Energy Early Career Research Award will provide Tran with $2.5 million over five years to lead efforts to apply artificial intelligence to push the frontiers of both physics and technology.
Tran will use a type of artificial intelligence called deep learning to expand the capabilities of particle collider research. Deep learning uses a complex artificial neural network, inspired by biological brains, to allow a computer to learn how to perform a complex task, such as identifying an object in a picture, without being explicitly programmed for that task. Each of the many layers of artificial neurons in the network modifies the data before passing it on to the next part of the network. The program adjusts how these changes occur based on training examples provided to it, configuring itself for the desired task.
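As a rough illustration of the general technique, and not of the networks Tran’s team actually uses, the short Python sketch below trains a tiny neural network with one hidden layer of artificial neurons on made-up, labeled data; the data, layer sizes and learning rate are all invented for the example.

```python
import numpy as np

# Made-up "events": two features per event, labeled 0 or 1.
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2))
y = (x[:, 0] + x[:, 1] > 0).astype(float).reshape(-1, 1)

# One hidden layer of artificial neurons; weights start out random.
w1 = rng.normal(scale=0.5, size=(2, 8))
b1 = np.zeros(8)
w2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5  # learning rate, chosen arbitrarily for this toy problem
for step in range(2000):
    # Forward pass: each layer modifies the data and passes it on.
    h = np.tanh(x @ w1 + b1)
    p = sigmoid(h @ w2 + b2)

    # Backward pass: adjust the weights based on the labeled examples.
    grad_p = (p - y) / len(x)
    grad_w2 = h.T @ grad_p
    grad_b2 = grad_p.sum(axis=0)
    grad_h = (grad_p @ w2.T) * (1.0 - h**2)
    grad_w1 = x.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    w1 -= lr * grad_w1
    b1 -= lr * grad_b1
    w2 -= lr * grad_w2
    b2 -= lr * grad_b2

print(f"training accuracy: {((p > 0.5) == y).mean():.2f}")
```

Real collider networks are far larger and are trained on simulated and recorded physics events, but the principle is the same: layers that transform the data, and weights that adjust themselves to examples.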
“As we get more and more data, computing is going to be a real issue,” Tran said. “Using new types of computing hardware with deep learning algorithms is a promising way to overcome this issue.”
Tran is applying his expertise to expand the impact of experiments at the Large Hadron Collider, the 17-mile-circumference particle collider at CERN in Switzerland, in two major ways. He will use the award funds to develop systems that will improve how computers handle the LHC data flood. He will also further develop a technique he pioneered that allows scientists to identify particles called Higgs bosons in the LHC data; the Higgs boson’s discovery in 2012 led to a Nobel Prize.
At the LHC, beams of protons collide to produce other particles, including Higgs bosons, and these post-collision particles carry a wide range of momenta. Tran is interested in using deep learning to identify Higgs bosons with high momentum and to study how they decay into other particles. Studying Higgs bosons under these new conditions opens the door to discovering new physics.
“Observing these specific events was considered to be impossible at hadron colliders,” said Fermilab senior scientist Anadi Canepa, head of the CMS Department in which Tran works. “And then Nhan and collaborators came up with this new technique, and he’s extracting much more than we thought we could from the data we collect.”
His work on the Higgs boson will be helped by the other thrust of his award-supported work: developing computer systems to cope with the increasing amounts of data that particle colliders produce. Rather than simply improving general-purpose computing technologies, Tran plans to develop systems that combine hardware specialized for efficiently performing specific types of computations, exceeding what traditional computer technology can do for those tasks.
“Nhan is a very clear thinker with a keen taste for where to focus his efforts,” said Fermilab scientist Gabriel Perdue, who leads artificial intelligence efforts at Fermilab. “He knows better than anyone that what you choose to work on is more important than any other decision you make, and his ability to figure out the most important problems and tackle them directly really sets him apart. He really understands the point of highest leverage for AI in high-energy physics and is making a huge difference right at that point.”
Tran was introduced to the world of advanced computer technology through his postdoctoral physics research on trigger systems, which select which collision data to keep from among the hundreds of millions of particle collisions generated every second at the LHC. He and researchers in industry are now exploring how the high-tech electronics he worked on could be used to improve computers.
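The basic job of a trigger can be sketched in a few lines of Python. The event format, event count and threshold below are invented for illustration and bear no relation to the actual LHC trigger menus:

```python
import random

def fake_event():
    """Generate a made-up event with a random transverse momentum in GeV."""
    return {"pt": random.expovariate(1 / 20.0)}

THRESHOLD_GEV = 200.0  # hypothetical cut; real triggers combine many criteria

kept = []
for _ in range(1_000_000):           # stand-in for the torrent of collisions
    event = fake_event()
    if event["pt"] > THRESHOLD_GEV:  # the trigger decision
        kept.append(event)           # only selected events are stored

print(f"kept {len(kept)} of 1,000,000 events")
```

The real systems have to make decisions like this within millionths of a second, in specialized electronics, which is where Tran’s work connects physics to computing hardware.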
Tran plans to couple more efficient hardware developed by tech companies, such as Microsoft and Xilinx, with Fermilab’s existing computer infrastructure to meet future demands.
“People in the computer industry are always interested to hear about our problems, because we present them with very challenging data sets in terms of their complexity, their size and our own unique requirements,” Tran said. “We generally present them with problems that can push the boundaries of their own technology.”
Tran plans to use the award to purchase new computer hardware, to hire a postdoctoral researcher for the projects and to enable experts at Fermilab to determine how to apply novel computing systems on a large scale.
“Nhan is internationally recognized as an innovator who completes his ideas,” Canepa said. “If he has an idea, that idea then becomes a reality. He has demonstrated that in challenging situations, and every time, he succeeded. He also chooses the right ideas. He really sees what are the main challenges — what is pulling us back from extracting the most information out of our precious data — and he identifies what’s the most relevant and impactful improvement and goes after that.”
Tran says Fermilab is a great environment for his work. He appreciates the excellent postdoctoral researchers who come to the lab and the ability to just walk up or down a floor and consult with experts, especially on the computing topics in which he is not formally trained.
“It’s an honor to receive this award,” Tran said. “And it’s really exciting to explore new techniques in deep learning and push it as far as we can go, both on the physics side and the technical side.”
This work is supported by the Department of Energy Office of Science.
To meet the evolving needs of high-energy physics experiments, the underlying computing infrastructure must also evolve. Say hi to HEPCloud, the new, flexible way of meeting the peak computing demands of high-energy physics experiments using supercomputers, commercial services and other resources.
Five years ago, Fermilab scientific computing experts began addressing the computing resource requirements for research occurring today and in the next decade. Back then, in 2014, some of Fermilab’s neutrino programs were just starting up. Looking further into the future, plans were under way for two big projects. One was Fermilab’s participation in the future High-Luminosity Large Hadron Collider at the European laboratory CERN. The other was the expansion of the Fermilab-hosted neutrino program, including the international Deep Underground Neutrino Experiment. All of these programs would be accompanied by unprecedented data demands.
To meet these demands, the experts had to change the way they did business.
HEPCloud, the flagship project pioneered by Fermilab, changes the computing landscape because it employs an elastic computing model. Tested successfully over the last couple of years, it officially went into production as a service for Fermilab researchers this spring.

Scientists on Fermilab’s NOvA experiment were able to execute around 2 million hardware threads at a supercomputer at the Office of Science’s National Energy Research Scientific Computing Center. And scientists on the CMS experiment have been running workflows using HEPCloud at NERSC as a pilot project. Photo: Roy Kaltschmidt, Lawrence Berkeley National Laboratory
Experiments currently have some fixed computing capacity that meets, but doesn’t overshoot, their everyday needs. For times of peak demand, HEPCloud enables elasticity: It allows experiments to rent computing resources from other sources, such as supercomputers and commercial clouds, and manages them to satisfy peak demand. The prior method was to purchase local resources that, on a day-to-day basis, overshoot the experiments’ needs. In this new way, HEPCloud reduces the cost of providing computing capacity.
“Traditionally, we would buy enough computers for peak capacity and put them in our local data center to cover our needs,” said Fermilab scientist Panagiotis Spentzouris, former HEPCloud project sponsor and a driving force behind HEPCloud. “However, the needs of experiments are not steady. They have peaks and valleys, so you want an elastic facility.”
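A highly simplified sketch of that elastic idea follows. The capacities, pool names and numbers are invented for illustration; this is not HEPCloud’s actual decision engine, which also weighs cost, workload type and other factors:

```python
# Use the fixed local capacity first; rent outside resources only for overflow.
LOCAL_CORES = 10_000            # hypothetical on-site capacity
ELASTIC_POOLS = {               # hypothetical rentable pools, in order of preference
    "commercial_cloud": 50_000,
    "hpc_allocation": 150_000,
}

def provision(demand_cores):
    """Return how many cores to draw from each resource for a given demand."""
    plan = {"local": min(demand_cores, LOCAL_CORES)}
    remaining = demand_cores - plan["local"]
    for pool, capacity in ELASTIC_POOLS.items():
        take = min(remaining, capacity)
        if take:
            plan[pool] = take
        remaining -= take
    return plan

print(provision(8_000))    # an everyday load fits on site
print(provision(120_000))  # a peak spills over into rented resources
```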
In addition, HEPCloud optimizes resource usage across all types, whether these resources are on site at Fermilab, on a grid such as Open Science Grid, in a cloud such as Amazon or Google, or at supercomputing centers like those run by the DOE Office of Science Advanced Scientific Computing Research program (ASCR). And it provides a uniform interface for scientists to easily access these resources without needing expert knowledge about where and how best to run their jobs.
The idea to create a virtual facility to extend Fermilab’s computing resources began in 2014, when Spentzouris and Fermilab scientist Lothar Bauerdick began exploring ways to best provide resources for experiments at CERN’s Large Hadron Collider. The idea was to provide those resources based on the overall experiment needs rather than a certain amount of horsepower. After many planning sessions with computing experts from the CMS experiment at the LHC and beyond, and after a long period of hammering out the idea, a scientific facility called “One Facility” was born. DOE Associate Director of Science for High Energy Physics Jim Siegrist coined the name “HEPCloud” — a computing cloud for high-energy physics — during a general discussion about a solution for LHC computing demands. But interest beyond high-energy physics was also significant. DOE Associate Director of Science for Advanced Scientific Computing Research Barbara Helland was interested in HEPCloud for its relevancy to other Office of Science computing needs.

The CMS detector at CERN collects data from particle collisions at the Large Hadron Collider. Now that HEPCloud is in production, CMS scientists will be able to run all of their physics workflows on the expanded resources made available through HEPCloud. Photo: CERN
The project was a collaborative one. In addition to many individuals at Fermilab, Miron Livny at the University of Wisconsin-Madison contributed to the design, enabling HEPCloud to use the workload management system known as Condor (now HTCondor), which is used for all of the lab’s current grid activities.
Since its inception, HEPCloud has achieved several milestones as it moved through the development phases leading up to production. The project team first demonstrated the use of cloud computing on a significant scale in February 2016, when the CMS experiment used HEPCloud to run about 60,000 cores on the Amazon cloud, AWS. In November 2016, CMS again used HEPCloud to run 160,000 cores using Google Cloud Services, doubling the total size of CMS’s computing worldwide. Most recently, in May 2018, NOvA scientists were able to execute around 2 million hardware threads at a supercomputer at the Office of Science’s National Energy Research Scientific Computing Center (NERSC), increasing both the scale and the amount of resources provided. During these activities, the experiments were executing and benefiting from real physics workflows. NOvA was even able to report significant scientific results at the Neutrino 2018 conference in Germany, one of the most attended conferences in neutrino physics.
CMS has been running workflows using HEPCloud at NERSC as a pilot project. Now that HEPCloud is in production, CMS scientists will be able to run all of their physics workflows on the expanded resources made available through HEPCloud.
Next, HEPCloud project members will work to expand the reach of HEPCloud even further, enabling experiments to use the leadership-class supercomputing facilities run by ASCR at Argonne National Laboratory and Oak Ridge National Laboratory.
Fermilab experts are working to ensure that, eventually, all Fermilab experiments will be configured to use these extended computing resources.
This work is supported by the DOE Office of Science.
Editor’s note: This article has been corrected. CMS’s November 2016 use of HEPCloud doubled the size of the CMS experiment’s computing worldwide, not the size of the LHC’s computing worldwide.

Karen Kosky and her team maintain the Fermilab site and keep the lab’s conventional facilities and property operations running smoothly. And there is a reason she keeps a large pipe on her desk. Photo: Reidar Hahn
How long have you been at Fermilab?
About two and a half years. I came in as the deputy head of the Facilities Engineering Services Section and was promoted to head about a year ago.
What is your role at Fermilab?
That’s a tough one, you know.
The Facilities Engineering Services Section is a group that maintains all conventional facilities and utilities across our site. We’re a team that designs, constructs, operates and maintains buildings and utilities. Our team also manages all of the open space on site. We even manage the bison.
And then we have a team dedicated to managing personal property, which includes shipping, receiving, tracking and disposing of things you can move, like laptops, chairs, oscilloscopes and vehicles.
It’s a diverse team. And my role is really just overseeing all of those operations and making sure that we are paying attention to meeting laboratory goals and contract requirements.
What is a day in your life like?
A neat part of the job is its variety. We could be meeting on any given day to discuss employee development strategies, bison herd issues or long-term strategic financial planning for site infrastructure.
It’s a pretty diverse set of day-to-day experiences.
What are you working on now?
This is a unique time to be working in conventional facilities at Fermilab. Our facilities have reached an age where they need a lot of attention. We are in the middle of putting together some proposals for significant infrastructure funding to bring more resources to the site so we can refresh it and renew some of the infrastructure that keeps this site running.
What is your favorite thing about working at Fermilab?
It’s got to be the people, right? I mean, what a fascinating place to work, with brilliant people from all over the world pursuing these fundamental questions about life and the universe.
You know, I work in a kind of ordinary field of conventional facilities and property management. But when you do that in the environment of this fascinating science laboratory, it’s just a neat place to come to work every day.
What is something that might surprise other Fermilab employees?
The bison are fairly low-maintenance, actually. But they do require annual inoculations.
This is something that few people on site ever get a chance to see. The grounds team corrals the bison into a series of chutes that bring them, one at a time, into a holding pen where the team administers the shots.
It’s a really fascinating process to watch. Because how often at a national science lab do you get to see folks corralling giant mammals through wooden chutes to be administered vaccinations?
Why do you have a large, rusty pipe on the counter of your office?
The former head of the Facilities Engineering Services Section, Kent Collins, had a veritable museum of misfit infrastructure pieces. When he retired, he graciously offered to pass on the museum, and I declined most of the pieces, except this one. I thought it was fitting to keep at least one symbol.
It’s an elbow joint from a ductile-iron drainage pipe. And it’s a good example of infrastructure that was installed in the most frugal way possible. It should have had something on it called cathodic protection to protect the pipe from the damaging effects of conductivity in the soil, and it didn’t. So it degraded much more quickly than it should have.
A little bit, it just reminds me of Kent Collins, who’s a fun guy to know. But a little bit also, it reminds me that we want to do things in the future in the right way and not cut corners so that we’re investing our limited dollars in the best way possible.

