
Joel Butler

While his official title may be Distinguished Scientist, Joel Butler’s tenure as the CMS spokesperson required a very different skill set from the one most people associate with research.

“At Fermilab, I was a famous nondelegator,” Butler said. “I liked to poke my nose into everything. But that doesn’t work when you’re overseeing a collaboration with more than 3,000 members.”

Butler started working at Fermilab in 1979 and quickly found himself managing large projects and departments. But nothing, he says, compares to his last two years at CERN as the CMS spokesperson.

“It’s an experience very few people ever have,” said Butler. “I’ve run experiments before, but the magnitude of diversity, talent and energy on CMS is spectacular. There’s nothing like it.”

This week, Butler prepares to return to Fermilab. He hands the reins of the CMS experiment over to his successor, Roberto Carlin, and to incoming deputy spokespersons Patty McBride of Fermilab and Luca Malgeri of CERN. Below, he reflects on his time as the CMS spokesperson, lessons learned and major achievements.

***

What do the day-to-day activities look like for the CMS spokesperson?

The job of the CMS spokesperson is to keep everything going and make sure problems are solved. Every week we would go through all the detector systems to find out what the problems were and make sure we were on track to resolve them. Once a week I would also update the collaboration on status and upcoming events, such as special LHC runs. That’s the job: communication with collaborators, listening to collaborators, problem solving and a lot of meetings. The CMS collaboration consists of many countries and funding agencies, and they all have different projects and priorities. The job of the spokesperson is to bring everyone together and make sure that everyone is doing the best work they can do.

What were some of the challenges that came along with being the spokesperson of a large international collaboration with 3,000-plus members?

The biggest problems were not conflicts, but rather communicating complicated ideas. The CMS experiment has a tremendous number of smart people with good arguments for what they want to do and why they want to do it. But just because someone is talking doesn’t mean that they’re communicating. Sometimes people would leave meetings without a real understanding of what we had decided. Sometimes I’d say something extreme during a meeting just to see if people were listening. Sometimes silence is taken as consent, but other times, it’s just a lack of attention. So I started doing this thing called affirmative consensus, where silence was not consent and people had to state that they agreed.

I heard that you had “office hours” in the atrium of CERN’s Building 40 every morning. Why did you decide to do that, and what was it like?

They weren’t exactly office hours. I just hung out there around breakfast time and hoped people would come by and talk, and often they did. The problem is that as things move up the chain, certain kinds of information get diluted, and by the time it reaches the spokesperson, you don’t hear what you need to hear. People have a tendency to filter information so that only urgent matters propagate, and persistent problems and annoyances don’t move up to management. Transparency is really important in a collaboration like this, because if we don’t know what the problems are, we cannot work toward solving them. I wanted to stress that people should not be shy about discussing their problems with me, which is why I kept my door open and tried to make myself as accessible as possible. If we put the problems out there, other people can engage and help solve them.

This was your first time managing a large international collaboration. How was it different from the projects you’ve overseen in the past?

When I was the U.S. CMS program manager, from 2007 to 2013, I worked with all the U.S. universities on CMS, which is about 30 percent of the collaboration. Because it’s the same funding environment and the university systems are similar, there was a lot of commonality and overlap in world view. Now as the CMS spokesperson, take that job and add another 48 nations, which are all really different. With U.S. universities, some are very large, and some very small. But internationally, there are even bigger differences. Many international partners have been in physics for decades, while others are just starting to get into the field. A big goal of CMS and CERN is to help develop powerful new collaborators around the world and bring the ability to do science to more and more people. The world has got smart people everywhere, but some places don’t have the pre-existing infrastructure to support them. So when it comes to working with international partners, they can have many of the same problems we see in the United States, but amplified, and they can have completely new ones. At the same time, it’s also very exciting. In an international collaboration, you get new perspectives on problems and can draw on knowledge and experience from all over the world.

What are some of the challenges CMS faced over the last two years?

While we’re always making discoveries, learning new things and advancing the field, particle physics isn’t a discipline where we’re going to completely revolutionize our understanding of the universe every day. Progress is steady, but it often takes 10 or 20 years to go from one major breakthrough to the next. There are lots of false trails and dead ends you need to explore before you eventually hit the right path. Expectations tend to run higher than is justified: just because progress is steady doesn’t mean that you’ll immediately find new physics.

What are some of the major accomplishments of CMS and the LHC over the last two years?

We’ve ruled out many theories for new physics and really enhanced our understanding of the Standard Model. We made measurements of the Higgs boson that, a few years ago, we could only have dreamed of making. It shows that patience pays off. We have a magnificent opportunity to explore the subatomic laws of nature with this machine that works so well. It’s astounding, really! I’ve been around long enough to remember when turning on an accelerator could be a matter of years. But the LHC is just fabulous. We’ve been given this miraculous device, and our responsibility is to make the most of it. Stay focused, and keep going. A breakthrough could happen at any moment or could take years.

What are you doing next?

I’m now the deputy head of a CMS upgrade that will give us precision timing of particles as they pass through the detector. This can be a tremendous asset because it will help us reconstruct events and better understand what happened during the collisions. That assignment should last around two years, and after that, I’ll see where I am. I’m near the end of my career and thinking about retirement, but I’ll never stop working. This is too much fun. I remember there was this saying from Confucius that went, “If you love what you do, you’ll never work a day in your life.” It’s true!

 

It is hard these days not to encounter examples of machine learning out in the world. If your phone unlocks using facial recognition or responds to your voice commands, chances are you are using machine learning algorithms — in particular, deep neural networks.

What makes these algorithms so powerful is that they learn relationships between the high-level concepts we wish to find in an image (faces) or sound wave (words) and the sets of low-level patterns (lines, shapes, colors, textures, individual sounds) that represent them in the data. Furthermore, these low-level patterns and relationships do not have to be conceived of or hand-designed by humans; instead they are learned directly from examples of the data. Not having to come up with new patterns to find for each new problem is why deep neural networks have been able to advance the state of the art for so many different types of problems, from analyzing video for self-driving cars to helping robots learn how to manipulate objects.

Here at Fermilab, a lot of effort has gone into having these deep neural networks help us analyze the data from our particle detectors so that we can use it more quickly and effectively to look for new physics. These applications continue the high-energy physics community’s long history of adopting and advancing machine learning algorithms.

Recently, the MicroBooNE neutrino experiment published a paper describing how it used convolutional neural networks — a particular type of deep neural network — to classify the individual pixels in images made by a type of detector known as a liquid-argon time projection chamber (LArTPC). The experiment designed a convolutional neural network called U-ResNet to distinguish pixels that are part of a track-like particle trajectory from those that are part of a shower-like particle trajectory.
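
For readers who want a concrete picture of what labeling every pixel means in code, below is a minimal PyTorch sketch of a convolutional encoder-decoder that scores each pixel as background, track or shower. It is only an illustration of the general technique; the architecture, layer sizes and three-class scheme are assumptions for this sketch and not MicroBooNE’s actual U-ResNet.

    import torch
    import torch.nn as nn

    class TinySegNet(nn.Module):
        """Toy convolutional encoder-decoder that scores every pixel as
        background, track or shower. Illustrative only; not U-ResNet."""
        def __init__(self, n_classes=3):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                                      # downsample by 2
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2),  # upsample by 2
                nn.ReLU(),
                nn.Conv2d(16, n_classes, kernel_size=1),              # per-pixel class scores
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    # One training step on stand-in data: a batch of detector-like images and
    # a per-pixel label map (0 = background, 1 = track, 2 = shower).
    model = TinySegNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    images = torch.randn(4, 1, 64, 64)
    labels = torch.randint(0, 3, (4, 64, 64))
    optimizer.zero_grad()
    loss = nn.CrossEntropyLoss()(model(images), labels)
    loss.backward()
    optimizer.step()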

This plot shows a comparison of U-ResNet performance on data and simulation, where the true pixel labels are provided by a physicist. The sample used is 100 events that contain a charged-current neutrino interaction candidate with neutral pions produced at the event vertex. The horizontal axis shows the fraction of pixels where the prediction by U-ResNet differed from the labels for each event. The error bars indicate only a statistical uncertainty.

Track-like trajectories, made by particles such as a muon or proton, consist of a line with small curvature. Shower-like trajectories, produced by particles such as an electron or photon, are more complex topological features with many branching trajectories. This distinction is important because separating these types of topologies can be difficult for traditional algorithms. Not only that, shower-like shapes are produced when electrons and photons interact in the detector, and these two particles are often an important signal or background in physics analyses.
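
To make that topological distinction concrete, here is a small toy Python sketch that draws the two kinds of shapes on a blank image: a gently curving line of pixels for a track and a fan of short branching segments for a shower. The sizes, pixel values and branching rules are invented for illustration and are not meant to resemble real LArTPC simulation or reconstruction.

    import numpy as np

    def add_track(img, labels, rng):
        """Draw a gently curving line of hit pixels; label them 1 (track)."""
        size = img.shape[0]
        x, y = rng.uniform(10, size - 10, 2)
        theta = rng.uniform(0, 2 * np.pi)
        for _ in range(int(rng.integers(20, 40))):
            theta += rng.normal(0, 0.03)                  # small curvature
            x, y = x + np.cos(theta), y + np.sin(theta)
            i, j = int(y), int(x)
            if 0 <= i < size and 0 <= j < size:
                img[i, j] += rng.uniform(0.5, 1.0)
                labels[i, j] = 1

    def add_shower(img, labels, rng):
        """Draw branching segments fanning out from a vertex; label them 2 (shower)."""
        size = img.shape[0]
        x0, y0 = rng.uniform(10, size - 10, 2)
        axis = rng.uniform(0, 2 * np.pi)
        for _ in range(int(rng.integers(5, 12))):         # number of branches
            theta = axis + rng.normal(0, 0.4)
            x, y = x0, y0
            for _ in range(int(rng.integers(5, 20))):
                x, y = x + np.cos(theta), y + np.sin(theta)
                i, j = int(y), int(x)
                if 0 <= i < size and 0 <= j < size:
                    img[i, j] += rng.uniform(0.2, 0.8)
                    labels[i, j] = 2

    rng = np.random.default_rng(0)
    image = np.zeros((64, 64), dtype=np.float32)
    labels = np.zeros((64, 64), dtype=np.int64)           # 0 = background
    add_track(image, labels, rng)
    add_shower(image, labels, rng)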

MicroBooNE researchers demonstrated that these networks not only performed well but also behaved similarly when presented with simulated data and with real data. The latter result is the first such demonstration for data from LArTPCs.

Showing that networks behave the same on simulated and real data is critical, because these networks are typically trained on simulated data. Recall that these networks learn by looking at many examples. In industry, gathering large “training” data sets is an arduous and expensive task. However, particle physicists have a secret weapon — they can create as much simulated data as they want, since every experiment builds a highly detailed model of its detector and data acquisition systems in order to represent the real data as faithfully as possible.

However, these models are never perfect. So a big question was: “Is the simulated data close enough to the real data to properly train these neural networks?” MicroBooNE answered this question by performing a kind of Turing test that compares the performance of the network with that of a physicist. The researchers showed that the accuracy of the human was similar to that of the machine when labeling simulated data, for which an absolute accuracy can be defined. They then compared the labels for real data. Here the disagreement between labels was low, and similar for machine and human. (See the top figure. See the figure below for an example of how a human and the computer labeled the same data event.) In addition, a number of qualitative studies looked at how the network’s labels change as the image is manipulated and showed that the correlations follow human-like intuitions. For example, as a line segment gets shorter, the network becomes less confident about whether the segment is due to a track or a shower. This suggests that the low-level correlations the network uses are the same physically motivated correlations a physicist would use if engineering an algorithm by hand.
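
The quantity plotted in the top figure is straightforward to compute: for each event, count the pixels whose network label disagrees with the physicist-provided label and divide by the number of pixels considered. A short sketch with made-up stand-in arrays might look like the following; restricting the comparison to hit (non-background) pixels is an assumption of this sketch, not a detail taken from the paper.

    import numpy as np

    def disagreement_fraction(predicted, reference):
        """Fraction of hit pixels whose label (1 = track, 2 = shower) differs
        between the network and the reference labels. Comparing only hit
        (non-background) pixels is an assumption of this sketch."""
        hits = reference > 0
        return float(np.mean(predicted[hits] != reference[hits]))

    # Stand-in arrays: 100 events of 64x64 label maps, with about 5 percent of
    # the hit pixels deliberately flipped between track and shower.
    rng = np.random.default_rng(0)
    reference = rng.integers(0, 3, size=(100, 64, 64))
    flip = (rng.random(reference.shape) < 0.05) & (reference > 0)
    predicted = np.where(flip, 3 - reference, reference)

    fractions = [disagreement_fraction(p, r) for p, r in zip(predicted, reference)]
    print(f"mean per-event disagreement: {np.mean(fractions):.3f}")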

This example image shows a charged-current neutrino interaction with decay gamma rays from a neutral pion (left). The label image (middle) is shown with the output of U-ResNet (right), where track and shower pixels are shown in yellow and cyan, respectively.

Demonstrating this simulated-versus-real-data milestone is important because convolutional neural networks are valuable to current and future neutrino experiments that will use LArTPCs. The track-shower labeling is already being employed in upcoming MicroBooNE analyses. Furthermore, for the upcoming Deep Underground Neutrino Experiment (DUNE), convolutional neural networks are showing much promise toward delivering the performance necessary to achieve DUNE’s physics goals, such as the measurement of CP violation, a possible explanation for the asymmetry between matter and antimatter in the present-day universe. The more demonstrations there are that these algorithms work on real LArTPC data, the more confidence the community can have that convolutional neural networks will help us learn about the properties of the neutrino and the fundamental laws of nature once DUNE begins to take data.

Learn more

Victor Genty, Kazuhiro Terao and Taritree Wongjirad are three of the scientists who carried out this analysis. Victor Genty is a graduate student at Columbia University. Kazuhiro Terao is a physicist at SLAC National Accelerator Laboratory. Taritree Wongjirad is an assistant professor at Tufts University.

At the upcoming workshop, scientists will explore ways that the fields of high-energy physics and quantum information science can advance each other. It will also feature Google’s first public hands-on tutorial on their quantum software. Photo: Reidar Hahn

Solving the longstanding, seemingly intractable problems of particle physics is about to get one quantum step closer. (And ideas for quantum teleportation experiments might surface at the same time.)

From Sept. 12-14, scientists, engineers and members of industry from around the globe will converge at the Department of Energy’s Fermilab to explore quantum computing technologies for high-energy physics.

The hands-on workshop, “Next Steps in Quantum Science for High-Energy Physics,” will feature speakers from Google, IBM, and several universities and national laboratories. Renowned physicist John Preskill of Caltech will anchor the first day with an address on quantum information science. And for the first time in a public setting, Google will conduct a hands-on tutorial on their quantum computing software, giving attendees an opportunity to use it directly and meet face-to-face with company representatives.

“This is the world’s first offering for people in high-energy physics to use a quantum software package — and with the people who wrote it,” said Fermilab Chief Research Officer and Deputy Director Joe Lykken. “And the topics go beyond high-energy physics. The problems our field is trying to solve through quantum computing share extensive overlap with other fields, such as nuclear physics and scientific computing.”

Fermilab is a natural host for this first-of-its-kind workshop. Home to groundbreaking discoveries in particle physics and a pioneer in high-performance computing and supercomputing, the laboratory has world-class expertise at the intersection of these areas.

And now it is directing some of its considerable expertise to the area of quantum computing, which holds great promise for solving some of nature’s toughest problems. Quantum computers may be able to tackle in minutes problems that would take classical computers years to solve.

Google’s hands-on tutorials will provide a first taste of this approach. A Google team led by Product Manager Alan Ho will demonstrate the use of two different software packages: Cirq, a quantum computing programming framework that exposes hardware details to the programmer, and OpenFermion, a platform for translating problems in quantum chemistry and materials science to quantum computers.
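
The article does not show any of the tutorial code, but for a flavor of what attendees might write, here is a minimal sketch that uses Cirq’s public API to prepare and sample a Bell pair, the entangled two-qubit state that also underlies quantum teleportation experiments. The qubit choice and repetition count are arbitrary and not taken from the workshop material.

    import cirq

    # Two qubits on a line; Cirq exposes the qubit layout directly to the programmer.
    q0, q1 = cirq.LineQubit.range(2)

    # Prepare a Bell pair: a Hadamard on q0, then a CNOT entangles q0 with q1.
    circuit = cirq.Circuit([
        cirq.H(q0),
        cirq.CNOT(q0, q1),
        cirq.measure(q0, q1, key="m"),
    ])

    # Sample the circuit on Cirq's built-in simulator.
    result = cirq.Simulator().run(circuit, repetitions=1000)

    # Expect roughly equal counts of the outcomes 00 and 11 (reported as 0 and 3).
    print(result.histogram(key="m"))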

“It’s great that Fermi National Accelerator Laboratory has convened some of the best scientists to come up with ideas for quantum computing,” Ho said. “I’m looking forward to hearing the different ideas and experiments that can be run on a quantum computer. We believe that Fermilab is a great partner for this.”

The feedback from participants, he added, will be valuable.

“We want to make the computers useful, so we need to hear the best ideas,” Ho said.

Topics include quantum simulation of quantum field theories, algorithms for traditional high-energy physics computational problems, quantum teleportation experiments and qubit technologies for quantum sensors.

“This is the meeting where scientists can say, ‘I have this problem, I want to solve it. How do I do that with quantum?’ and get an answer,” Fermilab’s Lykken said. “It’s going to be an exciting glimpse into ways we can advance the field.”