Rob Roser, head of the Scientific Computing Division, wrote this column.
The neutrino and muon experiments at Fermilab are getting more demanding! They have reached a level of sophistication and precision that the computing resources currently available at Fermilab can no longer handle on their own. The solution: The Scientific Computing Division is now introducing grid and cloud services to satisfy those experiments' appetite for large amounts of data and computing time.
An insatiable appetite for computing resources is not new to Fermilab. Both Tevatron experiments and the CMS experiment require computing resources that far exceed our on-site capacity to perform their science. As a result, those scientific collaborations have worked closely with us over many years to leverage computing capabilities at universities and other laboratories. Now the demand from our Intensity Frontier experiments has reached the same level.
The Scientific Computing Services quadrant, under the leadership of Margaret Votava, has worked very hard over the past year with various computing organizations to give experiments the capability to run their software at remote locations, transfer data and bring the results back to Fermilab.
To tell this story, I have to start with FermiGrid and the Open Science Grid (OSG for short). FermiGrid is our local on-site grid, which enables the sharing of computing resources among US CMS, Tevatron Run II, the Intensity Frontier experiments and all other Fermilab users. The OSG is a consortium of scientists and computing experts who collaborate on cyberinfrastructure and software tools that support scientists from many disciplines, enabling them to run their scientific software across the nationally distributed "fabric" of high-throughput computational services (the grid) at more than 100 sites in the United States. While FermiGrid allows us to use our local computing resources to the hilt, the ability to share computing opportunistically throughout the country is of great benefit to the new generation of Intensity Frontier experiments.
While the OSG provides the toolkit and expertise, that alone does not guarantee success. SCD computing experts provide a job submission tool that makes it easy to send jobs off site. They configure network-based file systems that deliver experiment software to remote sites in a fast, scalable and reliable fashion. They provide the functionality to move data from the remote sites back to Fermilab so that experimenters can use it. SCD experts are also investigating the use of computing clouds such as Amazon's (yes, the same Amazon you buy stuff from) to provide additional computing resources when demand is extremely high, such as the crunch time before a major conference.
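To give a flavor of what sits underneath a submission tool like the one described above, here is a minimal sketch in Python, assuming an HTCondor batch system (the workload manager used at most OSG sites). The column does not name SCD's actual tool, and the script, executable and file names below are hypothetical placeholders, not Fermilab software.

```python
#!/usr/bin/env python
"""Sketch of a bare-bones grid submission, assuming an HTCondor site.
The executable and input-file names are hypothetical placeholders."""

import subprocess
import tempfile

# An HTCondor submit description: run 100 copies of an analysis
# executable, letting HTCondor ship the input out to the worker
# nodes and transfer the results back when each job exits.
SUBMIT_DESCRIPTION = """\
universe                = vanilla
executable              = run_analysis.sh
arguments               = $(Process)
transfer_input_files    = analysis_config.txt
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
output                  = analysis_$(Process).out
error                   = analysis_$(Process).err
log                     = analysis.log
queue 100
"""

def submit_jobs():
    # Write the description to a file and hand it to condor_submit,
    # HTCondor's standard command-line submission tool.
    with tempfile.NamedTemporaryFile("w", suffix=".sub",
                                     delete=False) as f:
        f.write(SUBMIT_DESCRIPTION)
        sub_file = f.name
    subprocess.run(["condor_submit", sub_file], check=True)

if __name__ == "__main__":
    submit_jobs()
```

An experiment-facing submission tool hides this layer from the physicist: it would also choose which remote sites to target, set up the experiment's software environment on the worker node (for example, via a network-mounted software repository, as described above) and route the output files back to storage at Fermilab.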
NOvA is the first of the modern Intensity Frontier experiments to make use of off-site grid resources and clouds. Work is now under way to bring others, including Muon g-2, MicroBooNE, Mu2e and LBNE, to a similar level of sophistication in the coming months.
The Intensity Frontier experiments may be demanding, but by collaborating with them as well as with our colleagues on the Energy Frontier and in the DOE- and NSF-funded OSG, the Scientific Computing Division has the ability to satisfy that demand.