Fermilab Achieves Record in Grid Computing for Science

Batavia, IL – Today, in a milestone for scientific computing, researchers at the Department of Energy’s Fermi National Accelerator Laboratory announced that the laboratory had sustained a continuous data flow averaging 50 megabytes per second (MB/s) for 25 days from CERN in Geneva, Switzerland, to the tape storage facility at Fermilab. Fermilab and six other major global computing centers together sustained a continuous data flow averaging 600 MB/s from CERN to tape and disk storage at locations around the world. The 500 terabytes of data transmitted in 10 days would take about 250 years to download using a typical 512 kilobit per second household broadband connection.
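
For readers who want to check the arithmetic, a minimal sketch reproduces the quoted figures; decimal megabytes/terabytes and a 365-day year are assumptions, since the release does not state its conventions.

```python
# Back-of-the-envelope check of the transfer figures quoted above (illustrative only;
# decimal units and a 365-day year are assumptions).

SECONDS_PER_DAY = 86_400

# Aggregate transfer: 600 MB/s sustained for 10 days
total_bytes = 600e6 * 10 * SECONDS_PER_DAY
print(f"Total transferred: ~{total_bytes / 1e12:.0f} TB")      # ~518 TB, i.e. roughly 500 TB

# Time to pull the same volume over a 512 kilobit-per-second household connection
household_bytes_per_s = 512e3 / 8                               # 512 kbit/s -> 64 kB/s
years = total_bytes / household_bytes_per_s / (SECONDS_PER_DAY * 365)
print(f"Household download time: ~{years:.0f} years")           # ~257 years, i.e. about 250
```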

The achievements represented a successful exercise designed to test the global grid computing infrastructure that will be used by thousands of scientists worldwide working on experiments at the Large Hadron Collider, currently being built at CERN to study the fundamental properties of subatomic particles and forces.

“This service challenge is a key step on the way to managing the torrents of data anticipated from the LHC,” said Jamie Shiers, coordinator of the service challenges at CERN. “When the LHC starts operating in 2007, it will be the most data-intensive physics instrument on the planet, producing more than 1500 MB/s of data continuously for over a decade.”

When the LHC begins operations, one copy of all of the particle physics data collected by the ALICE, ATLAS, CMS and LHCb experiments will be stored at CERN. A second copy will be distributed among eight global computing centers. Fermilab and the other centers will need to store data on tape at rates of over 150 MB/s, received over global high-speed computer networks. The goal is to give researchers easy access to the experimental data, wherever their home institutions are located.
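
A rough sketch of what these rates imply follows; the even split of the quoted 1500 MB/s across the eight centers is an assumption, since actual shares depend on each experiment's computing model.

```python
# Rough illustration of the archiving rate each major center must sustain,
# based on the figures quoted above. The even split is an assumption.

lhc_output_mb_per_s = 1500        # combined LHC output quoted above
num_centers = 8                   # centers sharing the second copy of the data

print(f"Average per-center rate: ~{lhc_output_mb_per_s / num_centers:.0f} MB/s")  # ~190 MB/s

# A day of archiving at the quoted 150 MB/s target:
daily_tape_tb = 150e6 * 86_400 / 1e12
print(f"Tape written per day at 150 MB/s: ~{daily_tape_tb:.0f} TB")               # ~13 TB/day
```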

“The farther apart the source and the recipient of the data are, the harder it is to transfer the data successfully,” said Don Petravick, Head of the Computation and Communications Fabric Department in Fermilab’s Computing Division. “At Fermilab, we demonstrated that we could accept huge data sets under demanding networking conditions.”

The current service challenge is the second in a series of four leading up to LHC operations in 2007. The service challenge participants included Fermilab and Brookhaven National Laboratory in the U.S., Forschungszentrum Karlsruhe in Germany, CNAF in Italy, CCIN2P3 in France, SARA/NIKHEF in the Netherlands and Rutherford Appleton Laboratory in the U.K.

Fermilab Computing Division head Vicky White welcomed the results of the challenge.

“High energy physicists have been transmitting large amounts of data around the world for years,” White said. “But this has usually been in relatively brief bursts and between two sites. Sustaining such high rates of data for days on end to multiple sites is a breakthrough, and augurs well for achieving the ultimate goals of LHC computing.”

During LHC operation, Fermilab will receive data from CERN. In turn, Fermilab will provide an interface between scientists at U.S. universities working on the CMS experiment and the data collected by the experiment in Geneva. The current service challenge was the first to connect Fermilab to computing centers at four universities participating in the LHC global computing infrastructure: the University of California, San Diego; the California Institute of Technology; the University of Florida; and Purdue University. When the infrastructure is completed in 2007, seven university computing facilities, each with about 100 computer nodes and 200 terabytes of disk space, will connect to Fermilab.
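
A simple tally of the planned university resources, assuming all seven facilities reach the quoted size, gives a sense of the combined capacity behind Fermilab.

```python
# Sum of the planned university computing facilities described above,
# assuming each of the seven reaches the quoted size.

num_centers = 7
nodes_per_center = 100
disk_tb_per_center = 200

total_nodes = num_centers * nodes_per_center                  # ~700 nodes
total_disk_pb = num_centers * disk_tb_per_center / 1000       # ~1.4 PB of disk

print(f"Combined capacity: ~{total_nodes} nodes, ~{total_disk_pb:.1f} PB of disk")
```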

“This service challenge was a first step toward our participation in the global LHC computing effort,” said Purdue University physicist Norbert Neumeister, who coordinated his institution’s participation in the exercise. “University centers will need to provide access to LHC physics data for the more than 200 U.S. scientists participating in the CMS experiment. This was a good check for us as we ramp up our efforts toward full capacity in 2007.”

The next service challenge, due to start in the summer, will extend to more computing centers in the U.S. and globally, and will include more data management and experiment-specific tasks.

Fermilab is operated by Universities Research Association, Inc., a consortium of 90 research universities, for the United States Department of Energy’s Office of Science.