Rob Roser, head of the Scientific Computing Division, wrote this column.
The computing landscape continues to change as technologies improve. As a graduate student, I ran my physics analysis jobs on a DEC VAX and thought it was the greatest thing since sliced bread. As technology evolved, I learned to run jobs on farms of commodity PCs, which offered even greater computing power. Now, experimenters use the Grid to run jobs on machines anywhere in the world.
Technology does not stand still. Modern computers have upwards of 32 processors on a single chip, and that trend will continue. The gaming industry has driven the development of graphics processing units, or GPUs, originally designed to manipulate graphical images rapidly. Their highly parallel structure makes them more effective than general-purpose CPUs for algorithms that process large blocks of data in parallel. This technology has great potential for the high-energy physics community.
As technologies evolve, our software architects adapt to take advantage of the changes. The trouble is that many of the existing software tools of high-energy physics were not designed for this new computing environment. One of the most important tools in our community is GEANT4 (for GEometry ANd Tracking), a platform for simulating the passage of particles through matter using Monte Carlo methods. In its current incarnation, the program is not well suited to these highly parallel computing systems.
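To give a flavor of the Monte Carlo methods at the heart of such a simulation, the sketch below samples the distance a particle travels before its next interaction from an exponential distribution. This is a standard textbook technique (inverse-transform sampling), not code from GEANT4 itself; the function name and the mean free path value are illustrative assumptions.

```python
import math
import random

def sample_free_path(mean_free_path, rng=random.random):
    """Sample a step length (illustrative, not GEANT4 code).

    The distance to the next interaction is exponentially
    distributed; inverse-transform sampling gives
    x = -lambda * ln(1 - u) for u uniform in (0, 1).
    """
    u = rng()
    return -mean_free_path * math.log(1.0 - u)

# The average of many sampled steps approaches the mean free path.
random.seed(1)
steps = [sample_free_path(10.0) for _ in range(100_000)]
print(sum(steps) / len(steps))  # close to 10.0
```

Each sampled step is independent of every other, which is exactly why workloads like this are attractive candidates for the massively parallel hardware discussed above.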
At a recent workshop in Washington, D.C., computing experts met to strategize about how to modify GEANT4 to run on these highly parallel platforms of the future. The workshop was jointly sponsored by DOE's Office of High Energy Physics and its Office of Advanced Scientific Computing Research (ASCR). Together, these two groups are developing a unified plan that, if successfully funded, will take this software to the next level of performance.
Fermilab is taking a leading role in the evolution of these scientific computing tools for the next generation of hardware. Members of our Scientific Computing Division, led by Daniel Elvira, will continue to advance GEANT4. Success of this project will not only mean faster simulation but also enable the architects to add the next level of sophistication to this important physics tool.