
NERSC Team Takes StorCloud Honors at SC05 Conference

February 1, 2006

One of the goals of providing comprehensive computing resources is to make the different components “transparent” to the end user. But for staff members demonstrating a groundbreaking approach to accessing distributed storage, such invisibility isn’t so desirable.

That was the case for a NERSC/LBNL team competing in the StorCloud Challenge at the SC05 conference held Nov. 12-18, 2005, in Seattle.

The team, led by Will Baird of NERSC’s Computational Systems Group, received the award for “Best Deployment of a Prototype for a Scientific Application.” Unfortunately, the award slipped through the cracks, and the group was recognized neither at the SC05 awards session nor in the conference news release announcing the various prizes and awards given out at SC05.

The team, which also included Jonathan Carter and Tavia Stone of NERSC and Michael Wehner, Cristina Siegerist and Wes Bethel of Berkeley Lab’s Computational Research Division, used the StorCloud infrastructure “to test the wide-area deployment of an unprecedented system in support of a groundbreaking climate modeling application,” according to the award. The application was fvCAM — the Finite Volume Community Atmospheric Model — which is being used to predict hurricane formation.

StorCloud is a special initiative for building a high performance computing storage capability showcasing HPC storage technologies (topologies, devices, interconnects) and applications. The StorCloud Challenge invited applicants from science and engineering communities to use the unique StorCloud infrastructure to demonstrate emerging techniques or applications, many of which consume enormous amounts of network and storage resources.

Baird designed the TRI Data Storm prototype around the concept of using an integrated, multisystem file system to improve the analysis of results produced by the demanding HPC application — the Community Atmospheric Model (CAM).

“Our aim is to take the output of CAM from a high-resolution grid, filter out the data of interest, and visualize the formation of storms in the North Atlantic basin,” according to Baird. “This tool will be used in a study comparing real hurricane data with simulations.

“While this is a fairly generic workflow that could hold true for virtually any HPC application, the unique aspect to our approach is that there is a single high-performance parallel file system serving all of the different systems and applications.”
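
In outline, that workflow might look like the following Python sketch. It is purely illustrative: the file paths, variable names, and North Atlantic bounding box are assumptions, not the team’s actual code, and it relies only on the fact that CAM writes its history output as NetCDF files.

    import glob
    import numpy as np
    from netCDF4 import Dataset  # CAM history output is NetCDF

    # Rough bounding box for the North Atlantic basin (illustrative values).
    LAT_MIN, LAT_MAX = 5.0, 45.0
    LON_MIN, LON_MAX = 260.0, 350.0  # degrees east

    def extract_basin(path):
        """Filter one CAM history file down to the region of interest."""
        with Dataset(path) as nc:
            lat = nc.variables["lat"][:]
            lon = nc.variables["lon"][:]
            ilat = np.where((lat >= LAT_MIN) & (lat <= LAT_MAX))[0]
            ilon = np.where((lon >= LON_MIN) & (lon <= LON_MAX))[0]
            # Surface pressure is one field a storm-detection step might examine.
            ps = nc.variables["PS"][:, ilat, :][:, :, ilon]
        return lat[ilat], lon[ilon], ps

    # Because every system mounts the same file system, handing data to the
    # visualization stage is just writing into a shared directory.
    for history_file in sorted(glob.glob("/gpfs/tri_data_storm/cam_output/*.nc")):
        lat, lon, ps = extract_basin(history_file)
        out = history_file.replace("cam_output", "filtered") + ".npz"
        np.savez(out, lat=lat, lon=lon, ps=ps)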

For TRI Data Storm, the team used an externalized GPFS (General Parallel File System), shared out by a dedicated cluster and mounted on all of the different computational resources used by the tool: the IBM Power5 cluster “Bassi” and the PDSF Linux cluster at NERSC, the GPFS servers and storage at the conference, and an SGI Altix cluster in the LBNL booth at SC05.

“There are very few places that do this, especially with the variety of systems that we demonstrated with,” Baird said. “Additionally, all the communication between the systems was simply through the file system, not through anything else.”
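
Coordinating purely through a shared file system can be as simple as watching for files to appear on the common mount. The sketch below illustrates the general pattern; the paths and the sentinel-file convention are hypothetical, not taken from the team’s implementation.

    import os
    import shutil
    import time

    SHARED = "/gpfs/tri_data_storm/handoff"  # hypothetical shared GPFS directory

    def publish(result_path):
        """Producer side: make a finished result visible on the shared mount."""
        dest = os.path.join(SHARED, os.path.basename(result_path))
        shutil.copy(result_path, dest + ".tmp")
        os.rename(dest + ".tmp", dest)     # rename is atomic within one file system
        open(dest + ".done", "w").close()  # sentinel file: the result is complete

    def wait_for(name, poll_seconds=5):
        """Consumer side: block until the producer's sentinel file appears."""
        dest = os.path.join(SHARED, name)
        while not os.path.exists(dest + ".done"):
            time.sleep(poll_seconds)
        return dest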

According to Baird, sharing data through multisystem file systems has undergone radical changes over the last several years. Networked file storage first changed the way data was shared among computers. Later, cluster file systems gave entire clusters robust access to the same data. In the last year, cluster file systems have evolved to let multiple, distinct systems access that same data easily and reliably.

Such advancements in storage access have significant implications for HPC centers and users, who can reach data transparently from multiple systems. Using GPFS, a single file system can be presented to several computing systems, allowing extensive simulations to run without manual data transfers between machines.


About NERSC and Berkeley Lab
The National Energy Research Scientific Computing Center (NERSC) is a U.S. Department of Energy Office of Science User Facility that serves as the primary high performance computing center for scientific research sponsored by the Office of Science. Located at Lawrence Berkeley National Laboratory, NERSC serves almost 10,000 scientists at national laboratories and universities researching a wide range of problems in climate, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines. Berkeley Lab is a DOE national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the U.S. Department of Energy.