
A New System at NERSC: Carver Goes into Production

May 28, 2010


Built on IBM iDataPlex technology, Carver comprises 400 compute nodes interconnected by 4X QDR InfiniBand technology. The image above shows Carver from the front.

A new system is in production at the Department of Energy's National Energy Research Scientific Computing Center (NERSC). Built on IBM iDataPlex technology, the new system is called "Carver" in honor of American scientist and inventor George Washington Carver.

Carver replaces NERSC's Opteron "Jacquard" cluster and IBM Power5 "Bassi" system, which were both decommissioned at the end of April. NERSC is a world leader in providing high-performance computing resources for science, serving more than 3,000 researchers annually in disciplines ranging from computational cosmology to nanoscience.

"Because NERSC users encompass such a wide range of scientific disciplines, our systems need to be optimized to tackle a diversity of needs," says David Turner, a NERSC user consultant who led the Carver implementation.

He notes that Carver contains 800 Intel Nehalem quad-core processors, or 3,200 cores, and has 3.5 times the theoretical peak performance of Jacquard and Bassi combined. The system's 400 compute nodes are interconnected by 4X QDR InfiniBand technology, providing 8 GB/s of peak bidirectional bandwidth for high-performance message passing and for access to the NERSC Global Filesystem (NGF), which holds Carver's home, system, and scratch storage.
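To put that interconnect figure in concrete terms, a simple message-passing test is the usual way to measure the bandwidth actually achievable between two nodes. The following is a minimal MPI point-to-point bandwidth sketch in C; it is not a NERSC benchmark, and the message size and repetition count are arbitrary illustrative choices.

/* Minimal MPI point-to-point bandwidth sketch.
 * Rank 0 repeatedly sends a large buffer to rank 1 and reports
 * the achieved one-way bandwidth. Message size and iteration
 * count are illustrative, not tuned for Carver. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int msg_bytes = 8 * 1024 * 1024;  /* 8 MB per message */
    const int reps = 100;
    int rank;
    char *buf = malloc(msg_bytes);

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < reps; i++) {
        if (rank == 0)
            MPI_Send(buf, msg_bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(buf, msg_bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("one-way bandwidth: %.2f GB/s\n",
               (double)msg_bytes * reps / (t1 - t0) / 1e9);

    free(buf);
    MPI_Finalize();
    return 0;
}

Compiled with an MPI wrapper such as mpicc and run with one rank on each of two nodes, a test like this shows how close application-level bandwidth comes to the hardware peak.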

According to David Paul, who helped set up the Carver production environment, Carver is one of the first platforms of this scale to employ four Voltaire switches, allowing data to move quickly between the machine and its external filesystem.

Top view of the Carver System

To make the environment user-friendly for its scientific community, Carver runs the Linux operating system. In addition, NERSC staff have configured Carver's batch queues and scratch storage to handle a range of large and small jobs. A queue for jobs using up to 32 processors can now run for 168 hours, or seven straight days. Turner anticipates that this option will be especially useful for materials science and chemistry researchers running the Gaussian code.
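As an illustration, a small long-running job of the kind described above might be submitted with a PBS-style batch script along the following lines. This is a sketch only: the queue name "reg_long" and the executable name are hypothetical, and the actual queue names, limits, and launch commands are documented on the NERSC website.

#!/bin/bash
#PBS -q reg_long            # hypothetical long-running queue for small jobs
#PBS -l nodes=4:ppn=8       # 32 processors: 4 nodes x 8 cores per node
#PBS -l walltime=168:00:00  # request the full seven-day limit
#PBS -N gaussian_run        # job name (illustrative)

cd $PBS_O_WORKDIR           # run from the submission directory
mpirun -np 32 ./my_app      # "my_app" is a placeholder executable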

While Carver's default scratch storage quota of 2 terabytes will be sufficient for most jobs, Turner notes that some projects may occasionally need more when a job generates especially large output. To meet these requirements, Carver's scratch storage is configured to give users up to 50 terabytes of limited-term disk space. This allows users to hold more than 2 terabytes of data in scratch storage for several weeks before archiving it in the High Performance Storage System (HPSS).

Early Scientific Successes on Carver

In February, selected NERSC users were invited to try out Carver to see whether the system could handle the gamut of scientific workloads that the center typically serves. Researchers Hyoungki Park and Professor John Wilkins, of Ohio State University, used this opportunity to devise models to mimic and predict the atomic interactions of titanium, other transition metals, and their alloys. This research could ultimately lead to the development of new materials for building everything from jet engines to biomedical implants. Wilkins notes that industries increasingly want to speed up the development of new products, and fast computational models could drastically reduce the time required to determine material properties.

"Getting pre-production time on Carver was sort of like being in California during the start of the gold rush," says Wilkins. "Carver gave us gold and now we can start to construct accurate and efficient classical potentials for titanium alloys."

"The DFT (Density Functional Theory) calculations for our projects are very memory intensive and require relatively long computing time. The 24 gigabytes of memory per node and excellent performance make Carver very suitable for our calculations," says Park.

Meanwhile, postdoctoral researcher Jeff Wereszczynski, of the University of California at San Diego (UCSD), used pre-production time on Carver to computationally combine X-ray crystallography data with small-angle X-ray scattering (SAXS) data to get a more realistic view of how protein atoms interact inside a cell. He notes that this technique could provide a valuable avenue for exploring how proteins contribute to a variety of cancers and diseases.

"Our research is very computationally expensive because we have to model every atom in the protein that we are studying and all of the liquid surrounding it. Then, we calculate how each atom interacts with all of other atoms in the system," says Wereszczynski, who is working with professor J. Andrew McCammon at UCSD.

"We were very happy with Carver; our code scaled extremely well on the system. Interconnect is also very important, and with eight processors per node Carver worked very well for our research," he adds.

Carver went into production on May 1, 2010.


About NERSC and Berkeley Lab
The National Energy Research Scientific Computing Center (NERSC) is a U.S. Department of Energy Office of Science User Facility that serves as the primary high performance computing center for scientific research sponsored by the Office of Science. Located at Lawrence Berkeley National Laboratory, NERSC serves almost 10,000 scientists at national laboratories and universities researching a wide range of problems in climate, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines. Berkeley Lab is a DOE national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the U.S. Department of Energy. Learn more about computing sciences at Berkeley Lab.