
NERSC Increases System Storage and Security for Users

April 28, 2009

Franklin Upgrades Improve I/O Performance

Throughout the month of March, the Cray XT4 machine Franklin underwent a series of upgrades and improvements, including a major I/O upgrade. The disk capacity of the scratch file system was increased by 30% to 460 TB, and aggregate write bandwidth was nearly tripled, from 11 GB/sec before the upgrade to 32 GB/sec. Rather than adding the new hardware to the existing scratch file system, NERSC implemented a second one, so Franklin now has two scratch file systems, each with a peak write bandwidth of 16 GB/sec.

“We doubled the amount of I/O hardware and nearly tripled the bandwidth, which was a pleasant surprise,” said Kathy Yelick, NERSC Division Director.

The extra boost came from a set of hardware and software changes that included a rearrangement of the I/O nodes based on an analysis of an optimum layout for the particular torus network configuration on Franklin.

“The I/O upgrade will not only improve the peak I/O performance of applications, but should also result in more predictable performance and less network congestion even under heavy I/O workloads,” said Katie Antypas of the NERSC User Services Group.

More Storage for the NERSC Global File System

In April 2009, an additional 110 terabytes of storage was added to the NERSC Global Filesystem (NGF), which was launched to facilitate data sharing between science users and machines. The system currently contains close to 300 terabytes of user-accessible storage, allowing users to store larger datasets without having to move data between disk storage and the archival tape storage system.

“We worked very hard to ensure minimal disruptions to our users, and we succeeded,” said Shane Canon of the NERSC Data Systems Group. “The additional space was added without taking the file system offline.”

Canon credits the seamless upgrade to the advanced capabilities of NGF's underlying file system, IBM's General Parallel File System (GPFS). NGF is mounted on all of NERSC's computing systems, allowing users who run applications on multiple machines to access information from one place, instead of copying large datasets from one machine to the next. For a handful of large-scale science projects, NGF also provides permanent online storage.


Updates to NERSC's SSH Daemon Improve Long-Haul Transfers

Recent upgrades to NERSC's SSH software have improved security and increased the performance of long-haul transfers for users. SSH is the security software that protects most inbound connections to the center.

Last year, NERSC's Security team developed their own in-house variant of the SSH software that improves intrusion detection capabilities by capturing user keystrokes while still preserving SSH's encryption of data in transit. This modification allows NERSC's intrusion detection system to automatically spot the signs of a security breach and immediately notify the security team. In this way, NERSC is able to detect that hackers have gained access to a user account before any real damage is done.

Security team member Craig Lant notes that the keystrokes of a malicious intruder can often be identified automatically because they are somewhat different from those of a legitimate user. The modifications to SSH enable the intrusion detection system to spot this suspicious activity immediately, allowing the security team to act and alert the user of the compromise.
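The underlying idea can be sketched in a few lines. The following is a hypothetical illustration, not NERSC's actual detector: it compares a session's command stream against a per-user baseline of previously observed commands and scores sessions dominated by commands the account has never run before.

```python
# Hypothetical sketch of keystroke/command anomaly scoring.
# The baseline data and threshold below are assumptions for illustration.

BASELINES = {
    # commands previously observed for each account (assumed data)
    "alice": {"ls", "cd", "vim", "make", "qsub", "tail"},
}

def suspicion_score(user: str, session_commands: list[str]) -> float:
    """Fraction of session commands absent from the user's baseline."""
    baseline = BASELINES.get(user, set())
    if not session_commands:
        return 0.0
    unseen = sum(1 for cmd in session_commands if cmd not in baseline)
    return unseen / len(session_commands)

# A typical session scores low; an intruder probing the system scores high.
normal = suspicion_score("alice", ["ls", "cd", "make", "qsub"])
intruder = suspicion_score("alice", ["w", "uname", "wget", "chmod", "ls"])
print(normal, intruder)   # 0.0 vs 0.8
```

A production system would of course weigh timing, command arguments, and sequences rather than a bag of command names, but the principle is the same: an intruder's activity rarely matches the account's history.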

After several months of experience with this modified version of SSH, the security team has been able to significantly improve this capability with a new version of the code. The new version closes loopholes that allowed hackers to evade the system, reports additional information, and fixes a few minor bugs.

In addition to these modifications, the NERSC security team also installed a patch to the SSH client that improved the performance of long-distance transfers of massive datasets. This particular patch came from the Pittsburgh Supercomputing Center. Lant notes that prior to this patch, SSH was not tuned for managing massive, long-distance transfers. Major remote experimental devices, such as the Large Hadron Collider in Europe, will be producing petabytes of data that will be transferred to NERSC for analysis and storage.
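The performance problem the patch addresses is easy to quantify. A connection with a fixed internal buffer behaves like a fixed TCP window: throughput is capped at the buffer size divided by the round-trip time, regardless of how fast the underlying link is. The figures below are illustrative assumptions, not measurements from NERSC's deployment:

```python
# Illustrative: why an untuned SSH underperforms on long-haul links.
# A fixed flow-control buffer caps throughput at buffer / round-trip time.

def throughput_cap_mbps(buffer_bytes: float, rtt_seconds: float) -> float:
    """Maximum achievable throughput in Mbit/s for a fixed window."""
    return buffer_bytes * 8 / rtt_seconds / 1e6

# Assume a transatlantic path (e.g. Europe to NERSC) with ~150 ms RTT.
rtt = 0.150

# With a small fixed window on the order of 64 KB, the cap is tiny:
stock = throughput_cap_mbps(64 * 1024, rtt)       # ~3.5 Mbit/s

# A window sized to the bandwidth-delay product of a 10 Gbit/s path
# keeps the full pipe busy:
bdp_bytes = 10e9 / 8 * rtt                        # ~187 MB in flight
tuned = throughput_cap_mbps(bdp_bytes, rtt)       # 10000 Mbit/s

print(f"small fixed window: {stock:.1f} Mbit/s")
print(f"BDP-sized window:   {tuned:.1f} Mbit/s")
```

The longer the path, the larger the bandwidth-delay product, which is why tuning that is irrelevant on a campus network becomes essential for intercontinental transfers of experimental data.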

All data traveling in and out of NERSC traverses the Department of Energy's Energy Sciences Network (ESnet), which interconnects more than 40 DOE research facilities and dozens of universities across the United States, also providing network connections to research networks, experimental facilities and research institutions around the globe. NERSC and ESnet are facilities supported by the DOE Office of Advanced Scientific Computing Research, and managed by the Lawrence Berkeley National Laboratory in Berkeley, California.

About the DOE Office of Science
With a Fiscal Year 2009 budget of $4.8 billion, the DOE Office of Science is the largest funder of basic research in the physical sciences in the United States. The steward of ten National Laboratories, the Office of Science funds research at National Laboratories and over 300 universities nationwide through programs in advanced scientific computing research, basic energy sciences, biological and environmental research, fusion energy sciences, high energy physics, and nuclear physics.

About NERSC and Berkeley Lab
The National Energy Research Scientific Computing Center (NERSC) is a U.S. Department of Energy Office of Science User Facility that serves as the primary high-performance computing center for scientific research sponsored by the Office of Science. Located at Lawrence Berkeley National Laboratory, the NERSC Center serves almost 10,000 scientists at national laboratories and universities researching a wide range of problems in climate, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines. Berkeley Lab is a DOE national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the U.S. Department of Energy.