
Petaflops Power to NERSC

Energy Department’s Primary Scientific Computing Facility Accepts a New Supercomputer

May 31, 2011

by Linda Vu


From left to right: Horst Simon (Berkeley Lab deputy director), Kathy Yelick (NERSC division director), Dan Hitchcock (DOE), Paul Alivisatos (Berkeley Lab director) standing in front of the Hopper supercomputer system recently accepted by NERSC.

The National Energy Research Scientific Computing Center (NERSC) recently marked a major milestone, putting its first petascale supercomputer into the hands of its 4,000 scientific users. The flagship Cray XE6 system is called “Hopper” in honor of American computer scientist Grace Murray Hopper; it is capable of more than one quadrillion floating point operations per second, or one petaflops, and is currently the second most powerful supercomputer in the United States, according to the TOP500 list.
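For a rough sense of where the petaflops figure comes from, a machine's theoretical peak is simply the number of cores times the clock rate times the floating point operations each core can complete per cycle. The sketch below works through that arithmetic in Python; the node count, clock rate, and flops-per-cycle figures are assumptions drawn from Cray's published Hopper specifications, not numbers stated in this article.

    # Back-of-the-envelope peak estimate for a Hopper-class Cray XE6.
    # Node count, clock rate, and flops/cycle are assumed from published
    # specifications; only the 12-core chips are mentioned in this article.
    nodes = 6384                 # compute nodes (assumed)
    cores_per_node = 24          # two 12-core AMD chips per node
    clock_hz = 2.1e9             # 2.1 GHz clock (assumed)
    flops_per_cycle = 4          # double-precision flops per core per cycle

    peak = nodes * cores_per_node * clock_hz * flops_per_cycle
    print(f"Theoretical peak: {peak / 1e15:.2f} petaflops")
    # -> about 1.29 petaflops, i.e. more than one quadrillion
    #    floating point operations per second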

Before this milestone, several pioneering users put the machine through its paces while making scientific discoveries in a broad set of areas, from fundamental science to clean energy alternatives and severe storm modeling.

“We are very excited to make this unique petascale capability available to our users, who are working on some of the most important problems facing the scientific community and the world,” says Kathy Yelick, NERSC Director. “With its 12-core AMD processor chips, the system reflects an aggressive step forward in the industry-wide trend toward increasing core counts, combined with the latest innovations in high-speed networking from Cray. The result is a powerful instrument for science. Our goal at NERSC is to maximize performance across a broad set of applications, and by our metric, the addition of Hopper represents an impressive five-fold increase in the application capability of NERSC.”

Peter Ungaro, president and CEO of Cray, emphasizes the collaboration between the two organizations in tailoring the machine for NERSC. “Our partnership with NERSC has been important in further increasing these capabilities in our Cray XE6 supercomputer, not only for NERSC but for all our customers around the world. We worked together to improve the functionality and performance of our external services offerings from our Custom Engineering organization, as well as put our new Gemini system interconnect to use across an amazingly broad set of scalable applications that needed all the performance they could get to achieve their scientific goals. We’re proud that Hopper is the first petascale system at NERSC and I’m convinced it will be a great tool for their users as they strive toward the next scientific breakthroughs.” 

Hopper Gives Scientists a Head Start on Future Research

“Uptake of the Hopper system by early science users was quite impressive,” says Jonathan Carter, Computing Sciences Deputy and lead on the Hopper procurement project. “We started with a select set of users and gradually opened the system up to all interested users as the testing period progressed. From the earliest access, the system was heavily utilized and very popular.” Here is a selection of their stories:

Hopper Paves Way for Understanding DNA Replication

Our DNA is damaged every day from exposure to the sun’s ultraviolet light, secondhand smoke, toxins released by mold, and various other factors. Fortunately, nature has equipped us all with genes that repair and replicate DNA. But when these repair systems go awry, the result may be fatal cancerous tumors or degenerative diseases. A team of researchers from Georgia State University ran simulations on Hopper to understand the basic mechanisms of repair, which could lead to new prevention strategies and treatments.

“My group has been very pleased with our experience running on Hopper,” says Ivaylo Ivanov, an assistant professor of chemistry at Georgia State University who is part of the team. “Our parallel jobs seem to scale much better on Hopper compared to similar systems. I suspect it has to do with the better processor interconnect network,” he says. “This makes it possible to run on many more processor cores and make quicker progress on our simulations.”
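What “scale much better” means can be made concrete with a standard strong-scaling measurement: run the same fixed-size problem on increasing core counts and compare the measured speedup to the ideal. Here is a minimal sketch of that bookkeeping; the timings are made-up illustrative numbers, not Hopper benchmark results.

    # Strong-scaling efficiency: same problem, more cores, compare
    # measured speedup against ideal. Timings below are hypothetical.
    baseline_cores = 240
    baseline_time = 1000.0   # wallclock seconds (illustrative)

    # (cores, wallclock seconds) for the same fixed-size job
    runs = [(480, 520.0), (960, 275.0), (1920, 150.0)]

    for cores, seconds in runs:
        speedup = baseline_time / seconds
        ideal = cores / baseline_cores
        efficiency = speedup / ideal
        print(f"{cores:5d} cores: speedup {speedup:5.2f}x "
              f"(ideal {ideal:4.1f}x), efficiency {efficiency:.0%}")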

Improving the Accuracy of Storm Surge Forecasts


Hurricane Katrina on August 26, 2005.

In the event of a hurricane, storm surges are the greatest threat to life and property along the coast. In fact, experts estimate that most of the 1,500 deaths caused by Hurricane Katrina were the result of a surge that occurred when winds moving cyclonically around the storm in the Gulf of Mexico pushed water toward the shore. The ability to simulate these events on a computer is a powerful tool for evaluating risk, designing hurricane protection systems, planning evacuations, and analyzing the physics of a storm.

Today, supercomputers built on parallel architectures with fast networks allow researchers to run these simulations with fast turnaround. But the codes need to run at peak efficiency on tens of thousands of processors to deliver detailed results as quickly as possible. After all, emergency planners typically need to simulate two to four days of storm activity in a single hour of computer time to plan for human safety.
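That requirement translates directly into a speed target: the model clock has to advance 48 to 96 times faster than the wall clock. A minimal sketch of the arithmetic:

    # The forecasting requirement made explicit: simulating two to four
    # days of storm activity in one hour of wallclock time means the
    # code must run 48x-96x faster than real time.
    HOURS_PER_DAY = 24

    def required_speedup(simulated_days, wallclock_hours=1.0):
        """Ratio of simulated time to wallclock time."""
        return simulated_days * HOURS_PER_DAY / wallclock_hours

    for days in (2, 4):
        print(f"{days} simulated days in 1 hour -> "
              f"{required_speedup(days):.0f}x faster than real time")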

Researchers Seizo Tanaka and Patrick Kerr of the University of Notre Dame’s Department of Civil Engineering and Geological Sciences, and Jay Ratcliff of the US Army Corps of Engineers tested the scalability of their high-resolution storm surge code using pre-acceptance time on Hopper.

“Advance time on Hopper has helped greatly in the scaling benchmarks, as well as in the simulations I’ve been performing related to hurricane modeling and sea level rise impacts,” says Ratcliff.

Harnessing Wind Power with Hopper


California’s San Gorgonio Pass wind farm.

Although wind power technology is close to being cost-competitive with fossil fuel plants for generating electricity, wind turbine installations still provide less than one percent of all U.S. electricity. Because scientists don’t have detailed knowledge about how unsteady flows interact with wind turbines, many turbines underperform, suffer permanent failures or break down sooner than expected.

Standard meteorological datasets and weather forecasting models do not provide the detailed information on variable conditions needed for the optimal design and operation of wind turbines. To fill that gap, researchers at the National Center for Atmospheric Research (NCAR) developed a massively parallel large-eddy simulation (LES) code for modeling turbulent flows in the planetary boundary layer, the lowest part of the atmosphere, which interacts with the shape and ground cover of the land.

With approximately 16,000 processor cores on Hopper, the NCAR team simulated turbulent wind flows over hills at unprecedented resolution and improved the scalability of its code to ensure that it will be able to take advantage of peta- and exascale computer systems.

“The best part of Hopper is the ability to put previously unavailable computing resources toward investigations that would otherwise be unapproachable,” says Ned Patton of NCAR, who heads the investigation. “We seriously couldn’t have made the progress we have without NERSC’s support. We find NERSC’s services to be fantastic and truly appreciate being able to compute there.”


About NERSC and Berkeley Lab
The National Energy Research Scientific Computing Center (NERSC) is a U.S. Department of Energy Office of Science User Facility that serves as the primary high-performance computing center for scientific research sponsored by the Office of Science. Located at Lawrence Berkeley National Laboratory, the NERSC Center serves almost 10,000 scientists at national laboratories and universities researching a wide range of problems in climate, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines. Berkeley Lab is a DOE national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the U.S. Department of Energy.