Perlmutter, NERSC's newest supercomputer, is located in the center's facility in Shyh Wang Hall at Berkeley Lab. The system is named in honor of Saul Perlmutter, an astrophysicist at Berkeley Lab who shared the 2011 Nobel Prize in Physics for the groundbreaking discovery that the expansion of the universe is accelerating. Dr. Perlmutter has been a NERSC user for many years, and part of his Nobel Prize-winning work was carried out on NERSC machines. The system name reflects NERSC's commitment to advancing scientific research.
Perlmutter, an HPE Cray EX supercomputer, features both GPU-accelerated and CPU-only nodes. Its projected performance is three to four times that of NERSC's current flagship system, Cori. The system was installed in two phases. Phase 1, which includes the system's GPU-accelerated nodes and scratch file system, has been available for early science campaigns since the summer of 2021. The Phase 2 installation in 2022 added the CPU-only nodes.
Innovations to Support the Diverse Needs of Science
Perlmutter includes a number of innovations designed to meet the diverse computational and data analysis needs of NERSC’s users and to speed their scientific productivity.
The system derives performance from advances in hardware and software, including a new Cray system interconnect, code-named Slingshot. Designed for data-centric computing, Slingshot’s Ethernet compatibility, advanced adaptive routing, first-of-a-kind congestion control, and sophisticated quality of service capabilities improve system utilization and performance, as well as scalability of supercomputing and AI applications and workflows.
The system also features NVIDIA A100 GPUs with Tensor Core technology and direct liquid cooling. Perlmutter is also NERSC’s first supercomputer with an all-flash scratch filesystem. The 35-petabyte Lustre filesystem is capable of moving data at a rate of more than 5 terabytes/sec, making it the fastest storage system of its kind.
Phase 1 is made up of 12 GPU-accelerated cabinets housing over 1,500 nodes and 35 petabytes of all-flash storage. Phase 2 adds 12 CPU cabinets with more than 3,000 nodes.
Each of Phase 1's GPU-accelerated nodes has four NVIDIA A100 Tensor Core GPUs based on the NVIDIA Ampere GPU architecture and 256GB of memory, for a total of more than 6,000 GPUs across the system. Each Phase 1 node also has a single AMD EPYC CPU. The Phase 1 system also includes Non-Compute Nodes (NCNs): 40 User Access Nodes (NCN-UANs, i.e., login nodes) and service nodes. Some NCN-UANs will be used to deploy containerized user environments, with Kubernetes for orchestration.
The Phase 1 system achieved 70.9 Pflop/s, placing it at No. 5 on the November 2021 Top500 list.
Each of Phase 2's CPU nodes has two AMD EPYC CPUs with 512GB of memory per node.
The programming environment features the NVIDIA HPC SDK (Software Development Kit) in addition to the familiar CCE, GNU, and AOCC (AMD Optimizing C/C++ and Fortran) compilers, supporting diverse parallel programming models such as MPI, OpenMP, CUDA, and OpenACC for C, C++, and Fortran codes. NERSC is also building a programming environment based on the LLVM compilers.
Preparing for Perlmutter
In preparing for Perlmutter, NERSC implemented a robust application readiness plan for simulation, data, and learning applications through the NERSC Exascale Science Applications Program (NESAP). One outcome of this effort is the webpage Transitioning Applications to Perlmutter, with recommendations for applications developers and users who are preparing for the new system. Support for complex workflows through new scheduling techniques and support for Exascale Computing Project (ECP) software is also planned for the new system.
NERSC is the DOE Office of Science’s (SC’s) mission high-performance computing facility, supporting more than 8,000 scientists and 2,000 projects annually. The Perlmutter system represents SC’s ongoing commitment to extreme-scale science, developing new energy sources, improving energy efficiency, discovering new materials, and analyzing massive datasets from scientific experimental facilities.
Building Software and Running Applications
Several HPE-Cray-provided base compilers are available on Perlmutter, with varying levels of support for GPU code generation: HPE Cray, GNU, AOCC, and NVIDIA. All suites provide compilers for C, C++, and Fortran. Additionally, NERSC plans to provide the LLVM compilers on Perlmutter. Information on how to build software can be found in the technical documentation on compilers.
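On Cray systems, compiler suites are typically selected through programming-environment modules and invoked via the cc/CC/ftn wrappers. The following is a hedged sketch of what that workflow could look like; exact module names and defaults may differ, so the NERSC compiler documentation is the authoritative reference.

```shell
# Select the NVIDIA compiler suite (module name is illustrative).
module load PrgEnv-nvidia

# The Cray wrappers cc/CC/ftn invoke the currently loaded suite and
# automatically link MPI and other system libraries.
cc  -O2 -o app_c   app.c      # C
CC  -O2 -o app_cxx app.cpp    # C++
ftn -O2 -o app_f   app.f90    # Fortran

# Switch suites by swapping the programming environment.
module swap PrgEnv-nvidia PrgEnv-gnu
```

Using the wrappers rather than calling compilers directly keeps builds portable across the suites listed above.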
(For NERSC users: Learn how to launch parallel jobs on GPU-accelerated compute nodes.)
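For illustration, a Slurm batch script for the GPU-accelerated nodes might resemble the sketch below. The account name, node and task counts, and GPU-binding options are placeholders, not verified Perlmutter settings; the NERSC job-launch documentation referenced above gives the exact options.

```shell
#!/bin/bash
#SBATCH --account=m0000          # placeholder project account
#SBATCH --constraint=gpu         # request GPU-accelerated nodes
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4      # e.g., one MPI rank per A100
#SBATCH --gpus-per-task=1
#SBATCH --time=00:30:00

# Launch 8 ranks across 2 nodes, each rank bound to its own GPU.
srun ./my_gpu_app
```

The four-ranks-per-node layout mirrors the four A100 GPUs in each Phase 1 node described earlier.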
- November 2020 - July 2021: Cabinets containing GPU compute nodes and service nodes for the Phase 1 system arrived on-site and were configured and tested.
- Summer 2021: When the Phase 1 system installation was completed, NERSC began adding users in stages, starting with NESAP teams.
- June and November 2021: The Phase 1 system was ranked No. 5 on both Top500 lists.
- June 2, 2021, and January 5-7, 2022: User trainings were held to teach NERSC users how to build and run jobs on Perlmutter.
- January 19, 2022: The system was made available to all users who want to use GPUs with the start of the allocation year 2022.
- August 16, 2022: Having delivered 4.2 million GPU node hours and 2 million CPU node hours to science applications under its free “early science” phase, NERSC announced that the system would transition to charged service in September.
- NERSC Played Key Role in Nobel Laureate's Discovery
- NERSC Nobel Lecture Series: Saul Perlmutter Lecture, “Data, Computation, and the Fate of the Universe,” June 11, 2014
- Discovery of Dark Energy Ushered in a New Era in Computational Cosmology
- NERSC and the Fate of the Universe
- Computational Cosmology Comes of Age