Berkeley Lab Issues Request for Proposals for Next-gen HPC System, NERSC-10


On March 13, 2024, Lawrence Berkeley National Laboratory (“LBNL”) released a Request for Proposals (RFP) for its next-generation high-performance computing (HPC) system, NERSC-10, to be delivered in the 2026 time frame.

Prior to the RFP’s release, the University posted draft Technical Requirements for the NERSC-10 contract.

Responses to the RFP should target the following lengths: 

  • Build Technical Proposal (150-page limit total)
  • Non-Recurring Engineering (NRE) Technical Proposal (50-page limit total)
  • Benchmark Attachments: Performance of the System (no page limit)

NOTE: These limits are not finalized and are subject to change.

Email us with comments, questions, and other correspondence. Any information provided by industry to the University is strictly voluntary, and the information obtained from responses to this notice may be used in the development of an acquisition strategy and future solicitation.

NERSC-10 Benchmark Suite

Benchmarks will play a critical role in evaluation of the system. The NERSC-10 Benchmark Suite comprises tests for varying levels of the system architecture that range from microbenchmarks to workflow component benchmarks.

The NERSC-10 benchmark run rules should be reviewed before running benchmarks to understand which modifications to the benchmarks are permitted and how results must be submitted.

All benchmark results must be reported in the accompanying “NERSC-10 Benchmark Results” worksheet.

The Workflow-SSI metric will be used to evaluate system performance.

The metric is described by the Workflow SSI document and can be computed using the “NERSC-10 Benchmark Results” worksheet.
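The authoritative definition of Workflow-SSI is the Workflow SSI document and worksheet referenced above. As a rough illustration only, assuming an SSI-style metric takes the common form of a geometric mean of per-benchmark speedups over a reference system (an assumption, not the official formula), such a quantity could be computed like this:

```python
from math import prod

def workflow_ssi(ref_times, new_times):
    """Illustrative SSI-style metric: geometric mean of per-benchmark
    speedups of a proposed system over a reference system.

    NOTE: this is a hypothetical sketch; the actual Workflow-SSI formula
    is defined by the official Workflow SSI document.
    """
    speedups = [r / n for r, n in zip(ref_times, new_times)]
    return prod(speedups) ** (1.0 / len(speedups))

# Hypothetical per-benchmark run times (seconds): reference vs. proposed
ref = [120.0, 300.0, 45.0]   # speedups: 4.0, 3.0, 3.0
new = [30.0, 100.0, 15.0]
print(round(workflow_ssi(ref, new), 3))  # → 3.302
```

A geometric mean (rather than an arithmetic one) keeps a single outlier benchmark from dominating the aggregate score, which is why it is a common choice for benchmark-suite summaries.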

This site will be maintained with updated information throughout the proposal response period, including updates to instructions for build and execution. Updates will be recorded in the CHANGELOG.

Questions related to the benchmarks should be sent to N10benchmarks@lbl.gov.

Workflow component benchmarks

The workflow component benchmarks have been carefully chosen to represent performance-critical components of the expected NERSC-10 scientific workflows. These workflows include simulation of complex scientific problems using diverse computational techniques at high degrees of parallelism, large-scale analysis of experimental or observational data, machine learning, and the data-flow and control-flow needed to couple these activities into productive and efficient workflows.

Each benchmark distribution contains a README file that provides links to the benchmark source code distribution, instructions for compiling, executing, verifying numerical correctness, and reporting results for each benchmark. Multiple input problems and sample output from existing NERSC systems (i.e., Perlmutter) are included to facilitate profiling at reduced scale. The README files specify a target problem size that must be used to report benchmark performance.

Microbenchmarks

The microbenchmarks are simple, focused tests that are easily compiled and executed. The results allow a uniform comparison of features and provide an estimation of system balance. Descriptions and requirements for each test are included in the source distribution for each microbenchmark.
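The definitive requirements for each microbenchmark live in its source distribution. Purely to illustrate the general shape of such a test, the sketch below implements a STREAM-triad-style memory bandwidth measurement (a hypothetical example, not one of the suite's actual microbenchmarks):

```python
import time
from array import array

def stream_triad(n=1_000_000, scalar=3.0, trials=5):
    """Illustrative STREAM-style triad microbenchmark: a[i] = b[i] + s*c[i].

    NOTE: a hypothetical sketch of a microbenchmark's structure; the real
    suite's tests and their run rules are defined in their distributions.
    """
    b = array("d", [1.0] * n)
    c = array("d", [2.0] * n)
    best = float("inf")
    for _ in range(trials):
        t0 = time.perf_counter()
        a = array("d", (bi + scalar * ci for bi, ci in zip(b, c)))
        best = min(best, time.perf_counter() - t0)
    # Three arrays of 8-byte doubles are touched per triad pass.
    gbytes_moved = 3 * 8 * n / 1e9
    return gbytes_moved / best  # GB/s (best of `trials` repetitions)

print(f"Triad bandwidth: {stream_triad():.2f} GB/s")
```

Reporting the best of several trials, as above, is a common microbenchmark convention for filtering out transient system noise; an actual suite's run rules dictate what is allowed.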

About NERSC and Berkeley Lab

The National Energy Research Scientific Computing Center (NERSC) is the mission computing facility for the U.S. Department of Energy Office of Science, the nation’s single largest supporter of basic research in the physical sciences.

Located at Lawrence Berkeley National Laboratory (Berkeley Lab), NERSC serves 11,000 scientists at national laboratories and universities researching a wide range of problems in climate, fusion energy, materials sciences, physics, chemistry, computational biology, and other disciplines. An average of 2,000 peer-reviewed science results a year rely on NERSC resources and expertise, which have also supported the work of six Nobel Prize-winning individuals and teams.

NERSC is a U.S. Department of Energy Office of Science User Facility.

Media Contact

Email our communications team