NERSC: Powering Scientific Discovery Since 1974

NERSC-10 Benchmark Suite

Benchmarks will play a critical role in the evaluation of the system. The NERSC-10 Benchmark Suite comprises tests that span multiple levels of the system architecture, ranging from microbenchmarks to workflow component benchmarks.

This site will be maintained with updated information throughout the proposal response period, including updates to the build and execution instructions. Updates will be recorded in the CHANGELOG.

Questions related to the benchmarks should be sent to [email protected].

Workflow Component Benchmarks

The workflow component benchmarks have been carefully chosen to represent performance-critical components of the expected NERSC-10 scientific workflows. These workflows include simulation of complex scientific problems using diverse computational techniques at high degrees of parallelism; large-scale analysis of experimental or observational data; machine learning; and the data flow and control flow needed to couple these activities into productive and efficient workflows.

Each benchmark distribution contains a README file that provides links to the benchmark source code, along with instructions for compiling, executing, verifying numerical correctness, and reporting results. Multiple input problems and sample outputs from existing NERSC systems (e.g., Perlmutter) are included to facilitate profiling at reduced scale. Each README specifies a target problem size that must be used to report benchmark performance.
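The general obtain-build-run-verify cycle described above might be sketched as follows. This is a hypothetical outline only: the actual repository URLs, module environments, input-deck names, and launch commands are defined in each benchmark's README, and the `<benchmark>` placeholders below are not real names.

```shell
# Hypothetical outline; consult each benchmark's README for the real
# repository URL, build instructions, and required launch parameters.
git clone https://example.org/nersc10/<benchmark>.git   # placeholder URL
cd <benchmark>

# Build per the README; compilers and modules are system-specific.
make

# Run the reduced-scale sample problem first to check the setup,
# then run the target problem size required for reported results.
srun -N 4  ./<benchmark> inputs/sample.in
srun -N 64 ./<benchmark> inputs/target.in

# Verify numerical correctness against the provided sample output
# (e.g., from Perlmutter) before submitting results.
```

Running the reduced-scale sample before the target problem also gives a convenient point for profiling at small node counts, as the README inputs are intended to support.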

The NERSC-10 benchmark run rules should be reviewed before running the benchmarks, to understand which modifications are allowed and how results must be submitted.

Microbenchmarks (Full list and details TBD)

The microbenchmarks are simple, focused tests that are easily compiled and executed. The results allow a uniform comparison of features and provide an estimation of system balance. Descriptions and requirements for each test are included in the source distribution for each microbenchmark.

  • BabelStream
  • OSU Micro-Benchmarks (OMB)
  • IOR
  • MDTest
  • (TBD)
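To illustrate what "simple, focused tests" means in practice, the commands below show typical small-scale invocations of these tools. These are assumptions for illustration only: the binary names, flags, and scales shown are common defaults for each tool, not the NERSC-10 required configurations, which are specified in each microbenchmark's source distribution.

```shell
# Illustrative small-scale invocations; required flags and scales are
# defined in each microbenchmark's distribution, not here.

# BabelStream: single-process memory-bandwidth test (OpenMP build shown).
srun -n 1 ./omp-stream

# OSU Micro-Benchmarks: two-rank point-to-point MPI latency test.
srun -n 2 ./osu_latency

# IOR: parallel file-system bandwidth, POSIX API, file-per-process,
# 1 MiB transfers into 16 MiB blocks, write then read.
srun -n 64 ior -a POSIX -t 1m -b 16m -F -w -r

# MDTest: file-system metadata rates, 1000 files/dirs per process.
# $SCRATCH/mdtest is a placeholder target directory.
srun -n 64 mdtest -n 1000 -d $SCRATCH/mdtest
```

Together these cover the main balance points such a suite probes: memory bandwidth (BabelStream), interconnect latency and bandwidth (OMB), and file-system data and metadata rates (IOR, MDTest).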