NERSC: Powering Scientific Discovery Since 1974

Compute Nodes

[Image: Magny-Cours processor die (MC-proc.png)]

Compute Node Configuration

  • 6,384 nodes
  • 2 twelve-core AMD 'Magny-Cours' 2.1-GHz processors per node (see die image above and schematic below)
  • 24 cores per node (153,216 total cores)
  • 32 GB DDR3 1333-MHz memory per node (6,000 nodes)
  • 64 GB DDR3 1333-MHz memory per node (384 nodes)
  • Peak performance:
    • 8.4 Gflop/s per core
    • 201.6 Gflop/s per node
    • 1.28 Pflop/s for the entire machine
  • Each core has its own L1 and L2 caches, 64 KB and 512 KB respectively
  • One 6-MB L3 cache shared among the 6 cores of each six-core die (two dies per twelve-core 'Magny-Cours' processor)
  • Four DDR3 1333-MHz memory channels per twelve-core 'Magny-Cours' processor
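
The peak rates above follow directly from the clock rate times the flops completed per cycle. A quick sanity check, assuming 4 double-precision flops per core per cycle (two 128-bit SSE additions plus two multiplications; this per-cycle figure is an assumption, not stated above):

```python
# Peak-performance arithmetic for the node configuration above.
# ASSUMPTION: 4 double-precision flops per core per cycle.
GHZ = 2.1             # clock rate, GHz
FLOPS_PER_CYCLE = 4
CORES_PER_NODE = 24
NODES = 6384

gflops_per_core = GHZ * FLOPS_PER_CYCLE            # 8.4 Gflop/s
gflops_per_node = gflops_per_core * CORES_PER_NODE # 201.6 Gflop/s
pflops_total = gflops_per_node * NODES / 1e6       # ~1.29 Pflop/s

print(gflops_per_core, gflops_per_node, round(pflops_total, 2))
```

The machine-wide product comes out slightly above the quoted 1.28 Pflop/s; published peak figures are commonly rounded.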

Compute Node Software

By default the compute nodes run a restricted, low-overhead operating system optimized for high-performance computing, the Cray Linux Environment (CLE). This OS supports only a limited set of system calls and UNIX commands, and it does not support dynamically loaded libraries, whether system-provided or user-created. The compute nodes can also run a more fully featured OS; see Running Shared and Dynamic Library Applications.
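
In practice this means static linking is the default under CLE and dynamic linking must be requested explicitly. A minimal sketch, assuming the standard Cray compiler wrappers; the exact flags and environment variables available on this system may differ, and `hello.c` is a placeholder source file:

```shell
# Static linking (the CLE default):
cc -o hello_static hello.c

# Dynamic linking requested explicitly via the wrapper flag:
cc -dynamic -o hello_dynamic hello.c

# Newer Cray PE releases also honor an environment variable:
export CRAYPE_LINK_TYPE=dynamic
cc -o hello_dynamic hello.c
```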

Each compute node is allocated to a single user job at a time; multiple jobs never share a compute node.
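
Because nodes are never shared, a job effectively occupies whole nodes even when it does not fill them. A small illustration of the arithmetic (the `nodes_needed` helper is hypothetical, not a NERSC API):

```python
import math

CORES_PER_NODE = 24  # Hopper compute node, per the configuration above

def nodes_needed(mpi_tasks, tasks_per_node=CORES_PER_NODE):
    """Whole nodes occupied by a job, since nodes are not shared."""
    return math.ceil(mpi_tasks / tasks_per_node)

# A 100-task job occupies 5 whole nodes; 20 cores sit idle on the last one.
print(nodes_needed(100))  # 5
```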

[Image: Magny-Cours processor schematic]

[Image: Hopper compute nodes]