NERSC: Powering Scientific Discovery Since 1974

Configuration

Compute/Login Node
  • 1 node
  • eight 6-core AMD Opteron processors (2.6 GHz) on the node
  • 48 total cores sharing the same memory
  • 512 GB of memory
  • System theoretical peak performance of 499.2 GFlops
 
 
I/O Subsystem
  • NERSC Global Filesystem (NGF) used for global homes, scratch, and project
  • NGF scratch is shared with Carver and Hopper and has 873 TB of disk space, with a peak I/O bandwidth of 15 GB/s

User Environment
  • Compilers
    • Portland Group Fortran, C, and C++
    • GNU Fortran, C, and C++
  • Programming Models
    • MPI
    • Threads
    • OpenMP
  • Math Libraries
    • ACML (AMD Core Math Library)
    • PETSc
  • Development and Performance Tools
    • PAPI Performance API
    • IPM (Integrated Performance Monitoring)
  • NERSC-Provided Applications
    • Science domain applications for chemistry and materials science
    • I/O libraries
    • Visualization and analysis tools
    • Grid and HPSS software

Usability Features
  • Fully featured Linux OS