
Table of Current Computational Systems

NERSC Computational Systems

 

Edison
    System Type:        Cray XC30
    CPU Type:           Intel Ivy Bridge
    CPU Speed:          2.4 GHz
    Nodes:              5,576
    SMP Size:           24 cores/node
    Total Cores:        133,824
    Flops per Core:     19.2 Gflops/sec
    Peak Performance:   2569.4 Tflops/sec
    Aggregate Memory:   357 TB
    Avg. Memory/Core:   2.67 GB
    Node Interconnect:  Aries
    Scratch Disk:       7.56 PB (local) + 3.9 PB (global)
    Avg. Power:         1,600 kW

Cori Phase 1
    System Type:        Cray XC40
    CPU Type:           Intel Haswell
    CPU Speed:          2.1 GHz
    Nodes:              1,630
    SMP Size:           32 cores/node
    Total Cores:        52,160
    Flops per Core:     18.4 Gflops/sec
    Peak Performance:   1966.1 Tflops/sec
    Aggregate Memory:   203 TB
    Avg. Memory/Core:   4 GB
    Node Interconnect:  Aries
    Scratch Disk:       30 PB
    Avg. Power:         -

PDSF (1)
    System Type:        Linux Cluster
    CPU Type:           Opteron, Xeon
    CPU Speed:          2.0, 2.27, 2.33, 2.67 GHz
    Nodes:              232
    SMP Size:           8, 12, 16 cores/node
    Total Cores:        2,632
    Flops per Core:     8.0, 9.08, 9.32, 10.68 Gflops/sec
    Peak Performance:   17.6 Tflops/sec
    Aggregate Memory:   9.5 TB
    Avg. Memory/Core:   4 GB
    Node Interconnect:  Ethernet / InfiniBand
    Scratch Disk:       34.9 TB for batch nodes and 184 GB for interactive nodes
    Avg. Power:         -

Genepool (2)
    System Type:        Various vendor systems
    CPU Type:           Nehalem, Opteron
    CPU Speed:          2.27, 2.67 GHz
    Nodes:              547
    SMP Size:           8, 24, 32, 80 cores/node
    Total Cores:        4,680
    Flops per Core:     9.08, 10.68 Gflops/sec
    Peak Performance:   42.8 Tflops/sec
    Aggregate Memory:   33.7 TB
    Avg. Memory/Core:   7.36 GB
    Node Interconnect:  Ethernet
    Scratch Disk:       3.9 PB (global)
    Avg. Power:         -

(1) PDSF is a special-use system hosted by NERSC for the High Energy Physics and Nuclear Science community.

(2) Genepool is a cluster dedicated to the DOE Joint Genome Institute's computing needs.
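
The derived columns above are related by simple arithmetic: Total Cores = Nodes × SMP Size, Peak Performance = Total Cores × Flops per Core, and Avg. Memory/Core = Aggregate Memory ÷ Total Cores. The short Python sketch below checks these relationships against the Edison row; it assumes decimal unit prefixes (1 TB = 1,000 GB), and exact agreement is only expected for rows with a single CPU type (the PDSF and Genepool figures mix several node flavors).

    # Check the derived columns of the Edison row using values
    # copied from the table above.
    nodes = 5_576              # Edison compute nodes
    smp_size = 24              # cores per node
    flops_per_core = 19.2      # Gflops/sec per core
    aggregate_memory_tb = 357  # total memory, in TB

    total_cores = nodes * smp_size                                # cores
    peak_tflops = total_cores * flops_per_core / 1_000            # Gflops -> Tflops
    mem_per_core_gb = aggregate_memory_tb * 1_000 / total_cores   # TB -> GB (decimal)

    print(total_cores)                 # 133824  (Total Cores column)
    print(round(peak_tflops, 1))       # 2569.4  (Peak Performance column)
    print(round(mem_per_core_gb, 2))   # 2.67    (Avg. Memory/Core column)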

More information is available on the NERSC computational systems page.