
NERSC Computational Systems

 

System Name | System Type | CPU Type | CPU Speed (GHz) | Nodes | SMP Size (cores/node) | Total Cores | Flops per Core (Gflops/sec) | Peak Performance (Tflops/sec) | Aggregate Memory | Avg. Memory/Core | Node Interconnect | Scratch Disk | Avg. Power (kW)
Edison | Cray XC30 | Intel Ivy Bridge | 2.4 | 5,576 | 24 | 133,824 | 19.2 | 2,569.4 | 357 TB | 2.67 GB | Aries | 7.56 PB (local) + 3.9 PB (global) | 1,600
Hopper | Cray XE6 | Opteron | 2.1 | 6,384 | 24 | 153,216 | 8.4 | 1,287.0 | 211.5 TB | 1.41 GB | Gemini | 2.2 PB (local) + 3.9 PB (global) | 2,200
Carver | IBM iDataPlex | Nehalem, Westmere, Nehalem-EX | 2.67, 2.00 | 1,202 | 8, 12, 32 | 9,984 | 10.68, 8.00 | 106.5 | 35.75 TB | 3.67 GB | QDR InfiniBand | 3.9 PB (global) | 266
PDSF 1) | Linux Cluster | Opteron, Xeon | 2.0, 2.27, 2.33, 2.67 | 232 | 8, 12, 16 | 2,632 | 8.0, 9.08, 9.32, 10.68 | 17.6 | 9.5 TB | 4 GB | Ethernet / InfiniBand | 34.9 TB (batch nodes) + 184 GB (interactive nodes) | -
Genepool 2) | Various vendor systems | Nehalem, Opteron | 2.27, 2.67 | 547 | 8, 24, 32, 80 | 4,680 | 9.08, 10.68 | 42.8 | 33.7 TB | 7.36 GB | Ethernet | 3.9 PB (global) | -

1)  PDSF is a special-use system hosted by NERSC for the High Energy Physics and Nuclear Science community.

2)  Genepool is a cluster dedicated to the DOE Joint Genome Institute's computing needs. 
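The peak-performance column above is simply the product of total cores and per-core Gflops/sec, scaled to Tflops (for systems with mixed CPU types, the listed value aggregates over the different node pools). A minimal Python sketch of that arithmetic, using Edison's and Hopper's figures from the table; the function name is illustrative and not part of any NERSC tooling:

    def peak_tflops(total_cores: int, gflops_per_core: float) -> float:
        """Peak performance in Tflops/sec: total cores x per-core Gflops/sec, / 1,000."""
        return total_cores * gflops_per_core / 1000.0

    # Figures taken from the table above.
    print(peak_tflops(133824, 19.2))   # Edison: ~2,569.4 Tflops/sec
    print(peak_tflops(153216, 8.4))    # Hopper: ~1,287.0 Tflops/sec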

See the NERSC computational systems pages for more information.