
NERSC Computational Systems

 

Edison
  System Type: Cray XC30
  CPU Type: Intel Ivy Bridge
  CPU Speed (GHz): 2.4
  Nodes: 5,576
  SMP Size (cores/node): 24
  Total Cores: 133,824
  Flops per Core (Gflops/sec): 19.2
  Peak Performance (Tflops/sec): 2,569.4
  Aggregate Memory: 357 TB
  Avg. Memory per Core: 2.67 GB
  Node Interconnect: Aries
  Scratch Disk: 7.56 PB (local) + 3.9 PB (global)
  Avg. Power (kW): 1,600

Hopper
  System Type: Cray XE6
  CPU Type: Opteron
  CPU Speed (GHz): 2.1
  Nodes: 6,384
  SMP Size (cores/node): 24
  Total Cores: 153,216
  Flops per Core (Gflops/sec): 8.4
  Peak Performance (Tflops/sec): 1,287.0
  Aggregate Memory: 211.5 TB
  Avg. Memory per Core: 1.41 GB
  Node Interconnect: Gemini
  Scratch Disk: 2.2 PB (local) + 3.9 PB (global)
  Avg. Power (kW): 2,200

Carver
  System Type: IBM iDataPlex
  CPU Type: Nehalem, Westmere, Nehalem-EX
  CPU Speed (GHz): 2.67, 2.00
  Nodes: 1,202
  SMP Size (cores/node): 8, 12, 32
  Total Cores: 9,984
  Flops per Core (Gflops/sec): 10.68, 8.00
  Peak Performance (Tflops/sec): 106.5
  Aggregate Memory: 35.75 TB
  Avg. Memory per Core: 3.67 GB
  Node Interconnect: QDR InfiniBand
  Scratch Disk: 3.9 PB (global)
  Avg. Power (kW): 266

Dirac 1)
  System Type: NVIDIA GPUs on IBM iDataPlex
  CPU Type: Tesla C2050 (Fermi) and C1060 GPUs with Nehalem CPUs
  CPU Speed (GHz): 1.15 or 1.30 (GPUs), 2.4 (CPUs)
  Nodes: 56 GPU nodes on 50 CPU nodes
  SMP Size (cores/node): 448 or 240 for GPUs, 8 for CPUs
  Total Cores: 23,424 GPU cores and 400 CPU cores
  Flops per Core (Gflops/sec): 1.15 or 1.30 for GPU cores, 9.6 for CPU cores
  Peak Performance (Tflops/sec): 25.4 for GPUs, 3.8 for CPUs
  Aggregate Memory: 176 GB for GPUs, 1,344 GB for CPUs
  Avg. Memory per Core: 7.7 MB for GPUs, 3.36 GB for CPUs
  Node Interconnect: QDR InfiniBand
  Scratch Disk: 3.9 PB (global)
  Avg. Power (kW): -

PDSF 2)
  System Type: Linux Cluster
  CPU Type: Opteron, Xeon
  CPU Speed (GHz): 2.0, 2.27, 2.33, 2.67
  Nodes: 232
  SMP Size (cores/node): 8, 12, 16
  Total Cores: 2,632
  Flops per Core (Gflops/sec): 8.0, 9.08, 9.32, 10.68
  Peak Performance (Tflops/sec): 17.6
  Aggregate Memory: 9.5 TB
  Avg. Memory per Core: 4 GB
  Node Interconnect: Ethernet / InfiniBand
  Scratch Disk: 34.9 TB for batch nodes and 184 GB for interactive nodes
  Avg. Power (kW): -

Genepool 3)
  System Type: Various vendor systems
  CPU Type: Nehalem, Opteron
  CPU Speed (GHz): 2.27, 2.67
  Nodes: 547
  SMP Size (cores/node): 8, 24, 32, 80
  Total Cores: 4,680
  Flops per Core (Gflops/sec): 9.08, 10.68
  Peak Performance (Tflops/sec): 42.8
  Aggregate Memory: 33.7 TB
  Avg. Memory per Core: 7.36 GB
  Node Interconnect: Ethernet
  Scratch Disk: 3.9 PB (global)
  Avg. Power (kW): -

1) Dirac is an experimental testbed system and is not considered a NERSC production system.

2) PDSF is a special-use system hosted by NERSC for the High Energy Physics and Nuclear Science community.

3) Genepool is a cluster dedicated to the DOE Joint Genome Institute's computing needs.
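
Several of the figures above are derived quantities: Total Cores is Nodes times SMP Size, Peak Performance is Total Cores times Flops per Core, and Avg. Memory per Core is roughly Aggregate Memory divided by Total Cores. The following minimal Python sketch (illustrative only, using Edison's figures from the listing above) makes these relationships explicit:

    # Illustrative sketch: how the derived figures relate to the per-node
    # values, using Edison's numbers from the listing above.

    def total_cores(nodes, cores_per_node):
        # Total Cores = Nodes x SMP Size (cores per node)
        return nodes * cores_per_node

    def peak_tflops(cores, gflops_per_core):
        # Peak Performance (Tflops/sec) = Total Cores x Flops per Core (Gflops/sec) / 1000
        return cores * gflops_per_core / 1000.0

    def avg_gb_per_core(aggregate_memory_tb, cores):
        # Avg. Memory per Core (GB) ~= Aggregate Memory (TB) * 1000 / Total Cores
        # (rounding and TB-to-GB conventions in the listing may vary slightly)
        return aggregate_memory_tb * 1000.0 / cores

    # Edison: 5,576 nodes x 24 cores/node, 19.2 Gflops/sec per core, 357 TB memory
    cores = total_cores(5576, 24)
    print(cores)                                   # 133824
    print(round(peak_tflops(cores, 19.2), 1))      # 2569.4
    print(round(avg_gb_per_core(357, cores), 2))   # 2.67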

For more information, see the NERSC computational systems pages.