
Table of NERSC Resources

NERSC Computational Systems

 

Edison
  System Type: Cray XC30
  CPU Type: Intel Ivy Bridge
  CPU Speed: 2.4 GHz
  Nodes: 5,586
  Cores per Node: 24
  Total Cores: 134,064
  Flops per Core: 19.2 Gflops/sec
  Peak Performance: 2,574 Tflops/sec
  Aggregate Memory: 357 TB
  Avg. Memory per Core: 2.67 GB
  Node Interconnect: Aries
  Scratch Disk: 7.6 PB (local) + 28 PB (global)
  Avg. Power: 1,900 kW

Cori
  System Type: Cray XC40 (two partitions: Intel Haswell and Intel KNL)
  Haswell partition:
    CPU Speed: 2.3 GHz
    Nodes: 2,388
    Cores per Node: 32
    Total Cores: 76,416
    Flops per Core: 36.8 Gflops/sec
    Peak Performance: 2,812 Tflops/sec
    Aggregate Memory: 298.5 TB
    Avg. Memory per Core: 4 GB
  KNL partition:
    CPU Speed: 1.4 GHz
    Nodes: 9,688
    Cores per Node: 68
    Total Cores: 658,784
    Flops per Core: 44.8 Gflops/sec
    Peak Performance: 29,514 Tflops/sec
    Aggregate Memory: 1.03 PB
    Avg. Memory per Core: DDR 1.41 GB; MCDRAM 0.23 GB
  Node Interconnect: Aries
  Scratch Disk: 28 PB (global)
  Avg. Power: 4,200 kW

PDSF (1)
  System Type: Linux cluster
  CPU Type: Intel Sandy Bridge, Ivy Bridge, and Haswell
  CPU Speed: 2.6, 2.5, and 2.3 GHz
  Nodes: 111
  Cores per Node: 16, 20, or 32
  Total Cores: 2,304
  Flops per Core: Sandy Bridge 21, Ivy Bridge 20, Haswell 37 Gflops/sec
  Peak Performance: 62 Tflops/sec
  Aggregate Memory: 8.8 TB
  Avg. Memory per Core: 3.8 GB
  Node Interconnect: InfiniBand
  Scratch Disk: 200 - 600 GB per node
  Avg. Power: -

Genepool (2)
  System Type: Various vendor systems
  CPU Type: Nehalem and Opteron
  CPU Speed: 2.27 and 2.67 GHz
  Nodes: 547
  Cores per Node: 8, 24, 32, or 80
  Total Cores: 4,680
  Flops per Core: 9.08 and 10.68 Gflops/sec
  Peak Performance: 42.8 Tflops/sec
  Aggregate Memory: 33.7 TB
  Avg. Memory per Core: 7.36 GB
  Node Interconnect: Ethernet
  Scratch Disk: 3.9 PB (global)
  Avg. Power: -

(1) PDSF is a special-use system hosted by NERSC for the High Energy Physics and Nuclear Science community.

(2) Genepool is a cluster dedicated to the DOE Joint Genome Institute's computing needs.
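The Peak Performance figures follow directly from the Total Cores and Flops per Core entries. The short Python check below is purely illustrative (it is not a NERSC tool; the labels are just names for the rows above) and reproduces the Edison and Cori numbers:

    # Peak performance (Tflops/sec) = total cores x per-core peak (Gflops/sec) / 1000
    systems = {
        # name: (total cores, peak Gflops/sec per core)
        "Edison (Ivy Bridge)": (134_064, 19.2),
        "Cori (Haswell)":      (76_416, 36.8),
        "Cori (KNL)":          (658_784, 44.8),
    }

    for name, (cores, gflops_per_core) in systems.items():
        peak_tflops = cores * gflops_per_core / 1000.0  # Gflops -> Tflops
        print(f"{name:20s} ~ {peak_tflops:,.0f} Tflops/sec peak")

PDSF and Genepool mix several CPU generations, so their quoted peaks presumably sum the same product over each node type.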

Get more information on the NERSC computational systems page.

NERSC Data Storage Resources

Global Homes
  Path: $HOME
  Type: GPFS
  Peak Performance: Not intended for I/O-intensive jobs
  Default Quota: 40 GB, 1,000,000 inodes
  Backed Up: Yes
  Purge Policy: Not purged

Project
  Path: /project/projectdirs/projectname
  Type: GPFS
  Peak Performance: 130 GB/sec
  Default Quota: 1 TB, 1,000,000 inodes
  Backed Up: No
  Purge Policy: Not purged

Common
  Path: /global/common/software/projectname
  Type: GPFS
  Peak Performance: Intended for software stacks
  Default Quota: 10 GB, 1,000,000 inodes
  Backed Up: No
  Purge Policy: Not purged

Edison local scratch
  Path: $SCRATCH (on Edison)
  Type: Lustre
  Peak Performance: 168 GB/sec (across 3 file systems)
  Default Quota: 10 TB, 5,000,000 inodes
  Backed Up: No
  Purge Policy: Files not accessed for 8 weeks are deleted

Cori local scratch
  Path: $SCRATCH (on Cori), $CSCRATCH (from other systems)
  Type: Lustre
  Peak Performance: 700 GB/sec
  Default Quota: 20 TB, 10,000,000 inodes
  Backed Up: No
  Purge Policy: Files not accessed for 12 weeks are deleted

Cori Burst Buffer
  Path: $DW_JOB_STRIPED, $DW_PERSISTENT_STRIPED_XXX
  Type: DataWarp
  Peak Performance: 1.7 TB/s, 28M IOPS
  Default Quota: None
  Backed Up: No
  Purge Policy: Data is deleted at the end of every job, or at the end of the lifetime of the persistent reservation

Archive (HPSS)
  Path: Typically accessed via hsi or htar from within NERSC
  Type: HPSS
  Peak Performance: 1 GB/s to disk cache
  Default Quota: Allocation dependent
  Backed Up: No
  Purge Policy: Not purged
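Because the archive has no conventional mount point, it is normally driven from the command line with hsi or htar. The sketch below is a minimal, hypothetical example (the directory and archive names are made up for illustration); it simply wraps htar in Python, and the same htar commands can equally be typed at a shell prompt on a NERSC system.

    import subprocess

    results_dir = "my_results"        # hypothetical local directory to archive
    archive_name = "my_results.tar"   # name of the archive file created in HPSS

    # Create the archive in HPSS (-c), verbosely (-v), with the given file name (-f).
    subprocess.run(["htar", "-cvf", archive_name, results_dir], check=True)

    # Later, list the archive's contents (-t) or extract it (-x) from HPSS.
    subprocess.run(["htar", "-tvf", archive_name], check=True)
    subprocess.run(["htar", "-xvf", archive_name], check=True)

hsi offers similar interactive access (put, get, ls) for individual files.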

For more details on how best to use these file systems, see the NERSC Data Storage Resources page.