Configuration
NERSC's newest supercomputer, named Edison after U.S. inventor and businessman Thomas Alva Edison, will have a peak performance of more than 2 petaflops (PF, or 10^15 floating point operations per second) when fully installed in 2013. The integrated storage system will provide more than 6 petabytes (PB) of capacity with a peak I/O bandwidth of 140 gigabytes per second (GB/sec). The product is known as a Cray XC30 (internal name "Cascade"), and the NERSC acquisition project is known as "NERSC 7."
Edison will be installed in two phases.
Phase I
Installation: 4Q 2012
Early User Access: Began in February 2013; all users were enabled on March 2, 2013.
System Overview
- Cray Cascade supercomputer
- 664 compute nodes with 64 GB memory per node
- Two 8-core Intel "Sandy Bridge" processors per node (16 cores per node)
- 10,624 total physical compute cores (see the back-of-envelope check after this list)
- Cray Aries high-speed interconnect (0.25 μs to 3.7 μs MPI latency, ~8 GB/sec MPI bandwidth)
- Scratch storage capacity: 1.62 PB
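The core count and memory figures above can be sanity-checked with a few lines of arithmetic. The sketch below also estimates a theoretical peak, assuming 8 double-precision flops per core per cycle for Sandy Bridge's AVX units; that flops-per-cycle figure is an assumption, not a number from this page, and delivered performance will be lower.

```python
# Back-of-envelope check of the Phase I compute partition.
# Assumption (not from this page): a Sandy Bridge core can retire
# 8 double-precision flops per cycle (256-bit AVX add + multiply).
nodes = 664
sockets_per_node = 2
cores_per_socket = 8
clock_ghz = 2.6
flops_per_cycle = 8                     # assumed AVX figure

cores = nodes * sockets_per_node * cores_per_socket
peak_tflops = cores * clock_ghz * flops_per_cycle / 1e3   # Gflop/s -> Tflop/s
memory_tb = nodes * 64 / 1e3            # 64 GB per node, decimal TB

print(f"cores:            {cores:,}")              # 10,624, matching the list
print(f"peak (estimate):  {peak_tflops:,.0f} Tflop/s")
print(f"aggregate memory: {memory_tb:.1f} TB")
```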
System Details
- Compute processor: 8-core Intel "Sandy Bridge" at 2.6 GHz
- Compute node: dual-socket Sandy Bridge with 64 GB DDR3 1600 MHz memory (8 GB DIMMs)
- Compute blade: 4 dual-socket nodes
- Number of compute nodes: 664
- "MOM" nodes (execute job scripts): 8 repurposed compute nodes
- High speed interconnect: Cray Aries with Dragonfly topology
- Scratch storage system: Cray Sonexion 1600 Lustre appliance
- Scratch storage maximum bandwidth: 36 GB/sec (see the I/O estimate after this list)
- Login nodes: quad-socket nodes with quad-core 2.0 GHz Intel "Sandy Bridge" processors (16 cores total) and 512 GB memory
- Number of login nodes: 6
- Shared root server nodes: 8
- Lustre router nodes: 7
- DVS server nodes (for interface with NERSC Global File System): 16
- External gateway (network nodes): 4 nodes with two dual-port 10 GigE interfaces per node
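For a sense of scale of the Phase I scratch system, the sketch below estimates a lower bound on the time to write every compute node's memory to scratch. It optimistically assumes the 36 GB/sec peak bandwidth is sustained for the entire write, which real I/O patterns will not achieve.

```python
# Rough lower bound on the time to write all Phase I compute-node memory
# to scratch, assuming (optimistically) that the full 36 GB/sec peak
# bandwidth is sustained for the whole write.
nodes = 664
mem_per_node_gb = 64
scratch_bw_gbs = 36

total_gb = nodes * mem_per_node_gb      # ~42,496 GB (~42.5 TB)
seconds = total_gb / scratch_bw_gbs
print(f"{total_gb:,} GB at {scratch_bw_gbs} GB/sec -> "
      f"~{seconds / 60:.0f} minutes (lower bound)")
```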
Phase II
Installation: second half of 2013
System Overview
- Cray Cascade supercomputer
- Sustained application performance on NERSC SSP codes: 236 Tflop/s (vs. 144 Tflop/s for Hopper)
- Aggregate memory: 333 TB
- 5,200 compute nodes with 64 GB memory per node (cross-checked in the sketch after this list)
- Cray Aries high-speed interconnect (0.25 μs to 3.7 μs MPI latency, ~8 GB/sec MPI bandwidth)
- Scratch storage capacity: 6.4 PB
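Two of the Phase II overview figures can be cross-checked directly from the others, as the sketch below does. No peak-flops estimate is attempted because the Phase II processor is not specified here.

```python
# Cross-check of the Phase II overview figures using only numbers
# that appear in the list above.
nodes = 5200
mem_per_node_gb = 64

aggregate_tb = nodes * mem_per_node_gb / 1e3
print(f"aggregate memory: {aggregate_tb:.0f} TB")   # ~333 TB, matching the list

# Sustained SSP performance relative to Hopper.
print(f"SSP speedup vs. Hopper: {236 / 144:.2f}x")  # ~1.64x
```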
System Details
- Intel multicore processors
- Compute blade: 4 dual-socket nodes
- Number of compute nodes: 5,200
- "MOM" nodes (execute job scripts): 8 repurposed compute nodes
- High speed interconnect: Cray Aries with Dragonfly topology
- Scratch storage system: Cray Sonexion 1600 Lustre appliance
- Scratch storage maximum aggregate bandwidth: 140 GB/sec
- Login nodes: quad-socket nodes with quad-core 2.0 GHz Intel "Sandy Bridge" processors (16 cores total) and 512 GB memory
- Number of login nodes: 12
- Shared root server nodes: 8
- Lustre router nodes: 7
- DVS server nodes (for interface with NERSC Global File System): 16
- External gateway (network nodes): 4 nodes with two dual-port 10 GigE interfaces per node (see the estimates after this list)
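The sketch below repeats the checkpoint-style estimate for the Phase II scratch system and adds a rough aggregate figure for the external gateways. It assumes every 10 GigE port on the four gateway nodes can run at line rate simultaneously, an optimistic assumption rather than a measured number.

```python
# Phase II back-of-envelope I/O and networking figures.
# Assumptions (not from this page): peak scratch bandwidth is sustained,
# and every gateway 10 GigE port runs at line rate at the same time.
aggregate_mem_gb = 5200 * 64            # ~333 TB of compute-node memory
scratch_bw_gbs = 140

checkpoint_min = aggregate_mem_gb / scratch_bw_gbs / 60
print(f"full-memory write to scratch: ~{checkpoint_min:.0f} minutes (lower bound)")

# External gateways: 4 nodes x 2 dual-port 10 GigE interfaces = 16 ports.
ports = 4 * 2 * 2
external_gbit = ports * 10              # aggregate line rate in Gb/s
print(f"external 10 GigE aggregate: {external_gbit} Gb/s "
      f"(~{external_gbit / 8:.0f} GB/sec)")
```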


