

NERSC's newest supercomputer, named Edison after U.S. inventor and businessman Thomas Alva Edison, has a peak performance of 2.57 petaflops/sec, 133,824 compute cores for running scientific applications, 357 terabytes of memory, and 7.56 petabytes of online disk storage with a peak I/O bandwidth of 168 gigabytes (GB) per second. The product is known as a Cray XC30 (internal name "Cascade"), and the NERSC acquisition project is known as "NERSC 7."

System Overview

            Cray XC30 supercomputer
            Peak performance 2.57 Petaflops/sec
            Sustained application performance on NERSC SSP codes: 293 Tflop/s (vs. 144 Tflop/s for Hopper)
            5,576 compute nodes, 133,824 cores in total
            Cray Aries high-speed interconnect with Dragonfly topology (0.25 μs to 3.7 μs MPI latency, ~8 GB/sec MPI bandwidth)
            Aggregate memory: 357 TB
            Scratch storage capacity: 7.56 PB

System Details

Category Quantity Description
Cabinets 30 Each cabinet has 3 chassis; each chassis has 16 compute blades; each compute blade has 4 dual-socket nodes
Compute nodes 5576 Two 12-core Intel "Ivy Bridge" processors at 2.4 GHz per node
    Each node has two sockets; each socket is populated with a 12-core Intel "Ivy Bridge" processor, for 24 cores per node
    Each core can run 1 or 2 user threads (Hyper-Threading) and has a 256-bit-wide vector unit
    19.2 Gflops/core; 460.8 Gflops/node; 2.57 Petaflops for the entire machine
    Each node has 64 GB DDR3 1866 MHz memory (four 8 GB DIMMs per socket)
    Each core has its own L1 and L2 caches of 64 KB (32 KB instruction, 32 KB data) and 256 KB, respectively; a 30 MB L3 cache is shared among the 12 cores of each "Ivy Bridge" processor
    Cache bandwidth per core: L1/L2/L3 = 100/40/23 GB/s
    STREAM TRIAD bandwidth per node = 103 GB/s (see the sketch after this table)
Interconnect   Cray Aries with Dragonfly topology; 23.7 TB/s global bandwidth
Login nodes 12 Quad-core, quad-socket (16 total cores) 2.0 GHz Intel "Sandy Bridge" processors with 512 GB memory
MOM nodes 24 Repurposed compute nodes
Shared Root Server Nodes 32  
Lustre Router nodes 26  
DVS Server Nodes 16  
RSIP nodes 8  
Scratch storage system   Cray Sonexion 1600 Lustre appliance. Maximum aggregate bandwidth: 168 GB/sec
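
The STREAM TRIAD number in the table above is per-node memory bandwidth, separate from the 168 GB/sec aggregate I/O bandwidth of the scratch filesystem. As a rough illustration of how such a figure is obtained, the following is a minimal TRIAD-style loop in C with OpenMP; it is a hypothetical sketch, not the official STREAM benchmark or a NERSC-supplied code, and the array size is simply an assumption chosen to be far larger than the 30 MB L3 cache.

/* Minimal TRIAD-style memory bandwidth sketch (hypothetical; not the official
 * STREAM benchmark). Measures a[i] = b[i] + s*c[i] across all cores of a node. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N 80000000L          /* ~1.9 GB over three arrays: assumption, >> 30 MB L3 */
#define NTRIES 10

int main(void)
{
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));
    const double s = 3.0;
    double best = 1e30;

    /* First-touch initialization so pages are spread across both sockets. */
    #pragma omp parallel for
    for (long i = 0; i < N; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

    for (int k = 0; k < NTRIES; k++) {
        double t0 = omp_get_wtime();
        #pragma omp parallel for
        for (long i = 0; i < N; i++)
            a[i] = b[i] + s * c[i];          /* TRIAD kernel */
        double t = omp_get_wtime() - t0;
        if (t < best) best = t;
    }

    /* TRIAD moves 3 doubles (24 bytes) per iteration. */
    printf("TRIAD bandwidth: %.1f GB/s\n", 24.0 * N / best / 1e9);

    free(a); free(b); free(c);
    return 0;
}

On a Cray system a code like this would typically be built with the cc compiler wrapper and run on a compute node with aprun, using one thread per core.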


System Software

Category Software Name Description
Operating System CNL on compute nodes Compute nodes run a lightweight kernel and run-time environment based on the SuSE Linux Enterprise Server (SLES) Linux distribution.
 Full SUSE Linux on login nodes External login nodes run a standard SLES distribution, similar to that on the internal service nodes.
Batch System Torque/Moab  

Compute nodes

Each Edison compute node contains two 12-core Intel "Ivy Bridge" processors running at 2.4 GHz.
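
The per-core, per-node, and system peak figures quoted in the System Details table follow directly from the clock rate and the core counts. The short C snippet below works through that arithmetic as a sanity check; the value of 8 double-precision flops per cycle per core is an assumption based on the 256-bit AVX units in "Ivy Bridge" (4 adds plus 4 multiplies issued per cycle), not a number stated elsewhere on this page.

/* Peak-flops arithmetic for Edison (sketch). Assumes 8 double-precision flops
 * per cycle per core: 256-bit AVX, 4 adds + 4 multiplies issued per cycle. */
#include <stdio.h>

int main(void)
{
    const double clock_ghz       = 2.4;   /* "Ivy Bridge" core clock */
    const double flops_per_cycle = 8.0;   /* assumption stated above */
    const int    cores_per_node  = 24;    /* 2 sockets x 12 cores */
    const int    nodes           = 5576;

    double gflops_per_core = clock_ghz * flops_per_cycle;        /* 19.2  */
    double gflops_per_node = gflops_per_core * cores_per_node;   /* 460.8 */
    double system_pflops   = gflops_per_node * nodes / 1.0e6;    /* ~2.57 */

    printf("%.1f Gflops/core, %.1f Gflops/node, %.2f Pflops system\n",
           gflops_per_core, gflops_per_node, system_pflops);
    return 0;
}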


Interconnect

Edison employs the "Dragonfly" topology for its interconnection network. In this topology, groups of interconnected local routers are linked to other similar router groups by high-speed global links, and the groups are arranged such that data transfer from one group to another requires traversing only one global link. The network is built from circuit boards, copper cables, and optical cables. Routers (represented by the Aries ASIC) are connected to other routers in the same chassis via a backplane.
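
The MPI latency and bandwidth figures quoted in the System Overview (0.25 μs to 3.7 μs latency, ~8 GB/sec bandwidth) are the sort of numbers usually obtained with a two-rank ping-pong microbenchmark. The following is a minimal, hypothetical sketch in C, not a NERSC- or Cray-provided benchmark; a real measurement would pin the two ranks to specific nodes so that intra-group and inter-group (global link) paths through the Dragonfly network can be compared.

/* Minimal MPI ping-pong sketch (hypothetical; not a NERSC- or Cray-provided
 * benchmark). Rank 0 and rank 1 bounce messages of increasing size and report
 * one-way latency and bandwidth. Run with at least two ranks. */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    const int    reps  = 1000;
    const size_t large = 4 * 1024 * 1024;   /* largest message: 4 MiB */
    int rank;
    char *buf = malloc(large);

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (size_t bytes = 8; bytes <= large; bytes *= 2) {
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < reps; i++) {
            if (rank == 0) {
                MPI_Send(buf, (int)bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, (int)bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, (int)bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, (int)bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double one_way = (MPI_Wtime() - t0) / (2.0 * reps);
        if (rank == 0)
            printf("%8zu bytes: latency %8.2f us, bandwidth %6.2f GB/s\n",
                   bytes, one_way * 1e6, bytes / one_way / 1e9);
    }

    MPI_Finalize();
    free(buf);
    return 0;
}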