
New Features of the Edison XC30 - Differences from Hopper

While the Edison and Hopper systems have similar programming environments and software, there are some key architectural differences between the two systems. This page describes those differences.

Compute Nodes

Edison Phase I has a total of 16 cores on each compute node, compared to Hopper's 24. Edison, like Hopper, has two sockets on each compute node, but instead of four "NUMA" memory domains, Edison has only two. Edison uses Intel processors, while Hopper uses AMD processors. Edison's processors have Intel Hyper-Threading (HT) enabled, which means you can run with 32 virtual cores per node. At run time you can decide to run with 16 cores per node (the default setting) or 32 virtual cores per node.

Edison (Phase 1):
  - 16 cores per node (32 virtual cores with Hyper-Threading); dual-socket node with 8-core Intel Xeon "Sandy Bridge" processors @ 2.6 GHz
  - 64 GB memory per node (DDR3 1600 MHz memory)
  - 664 compute nodes

Hopper:
  - 24 cores per node; dual-socket node with 12-core AMD Opteron processors @ 2.1 GHz
  - 32 GB memory per node (DDR3 1333 MHz memory)
  - 6,384 compute nodes
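As a quick way to see the effect of Hyper-Threading, the short C sketch below (illustrative only, not part of the Edison or Hopper software) asks the operating system how many logical CPUs a node exposes; on an Edison compute node with Hyper-Threading enabled this should report 32, even though the node has 16 physical cores.

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Illustrative sketch: report how many logical CPUs the OS exposes.
         * With Hyper-Threading enabled this is expected to be 32 on an Edison
         * compute node (16 physical cores), and 24 on a Hopper compute node. */
        long logical_cpus = sysconf(_SC_NPROCESSORS_ONLN);

        if (logical_cpus < 0) {
            perror("sysconf");
            return 1;
        }
        printf("Logical CPUs visible on this node: %ld\n", logical_cpus);
        return 0;
    }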

Aries Interconnect

Edison uses Cray's new Aries interconnect for inter-node communication, while Hopper uses the Cray Gemini network. Aries provides a higher bandwidth, lower latency interconnect than Gemini, and should exhibit reduced network congestion. Edison's Aries network is connected through a new "Dragonfly" topology, compared to Hopper's torus network. See the Technology section for more details.
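One simple way to compare the two networks yourself is a point-to-point ping-pong test. The C/MPI sketch below is illustrative only (the 1 MiB message size and repetition count are arbitrary choices); run it with one rank on each of two nodes to get a rough measure of inter-node latency and bandwidth on either machine.

    /* Illustrative MPI ping-pong between ranks 0 and 1. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        const int nbytes = 1 << 20;   /* 1 MiB message (arbitrary choice) */
        const int reps   = 100;
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size < 2) {
            if (rank == 0) fprintf(stderr, "Run with at least 2 ranks.\n");
            MPI_Finalize();
            return 1;
        }

        char *buf = malloc(nbytes);

        double t0 = MPI_Wtime();
        for (int i = 0; i < reps; i++) {
            if (rank == 0) {
                MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0) {
            double round_trip = (t1 - t0) / reps;          /* seconds per round trip */
            double bw = 2.0 * nbytes / round_trip / 1e9;   /* GB/s, counting both directions */
            printf("avg round trip: %g s, effective bandwidth: %g GB/s\n", round_trip, bw);
        }

        free(buf);
        MPI_Finalize();
        return 0;
    }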

External Login Nodes

Like Hopper, the Edison system has login nodes that are "external" to the main compute portion of the system. The login nodes on Edison have 512 GB of memory, compared to 128 GB on Hopper.

Summary

Edison (Phase I):
  - Login nodes: 6 quad-socket, quad-core nodes (16 cores per node; 32 virtual cores with Hyper-Threading), Intel Xeon "Sandy Bridge" @ 2.0 GHz
  - 512 GB memory per login node
  - Ability to log in while the system is undergoing maintenance
  - Ability to access the /scratch, /project and /home file systems while the system is undergoing maintenance
  - Ability to submit jobs while the system is undergoing maintenance; jobs are managed by a centralized, external queuing system
  - 1 scratch file system ($SCRATCH): 1.6 PB, 35 GB/sec I/O bandwidth

Hopper:
  - Login nodes: 12 quad-socket, quad-core nodes (16 cores per node), AMD Opteron @ 2.0 GHz
  - 128 GB memory per login node
  - Ability to log in while the system is undergoing maintenance
  - Ability to access the /scratch, /project and /home file systems while the system is undergoing maintenance
  - Ability to submit jobs while the system is undergoing maintenance; jobs are forwarded to the main system when it returns from maintenance
  - 2 scratch file systems, each 1 PB, 35 GB/sec I/O bandwidth

File Systems

The Edison file systems, $SCRATCH, $GSCRATCH, /project and $HOME, do not rely on the main system being available. This allows you to access your data when the system is down for maintenance. $SCRATCH is a Lustre file system private to Edison. Two additional scratch file systems will be added when the Phase II system is delivered.
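Because $SCRATCH, $GSCRATCH and $HOME are provided as environment variables, applications can locate the file systems at run time instead of hard-coding paths. The C sketch below is illustrative only (the output file name is made up); it writes a small file under $SCRATCH, falling back to the current directory if the variable is not set.

    /* Illustrative sketch: write an output file under the Lustre scratch
     * file system located via the $SCRATCH environment variable. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const char *scratch = getenv("SCRATCH");   /* value is set by the system environment */
        char path[4096];

        if (scratch == NULL) {
            scratch = ".";   /* fall back to the current working directory */
        }
        snprintf(path, sizeof(path), "%s/example_output.dat", scratch);

        FILE *fp = fopen(path, "w");
        if (fp == NULL) {
            perror("fopen");
            return 1;
        }
        fprintf(fp, "hello from the scratch file system\n");
        fclose(fp);
        printf("Wrote %s\n", path);
        return 0;
    }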