New Features of the Hopper XE6 - Differences from Franklin
While the Franklin and Hopper systems have similar programming environments and user software, there are some key architectural differences between the two systems. This page describes those differences and how they may improve your productivity.
More Cores per Node and Multiple Sockets per Node
Hopper has a total of 24 cores on each node. With more cores per node, you may want to explore adding OpenMP to your applications. Hopper also has two sockets on each compute node, whereas Franklin only has one. Please see the Hopper multi-core FAQ page for a discussion of effectively using all 24 cores per node, and the Hopper configuration page for more system specification details.
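As a generic illustration (not Hopper-specific code), the sketch below shows a minimal OpenMP parallel region in C; the thread count is left to the OMP_NUM_THREADS environment variable at run time.

```c
#include <stdio.h>
#include <omp.h>

int main(void)
{
    /* Each thread reports its ID; on a 24-core Hopper node up to 24
       threads could run, with the count set via OMP_NUM_THREADS. */
    #pragma omp parallel
    {
        printf("Hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}
```

On Hopper such a code would typically be built with the Cray compiler wrappers and launched with aprun; see the multi-core FAQ for recommended process and thread layouts.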
External Login Nodes
The Hopper system has login nodes that are "external" to the main compute portion of the system. This differs from Franklin, where the login nodes are part of the main system. The login nodes on Hopper are more powerful than those on Franklin and provide several additional capabilities.
| Hopper login nodes | Franklin login nodes |
|---|---|
| 12 quad-socket, quad-core nodes (16 cores per node) | 10 single-socket, dual-core nodes (2 cores per node) |
| 128 GB memory per login node with swap space | 4 GB memory per login node, no swap space |
| Ability to log in while the main system is undergoing maintenance | N/A |
| Ability to access the /scratch, /project and /home file systems while the main system is undergoing maintenance | N/A |
| Ability to submit jobs while the main system is undergoing maintenance (jobs are forwarded to the main system when it returns from maintenance) | N/A |
External File Systems
The Hopper file systems (/scratch, /scratch2, /gscratch, /project, and /home) are external to the main system, which allows users to access data when the system is down for maintenance. See the file system configuration page for more details and the I/O optimization page for information about tuning your code to get the best I/O performance.
Support for Dynamic and Shared Libraries
The Hopper system supports system-provided shared libraries on the compute nodes through a mechanism that forwards a more fully featured Linux environment, like that on the login nodes, to the compute nodes. The forwarding is enabled by Cray's Data Virtualization Service (DVS), an I/O forwarding mechanism. Any Cray-provided software library that is available on the login nodes should have a shared library version that can be run on the compute nodes. Software provided by NERSC staff will not yet run through the DVS mechanism; however, users can still use their own shared libraries on Hopper.
Please see the Dynamic and Shared Libraries documentation page for more information.
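As a generic illustration of using your own shared library at run time (not specific to the DVS mechanism), the sketch below loads a hypothetical library, libmylib.so, with dlopen and calls a function from it; both the library name and the function name are assumptions.

```c
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    /* Hypothetical library; substitute the path to your own .so file. */
    void *handle = dlopen("./libmylib.so", RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Look up a function named "compute" (assumed to exist in the library). */
    double (*compute)(double) = (double (*)(double)) dlsym(handle, "compute");
    if (compute)
        printf("compute(2.0) = %f\n", compute(2.0));

    dlclose(handle);
    return 0;
}
```

The exact build flags (for example, -fPIC and -shared when creating the library, and -ldl when linking the loader) depend on your compiler environment; consult the documentation page linked above for the Hopper-specific details.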
Gemini Interconnect
Hopper uses the Gemini interconnect for inter-node communication, while Franklin uses the SeaStar interconnect. Besides being a higher-bandwidth, lower-latency interconnect, the Gemini network offers a number of resiliency features such as adaptive routing. For end users, this means that an interconnect component can fail while the Hopper system remains up and applications continue to run.