NERSC: Powering Scientific Discovery Since 1974

Login Nodes

Login Node Quick Facts

  • When you ssh to hopper.nersc.gov, you are connecting to a "login node."
  • Login nodes are used to edit files, compile codes, and submit job scripts to the batch system to run on the "compute nodes."
  • Hopper has 12 login nodes (this is largely transparent to users).
  • 4 quad-core AMD 2.4 GHz Opteron processors (16 cores total) on 8 of the login nodes.
  • 4 8-core AMD 2.0 GHz Opteron processors (32 cores total) on 4 of the login nodes.
  • Each login node has 128 GB of memory.
  • The login nodes sit behind a load balancer. New connections are assigned to a login node in a round-robin fashion, with one exception: if you have connected to Hopper recently, the load balancer will attempt to put you on the same login node as your previous connection.
  • The login nodes are external to the main Cray XE6 system so you can log in and work with data and submit jobs when the compute portion of Hopper is undergoing maintenance.
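Because the load balancer may or may not return you to the same login node, it can be useful to check which node a given session landed on. A minimal check, using the standard `hostname` command:

```shell
# Print the name of the login node this session is connected to.
hostname
```

If you start a long-running screen or editor session, noting the hostname lets you reconnect to the same node later.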

Process Limits and Appropriate Use of Login Nodes

Hopper does not have process limits, but please be considerate of other users: do not run compute-intensive or large-memory jobs on the login nodes. CPU- and memory-intensive applications should be run on the compute nodes and submitted through the batch system. The following applications are popular on the login nodes and can be CPU and memory intensive:

  • IDL
  • Matlab
  • NCL
  • python

Launching any of the above applications for a short time (< 1 hour) on a small dataset should not cause any problems for other users on the login nodes. If you need to run any of these applications for an extended period of time on large datasets, please run them in the batch queues. Cluster Compatibility Mode (CCM) on Hopper supports IDL, Matlab, and NCL.
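For example, a long Matlab run can be moved from a login node into the batch system with a CCM job script along these lines. This is a sketch only: the queue name, module names, resource requests, and script name are illustrative assumptions and should be checked against the current Hopper batch documentation.

```shell
#PBS -q ccm_queue          # CCM batch queue (queue name assumed)
#PBS -l mppwidth=24        # request one Hopper compute node (assumed node width)
#PBS -l walltime=02:00:00

cd $PBS_O_WORKDIR
module load ccm matlab     # module names assumed
# my_analysis is a placeholder for your own Matlab script
ccmrun matlab -nodisplay -r "my_analysis; exit"
```

Submit the script with `qsub`; the work then runs on a compute node instead of competing with interactive users on the login nodes.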

A few other tips:

  • If you need to do a large number of data transfers to archival storage with hsi or htar, please use the xfer queue to avoid stressing the login nodes.
  • The 'watch' command should be used sparingly.  
  • Long running user applications should never be run on the login nodes.  
  • Avoid long compiles with make -j n where n is greater than half the number of cores on a login node.
    • You can determine the number of cores with the command:
      hopper% cat /proc/cpuinfo | grep processor
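Putting the last two tips together, a compile can be capped at half the available cores. This is a minimal sketch; it assumes a Linux `/proc/cpuinfo` that lists one `processor` line per core, as on the login nodes.

```shell
# Count the cores on this login node and build with at most half of them.
NCORES=$(grep -c ^processor /proc/cpuinfo)
JOBS=$(( NCORES / 2 ))
echo "make -j $JOBS"   # then run: make -j $JOBS
```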

NERSC reserves the right to kill processes on the login nodes if responsiveness is being impacted.