NERSC: Powering Scientific Discovery Since 1974

Getting Started

Welcome to NERSC

Welcome to the National Energy Research Scientific Computing Center, a high performance scientific computing center. This document will guide you through the basics of using NERSC's supercomputers, storage systems, and services.

What is NERSC?

NERSC provides High Performance Computing and Storage facilities and support for research sponsored by, and of interest to, the U.S. Department of Energy Office of Science. NERSC has the unique programmatic role of supporting all six Office of Science program offices: Advanced Scientific Computing Research, Basic Energy Sciences, Biological and Environmental Research, Fusion Energy Sciences, High Energy Physics, and Nuclear Physics. Scientists who have been awarded research funding by any of the offices are eligible to apply for an allocation of NERSC time. Additional awards may be given to non-DOE funded project teams whose research is aligned with the Office of Science's mission. Allocations of time and storage are made by DOE.

NERSC has about 6,000 active user accounts from across the U.S. and internationally.

NERSC is a national center, organizationally part of Lawrence Berkeley National Laboratory in Berkeley, CA. NERSC staff and facilities are primarily located at Berkeley Lab's Shyh Wang Hall on the Berkeley Lab campus.

Computing & Storage Resources

NERSC's major computing resources are:

Cori, a Cray XC40 with 76,416 compute cores of Intel Xeon ("Haswell") and 658,784 compute cores of Intel Xeon Phi ("Knights Landing"). The Xeon nodes have a total of 307 TB of memory, and the Xeon Phi nodes have a total of nearly 1.1 PB of memory. Cori has 30 PB of disk, 1.8 PB of flash-based storage in a burst buffer, and features the Cray "Aries" high-speed internal network.
Edison, a Cray XC30 with 133,824 compute cores, 357 TB of memory, 7.56 PB of disk, and the Cray "Aries" high-speed internal network. Edison is optimized for running high-performance parallel scientific codes.
Major storage systems are:

Local Scratch
Edison and Cori each have local scratch file systems. The default user quota on Edison is 10 TB and on Cori is 20 TB.

Project
The project file system provides permanent storage to groups of users who want to share data. The default quota is 1 TB and can be increased by request. Project is available from all NERSC compute systems.

HPSS Archival Storage
NERSC's archival storage system provides up to 240 PB of permanent, archival data storage.

To see which of these systems best fits your needs see Computational Systems and Data and File Systems.

How to Get Help

With an emphasis on enabling science and providing user-oriented systems and services, NERSC encourages you to ask lots of questions. There are many ways to do just that.

Your primary resources are the NERSC web site and the HPC Consulting and Account Support staff. The consultants can be contacted by phone, email, or the web during business hours (Pacific Time). NERSC's consultants are HPC experts and can answer just about all of your questions.

The NERSC Operations staff is available 24 hours a day, seven days a week, to give you status updates and reset your password. The NERSC web site is always available, with a rich set of documentation, tutorials, and live status information.

Technical questions, computer operations, passwords, and account support

1-800-666-3772 (or 1-510-486-8600)
  Computer Operations: menu option 1 (24/7)
  Account Support: menu option 2
  HPC Consulting: menu option 3
Online Help Desk

Computer operations (24x7) can reset your password and give you machine status information. Account Support and HPC Consulting are available 8-5 Pacific Time on business days. See Contacting NERSC.

NERSC Web Site

You're already here, so you know where to find NERSC's web site. The site has a trove of information about the NERSC center and how to use its systems and services. The "For Users" section is designed just for you, and it's a good place to start looking around.

New NERSC Accounts

In order to use the NERSC facilities you need:

  1. Access to an allocation of computational or storage resources as a member of a project account called a repository.
  2. A user account with an associated user login name (also called username).

If you are not a member of a project that already has a NERSC award, you may apply for an allocation. If you need a new user account associated with an existing NERSC award, you should submit a request for a new NERSC account.


Each person has a single password associated with their login account.  This password is known by various names:  NERSC password, NIM password, and NERSC LDAP password are all commonly used.  As a new user, you will receive an email with a link to set your initial password. You should also answer the security questions; this will allow you to reset your password yourself should you forget it.  See Passwords.

Login Failures

If you fail to type your correct password five times in a row when accessing a NERSC system, your account on that system will be locked.  To clear these failed logins, you should login to NIM.  The simple act of logging in to NIM will clear all your login failures on all NERSC systems.

Accounting Web Interface (NIM)

You log into the NERSC Information Management (NIM) web site to manage your NERSC accounts. In NIM you can check your daily allocation balances, change your password, run reports, update your contact information, change your login shell, etc.  See NIM Web Portal.

Connecting to NERSC

In order to log in to NERSC computational systems, you must use the SSH protocol.  This is provided by the "ssh" command on Unix-like systems (including Mac OS X) or by using an SSH-compatible application (e.g. PuTTY or git bash on Microsoft Windows). Log in with your NERSC username and password. You can also use tools based on certificate authentication (e.g. GridFTP); please ask the NERSC consultants for details.

We recommend that you "forward" X11 connections when initiating an SSH session to NERSC.  For example, when using the ssh command on Unix-based systems, provide the "-Y" option.

In the following example, a user logs in to Cori, with NERSC username "elvis", and requests X11 forwarding:

myhost% ssh -Y
Password: enter NIM password for user elvis
Last login: Tue May 15 11:32:10 2012 from

---------------------------- Contact Information ------------------------------
NERSC Contacts      
NERSC Status        
NERSC: 800-66-NERSC (USA)     510-486-8600 (outside continental USA)

------------------- Systems Status as of 2016-02-25 12:39 PDT ------------------
Cori:       System available.
Edison:      System available.
Genepool:    System available.
PDSF:        System available.

Global Filesystems:
DNA: Available.
Global Common: Available.
Global Homes:    Available.
Project:         Available.
ProjectA: Available.
ProjectB:        Available.

Mass Storage Systems:
HPSS Backup:     Available.
HPSS User:       Available.

------------------- Service Status as of 2016-02-25 12:39 PDT ------------------
All services available.

-------------------------------- Planned Outages -------------------------------

--------------------------------- Past Outages ---------------------------------
For past outages, please see

cori02 e/elvis>


Software

NERSC and each system's vendor supply a rich set of HPC utilities, applications, and programming libraries. If there is something missing that you would like to have on our systems, please submit a ticket with your request and we will evaluate it for appropriateness, cost, effort, and benefit to the community.

For a list of available software, see NERSC Software. Popular applications include VASP and Gaussian; libraries include PETSc and HDF5.

More information about how you use software is included in the next section.

Computing Environment

When you log in to any NERSC computer (not HPSS), you are in your global $HOME directory. You initially land in the same place no matter which machine you connect to, Cori or Edison; their home directories are all the same (with the exception of PDSF). This means that if you have files or binary executables that are specific to a certain system, you need to manage their location. Many people make subdirectories for each system in their home directory. Here is a listing of my home directory.

cori02% ls
edison/  datatran/  cori/ 

Customizing Your Environment

The way you interact with the NERSC computers can be controlled via certain startup scripts that run when you log in and at other times.  You can customize some of these scripts, which are called "dot files," by setting environment variables and aliases in them. 

There are several "standard" dot-files that are symbolic links to read-only files that NERSC controls. Thus, you should NEVER modify or try to modify such files as .bash_profile, .bashrc, .cshrc, .kshrc, .login, .profile, .tcshrc, or .zprofile. Instead, you should put your customizations into files that have a ".ext" suffix, such as .bashrc.ext, .cshrc.ext, .kshrc.ext, .login.ext, .profile.ext, and .tcshrc.ext. Which of those you modify depends on your choice of shell. 

The table below contains examples of basic customizations. Note that when making changes such as these it's always a good idea to have two terminal sessions active on the machine so that you can back out changes if needed!

Customizing Your Dot Files

bash                            csh
export ENVAR=value              setenv ENVAR value
export PATH=$PATH:/new/path     set PATH = ( $PATH /new/path )
alias ll='ls -lrt'              alias ll "ls -lrt"
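To make the customizations above concrete, here is a minimal sketch of what a bash user's ~/.bashrc.ext might contain. The personal bin directory and the alias are illustrative choices, not anything NERSC requires:

```shell
# Sketch of a ~/.bashrc.ext (bash); the entries are illustrative.
export PATH=$PATH:$HOME/bin   # add a personal bin directory to the search path
alias ll='ls -lrt'            # long listing, sorted oldest-to-newest
```

A csh user would put the equivalent setenv and alias lines in ~/.cshrc.ext instead.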

Note, too, that you may want certain customizations for just one NERSC platform and not others, but your "dot" files are the same on all NERSC platforms and are executed upon login for all.  The solution to this problem is to test the value of a preset environment variable $NERSC_HOST. Here is an example for .cshrc.ext

if ($NERSC_HOST == "edison") then
   setenv FC ifort
endif

and an example for .bashrc.ext

if [ "$NERSC_HOST" == "edison" ]; then
   export FC=ifort
fi

If you accidentally delete the symbolic links to the standard dot-files, or otherwise damage your dot-files to the point that it becomes difficult to do anything, you can recover the original dot-file configuration by running the NERSC command fixdots.


Modules

Easy access to software is controlled by the modules utility. With modules, you can easily manipulate your computing environment to use applications and programming libraries. In many cases, you can ignore modules because NERSC has already loaded a rich set of modules for you when you first log in. If you want to change that environment you "load," "unload," and "swap" modules. A small set of module commands can do most of what you'll want to do.

module list

The first command of interest is "module list", which will show you your currently loaded modules. When you first log in, a number of modules are loaded for you. Here is an example from Cori.

cori02% module list
Currently Loaded Modulefiles:
1) nsg/1.2.0 13) gni-headers/4.0-1.0502.10859.7.8.ari
2) modules/ 14) xpmem/0.1-2.0502.64982.5.3.ari
3) eswrap/1.1.0-1.020200.1231.0 15) dvs/2.5_0.9.0-1.0502.2188.1.116.ari
4) switch/1.0-1.0502.60522.1.61.ari 16) alps/5.2.4-2.0502.9774.31.11.ari
5) intel/ 17) rca/1.0.0-2.0502.60530.1.62.ari
6) craype-network-aries 18) atp/1.8.3
7) craype/2.4.2 19) PrgEnv-intel/5.2.82
8) cray-libsci/13.2.0 20) craype-haswell
9) udreg/2.3.2-1.0502.10518.2.17.ari 21) cray-shmem/7.2.5
10) ugni/6.0-1.0502.10863.8.29.ari 22) cray-mpich/7.2.5
11) pmi/5.0.9-1.0000.10911.0.0.ari 23) slurm/cori
12) dmapp/7.0.1-1.0502.11080.8.76.ari 24) darshan/3.0.0-pre3

You don't have to be concerned with most of these most of the time. The most important one to you is "PrgEnv-intel", which lets you know that the environment is set up to use the Intel compiler suite.

module avail

Let's say you want to use a different compiler. The "module avail" command will list all the available modules. It's a very long list, so I won't list it here. But you can use a module's name stem to do a useful search. For example:

cori02% module avail PrgEnv
--------------------------- /opt/modulefiles -------------------------------
PrgEnv-cray/5.2.56 PrgEnv-gnu/5.2.56 PrgEnv-intel/5.2.56
PrgEnv-cray/5.2.82(default) PrgEnv-gnu/5.2.82(default) PrgEnv-intel/5.2.82(default)

Here you see that programming environments using the Cray, GNU, and Intel compilers are available, each in two versions. (The word "default" is confusing here; it does not refer to the default computing environment, but rather to the default version of each specific computing environment.)

module swap

Let's say I want to use the Cray compiler instead of Intel. Here's how to make the change

cori02% module swap PrgEnv-intel PrgEnv-cray

Now you are using the Cray compiler suite. That's all you have to do. You don't have to change your makefiles, or anything else in your build script unless they contain Intel or Cray-specific options or features. Note that modules doesn't give you any feedback about whether the swap command did what you wanted it to do, so always double-check your environment using the "module list" command.

module load

There is plenty of software that is not loaded by default. You can consult the NERSC web pages to see a list, or you can use the "module avail" command to see what modules are available ("module avail" output can be a bit cryptic, so check the web site if you are in doubt about a name).

For example, suppose you want to use a particular version of Python. Try "module avail python":

cori02% module avail python

-------------------------------------------- /usr/common/software/modulefiles ---------------------------------------------

python/2.7-anaconda python/2.7.10(default) python/3.4-anaconda python_base/2.7.10(default)

The default version is 2.7.10, but say you'd rather use some features available only in version 3.4. In that case, just load that module.

cori02% module load python/3.4-anaconda

Now you can invoke python with the "python" command in the proper way (see Running Jobs below).

If you want to use the default version, you can type either "module load python" or "module load python/2.7.10", either will work. (The word "default" is not part of the name.)

Compiling Code

Let's assume that we're compiling code that will run as a parallel application using MPI for internode communication, and that the code is written in Fortran, C, or C++. In this case, it's easy because you will use standard compiler wrapper scripts that bring in all the include-file and library paths and set the linker options that you'll need.

On the Cray systems (Cori and Edison) you should use the following wrappers:  ftn, cc, or CC

Parallel Compilers

Platform   Fortran   C    C++
Cray       ftn       cc   CC

Here's a "Hello World" program to illustrate.

cori02% cat hello.f90 
program hello

        implicit none

        include "mpif.h"

        integer:: myRank
        integer:: ierror

        call mpi_init(ierror)

        call mpi_comm_rank(MPI_COMM_WORLD,myRank,ierror)

        print *, "MPI Rank ",myRank," checking in!"

        call mpi_finalize(ierror)

end program hello

To compile on Cori (a Cray), use

cori02% ftn -o hello.x hello.f90 

That's all there is to it. No need to put things like -I/path/to/mpi/include/files or -L/path/to/mpi/libraries on the compile line. The "ftn" wrapper does it all for you. (For fun, add a -v flag to the compile line to see all the things you'd have to specify by hand if the wrappers weren't there to help. You don't want to do that! In addition, when system software is updated, you don't have to change your compile line to point to new directories.)

Using Programming Libraries


If you want to use a programming library, all you have to do on the Crays is load the appropriate module. Let's compile an example code that uses the HDF5 I/O library. (The code is HDF5 Example.) First let's try it in the default environment.

cori2% cc -o hd_copy.x hd_copy.c
PGC-F-0206-Can't find include file hdf5.h (hd_copy.c: 39)

The example code includes the line

#include "hdf5.h"

and the compiler doesn't know where to find it. Now let's load the hdf5 module and try again.

cori02% module load hdf5
cori02% cc -o hd_copy.x hd_copy.c

We're all done and ready to run the program! No need to manually add the path to HDF5; it's all taken care of by the wrapper scripts.


Running Jobs

Almost all work on an HPC cluster is done not by logging in and starting an application as you would on a desktop computer, but by preparing and submitting a job script to a batch system.

NERSC Cray clusters use the SLURM batch system - please read the corresponding pages for Edison and Cori for details of how this works and how to use it, including some example job scripts. The PDSF cluster uses UGE and is described in Using the PDSF batch system.
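As a rough sketch only (the node count, time limit, and executable name below are placeholders; consult the Cori and Edison pages for the correct directives and queue options on each system), a minimal SLURM batch script looks like this:

```shell
#!/bin/bash
# Minimal sketch of a SLURM batch script; all values are placeholders.
#SBATCH --nodes=2            # number of compute nodes
#SBATCH --time=00:30:00      # wall-clock limit of 30 minutes
#SBATCH --job-name=hello     # name shown in the queue

# Launch the MPI executable built earlier with the compiler wrappers.
srun -n 64 ./hello.x
```

You would submit a script like this with "sbatch" and check its progress with "squeue -u $USER".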

Improving Code Performance on NERSC Computing Systems

Each NERSC computing system has more than one compiler available and these compilers have a wide variety of optimization options. In addition there are a large number of mathematical libraries available with different performance characteristics. There are also run time options that have an impact on code performance. 

Choosing a Compiler

The gnu and Intel compilers are installed on all NERSC systems. Cori and Edison also have the Cray vendor supplied compiler available.

These compilers have different characteristics, and there is no way of predicting which compiler will give you the best performance for your code on a particular system. There are some generalizations that can be made about them. 

The Intel and Cray compilers are specifically targeted for the computer architecture of the systems on which they are installed, and in general produce faster-running code. The default level of optimization these compilers apply when no specific optimization arguments are given is very high.

The gnu compilers run on a wide variety of architectures and are more concerned about functionality and portability than performance, although it is often the case that a code compiled with one of these compilers will outperform the same code compiled with the Intel or Cray compilers.  When no optimization arguments are provided, the gnu compilers do no optimization.

On each of our computing systems we have run a set of benchmarks comparing the performance of these compilers with different optimization options and comparing the performance of the compilers against the other compilers on that system using  the "best" optimization options for each compiler on that system.  See Cori and Edison.

Compiler Optimizations

These are some common compiler optimizations and the types of code that they work best with.


Vectorization

The registers and arithmetic units on Edison are capable of performing the same operation on up to 4 double-precision operands or 8 single-precision operands simultaneously in SIMD (Single Instruction, Multiple Data) fashion.  This is often referred to as vectorization because of its similarity to the much larger vector registers and processing units of the Cray systems of the pre-MPP era.  Intel has promised to increase the number of operands in its vectors in subsequent processor releases, so the relative performance of well-vectorized codes should increase on these future processors.

Vector optimization is most useful for large loops in which each successive operation has no dependencies on the results of previous operations.

Loops can be vectorized by the compiler or by compiler directives in the source code.  All of the compiler optimization options recommended by NERSC for all compilers include automatic vectorization of the code.
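For instance, using the ftn wrapper described in the Compiling Code section above, automatic vectorization can also be requested explicitly with the flags listed in the optimization-arguments table later in this section (the source file name here is a placeholder):

```shell
# Illustrative compile lines; mycode.f90 is a placeholder source file.
ftn -h vector3 -o mycode.x mycode.f90             # Cray compiler: highest vectorization level
ftn -vec -o mycode.x mycode.f90                   # Intel compiler (under PrgEnv-intel)
ftn -O2 -ftree-vectorize -o mycode.x mycode.f90   # GNU compiler (under PrgEnv-gnu)
```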

Interprocedural Optimization

This is defined as the compiler optimizing across subroutine, function, or other procedural boundaries.

This can have many levels ranging from inlining, the replacement of a function call with the corresponding source code at compile time, up to treating the entire program as one routine for the purpose of optimization.

This can be the most compute-intensive of all optimizations at compile time, particularly for large applications. It can increase compile time by an order of magnitude or more without any significant speedup, and can even cause a compile to crash.  For this reason, none of the NERSC recommended compiler optimization options include any significant interprocedural optimizations.

It is most suitable when there are function calls embedded within large loops.

Relaxation of IEEE Floating-point Precision

Full implementation of IEEE Floating-point precision is often very expensive.  There are many floating point optimization techniques that significantly speed up a code's performance by relaxing some of these requirements.

Since most codes do not require an exact implementation of these rules, all of the NERSC recommended optimizations include relaxed floating point techniques.

Pattern Matching

This is specific to the Cray compiler.  It can recognize source code patterns that correspond to highly optimized routines in its libsci math library and uses the library code when it creates the executable program.  It is on by default on the NERSC Cray recommended compiler options.

Optimization Arguments

This table shows how to invoke these optimizations for each compiler.  Some of the options have numeric levels with the higher the number, the more extensive the optimizations, and with a level of 0 turning the optimization off.  For more information about these optimizations, see the compiler on-line man pages.

Optimization         Intel          Cray                      gfortran/gcc
Vectorization        -vec           -h vectorn [n=0,1,2,3]    -ftree-vectorize
Interprocedural      -ipo           -h ipan [n=0,1,2,3,4,5]   -finline-[opt], -fipa[-opt]
IEEE FP relaxation   -mno-ieee-fp   -h fpn [n=0,1,2,3,4]      -ffast-math
Pattern Matching     NA             -h pattern                NA

Using Libraries to Optimize Performance

NERSC systems have high performance math libraries installed.  On Cori and Edison the Cray libsci library and Intel's MKL library provide a wide variety of standard math libraries like BLAS, BLACS, LAPACK, and ScaLAPACK, optimized for very high performance on those systems.  By using these libraries you can overcome many of the performance limitations of a given compiler and get close to the maximum possible level of optimization.

The Cray and gnu compilers on Cori and Edison link the libsci library by default, so no explicit library or include arguments are necessary to use this library.  To use the MKL library with the Intel compiler on Cori or Edison, follow the instructions at MKL.

Library Optimization Example

As an example of how library usage can speed up a code significantly consider the following example, a double precision matrix-matrix multiply.

In Fortran:

do i=1,idim
   do j=1,idim
      do k=1,idim
         c(i,j) = c(i,j) + a(i,k)*b(k,j)
      end do
   end do
end do

In C:

for (j=0; j<idim; j++) {
   for (i=0; i<idim; i++) {
      for (k=0; k<idim; k++)
         c[i][j] += a[i][k]*b[k][j];
   }
}

Using the BLAS DGEMM routine:

call dgemm("N","N",idim,idim,idim,scale,a,idim,b,idim,scale,c,idim)

This table shows the improvement in performance obtained when the source code is replaced by the dgemm library call on different systems with different compilers at the NERSC recommended compiler performance options.  The matrices are 4000 by 4000 (idim=4000).  The dgemm routine comes from the libsci library with the Cray compiler and from MKL with the Intel compiler.

For the Fortran runs, the performance of the Fortran intrinsic matmul is also shown.

System  Compiler  Optimization    dgemm     Fortran   C         dgemm/F  dgemm/C  matmul
Edison  Intel     -fast -no-ipo   24.24 GF  13.53 GF  15.15 GF  1.79     1.60     9.94 GF
Edison  Cray      default         23.68 GF  23.66 GF  23.62 GF  1.00     1.00     23.61 GF
Edison  gnu       -Ofast          23.52 GF  0.15 GF   0.65 GF   155.91   36.08    2.17 GF

Because of the pattern-matching optimization capability described in the compiler optimization section, the Cray compiler identified the loop as being equivalent to a dgemm BLAS call and used the libsci dgemm optimizations for both the C and Fortran source code, as well as for the matmul Fortran intrinsic routine.

Although the Intel compiler is released with the MKL library, neither the source code versions nor even the matmul intrinsic approach the MKL dgemm performance.

The gnu source code is quite slow relative both to the dgemm MKL/libsci performance as well as the matmul performance.  This is probably due to the fact that the gnu compiler does not restructure loops to take advantage of memory caching even at the very high -Ofast optimization level.

Run Time Optimizations

Some codes with certain characteristics can have their performance improved at run time.

Optimal Thread Choice and Placement for Hybrid Codes

Hybrid codes are those that take advantage of multi-core nodes to reduce memory usage and possibly speed up communication by adding OpenMP parallelization regions to MPI codes.  OpenMP is generally implemented by means of source code directives rather than subroutine calls as with MPI.  Whether a code is run with more than one OpenMP thread is determined at run time by means of environment variables.

The optimal method of running hybrid codes is different for each NERSC system; the methods are described at Cori and Edison.

Efficient I/O

For I/O intensive code, it is important to run on the correct file systems.  NERSC file systems describes the file systems available to users on each NERSC system.

Generally speaking you should avoid using the home file system for running jobs, particularly if they are I/O intensive, since this file system is tuned for efficiently reading and writing small to moderate files such as those created during compiles.  The user quota on this file system is comparatively small, 40 GB, and NERSC almost never grants quota increase requests for this system.  Running over quota in your home space can have many bad effects on your run time environment, since many applications like X windows need the ability to write files into the user's home space in order to run.

The various scratch file systems have much larger quotas and are tuned for efficiently reading and writing large files, so user codes should use these file systems for their I/O.

The local scratch file systems on Cori and Edison are Lustre based.  IO performance on these systems for large files can be improved by means of file striping which allows I/O operations on the same file to be done in parallel on different file servers.
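As an illustration, striping is controlled with the standard Lustre "lfs" utility. The stripe count of 8 and the directory name below are placeholder values; check NERSC's striping recommendations for your file sizes:

```shell
# Illustrative Lustre striping commands; the count and directory are placeholders.
lfs setstripe -c 8 $SCRATCH/mydir   # stripe new files in mydir across 8 storage targets
lfs getstripe $SCRATCH/mydir        # verify the directory's striping settings
```

Striping applies to files created after the setting is made, so set it on the directory before your job writes its output there.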

Transferring Data

We provide several ways to transfer data both inside and outside NERSC. To transfer files to or from NERSC, we suggest using the dedicated Data Transfer Nodes, which are optimized for bandwidth and have access to most of the NERSC file systems.

Tools for data transfer include:

  • SCP/SFTP: for smaller files (<1GB).

  • Globus Online: for large files, with features for auto-tuning and auto-fault recovery without a client install

  • BaBar Copy (bbcp): for large files

  • GridFTP: for large files

  • HSI: can be an efficient way to transfer files already in the HPSS system

For more detailed information on data transfer, see Transferring Data.
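As a simple sketch, a small file can be pushed to NERSC over SCP through a data transfer node. The host name and destination path below are placeholders; see Transferring Data for the current node names:

```shell
# Illustrative transfer; the host name and paths are placeholders.
scp mydata.tar elvis@dtn01.nersc.gov:/global/homes/e/elvis/
```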

Archiving Files with HPSS

The High Performance Storage System (HPSS) is a modern, flexible, performance-oriented mass storage system. It has been used at NERSC for archival storage since 1998. It is a valuable resource for permanently archiving users' data.

Users can access NERSC's HPSS machines through a variety of clients such as hsi, htar, ftp, pftp, and grid clients. On NERSC systems users typically archive their files using either hsi (for individual files) or htar (for aggregates of files). Since HPSS is a tape archive system, it's best to group many small files together using htar (or tar). The ideal file size for archiving to HPSS is a few hundred GBs.
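For example, a directory of small files can be bundled into a single HPSS archive with htar and pulled back later. The archive and directory names here are illustrative:

```shell
# Bundle a directory into one archive file stored directly in HPSS.
htar -cvf myrun.tar myrun_directory/
# List the archive's contents without retrieving it.
htar -tvf myrun.tar
# Extract the archive back onto a NERSC file system.
htar -xvf myrun.tar
```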

For more information about the specific features of HPSS see Getting Started with HPSS.