SC16 OpenMP Tutorial
First, log on to your training account. Although you can change your password through the interface at nim.nersc.gov, we recommend that you do not. The training account gives you access to a set of nodes reserved for this tutorial.
ssh -X -l train171 edison.nersc.gov
(Type your password -- carefully please!)
Then recursively copy the tutorial exercises to your home directory. The period at the end of the command specifies the current directory (your home directory) as the destination.
% cp -r /project/projectdirs/training/SC16/OpenMP/ .
The tutorial exercises run interactively on both the Edison and Cori systems. To use Edison's compute nodes, you must request one node and let the batch system allocate resources from the pool of free nodes. Please use only one node for your exercises. The following command requests one node from either the debug partition or the special training partition. (Change debug to sc16 to use the reserved nodes; this will only work on tutorial day.) You can compile on the login nodes, which are separate from the compute nodes.
During the actual tutorial, use the first box below. After the tutorial, use the debug queue (second box). The reservation works only with training accounts, not your regular NERSC account.
edison% salloc -N 1 --reservation=sc16
(Wait for your resource)
Skip this box below during the tutorial. You should already have a node from the reservation.
edison% salloc -N 1 -p debug
(Wait for your resource)
The -p flag specifies the name of the partition, and the -N option specifies the number of nodes to allocate for your job. The exercises are OpenMP-only codes, so grab just one node (it is IMPORTANT not to hog nodes). Since Edison has 24 cores per compute node, you can run 24 tasks/threads per node. With Hyperthreading, you may run up to 48 tasks/threads in total. The debug partition has a 30-minute wall-clock limit, so your interactive job will be terminated 30 minutes after it starts. If you do not need the full 30 minutes, type the exit command to release the node.
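As a quick sanity check of the thread counts described above, the arithmetic can be sketched in the shell. The core and hardware-thread counts come from the text; the variable names are purely illustrative:

```shell
# Edison compute node: 24 physical cores, each with 2 hardware threads
# (Hyperthreading), per the tutorial text above.
CORES_PER_NODE=24
HW_THREADS_PER_CORE=2

# Without Hyperthreading: one OpenMP thread per physical core.
export OMP_NUM_THREADS=$CORES_PER_NODE
echo "threads without HT: $OMP_NUM_THREADS"

# With Hyperthreading: up to cores * hardware threads per core.
export OMP_NUM_THREADS=$((CORES_PER_NODE * HW_THREADS_PER_CORE))
echo "threads with HT: $OMP_NUM_THREADS"
```

Exporting OMP_NUM_THREADS before srun is what tells the OpenMP runtime how many threads each task should spawn, as in the examples further down this page.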
Assuming free nodes are available and the salloc command returns a shell prompt, you will land on the requested compute node, in the directory from which the salloc command was executed. From the shell prompt, you can start your program on the compute nodes with the "srun" command, the parallel job launcher.
edison% srun -N 1 ./pi
edison% export OMP_NUM_THREADS=6
edison% srun -n 6 ./pi_recur
edison% make test
Your training account uses the bash shell (this can be changed); the instructions on this page assume bash.
A copy of the tutorial slides is at the bottom of this page.