
Your First Program on Cori

To run a program on Cori you must either compile your own code or use a pre-built application. In the following example, we'll guide you through compiling and running the "Hello World" code shown below. Cori has two kinds of compute nodes: "Intel Haswell" and "Intel Xeon Phi". For simplicity, this page refers to the Haswell nodes only.

"Hello World" Example

Using your favorite text editor (e.g. vi, emacs), open a new file called "helloWorld.f90" and paste in the contents of the example below.

program helloWorld
  implicit none
  include "mpif.h"
  integer :: myPE, numProcs, ierr

  ! Initialize MPI, then query this task's rank and the total number of tasks
  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, myPE, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, numProcs, ierr)

  ! Each MPI task prints its own rank
  print *, "Hello from Processor ", myPE

  call MPI_FINALIZE(ierr)
end program helloWorld

Compile the Program

Use the compiler "wrappers" to compile codes on Cori: use ftn for Fortran, cc for C, and CC for C++. The following example will compile the text file "helloWorld.f90" into the binary executable "helloWorld".

% ftn -o helloWorld helloWorld.f90
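
The same wrappers accept the usual compiler options. For example, if the program were written in C or C++ instead (the source file names below are just placeholders), the corresponding commands would look like this:

% cc -o helloWorld helloWorld.c
% CC -o helloWorld helloWorld.cpp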

Run the Program

To run a code on Cori you (1) submit a request to the batch system (SLURM at NERSC) and (2) launch your job onto the compute nodes using the 'srun' command. There are two ways to submit a request to the batch system: you can request an interactive session using the salloc command, or you can submit a batch script (see below). (Note that Cori does not provide the 'mpirun' command used by many MPI implementations; use 'srun' instead.)
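
For example, a minimal interactive session on the Haswell debug partition might look like the sketch below; the node count, time limit, and task count simply mirror the batch example that follows, and srun is issued once the allocation is granted:

% salloc -N 2 -p debug -C haswell -t 00:10:00
% srun -n 64 ./helloWorld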

Create a Batch Script

Create and open a text file called my_batch_script with a text editor like vi or emacs and paste in the contents below. The batch script tells the Cori system which compute node resources to reserve for your job and how to launch your application. In this example, two nodes are requested in the Cori Haswell partition, and 64 MPI tasks are run, spread over the two 32-core nodes.

#!/bin/bash -l
#SBATCH -p debug
#SBATCH -N 2
#SBATCH -C haswell
#SBATCH -t 00:10:00
#SBATCH -J my_job

srun -n 64 ./helloWorld

Submit Your Job to the Queue

The sbatch command submits your batch script, which runs your code on the Cori compute nodes.

% sbatch my_batch_script

A job number will be returned, such as 13479.
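
On success, sbatch prints the assigned job number in a short confirmation message, along these lines:

Submitted batch job 13479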

Monitor Your Job in the Queue

After you submit your job, the system scheduler will check to see if there are compute nodes available to run it. If there are compute nodes available, your job will start running. If there are not, your job will wait in the queue until there are enough resources to run your application. You can monitor your position in the queue using the squeue command:

cori% squeue -u username
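
The output is a small table with one row per job. The listing below is purely illustrative (your job ID, state, and node information will differ); a state of PD means the job is still pending, while R means it is running:

             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
             13479     debug   my_job username PD       0:00      2 (Priority)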

For more information, see Monitoring Jobs.

Examine Your Job's Output

When your job has completed, you should see a file called slurm-jobid.out (where jobid is the job number assigned at submission) containing output like the following. The lines may appear in any order, since each MPI task prints independently:

 Hello from Processor            1
 Hello from Processor            2
 Hello from Processor           10
 Hello from Processor            8
 Hello from Processor           11
 Hello from Processor            3
 Hello from Processor            9
 Hello from Processor            4
 Hello from Processor            6
 Hello from Processor            5
 Hello from Processor            7
 Hello from Processor           12
 Hello from Processor           17
 Hello from Processor           20
 Hello from Processor           14
 Hello from Processor           22
 Hello from Processor           15
 Hello from Processor           16
 Hello from Processor           19
 Hello from Processor           18
 Hello from Processor            0
 etc.

Accessing Job Information

To access detailed information about your job, use the scontrol command with your jobid number:

% scontrol show job jobid

This will give you detailed output, such as:

 JobId=3932 JobName=my_job
UserId=username(00000) GroupId=username(00000)
Priority=1003 Nice=0 Account=(null) QOS=normal
JobState=COMPLETED Reason=None Dependency=(null)
Requeue=1 Restarts=0 BatchFlag=1 Reboot=0 ExitCode=0:0
RunTime=00:00:06 TimeLimit=00:10:00 TimeMin=N/A
SubmitTime=2015-10-12T11:01:00 EligibleTime=2015-10-12T11:01:00
StartTime=2015-10-12T11:01:01 EndTime=2015-10-12T11:01:07
PreemptTime=None SuspendTime=None SecsPreSuspend=0
Partition=debug AllocNode:Sid=cori08:10424
ReqNodeList=(null) ExcNodeList=(null)
NodeList=nid0[2285-2286]
BatchHost=nid02285
NumNodes=2 NumCPUs=128 CPUs/Task=1 ReqB:S:C:T=0:0:*:*
TRES=cpu=128,mem=249856,node=2
Socks/Node=* NtasksPerN:B:S:C=0:0:*:* CoreSpec=*
MinCPUsNode=1 MinMemoryNode=122G MinTmpDiskNode=0
Features=(null) Gres=craynetwork:1 Reservation=(null)
Shared=0 Contiguous=0 Licenses=(null) Network=(null)
Command=/global/u1/username/Cori/my_batch_script.sh
WorkDir=/global/u1/username/Cori
StdErr=/global/u1/username/Cori/slurm-3932.out
StdIn=/dev/null
StdOut=/global/u1/username/Cori/slurm-3932.out
Power= SICP=0
 

For more information about SLURM, please see the SLURM pages.