
Getting Started

Logging in

In order to follow this page, you will need an account: a username and a password. If you do not have these, please visit the Accounts Page.

Users can log into the Carver system using version 2 of the Secure Shell (SSH) protocol with the following command:

mydesktop% ssh -l username carver.nersc.gov
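
The -l option supplies your username on the remote system; the equivalent user@host form works just as well:

mydesktop% ssh username@carver.nersc.gov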

When you successfully log in you will land in your $HOME directory.

There are typically four login nodes available on Carver. The hostname "carver.nersc.gov" points to a load balancer that selects an appropriate login node. Login nodes should be used for compiling and linking applications, preparing input files, submitting batch jobs, and viewing or processing results.
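
To see which login node the load balancer chose for your session, you can run the standard hostname command after logging in:

carver% hostname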

Users who require access to a specific login node (e.g., for some network applications) should use the name "carvergrid.nersc.gov":

mydesktop% ssh -l username carvergrid.nersc.gov

First Program: Parallel Hello World

Open a new file called helloWorld.f90 with a text editor such as emacs or vi, and paste in the code below.

program helloWorld
 implicit none
 include "mpif.h"
 integer :: myPE, numProcs, ierr
 ! Start up the MPI environment
 call MPI_INIT(ierr)
 ! Get this task's rank (0 through numProcs-1)
 call MPI_COMM_RANK(MPI_COMM_WORLD, myPE, ierr)
 ! Get the total number of MPI tasks
 call MPI_COMM_SIZE(MPI_COMM_WORLD, numProcs, ierr)
 print *, "Hello World from ", myPE
 ! Shut down MPI before exiting
 call MPI_FINALIZE(ierr)
end program helloWorld

Compile the Program

Compile the program with the mpif90 Fortran compiler wrapper:

carver% mpif90 -o helloWorld helloWorld.f90
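
Compiler wrappers like mpif90 are typically built on Open MPI on Carver; if so, the Open MPI-specific -showme option will print the underlying compiler invocation (including the MPI include and library paths) without compiling anything:

carver% mpif90 -showme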

Create a Batch Script

Open a file called my_batch_script with a text editor like vi or emacs and paste in the contents below. A batch script tells the batch system what compute node resources to reserve for your job and how to launch your application on the nodes it has reserved. In this script the #PBS directives request the debug queue, one node with 8 processors per node, a wall-clock limit of 10 minutes, the job name my_job, and the merging of standard output and standard error into a single file.

#PBS -q debug
#PBS -l nodes=1:ppn=8
#PBS -l walltime=00:10:00
#PBS -N my_job
#PBS -j oe

# Change to the directory from which the job was submitted
cd $PBS_O_WORKDIR
# Launch 8 MPI tasks, one per requested processor
mpirun -np 8 ./helloWorld
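
For quick tests you can also request an interactive batch session with the standard PBS -I option, passing the same resource requests on the command line; once the session starts, run mpirun yourself from the prompt:

carver% qsub -I -q debug -l nodes=1:ppn=8,walltime=00:10:00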

Submit Your Job to the Queue

Submit your batch script to the queue using the qsub command:

carver% qsub my_batch_script
123456.cvrsvc09-ib

In the above example, the qsub command returned a "jobid" of 123456.cvrsvc09-ib.  It is important to keep track of your jobid; it can be used to monitor your job and to troubleshoot any problems your job might encounter.
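
If you need to cancel the job, for example because you spotted a mistake in the script, pass the jobid to the qdel command:

carver% qdel 123456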

Monitor Your Job in the Queue

After you submit your job, the system scheduler will check whether compute nodes are available to run it. If they are, your job will start running; if not, it will wait in the queue until enough resources become available. You can monitor your position in the queue with the showq, qs, or qstat commands:

carver% showq
carver% qs
carver% qstat -u username
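
For detailed information about a single job, including its resource requests and current state, use qstat's full-display option with your jobid:

carver% qstat -f 123456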

Examine Your Job's Output

When your job has completed you should see a file called my_job.o[jobid] in the directory from which you submitted the job:

carver% cat my_job.o123456
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
Hello World from 1
Hello World from 2
Hello World from 6
Hello World from 5
Hello World from 0
Hello World from 4
Hello World from 3
Hello World from 7

----------------------------------------------------------------
Jobs exit status code is 0
Job my_job/123456.cvrsvc09-ib completed Tue Mar  1 15:59:14 PST 2011
Submitted by dpturner/dpturner using mpccc
Job Limits: neednodes=1:ppn=8,nodes=1:ppn=8,walltime=00:10:00
Job Resources used: cput=00:00:00,mem=0kb,vmem=0kb,walltime=00:00:01
Nodes used: c0446-ib

Killing any leftover processes...

Job completed.