NERSC: Powering Scientific Discovery Since 1974

Running Jobs

MPI Launch Overview

Open MPI provides the orterun command to launch MPI applications. However, most people prefer to use one of two aliases: mpirun or mpiexec.
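As a minimal sketch, an MPI program built with Open MPI might be launched like this (the executable name and process count are illustrative placeholders):

```shell
# Launch 8 MPI processes of ./a.out with Open MPI.
# mpirun and mpiexec are equivalent aliases for orterun.
mpirun -np 8 ./a.out
```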

Submitting Batch Jobs

A batch job is the typical way users run production applications on NERSC machines. The user submits a batch script to the batch system. This script specifies, at the very least, how many nodes and cores the job will use, how long the job will run, and the name of the application to run. Carver's batch system is based on the PBS model, implemented with the Moab scheduler and Torque resource manager.
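A batch script for Carver's PBS-style system might look like the following sketch; the queue name, resource amounts, job name, and executable are illustrative placeholders:

```shell
#PBS -q regular                 # submit queue
#PBS -l nodes=2:ppn=8           # 2 nodes, 8 cores each
#PBS -l walltime=00:30:00       # 30-minute time limit
#PBS -N myjob                   # job name

cd $PBS_O_WORKDIR               # start in the directory the job was submitted from
mpirun -np 16 ./a.out           # run the application on all 16 cores
```

The script would then be submitted with, for example, `qsub myjob.pbs`.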

Interactive Jobs

There are two types of interactive jobs. The first type runs on a login node. These applications are typically pre- and post-processing jobs, data management programs, or some other type of "tool". Note that it is not possible to run an MPI application on Carver login nodes. The second type of interactive job runs on one or more Carver compute nodes. Because the only way to gain access to the compute nodes is through the batch system, these types of jobs may more accurately be called "interactive batch" jobs.
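An "interactive batch" session is requested with qsub's -I flag; a hedged example, where the node count, walltime, and queue name are placeholders:

```shell
# Request one 8-core compute node interactively for 30 minutes.
carver% qsub -I -q regular -l nodes=1:ppn=8 -l walltime=00:30:00
```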

Using OpenMP

OpenMP provides a standardized, threaded, shared-memory programming model, accessed via compiler directives embedded in the application source code. More details on OpenMP (such as the standard specification and tutorials) can be found at the OpenMP Web Site. There are two approaches to using OpenMP on Carver: "pure" OpenMP applications run on a single node, and are thus limited to 8 threads on Nehalem nodes; "hybrid" MPI/OpenMP applications use OpenMP on individual nodes,…
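A hybrid MPI/OpenMP batch script might set the per-node thread count as sketched below; the resource amounts, executable name, and launch options are assumptions for illustration:

```shell
#PBS -l nodes=2:ppn=8
#PBS -l walltime=00:30:00

cd $PBS_O_WORKDIR
export OMP_NUM_THREADS=8                # one OpenMP thread per core on a Nehalem node
mpirun -np 2 -npernode 1 ./hybrid_app   # one MPI task per node, 8 threads each
```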

Queues and Policies

This page describes Carver's queue configuration. Jobs must be submitted to a valid Submit Queue. Upon submission the job is routed to the appropriate Execution Queue. Users cannot directly access the Execution Queues.

Monitoring Jobs

Once a job is submitted, it can be monitored, held, deleted, and in some cases altered.
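These tasks map onto standard Torque commands; a brief sketch, where the job ID 12345 is a placeholder:

```shell
qstat -u $USER    # list your queued and running jobs
qhold 12345       # place a hold on a queued job
qrls 12345        # release the hold
qdel 12345        # delete a job
```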

Memory Considerations

Carver login nodes each have 48GB of physical memory. Most compute nodes have 24GB; however, 80 compute nodes have 48GB. Not all of this memory is available to user processes. Some memory is reserved for the Linux kernel. Furthermore, since Carver nodes have no disk, the "root" file system (including /tmp) is kept in memory ("ramdisk"). The kernel and root file system combined occupy about 4GB of memory. Therefore users should try to use no more than 20GB on most compute nodes, or 44GB on the large-memory compute nodes.
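One possible way to express a job's memory needs is through Torque's per-process virtual memory resource; a sketch under that assumption (the exact resource name or node property Carver uses to select large-memory nodes may differ):

```shell
# Illustrative values: 8 processes x ~5GB each stays under the
# ~44GB available on a 48GB large-memory compute node.
#PBS -l nodes=1:ppn=8
#PBS -l pvmem=5gb
```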

Extra-Large Memory Nodes

How to use the 1TB compute nodes on Carver.

How Usage Is Charged

When a job runs on a NERSC MPP system, charges accrue against one of the user's repository allocations.

Using Hadoop

Quick Start Guide. This guide describes how to run a sample job and how to access the web-based status pages for the NERSC Hadoop cluster. Please consult the Hadoop website for general information on using Hadoop.

Running an Example. First, load the following modules:

carver% module load tig
carver% module load testbed

Then submit an interactive batch job. For example:

carver% qsub -I -l nodes=4:ppn=8 -l walltime=01:00:00 -q regular

Once your batch job starts:

c0217% module load…

Using Job Arrays on Carver

Job arrays are a way to simplify writing scripts for batch jobs: a single script can submit a very large number of similar jobs at once.
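With Torque, a job array is typically submitted with qsub's -t flag; a hedged sketch, where the task range, script name, and input naming scheme are placeholders:

```shell
# Submit tasks 1 through 10 of the same batch script.
qsub -t 1-10 myjob.pbs

# Inside the script, $PBS_ARRAYID distinguishes the tasks, e.g.:
./a.out input.$PBS_ARRAYID
```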