# Queues and Policies

## Queue Classes
Jobs must be submitted to a valid Submit Queue. Upon submission, the job is routed to the appropriate Execution Queue; you cannot submit a job directly to an Execution Queue.

| Submit Queue | Nodes | Available Processors | Max Wallclock | Relative Priority | Run Limit |
|---|---|---|---|---|---|
| dirac_int | 1 | 1-8 | 30 mins | 1 | 1 |
| dirac_reg | 1-12 | 1-96 | 6 hrs | 2 | 2 |
| dirac_small | 1 | 1-8 | 6 hrs | 2 | 4 |
| dirac_special | 1-48 | 1-384 | by arrangement | by arrangement | by arrangement |
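
As an illustration, a minimal batch script for the dirac_reg submit queue might look like the sketch below; the script name, job name, walltime, and executable are placeholders, and an MPI launcher such as `mpirun` is assumed to be available.

```bash
#PBS -q dirac_reg            # submit queue; the scheduler routes the job to an execution queue
#PBS -l nodes=2:ppn=8:fermi  # two nodes, eight processors each, with the fermi GPU property
#PBS -l walltime=02:00:00    # must stay within the 6 hr limit for dirac_reg
#PBS -N example_job          # hypothetical job name

cd $PBS_O_WORKDIR            # run from the directory the job was submitted from
mpirun -np 16 ./my_gpu_app   # hypothetical executable; 16 = 2 nodes x 8 processors
```

Submit the script with `qsub example_job.pbs`; `qstat -u $USER` shows its progress through the queues.
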
### Special Queue for Higher-Concurrency Jobs
For jobs that need more than 32 nodes, please contact consult@nersc.gov with the subject line "Special queue request for Dirac". Note that such jobs may take some time to start, depending on the load on Dirac.
## Requesting Special Resources
### Multi-GPU Nodes
There are two additional nodes, each containing four GPUs. To request the node with four Tesla C2050 (Fermi) GPUs, use the resource property `mfermi`:

    % qsub -I -V -q dirac_int -l nodes=1:ppn=8:mfermi
To request the node with four Tesla C1060 GPUs, use the resource property `mtesla`:

    % qsub -I -V -q dirac_int -l nodes=1:ppn=8:mtesla
Note that these two nodes support only single-node jobs.
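
Once an interactive session starts on one of these nodes, you can confirm that all four GPUs are visible and, if desired, restrict a run to a single device. A minimal sketch, assuming the standard NVIDIA driver tools are in your path and `my_cuda_app` stands in for your own executable:

```bash
nvidia-smi                     # list the four GPUs installed in the node
export CUDA_VISIBLE_DEVICES=0  # expose only the first GPU to the next command
./my_cuda_app                  # hypothetical CUDA executable
```
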
### FusionIO Cards
There are two FusionIO cards on the system, on nodes dirac01 and dirac02; the drive is mounted at /fscratch on each of these nodes. To request these nodes, use a resource request of the following form:

    % qsub -I -V -q dirac_int -l nodes=2:ppn=8:fermi,ssd
### Local Hard Drive
Each node has a local hard drive, accessible at /lscratch.
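
Since /lscratch is private to each node, a common pattern is to stage input there at the start of a job and copy results back before the job ends. A minimal single-node sketch; the paths, file names, and executable are illustrative, and depending on site policy you may need to create your own subdirectory under /lscratch first:

```bash
#PBS -q dirac_reg
#PBS -l nodes=1:ppn=8
#PBS -l walltime=01:00:00

cp $PBS_O_WORKDIR/input.dat /lscratch/   # stage input onto the node-local disk
cd /lscratch
$PBS_O_WORKDIR/my_app input.dat          # hypothetical executable reading local input
cp output.dat $PBS_O_WORKDIR/            # copy results back before the job exits
```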


