Edison Queues and Scheduling Policies
Users submit jobs to a partition, where they wait until nodes become available. NERSC's queue structures are intended to be fair and to allow jobs of various sizes to run efficiently. Note that each system has a different intended use. Edison's purpose is to run large jobs, so the queue policy strongly favors jobs using more than 682 nodes. If your workload consists of smaller jobs (fewer than 682 nodes), we encourage you to run on Cori Phase I, which is intended for smaller and/or data-intensive jobs.
The following is the current queue structure on Edison. Since Edison has only recently migrated to Slurm as its workload manager, the queue configuration may be adjusted as we gain more insight into how Slurm handles NERSC workloads. Please send questions, feedback, or concerns about the queue structures to the consultants.
| Partition | Nodes | Physical Cores | Max Wallclock | QOS (1) | Run Limit | Submit Limit | Relative Priority | Charge Factor (2) |
Users with a low individual MPP balance will be placed in the scavenger QOS automatically if running the job would make the repo's MPP balance negative. Jobs in the scavenger QOS wait longer in the queue. Note that users with low individual balances but sufficient repo balance to cover the job will have their jobs rejected at submission time.
Note: Until further notice, Edison no longer supports serial workloads. Please run serial workloads on Cori Phase I using the "shared" partition. For more detailed information about partitions, use the "scontrol show partition" command.
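For example, a serial job on Cori's shared partition might be submitted with a batch script along these lines (the resource values and executable name are illustrative, not prescriptive):

```shell
#!/bin/bash
#SBATCH --partition=shared   # Cori Phase I shared partition for serial work
#SBATCH --nodes=1
#SBATCH --ntasks=1           # a single serial task
#SBATCH --time=01:00:00      # request only the walltime you need

srun ./my_serial_app         # my_serial_app is a placeholder for your executable
```

Submit it with `sbatch`, and use `scontrol show partition shared` to confirm the partition's current limits before choosing values.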
Notes about queue policies
- The debug partition is to be used for code development, testing, and debugging. Production runs are not permitted in the debug partition. User accounts are subject to suspension if they are determined to be using the debug partition for production computing. In particular, job "chaining" in the debug partition is not allowed. Chaining is defined as using a batch script to submit another batch script.
- The intent of the regular partition with the premium QOS is to allow for faster turnaround before conferences and urgent project deadlines. It should be used with care, since it is charged at twice the rate of the normal QOS.
- The intent of the scavenger QOS is to allow users with a zero or negative balance in one of their repositories to continue to run on Edison. This applies to both total repository balances as well as per-user balances. The scavenger QOS is not available for jobs submitted against a repository with a positive balance. The charging rate for this QOS is 0 and the priority on all systems is lower than the “low” queue.
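As a sketch, premium turnaround is requested via Slurm's QOS flag (here `job.sh` stands in for your own batch script); the scavenger QOS, by contrast, is applied automatically and cannot be requested for repositories with a positive balance:

```shell
# charged at twice the normal rate -- use only for urgent deadlines
sbatch --qos=premium job.sh
```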
Tips for getting your job through the queue faster
- Submit shorter jobs. If your application has the capability to checkpoint and restart, consider submitting your job for shorter time periods. On a system as large as Edison there are many opportunities for backfilling jobs. Backfill is a technique the scheduler uses to keep the system busy. If there is a large job at the top of the queue, the system must drain resources in order to schedule it. During that time, short jobs can run, so jobs that request short walltimes are good candidates for backfill.
- Make sure the wall clock time you request is accurate. As noted above, shorter jobs are easier to schedule. Many users unnecessarily request the largest wall clock time allowed as a default.
- Run jobs before a system maintenance. A system must drain all jobs before a maintenance, so there is an opportunity for good turnaround for shorter jobs.
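The checkpoint-and-restart pattern above can be sketched as a chained pair of short jobs (script names, the restart flag, and resource values are all hypothetical; remember that chaining is not allowed in the debug partition):

```shell
#!/bin/bash
#SBATCH --partition=regular
#SBATCH --time=00:30:00      # short, accurate walltime: a good backfill candidate
#SBATCH --nodes=64

# the --restart-from flag is a placeholder for your application's restart mechanism
srun ./my_app --restart-from latest.chk

# resubmit the next segment of the run (regular partition only)
sbatch next_step.sh
```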
Reserving a Dedicated Time Slot for Running Jobs
You can request dedicated access to a pool of nodes, up to the size of the entire machine, on Edison by filling out the reservation request form.
Please submit your request at least 72 hours in advance. Your account will be charged for all the nodes dedicated to your reservation for the entire duration of the reservation.
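The charge for a reservation therefore scales with the number of nodes and the duration. A rough estimate, assuming Edison's 24 physical cores per node and a charge factor of 1.0 (check the queue table above for the current factor), can be sketched as:

```python
def reservation_charge_hours(nodes, hours, cores_per_node=24, charge_factor=1.0):
    """Estimate the MPP-hour charge for a dedicated reservation.

    Assumes charging is cores * wallclock hours * charge factor;
    cores_per_node=24 reflects Edison's two 12-core sockets per node.
    """
    return nodes * cores_per_node * hours * charge_factor

# Example: reserving 100 nodes for 4 hours
print(reservation_charge_hours(100, 4))  # 9600.0 MPP hours
```

Note that you are charged for every reserved node for the full reservation window, whether or not your jobs use them.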