
Queues and Policies

Queues

normal.q
  • Purpose: Production workloads; the default queue.
  • User requestable: No; queue assignment is based on resource requests (ram.c, h_rt).
  • Slot limit: none
  • Memory limit: 42 GB
  • Wall clock limit: 12 hours

long.q
  • Purpose: Production workflows that need more than 12 hours.
  • User requestable: No; queue assignment is based on resource requests (ram.c, h_rt).
  • Slot limit: 320
  • Memory limit: 42 GB
  • Wall clock limit: 240 hours

normal_excl.q
  • Purpose: Production workloads with exclusive node scheduling.
  • User requestable: No; queue assignment is based on resource requests (ram.c, h_rt, exclusive.c).
  • Slot limit: none
  • Memory limit: >42 GB
  • Wall clock limit: 12 hours

long_excl.q
  • Purpose: Production workloads with exclusive node scheduling that require more than 12 hours.
  • User requestable: No; queue assignment is based on resource requests (ram.c, h_rt, exclusive.c).
  • Slot limit: none
  • Memory limit: >42 GB
  • Wall clock limit: 240 hours

high.q
  • Purpose: High-priority jobs and debugging jobs.
  • User requestable: Yes; request either "-q high.q" or "-l high.c" (deprecated).
  • Slot limit: 8
  • Memory limit: 120 GB
  • Wall clock limit: 240 hours (default 12 hours)

interactive.q
  • Purpose: Light-weight interactive jobs; the default queue for qlogin.
  • User requestable: Only with qlogin (default) or special services.
  • Slot limit: none
  • Memory limit: 120 GB
  • Wall clock limit: 240 hours

timelogic.q
  • Purpose: Access to TimeLogic-accelerated BLAST nodes.
  • User requestable: Yes; request either "-q timelogic.q" or "-l timelogic.c" (deprecated).
  • Slot limit: none
  • Memory limit: 800 MB
  • Wall clock limit: none

xfer.q
  • Purpose: Data transfer queue on genepool; use this queue to transfer data to /global/dna.
  • User requestable: Yes; request either "-q xfer.q" or "-l xfer.c" (deprecated).
  • Slot limit: 2
  • Memory limit: 3.25 GB
  • Wall clock limit: 72 hours
  • Other limits: 1 slot per job; maximum of 6 hours CPU time.
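
For the queues that are not directly requestable, the scheduler picks the queue from the resources you request. Below is a minimal sketch of a batch script using the ram.c and h_rt resources from the table above; the memory value, wall clock value, and script name are illustrative only.

#!/bin/bash
# Request memory and wall clock time; the scheduler uses these to choose a queue.
# 5G and 8 hours are illustrative values; 12 hours or less is routed to normal.q.
#$ -l ram.c=5G
#$ -l h_rt=08:00:00
./my_job.sh

Requesting more than 12 hours of h_rt routes the same job to long.q, and adding the exclusive.c resource routes it to the exclusive queues. Requestable queues can be named directly, for example "qsub -q high.q my_job.sh".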

Policies and Best Practices

  • Jobs running longer than 12 hours are routed to a limited pool of nodes reserved for long jobs.
  • The high.c resource is intended for fast-turnaround jobs. Users are limited to 2 slots for 12 hours.
  • Always request the shortest wall clock time and the smallest amount of memory your job needs; this allows your job to be scheduled faster (see the example batch script above).
  • The long queue has a limit of 320 slots per user.
  • xfer.q has a long h_rt limit so that short I/O jobs are not killed by the wall clock. Please target your data transfers to less than 12 hours; the CPU time limit is intended to enforce this practice. A sketch of a transfer job follows this list.
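
Below is a minimal sketch of an xfer.q transfer job; the source and destination paths are placeholders, and the exact directory layout under /global/dna depends on your project.

#!/bin/bash
# Submit to the data transfer queue and keep the transfer under 12 hours.
#$ -q xfer.q
#$ -l h_rt=12:00:00
# Replace the placeholder paths below with your own source and destination.
cp -r /path/to/my/results /global/dna/<your_destination_directory>/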

Genepool Shares

Each project may determine the subshares appropriate for its work.  For issues concerning share distribution, please talk to the proxy of the appropriate sponsor.

You are assigned to a default project when your account is created.  If you need to submit a job under a different project to which you have access, add the following line to your batch script (or just pass -P to qsub on the command line):

#$ -P <user requested project>

Where the user-requested project is one of the sub-projects listed in the table below. 
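
For example, to run a job under the plant-assembly.p sub-project from the table below (the script name my_job.sh is a placeholder):

#$ -P plant-assembly.p

or, equivalently, on the command line:

qsub -P plant-assembly.p my_job.sh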

Each entry below lists the sponsor (with proxy), that sponsor's total share percentage, and the user-requested sub-projects with their relative and total share percentages.

Len Pennacchio (Proxy: Rob Egan), 10.1% share
  • gentech-rnd.p: 40% relative, 4.1% total
  • gentech-reseq.p: 30% relative, 3% total
  • gentech-rna.p: 30% relative, 3% total

Len Pennacchio (Proxy: Rob Egan), 1.7% share
  • gentech-pi.p: 0% relative, 0% total
  • gentech-sdm.p: 82.3% relative, 1.4% total
  • gentech-pacbio.p: 17.7% relative, 0.3% total

Len Pennacchio (Proxy: Rob Egan), 6.4% share
  • gentech-qaqc.p: 0% relative, 0% total
  • gentech-rqc.p: 100% relative, 6.4% total

Nikos Kyrpides (Proxy: Amrita Pati), 37.3% share
  • prok-IMG.p: 36% relative, 13.4% total
  • prok-assembly.p: 8% relative, 3% total
  • prok-annotation.p: 45% relative, 16.8% total
  • prok-meco.p: 5% relative, 1.9% total
  • prok-scell.p: 6% relative, 2.2% total

Dan Rokhsar (Proxy: David Goodstein), 3.6% share
  • fungal-assembly.p: 100% relative, 3.6% total

Dan Rokhsar (Proxy: David Goodstein), 9.6% share
  • fungal-annotation.p: 100% relative, 9.6% total

Dan Rokhsar (Proxy: David Goodstein), 23.4% share
  • plant-analysis.p: 17.1% relative, 4.0% total
  • plant-diversity.p: 19.2% relative, 4.5% total
  • plant-assembly.p: 32.0% relative, 7.5% total
  • plant-annotation.p: 28.8% relative, 6.7% total
  • plant-support.p: 2.8% relative, 0.66% total

Ray Turner, 3.9% share
  • vista.p: 74.4% relative, 2.9% total
  • system.p: 25.6% relative, 1% total

Ray Turner, 3.8% share
  • reserve.p: 100% relative, 3.8% total