
Configuration

Compute Nodes

The Genepool system comprises a heterogeneous collection of nodes to serve the diverse workloads of JGI users.  The table below shows the current configuration.

# Nodes | Cores/node | Memory/node | Local disk | Processor | Hostname | Vendor
2 | 32 | 1000 GB | 3.6 TB | Intel Xeon E5-4650L | mndlhm0205-ib,mndlhm0405-ib.nersc.gov | Appro
5 | 32 | 500 GB | 3.6 TB | Intel Xeon E5-4650L | mndlhm[01-05]03.nersc.gov | Appro
222 | 16 | 120 GB | 1.8 TB | Intel Xeon E5-2670 | mc01[55-72],mc02[01-68],mc04[01-68],mc05[01-64]-ib.nersc.gov | Appro
450 | 8 | 48 GB | 1 TB | Intel Xeon L5520 2.27 GHz | sgi[01a01-06b40].nersc.gov | SGI
64 | 8 | 48 GB | 500 GB | Intel Xeon L5520 2.27 GHz | quad[01-64].nersc.gov | SuperMicro
20 | 8 | 144 GB | 512 GB | Intel Xeon L5520 2.27 GHz | x4170a[01-20].nersc.gov | Sun
4 | 32 | 512 GB | 1 TB | AMD Opteron 2.28 GHz | gpht-[01-04].nersc.gov | Sun
1 | 80 | 2 TB | 300 GB | Intel Xeon X7560 2.27 GHz | b2r2ibm1t-02.nersc.gov | IBM
1 | 32 | 1 TB | 600 GB | Intel Xeon X7560 2.27 GHz | gptb-01.nersc.gov | Dell
7 | 24 | 256 GB | 600 GB | Intel Xeon X7542 2.67 GHz | uv10-[1-7].nersc.gov | SGI

Login Nodes

Genepool currently has four login nodes.  When you ssh to genepool.nersc.gov you land on one of them: the login nodes sit behind a load balancer, which directs you to the node with the fewest connections, or to the node where you already have active connections.  The individual nodes are named genepool01, genepool02, genepool03 and genepool04, but you should always access them with ssh username@genepool.nersc.gov.  Each login node has 8 cores, 32 GB of RAM, and 2.3 GHz processors.
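
For example, a typical session looks like the following; hostname is just the standard Linux command, and the specific node the load balancer picks for you will vary:

    # Connect through the load balancer (do not ssh to an individual genepool0N node)
    ssh username@genepool.nersc.gov

    # Once logged in, see which login node you landed on
    hostname    # e.g. genepool02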

"gpint" Analysis Nodes

Genepool also has nodes that are used for pipeline control, pre- and post-processing of jobs, and analysis jobs. At present these nodes are allocated per group and are called gpintNN (NN = 01, 02, ...). Refer to the table below to determine which nodes belong to your group and use only those nodes; a quick way to verify a node's resources once you are logged in is sketched after the table.

Node Name | Legacy JGI Name | Assigned Group | Cores | Memory | Per-process Memory Limit
gpint01.nersc.gov | one.jgi-psf.org | R & D | 24 | 252.4 GB | 214.56 GB
gpint02.nersc.gov | bcg1.jgi-psf.org | Comparative Genomics (Vista) | 16 | 31.5 GB | N/A
gpint03.nersc.gov | bcg2.jgi-psf.org | Comparative Genomics (Vista) | 16 | 31.5 GB | N/A
gpint04.nersc.gov | merced.jgi-psf.org | IMG | 16 | 63.0 GB | 53.59 GB
gpint05.nersc.gov | img-worker.jgi-psf.org | IMG | 32 | 505 GB | 429.23 GB
gpint06.nersc.gov | zeus.jgi-psf.org | GBP | 16 | 63.0 GB | 53.59 GB
gpint07.nersc.gov | ranger.jgi-psf.org | Plant | 24 | 252.4 GB | 214.56 GB
gpint08.nersc.gov | boiler.jgi-psf.org | Plant | 24 | 252.4 GB | 214.56 GB
gpint09.nersc.gov | sedona.jgi-psf.org | Plant | 24 | 252.4 GB | 214.56 GB
gpint10.nersc.gov | willow.jgi-psf.org | Plant | 24 | 252.4 GB | 214.56 GB
gpint11.nersc.gov | actinium.jgi-psf.org | Plant | 32 | 505 GB | 429.23 GB
gpint12.nersc.gov | bat.jgi-psf.org | R & D | 24 | 239.9 GB | 203.91 GB
gpint13.nersc.gov | quarter.jgi-psf.org | Fungal | 48 | 252.4 GB | 214.54 GB
gpint14.nersc.gov | chekov.jgi-psf.org | Hardware Failure - OFFLINE | N/A | N/A | N/A
gpint15.nersc.gov | stimpy.jgi-psf.org | General purpose | 8 | 47.3 GB | 40.17 GB
gpint16.nersc.gov | ren.jgi-psf.org | General purpose | 8 | 62.3 GB | 52.91 GB
gpint17.nersc.gov | thallium.jgi-psf.org | General purpose | 16 | 108.4 GB | 92.16 GB
gpint18.nersc.gov | indium.jgi-psf.org | General purpose | 16 | 126.2 GB | 107.24 GB
gpint19.nersc.gov | gallium.jgi-psf.org | General purpose | 16 | 126.2 GB | 107.24 GB
gpint20.nersc.gov | cadmium.jgi-psf.org | General purpose | 16 | 126.2 GB | 107.24 GB
gpint21.nersc.gov | itchy.jgi-psf.org | General purpose | 16 | 165.7 GB | 140.83 GB
gpint22.nersc.gov | wesley.jgi-psf.org | General purpose | 16 | 70.9 GB | 60.29 GB
gpint23.nersc.gov | n/a | SDM - PacBio | 8 | 47.3 GB | 40.17 GB
gpint24.nersc.gov | rqc-dev.jgi-psf.org | RQC | 8 | 47.3 GB | 40.17 GB
gpint25.nersc.gov | rqc-prod.jgi-psf.org | RQC | 8 | 47.3 GB | 40.17 GB
gpint26.nersc.gov | sdm-dev.jgi-psf.org | SDM | 8 | 47.3 GB | 40.17 GB
gpint27.nersc.gov | sdm-prod.jgi-psf.org | SDM | 8 | 47.3 GB | 40.17 GB
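
If you want to confirm that a gpint node matches the table, the standard Linux commands below give a quick check (a minimal sketch; the reported numbers will not exactly match the rounded values above, and the per-process limit may be enforced by mechanisms other than ulimit):

    # Run these on the gpint node assigned to your group
    nproc          # number of cores
    free -g        # total and available memory, in GB
    ulimit -v      # per-process virtual memory limit in kB, or "unlimited"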


Other Special Purpose Nodes

Nodes that perform specialized tasks, such as running web servers and/or databases, are also part of Genepool, although they may not have all the features of login or analysis nodes (such as NGF file systems or login access). These nodes are also currently assigned on a per-group basis, and their use is restricted to the groups to which they have been assigned.

Node Name | Legacy JGI Name | Other Names | Group
gpweb01.nersc.gov | vista.jgi-psf.org | vista.jgi-psf.org, genome.lbl.gov, pga.lbl.gov, www-pga.lbl.gov, gsd.lbl.gov, www.gsd.lbl.gov, www-gsd.lbl.gov | Comparative Genomics (Vista)
gpweb02.nersc.gov | hazelton.jgi-psf.org | hazelton.jgi-psf.org, atgc.lbl.gov, chr16.lbl.gov, enhancer.lbl.gov, enhancer-test.lbl.gov, genome-test.lbl.gov, hazelton.lbl.gov, pipeline-test.lbl.gov, regprecise.lbl.gov, regpredict.lbl.gov, rviewer.lbl.gov | Comparative Genomics (Vista)
gpweb03.nersc.gov | helix.jgi-psf.org | pyrotagger.jgi-psf.org | General web server
gpweb04.nersc.gov | img-edge1.jgi-psf.org |  | IMG web server
gpweb05.nersc.gov | img-edge2.jgi-psf.org |  | IMG web server
gpweb06.nersc.gov | img-edge3.jgi-psf.org |  | IMG web server
gpweb07.nersc.gov | img-edge4.jgi-psf.org |  | IMG web server
gpweb08.nersc.gov | athena.jgi-psf.org | geneprimp.jgi-psf.org, coal.jgi-psf.org, clams.jgi-psf.org, gold.jgi-psf.org, gold-dev.jgi-psf.org | IMG/GBP web server
gpweb09.nersc.gov | galaxy.jgi-psf.org |  | Galaxy web server
gpweb10.nersc.gov | galaxy-dev.jgi-psf.org |  | Galaxy Development web server
gpdb01.nersc.gov | lemur.jgi-psf.org | lemur.jgi-psf.org | Comparative Genomics (Vista)
gpdb02.nersc.gov | RESERVED | RESERVED | RESERVED
gpdb03.nersc.gov | RESERVED | RESERVED | RESERVED
gpdb04.nersc.gov | RESERVED | RESERVED | RESERVED
gpdb05.nersc.gov | polonium.jgi-psf.org | polonium.jgi-psf.org | Plant

Interconnect

The majority of the compute nodes are connected to a Gigabit (1 Gb/s) Ethernet switch.  A few nodes (details forthcoming) are connected via 10 Gb/s Ethernet.

File Systems

Genepool will mount a number of file systems.  See the file systems page for more details; a quick way to check which of these are visible from your session is sketched after the list.

  • Global homes
  • /usr/common
  • JGI 2.7PB GPFS file system "projectb"
  • $SCRATCH
  • /house
  • /jgi/tools
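
For example, from a login or gpint node the standard commands below will confirm the mounts (a minimal sketch; consult the file systems page for the authoritative paths and quotas):

    # Environment variables point at your personal areas
    echo $HOME       # global home directory
    echo $SCRATCH    # your scratch space

    # Check that the shared file systems listed above are mounted
    df -h /usr/common /house /jgi/tools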

Batch System

Genepool/Phoebe will use a fair-share batch scheduler called "UGE".  See our documentation on submitting jobs and on queues and policies for more details; a minimal job script is sketched below.
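
As a rough illustration only, a minimal UGE job script follows standard Grid Engine conventions.  The script name, resource request, and command here are assumptions, so check the submitting-jobs and queues-and-policies pages for the Genepool-specific options before relying on them:

    #!/bin/bash
    # myjob.sh -- hypothetical example job script
    #$ -N myjob            # job name
    #$ -cwd                # run in the directory the job was submitted from
    #$ -j y                # merge stdout and stderr into one output file
    #$ -l h_rt=01:00:00    # requested wall-clock time (assumed limit format)

    ./my_analysis          # replace with your actual command

Submit the script with "qsub myjob.sh" and check its status with "qstat".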