

Compute Nodes

The Genepool system is made up of a heterogeneous collection of nodes to serve the diverse workloads of JGI users.  Below is a table of the current configuration.

# Nodes | Cores/node | Memory/node | Local disk | Processor                  | Hostname                         | Vendor
2       | 32         | 1000GB      | 3.6TB      | Xeon E5-4650L              | mndlhm0205-ib                    | Appro/Cray
5       | 32         | 500GB       | 3.6TB      | Xeon E5-4650L              | mndlhm[01-05]                    | Appro/Cray
8       | 16         | 248GB       | 1.8TB      | Xeon E5-2670               |                                  | Appro/Cray
212     | 16         | 120GB       | 1.8TB      | Xeon E5-2670               |                                  | Appro/Cray
14      | 20         | 120GB       |            |                            | mc1322-33, mc1344                |
14      | 32         | 120GB       |            |                            | mc1535-48                        |
1       | 20         | 248GB       |            |                            | mc1359                           |
100     | 32         | 248GB       |            |                            | mc1637-48, mc1705-60, mc1601-22  |
12      | 32         | 500GB       |            |                            | mc1625-36                        |
1       | 80         | 2TB         | 300GB      | Intel Xeon X7560 2.27 GHz  |                                  | IBM
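
Because the configuration changes over time, the batch system itself is the authoritative source for per-node core and memory counts.  As a sketch, assuming Grid Engine's standard qhost command is available on the login nodes (the hostname below is taken from the table above):

    # List every execution host with its core count, load, and memory
    qhost

    # Show details for a single host
    qhost -h mc1359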

Login Nodes

Genepool currently has two login nodes.  Users land on one of the two login nodes when they ssh to genepool.nersc.gov.  The login nodes sit behind a load balancer, so you will land on the login node with the fewest connections; if you already have active connections, you will be directed to the same login node.  The individual login nodes are named genepool13 and genepool14, but you should always access them via ssh to genepool.nersc.gov.
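
For example (where "username" is a placeholder for your NERSC username):

    ssh username@genepool.nersc.gov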

"gpint" Analysis Nodes

Genepool also has nodes which are used for pipeline control, pre- and post-processing of jobs, and analysis jobs.  At present, these nodes are allocated per group and are called gpintNNN (NNN = 200, 201, ...).  Users should refer to the table below to determine which nodes belong to their group and only use the nodes allocated to their group.

Node Name | Assigned Group | Cores | Memory | Memory Limit
          | R & D          | 20    | 256G   | 208G
          | R & D          | 20    | 256G   | 208G
          | Plant          | 20    | 256G   | 208G
          | Plant          | 20    | 256G   | 208G
          | IMG            | 20    | 128G   | 101G
          | IMG            | 20    | 128G   | 101G
          | GBP            | 20    | 128G   | 101G
          | SDM            | 20    | 128G   | 101G
          | RQC            | 20    | 128G   | 101G
          | Assembly       | 20    | 128G   | 101G
          | Assembly       | 20    | 128G   | 101G
          | Vista          | 20    | 128G   | 101G
          | IMG            | 20    | 128G   | 101G
          | Fungal         | 20    | 128G   | 101G
          | IMG            | 20    | 512G   |
          | Plant          | 20    | 512G   |
          | Fungal         | 20    | 512G   |
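
Once you know which gpint node is assigned to your group, you can reach it directly from a login node.  A hypothetical example, assuming gpint200 is one of your group's nodes (substitute your group's actual node name):

    ssh gpint200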


Other Special Purpose Nodes

Nodes which perform specialized tasks such as running a web-server and/or databases are also part of Genepool, although they may not have all the features of login or analysis nodes (such as NGF filesystems, login access, etc.). These nodes are also currently assigned on a per-group basis. Use of these nodes is restricted to the groups to which they have been assigned.

Oracle Databases

Type                 | Hostname
Oracle Database      | gpodb05
Oracle Database      | gpodb01
Oracle Database      | gpodb04
Oracle Database      |
Oracle Database      | gpodb03
Oracle Database      |
Oracle Database      |
Oracle Database      |
Oracle Database      |
Oracle Database      |
Oracle Database      |
Oracle Database      |
Oracle Database      |
PostgreSQL Database  |

MySQL Databases

Type                             | Hostname | Group / Purpose
MySQL Database                   |          |
MySQL Database                   |          | portal
MySQL Database                   |          | portal
PostgreSQL Database              |          | Plant database
PostgreSQL Database              |          |
MySQL Database for Portal group  | gpdb07   | portal
MySQL Database for Portal group  |          |
MySQL Database                   |          |
MySQL Database for Plant Group   |          |
MySQL Database for Plant Group   |          |

Web Services




Description                   | Hostname   | Legacy Name / Member Nodes
Web server                    | gpweb01    |
Web server                    | gpweb02    |
IMG web nodes                 | gpweb04-07 | img-edge[1-4]
Web server                    | gpweb08    | geneprimp, coal, clams, gold, gold-dev, ani
Galaxy server                 | gpweb09    |
Galaxy dev server             | gpweb10    |
Portal server                 | gpweb11    |
Portal server                 | gpweb12    |
Portal web nodes              | gpweb13-16 | gp-edge[1-4]
Portal server                 | gpweb17    |
Portal server                 | gpweb18    |
Portal server                 | gpweb19    |
Plant web nodes               | gpweb20-22 | zome-edge[1-3]
Plant dev web node            | gpweb23    |
Plant web node                | gpweb24    |
Plant web node                | gpweb25    |
Web server                    | gpweb26    |
2nd IP for gpweb26            | gpweb27    |
Portal load balanced website  | genome-lb  | gpweb13-16
Plant load balanced website   | zome-lb    | gpweb20-22, gpweb24-25
IMG load balanced website     | img-lb     | gpweb04-07


The majority of the compute nodes are connected via a 1 Gb/sec Ethernet switch.  A few nodes (details forthcoming) are connected via 10 Gb/sec Ethernet.

File Systems

Genepool will mount a number of file systems.  See the file systems page for more details.

  • Global homes
  • /usr/common
  • JGI 2.7PB GPFS file system "projectb"
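
As a sketch of how these typically appear on NERSC systems (the projectb mount point below is an assumption; the file systems page is authoritative):

    # Your global home directory
    echo $HOME

    # Shared software tree
    ls /usr/common

    # JGI's projectb GPFS file system (path is an assumption)
    ls /global/projectb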

Batch System

Genepool/Phoebe will use a fair-share batch scheduler called UGE (Univa Grid Engine).  See our documentation on submitting jobs and queues and policies for more details.
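
As a minimal sketch of what job submission looks like under Grid Engine (the directives and resource values below are generic illustrations, not Genepool policy; consult the queues and policies page for the site-specific resource names and limits):

    #!/bin/bash
    # myjob.sh -- example UGE batch script
    #$ -N myjob              # job name (illustrative)
    #$ -l h_rt=01:00:00      # hard wall-clock limit of one hour
    #$ -cwd                  # run the job from the submission directory
    #$ -j y                  # merge stdout and stderr into one file

    ./my_analysis.sh         # your actual workload (hypothetical script)

Submit it from a login node with:

    qsub myjob.sh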