
Extra-Large Memory Nodes

Extra-Large Memory Nodes Overview

Carver has two "extra-large" memory nodes; each node has four 8-core Intel X7550 ("Nehalem EX") 2.0 GHz processors (32 cores total) and 1 TB of memory. These nodes are available through the "reg_xlmem" queue and can be used for interactive and batch jobs that require large amounts of memory (16 GB per core or more).

reg_xlmem queue

Please refer to the "Queues and Policies" page for the detailed configuration of the reg_xlmem queue. 

Shared Resource

Please note that the 1 TB nodes are shared among multiple users at any given time. Limit your code to using only the resources (CPU cores and memory) that you requested.
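If your code is threaded, one simple way to keep it within the requested core count is to set the thread count to match ppn. The sketch below assumes an OpenMP code; the executable name is hypothetical:

```shell
#PBS -q reg_xlmem
#PBS -l nodes=1:ppn=4
#PBS -l mem=64GB

# Limit the OpenMP runtime to the 4 requested cores
# (OMP_NUM_THREADS is the standard OpenMP environment variable).
export OMP_NUM_THREADS=4
./my_openmp_code   # hypothetical executable
```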

Note: Use the mem PBS keyword to specify the total memory for all tasks of the job; do not use the pvmem keyword, which specifies per-task memory.
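For instance, a 4-task job in which each task needs about 64 GB would request the total with mem rather than the per-task amount with pvmem (a directive sketch only):

```shell
# Correct: total memory for all 4 tasks (4 x 64 GB = 256 GB)
#PBS -l mem=256GB

# Avoid: pvmem requests per-task memory
# #PBS -l pvmem=64GB
```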

Wait Time

The wait time is much shorter for jobs that request fewer than 8 cores and less than 96 GB of memory in total, so please request only the resources that you need.
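For example, an interactive request that stays under both thresholds (4 cores and 64 GB total; the one-hour walltime is only an illustration) might look like:

```shell
qsub -I -q reg_xlmem -l nodes=1:ppn=4 -l walltime=01:00:00 -l mem=64GB
```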

Interactive Jobs with X-windows Forwarding

To use large analysis tools such as MATLAB, you can request an interactive job with X-windows forwarding enabled. Running the following command on a Carver login node will start an interactive session with access to 1 processor core and 16 GB of memory for 1 hour. The "-X" option enables X-windows forwarding.

qsub -I -X -q reg_xlmem -l nodes=1:ppn=1 -l walltime=01:00:00 -l mem=16GB

Use of NX

X-windows forwarding is normally slow due to network latency. NERSC provides the NX service, which can significantly improve the X-windows experience. We strongly encourage you to use NX if you are running X-windows based interactive software.

Running Batch Jobs

The following PBS script requests 4 cores and 300 GB of total memory (not per-core memory):

#!/bin/bash
#PBS -q reg_xlmem          # submit to the extra-large memory queue
#PBS -l mem=300GB          # total memory for the whole job, not per core
#PBS -l nodes=1:ppn=4      # 1 node, 4 cores

cd $PBS_O_WORKDIR          # run from the directory the job was submitted from
mpirun -np 4 ./my_xlmem_executable