Using Carver for PDSF jobs
The table below compares PDSF to Carver (serial queue) and shows that Carver is quite similar to PDSF but has higher clock speeds and allows longer jobs. The operating systems are just different versions of SL5. From the user's perspective, the biggest difference between the two systems is probably the batch systems. PDSF is capable of higher throughput in principle but is often filled to capacity, whereas there are usually cycles available immediately on Carver.
| | PDSF | Carver Serial Queue |
| --- | --- | --- |
| Operating System | SL 5.3 | SL 5.5 |
| Memory/Core | 2, 3, or 4 GB | 3.5 GB |
| Clock Speed | 2.1-2.3 GHz | 2.67 GHz |
| Maximum Wallclock | 24 hours | 48 hours |
| Maximum Total Jobs | 30000 | 1500 |
Another advantage of working on Carver is that you will have access to global homes and global scratch. Your home directory quota will be considerably larger than on PDSF and you get TBs of space on global scratch (note the purging details, though). For more details see the Carver File Storage page.
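As a quick orientation, the snippet below shows one way to stage work in global scratch. The $GSCRATCH environment variable and the directory layout are assumptions based on typical NERSC setups; check the Carver File Storage page for the actual paths before relying on them.

```shell
# Stage I/O-heavy work in global scratch rather than your home directory.
# $GSCRATCH is assumed to point at your global scratch area; substitute
# the actual path from the Carver File Storage page if it is not set.
echo "Scratch area: $GSCRATCH"
mkdir -p $GSCRATCH/myrun
cd $GSCRATCH/myrun
```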
To run jobs on Carver you need to create a batch script (we'll call it my_job.pbs) like the one shown below:
#PBS -q serial
#PBS -l walltime=4:00:00
#PBS -l pvmem=2GB

cd $PBS_O_WORKDIR
./a.out
Replace ./a.out with the command you want to run (PBS jobs start in your home directory, so the cd $PBS_O_WORKDIR line moves to the directory you submitted from). This job requests the serial queue, 4 hours of walltime, and 2GB of memory. Then you just use qsub to submit it:
carver% qsub my_job.pbs
For more details see the Running Jobs on Carver page.
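Resource requests can also be given on the qsub command line rather than in the script, and PBS supports interactive sessions via the -I flag. The specific walltime values below are illustrative, not site policy.

```shell
# Override a script directive at submission time:
qsub -l walltime=8:00:00 my_job.pbs

# Start an interactive session in the serial queue (PBS -I flag);
# handy for debugging a workflow before submitting long batch jobs.
qsub -I -q serial -l walltime=0:30:00
```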
PBS, like SGE on PDSF, uses qstat to show the status of the jobs in the system. However, PBS qstat options and SGE qstat options, while similar, are not always the same. Some useful PBS qstat commands are listed below:
| Action | How to do it | Comments |
| --- | --- | --- |
| Show all serial jobs | qstat -a serial -n | long output - consider piping to more |
| Show all jobs of one user | qstat -u <username> | long output - consider piping to more |
| Show details of one particular job | qstat -f <jobID> | |
| Show properties of the serial queue | qstat -f -Q serial | |
| Show properties/status of all queues | qstat -q or qstat -Q | two different forms of output |
| Delete a job | qdel <jobID> | |
For more details see the Monitoring Jobs on Carver page.
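The qstat and qdel commands in the table above can be combined in a pipeline to act on many jobs at once. The sketch below assumes the usual PBS qstat layout, with five header lines and the job ID in the first column; verify the column layout on Carver before relying on it.

```shell
# Delete all of your own queued and running jobs.
# Assumes qstat -u prints 5 header lines and job IDs in column 1.
qstat -u $USER | awk 'NR > 5 {print $1}' | xargs -r qdel
```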
There are a number of important PDSF file systems that are not available on Carver. These include /home, /common, all of the elizas, and /usr/local/pkg (modules software). On Carver you will log in to global home instead, /project or global scratch can be used instead of /common and the elizas, and you'll need to use the modules available on Carver instead of those on PDSF. If there is software you need, either file a ticket (if you think it is worth creating a module) or install it yourself on /project. For more details see the Carver File Systems page.
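A typical modules workflow on Carver looks like the following; the module name in the load command is only a placeholder, since the software actually installed on Carver differs from PDSF.

```shell
module avail        # list the software modules available on Carver
module load gcc     # load a module (the name here is illustrative)
module list         # show which modules are currently loaded
```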
As on PDSF you should not use the Carver interactive nodes for bulk data transfers, whether from PDSF to Carver or from remote sites to Carver. Instead use the NERSC data transfer nodes, dtn01.nersc.gov and dtn02.nersc.gov. See the NERSC Transferring Data page for more details.
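For example, a transfer from PDSF into Carver-visible storage might look like the following; only the data transfer node hostnames come from this page, and the destination path is a placeholder you should replace with your actual global scratch or /project directory.

```shell
# Run from PDSF: push data through a NERSC data transfer node
# rather than a Carver interactive node.
scp mydata.tar.gz <username>@dtn01.nersc.gov:/global/scratch/<path>/
```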
On PDSF allocations are not used. Instead, a job's priority is determined by the number of shares its group has and the recent history of what jobs have been running. On most other NERSC systems, including Carver, allocations are used. Your jobs are charged against your mpp repo's allocation which is not the same as your PDSF repo. You can find out what mpp repo you are in and how much of your allocation you have used in NIM. See the Accounts and Allocations page for more details.
If your allocation is getting used up and you want more you should file a ticket describing your needs. There is a special allocation set aside to support PDSF users running on Carver and you will probably get what you need.
These pages are kept under the group-specific part of the PDSF webpages and are maintained by PDSF staff. They contain any group-specific notes for running jobs on Carver.