NERSC: Powering Scientific Discovery Since 1974

Exciting new PDSF developments

February 25, 2014

I'm pleased to announce that PDSF successfully deployed new login nodes last week. Some of you may already have noticed that you are now landing on nodes named pdsf[6-8] when you ssh to PDSF. The new login nodes use the faster Mendel InfiniBand (IB) network and more modern hardware. We've gone from four nodes to three, but because each node has a higher core count, the total processing power stays the same. The old interactive nodes will remain available for a short while; you can access them by sshing directly to pdsf[1-4]. If you have any cron jobs running on these old nodes, please migrate them to the new login nodes.

Along with the new login nodes, PDSF is now mounting the NERSC global scratch file system. This file system is intended for temporary uses such as storing checkpoint information or short-term storage of application output. Each PDSF user has a directory in global scratch; the path to yours is stored in the $GSCRATCH environment variable. The default quota for global scratch directories is 20 TB and 4 million inodes (the total number of files and directories). Files in global scratch are PURGED (i.e. deleted) if they have not been accessed for 12 weeks, and once purged they cannot be retrieved. Global scratch is not intended for long-term storage of data! Please use your group's project or eliza directory, or HPSS, for long-term storage.
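As a quick sketch of the intended workflow, the snippet below stages temporary job output under $GSCRATCH (the directory and file names are made up for illustration; it falls back to a temp dir so the sketch runs anywhere):

```shell
# Hypothetical example: stage short-lived job output in global scratch.
# $GSCRATCH is set automatically in your PDSF environment; the fallback
# here only exists so the sketch is self-contained.
: "${GSCRATCH:=$(mktemp -d)}"

RUNDIR="$GSCRATCH/myjob_output"   # myjob_output is an illustrative name
mkdir -p "$RUNDIR"
echo "checkpoint data" > "$RUNDIR/checkpoint.dat"
ls "$RUNDIR"
```

Remember that anything left here untouched for 12 weeks will be purged, so copy results you want to keep to your project/eliza directory or to HPSS.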
In order to submit jobs that access global scratch, you must use the gscratchio resource flag:

qsub -l gscratchio=1 <your_executable>

Global scratch is only mounted on the newer compute nodes, so if you don't use this flag your job is likely to fail.
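Putting this together, a minimal job script might look like the following (the script name, executable, and output file are illustrative; the -l gscratchio=1 resource request is what steers the job onto nodes that mount global scratch):

```shell
# Illustrative SGE-style job script; names are hypothetical.
cat > myjob.sh <<'EOF'
#!/bin/bash
#$ -l gscratchio=1
cd $GSCRATCH
./my_analysis > output.log
EOF

# Submit it with the resource flag (equivalently embedded above):
#   qsub myjob.sh
cat myjob.sh
```

Embedding the flag in the script with an `#$` directive and passing it on the qsub command line are interchangeable; use whichever fits your workflow.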

Many thanks to everyone who worked hard behind the scenes to make these improvements happen.

If you encounter any issues, please file a ticket with the NERSC help desk.