PDSF is a networked distributed computing cluster designed primarily to meet the detector simulation and data analysis requirements of Physics, Astrophysics, and Nuclear Science collaborations. These collaborations are often data intensive and use grid technologies both for remote job submission and data transfer. They are also among the heaviest users of NERSC HPSS. Historically, PDSF workloads have consisted almost entirely of serial jobs, so the cluster is not optimized for parallel jobs, although they are not expressly prohibited.
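A typical PDSF workload, as described above, is a single serial job submitted to the batch system. The sketch below is a minimal serial batch script in Slurm style; it is illustrative only, and the directive values shown (job name, time limit) are placeholders rather than actual PDSF queue or account settings, which should be confirmed with PDSF staff.

```shell
#!/bin/bash
# Minimal serial batch script (illustrative sketch; values are placeholders,
# not actual PDSF queue or account names).
#SBATCH --job-name=serial-example
#SBATCH --ntasks=1            # one serial task, the typical PDSF workload
#SBATCH --time=00:10:00       # short time limit for an initial test run

# Run a single serial analysis task on the assigned compute node
echo "running serial analysis on $(hostname)"
```

Assuming a Slurm-based batch system, such a script would be submitted with a command like `sbatch serial-example.sh`; each group's pending jobs are then scheduled against its share of the cluster.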
PDSF differs from typical NERSC systems in that it does not rely on allocations for CPU time. Instead, each collaboration or group receives shares in the batch system proportional to its contributions to shared PDSF resources such as compute nodes or other infrastructure, including PDSF staffing. However, HPSS usage is managed through the NERSC allocations process, so Principal Investigators still need to prepare an annual ERCAP request specifying their HPSS needs.
If you think you might be interested in using PDSF, please contact PDSF staff either by filing a trouble ticket or by filling out the PDSF account request form, specifying "New Group Request" as your project. It is not necessary to contribute shared resources in order to use PDSF for an initial evaluation period.