NERSC: Powering Scientific Discovery Since 1974

File Storage and I/O

Edison File Systems

The Edison system has four file systems, which provide different levels of disk storage and I/O performance. They are summarized below.

Home
  • Global home file system shared with other NERSC systems; all NERSC machines mount the same home directory.
  • GPFS file system; this is where users land when they log into the system.
  • Default quota: 40 GB and 1 million inodes.
  • Intended purpose: shell initializations, storing source code, and compiling codes. Not intended for I/O-intensive applications.
  • Peak performance: low, ~100 MB/sec.
  • Purged? No.

Local Scratch
  • Three Lustre file systems with 7.5 PB of total disk space.
  • "Local" means the files cannot be viewed on other NERSC systems.
  • Default quota: 10 TB* and 10 million inodes.
  • Intended purpose: running production applications, I/O-intensive jobs, and temporary storage of large files.
  • Peak performance: 168 GB/sec.
  • Purged? Yes, files older than 12 weeks are purged.

Global Scratch
  • Large (3.9 PB) GPFS file system for temporary storage.
  • Currently mounted on all NERSC systems except PDSF.
  • Default quota: 20 TB and 2 million inodes.
  • Intended purpose: an alternative file system for running production applications, temporary storage of large files, and aiding users whose workflows require running on multiple platforms.
  • Peak performance: 80 GB/sec.
  • Purged? Yes, files older than 8 weeks are purged.

Project
  • GPFS global file system mounted on all NERSC systems.
  • Default quota: 1 TB and 1 million inodes.
  • Intended purpose: running production applications, groups needing shared data access, and projects running on multiple NERSC machines.
  • Peak performance: 40 GB/sec.
  • Purged? No.

*) Edison quotas are set on /scratch1 and /scratch2 but not on /scratch3. The /scratch3 file system is also subject to purging.

Lustre Scratch Directories

Edison has three scratch file systems, named /scratch1, /scratch2, and /scratch3. The first two each have 2.1 PB of disk space and 48 GB/sec of I/O bandwidth, while the third has 3.2 PB of disk space and 72 GB/sec of I/O bandwidth. Users are assigned to either /scratch1 or /scratch2 in a round-robin fashion, so a user can use one or the other but not both. The third file system is reserved for users who need large I/O bandwidth, and access is granted by request. If you need large I/O bandwidth to conduct more efficient computations and data analysis at NERSC, please submit your request by filling out the SCRATCH3 Directory Request Form.

The /scratch1 and /scratch2 file systems should always be referenced using the environment variable $SCRATCH (which expands to /scratch1/scratchdirs/YourUserName or /scratch2/scratchdirs/YourUserName on Edison). The scratch file systems are available from all nodes (login, MOM, and compute nodes) and are tuned for high performance. We recommend that you run your jobs, especially data-intensive ones, from the scratch file systems.

All users have a 10 TB quota on their scratch file system. If your $SCRATCH usage exceeds this quota, you will not be able to submit batch jobs until you reduce your usage. No quotas are set on the /scratch3 file system; the batch job submit filter checks only your usage of /scratch1 or /scratch2, not /scratch3.

The "myquota" command (with no options) will display your current usage and quota.  NERSC sometimes grants temporary quota increases for legitimate purposes. To apply for such an increase, please use the Disk Quota Increase Form.
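When `myquota` shows you are near a limit, standard utilities can help track down what is consuming space and inodes. The following sketch uses only portable commands (`du`, `find`); the fallback to "." when $SCRATCH is unset is for illustration.

```shell
# Find what is using space and inodes under your scratch directory.
# $SCRATCH is assumed to be set, as on Edison; "." is a fallback for testing.
dir="${SCRATCH:-.}"
du -sh "$dir"                # total disk space used under the directory
find "$dir" | wc -l          # rough inode count: files plus directories
du -sh "$dir"/* 2>/dev/null | sort -rh | head   # largest top-level entries
```

Deleting or archiving the largest entries reported by the last command is usually the quickest way to get back under quota.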

The scratch file systems are subject to purging. Files in your $SCRATCH directory that are older than 12 weeks (defined by last access time) are removed. Please make sure to back up your important files (e.g., to HPSS); see the HPSS documentation for instructions.
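Because the purge is based on last access time, you can preview which files are at risk before they are removed. A minimal sketch, assuming 12 weeks is counted as 84 days:

```shell
# List files under $SCRATCH that have not been accessed in 84 days (12 weeks).
# These are the files a purge would remove, so archive them (e.g. to HPSS)
# before they disappear. "." is a fallback if $SCRATCH is not set.
find "${SCRATCH:-.}" -type f -atime +84 -print
```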

The /scratch3 file system is subject to purging as well. Currently the same purging policy applies to /scratch3. Starting February 4, 2015, files older than 8 weeks will be deleted from the /scratch3 file system.

Scratch Filesystem Configuration

File System   Size (PB)   Aggregate Peak Performance   # of Disks   # of I/O Servers (OSSs)   # of OSTs   File System Software   Disk Array Vendor
/scratch1     2.1         48 GB/sec                    12           24                        96          Lustre                 Cray
/scratch2     2.1         48 GB/sec                    12           24                        96          Lustre                 Cray
/scratch3     3.2         72 GB/sec                    18           36                        144         Lustre                 Cray

The table shows the Edison scratch file system configurations. The /scratch1 and /scratch2 file systems each have 96 OSTs (Object Storage Targets), the lowest I/O layer with which users need to interact; the third file system has 144 OSTs. The file systems have different default stripe counts: when a file is created in /scratch1 or /scratch2, it is "striped," or split, across two different OSTs by default, while a file created on /scratch3 is striped across eight different OSTs by default. Striping is a technique to increase I/O performance: instead of writing to a single disk, striping across two disks allows the user to potentially double the read and write bandwidth. Lustre file systems at other computing centers may set a different default based on their workload.
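The default stripe count can be overridden per directory with Lustre's `lfs` tool. The sketch below is guarded so it degrades gracefully where no Lustre client is available; the directory name is illustrative.

```shell
# Set Lustre striping on a new directory so that large files created in it
# are striped across 8 OSTs (matching the /scratch3 default). The directory
# name is illustrative; `lfs` exists only on Lustre clients.
dir="${SCRATCH:-.}/wide_stripe_demo"
if command -v lfs >/dev/null 2>&1; then
    mkdir -p "$dir"
    lfs setstripe -c 8 "$dir"    # -c: stripe count (number of OSTs to use)
    lfs getstripe "$dir"         # verify the layout new files will inherit
else
    echo "lfs not found: run this on a Lustre file system such as \$SCRATCH"
fi
```

Files inherit the striping of the directory they are created in, so setting the layout once on a run directory is usually enough.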

Do Not Use /tmp Explicitly

WARNING: Do not attempt to explicitly use a file system named /tmp. Some software tools (editors, compilers, etc.) use the location specified by the $TMPDIR environment variable to store temporary files. Additionally, Fortran codes which open files with status="scratch" will write those files into $TMPDIR. On many Unix systems, $TMPDIR is set to /tmp. NERSC has set $TMPDIR to be $SCRATCH. Please do not redefine $TMPDIR!
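A small demonstration of how tools honor $TMPDIR, using the standard `mktemp` utility; the printed path is whatever $TMPDIR points at on the machine where this runs.

```shell
# mktemp, like many tools, creates its temporary files under $TMPDIR.
# On Edison $TMPDIR is set to $SCRATCH, so editor/compiler temporaries and
# Fortran status="scratch" files land on the scratch file system automatically.
tmpfile=$(mktemp)
echo "temporary file created at: $tmpfile"
case "$tmpfile" in
    "${TMPDIR:-/tmp}"/*) echo "created under \$TMPDIR" ;;
esac
rm -f "$tmpfile"
```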


Edison /scratch3 Directory Request Form

Use this form to request /scratch3 directory space on Edison. The /scratch3 file system is reserved for users who need large I/O bandwidth. Please provide a few sentences describing the planned use of the /scratch3 file system, and explain why higher I/O bandwidth is needed. Please note: users have no quota on the /scratch3 file system, but files older than 8 weeks will be purged from it.