NERSC: Powering Scientific Discovery Since 1974

File Storage and I/O

Edison File Systems

The Edison system has three file systems, which provide different levels of disk storage and I/O performance:

Home ($HOME)
  • Global home file system shared with other NERSC systems; all NERSC machines mount the same home directory.
  • GPFS file system.
  • Where users land when they log into the system.
  • Default quota: 40 GB and 1 million inodes.
  • Intended purpose: shell initializations, storing source code, and compiling codes; not intended for I/O-intensive applications.
  • Peak performance: low, ~100 MB/sec.
  • Purged: no.

Local Scratch ($SCRATCH for /scratch1 or /scratch2; no environment variable for /scratch3)
  • Three Lustre file systems with 7.5 PB of total disk space.
  • Local: files cannot be viewed on other NERSC systems.
  • Default quota: 10 TB* and 10 million inodes.
  • Intended purpose: running production applications, I/O-intensive jobs, and temporary storage of large files.
  • Peak performance: 168 GB/sec aggregate.
  • Purged: yes; files older than 12 weeks are purged on /scratch1 and /scratch2, and files older than 8 weeks are purged on /scratch3.

Project (no environment variable; must use the full directory path)
  • GPFS global file system mounted on all NERSC systems.
  • Default quota: 1 TB and 1 million inodes.
  • Intended purpose: running production applications, groups needing shared data access, and projects running on multiple NERSC machines.
  • Peak performance: 40 GB/sec.
  • Purged: no.

*) Edison quota is set on /scratch1 and /scratch2 but not /scratch3.

Lustre Scratch Directories

Edison has three scratch file systems named /scratch1, /scratch2, and /scratch3. The first two each provide 2.1 PB of disk space and 48 GB/sec of I/O bandwidth, while the third provides 3.2 PB of disk space with a peak I/O bandwidth of 72 GB/sec (currently degraded due to a known bug). Users are assigned to either /scratch1 or /scratch2 in a round-robin fashion, so a user can use one or the other but not both. The third file system is reserved for users who need high I/O bandwidth, and access is granted by request. If you need high I/O bandwidth to conduct more efficient computations and data analysis at NERSC, please submit your request by filling out the SCRATCH3 Directory Request Form.

The /scratch1 and /scratch2 file systems should always be referenced using the environment variable $SCRATCH (which expands to /scratch1/scratchdirs/YourUserName or /scratch2/scratchdirs/YourUserName on Edison). The scratch file systems are available from all nodes (login and compute) and are tuned for high performance. We recommend that you run your jobs, especially data-intensive ones, from the scratch file systems.
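As a minimal sketch, a job might stage its work under $SCRATCH like this (the job directory name is hypothetical, and the mktemp fallback is only so the sketch runs on systems where $SCRATCH is not set):

```shell
#!/bin/bash
# Always reference scratch through $SCRATCH, never a hard-coded
# /scratch1/... path, since your assignment may be /scratch1 or /scratch2.
# The mktemp fallback below is for illustration outside NERSC only.
: "${SCRATCH:=$(mktemp -d)}"

mkdir -p "$SCRATCH/myrun"    # hypothetical per-job work directory
cd "$SCRATCH/myrun"
echo "working directory: $PWD"
```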

All users have a 10 TB quota on the scratch file system. If your $SCRATCH usage exceeds your quota, you will not be able to submit batch jobs until you reduce your usage. No quota is set on the /scratch3 file system, and the batch job submit filter checks usage only on /scratch1 and /scratch2, not /scratch3.

The "myquota" command (with no options) will display your current usage and quota.  NERSC sometimes grants temporary quota increases for legitimate purposes. To apply for such an increase, please use the Disk Quota Increase Form.

The scratch file systems are subject to purging. Files in your $SCRATCH directory that are older than 12 weeks (defined by last access time) are removed. Please make sure to back up your important files (e.g., to HPSS). Instructions for HPSS are here.

The /scratch3 file system is subject to purging as well. Starting February 4, 2015, files older than 8 weeks are deleted from /scratch3.
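Because purged files are not recoverable, it is worth scripting the backup step before the purge window. A minimal sketch (the directory and archive names are hypothetical; plain tar is shown so the example is self-contained, while at NERSC the analogous HPSS command is htar with the same calling convention):

```shell
# Bundle a results directory into a single archive before the purge window.
# At NERSC you would archive to HPSS, e.g.:  htar -cvf results.tar myresults
mkdir -p myresults && echo "sample output" > myresults/out.txt
tar -cvf results.tar myresults
tar -tf results.tar    # list archive contents to verify the backup
```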

Scratch Filesystem Configuration

File System   Size (PB)   Aggregate Peak Performance   # of Disks   # of IO Servers (OSSs)   # of OSTs   File System Software   Disk Array Vendor
/scratch1     2.1         48 GB/sec                    12           24                       96          Lustre                 Cray
/scratch2     2.1         48 GB/sec                    12           24                       96          Lustre                 Cray
/scratch3*    3.2         72 GB/sec**                  18           36                       36          Lustre                 Cray

*) The /scratch3 file system used to have 144 OSTs. It was upgraded to GridRAID in December 2015, which reduced the number of OSTs to 36 but quadrupled the disk space per OST; the total storage capacity was unchanged.

**) Due to a known bug, /scratch3 performance is currently degraded. (Updated Apr 7, 2016)

The table above shows the Edison scratch file system configurations. The /scratch1 and /scratch2 file systems each have 96 OSTs (Object Storage Targets), the lowest I/O layer with which users need to interact; each OST has about 22 TB of disk space. The third file system has 36 OSTs, each with about 89 TB of disk space. The default stripe count for all three file systems is two, meaning that when a file is created, it is "striped," or split, across two different OSTs by default. Striping is a technique to increase I/O performance: instead of writing to a single disk, striping across two disks allows the user to potentially double the read and write bandwidth. Lustre file systems at other computing centers may set a different default based on their workload.
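With a stripe count of two, consecutive stripe-sized chunks of a file alternate between the two OSTs. A small sketch of the mapping (the 1 MiB stripe size here is an assumption for illustration; the actual default stripe size may differ):

```shell
# Map a byte offset to an OST index under striping:
#   ost_index = (offset / stripe_size) mod stripe_count
stripe_size=$((1024 * 1024))   # assumed 1 MiB stripe size (illustrative)
stripe_count=2                 # Edison's default stripe count
for offset in 0 524288 1048576 2097152; do
  echo "offset $offset -> OST $(( (offset / stripe_size) % stripe_count ))"
done
```

On a Lustre file system, `lfs getstripe filename` reports a file's actual layout, and `lfs setstripe` changes the stripe count or size for a file or directory.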

Do Not Use /tmp explicitly

WARNING: Do not explicitly use a file system named /tmp. Some software tools (editors, compilers, etc.) use the location specified by the $TMPDIR environment variable to store temporary files, and Fortran codes that open files with status="scratch" also write those files into $TMPDIR. On many Unix systems $TMPDIR is set to /tmp; NERSC has set $TMPDIR to $SCRATCH. Please do not redefine $TMPDIR!
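The effect can be seen with GNU mktemp, which honors $TMPDIR. In this sketch, $TMPDIR is overridden only for a single command to keep the illustration self-contained; on Edison you should leave $TMPDIR pointing at $SCRATCH:

```shell
# Demonstrate that temporary files follow $TMPDIR (GNU mktemp honors it).
# Overriding TMPDIR here is for illustration only; do not redefine it at NERSC.
demo_dir=$(mktemp -d)
tmpfile=$(TMPDIR="$demo_dir" mktemp)
echo "temporary file created at: $tmpfile"
```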


Edison /scratch3 Directory Request Form

Use this form to request /scratch3 directory space on Edison. The /scratch3 file system is reserved for users who need high I/O bandwidth. Please provide a few sentences describing your planned use of the /scratch3 file system and explain why higher I/O bandwidth is needed. Please note that there is no quota on the /scratch3 file system, but files older than 8 weeks are purged from it.