NERSC: Powering Scientific Discovery Since 1974

Edison File Storage and I/O

Edison File Systems

The Edison system has four different file systems; they provide different levels of disk storage and I/O performance. The table below describes these systems.

File System: Home
  • Environment variable: $HOME
  • Global home file system shared with other NERSC systems; all NERSC machines mount the same home directory.
  • GPFS file system.
  • Where users land when they log into the system.
  • Default quota: 40 GB and 1 million inodes.
  • Intended purpose: shell initializations, storing source code, and compiling codes. Not intended for I/O-intensive applications.
  • Peak performance: low, ~100 MB/sec.
  • Purged? No.

File System: Local Scratch
  • Environment variable: $SCRATCH for /scratch1 or /scratch2; none for /scratch3.
  • Three Lustre file systems with 7.5 PB of total disk storage space.
  • "Local" means the files cannot be viewed on other NERSC systems.
  • Default quota: 10 TB* and 10 million inodes.
  • Intended purpose: running production applications, I/O-intensive jobs, and temporary storage of large files.
  • Peak performance: 168 GB/sec.
  • Purged? Yes; files older than 8 weeks are purged on /scratch1, /scratch2, and /scratch3.

File System: Project
  • Environment variable: none; the full path must be used.
  • GPFS global file system mounted on all NERSC systems.
  • Default quota: 1 TB and 1 million inodes.
  • Intended purpose: running production applications, groups needing shared data access, and projects running on multiple NERSC machines.
  • Peak performance: 40 GB/sec.
  • Purged? No.

File System: Global Scratch
  • Environment variable: $CSCRATCH for /cscratch1.
  • Lustre file system with 30 PB of disk space.
  • Shared by Cori and Edison.
  • Default quota: 20 TB and 20 million inodes.
  • Intended purpose: running production applications, I/O-intensive jobs, and temporary storage of large files.
  • Peak performance: >700 GB/sec.
  • Purged? Yes; files older than 12 weeks (since last access) are purged.

*) On Edison, the quota is set on /scratch1 and /scratch2 but not on /scratch3.

Lustre Scratch Directories

Edison has three local scratch file systems named /scratch1, /scratch2, and /scratch3. The first two file systems each have 2.1 PB of disk space and 48 GB/sec of I/O bandwidth, while the third has 3.2 PB of disk space with a peak I/O bandwidth of 72 GB/sec. Users are assigned to either /scratch1 or /scratch2 in a round-robin fashion, so a user will be able to use one or the other but not both. The third file system is reserved for users who need large I/O bandwidth, and access is granted by request. If you need large I/O bandwidth to conduct more efficient computations and data analysis at NERSC, please submit your request by filling out the SCRATCH3 Directory Request Form.

The /scratch1 and /scratch2 file systems should always be referenced using the environment variable $SCRATCH (which expands to /scratch1/scratchdirs/YourUserName or /scratch2/scratchdirs/YourUserName on Edison). The scratch file systems are available from all nodes (login and compute) and are tuned for high performance. We recommend that you run your jobs, especially data-intensive ones, from the scratch file systems.
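For example, a job's working directory can be created under $SCRATCH before launching the run. This is a minimal sketch; the fallback path in the first line is an assumption added only so the snippet also runs outside Edison, where $SCRATCH is not set by the system.

```shell
# Create and switch to a per-run working directory under scratch.
# On Edison, $SCRATCH is set by NERSC; the /tmp fallback below is an
# assumption for non-NERSC hosts, not NERSC behavior.
SCRATCH=${SCRATCH:-/tmp/${USER:-user}-scratch}
mkdir -p "$SCRATCH/myrun"
cd "$SCRATCH/myrun"
echo "working directory: $(pwd)"
```

Submitting the batch job from this directory keeps all job I/O on the high-performance file system rather than $HOME.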

All users have a 10 TB quota on the scratch file system. If your $SCRATCH usage exceeds your quota, you will not be able to submit batch jobs until you reduce your usage. We have not set quotas on the /scratch3 file system. The batch job submission filter checks only the usage of /scratch1 or /scratch2, not /scratch3.

The "myquota" command (with no options) displays your current usage and quota. NERSC sometimes grants temporary quota increases for legitimate purposes. To apply for such an increase, please use the Disk Quota Increase Form.
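A hedged sketch of checking usage from the command line: "myquota" is the NERSC tool named above; on a plain Lustre client, the stock "lfs quota" command reports the same kind of per-user numbers and is shown here as an assumed alternative.

```shell
# Prefer NERSC's myquota; fall back to the stock Lustre client if present.
tool=$(command -v myquota || command -v lfs || echo none)
case "$tool" in
  */myquota) myquota ;;                           # NERSC usage/quota summary
  */lfs)     lfs quota -u "$USER" "$SCRATCH" ;;   # per-user usage on one Lustre mount
  none)      echo "no quota tool available on this host" ;;
esac
```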

The scratch file systems are subject to purging. Files in your $SCRATCH directory that are older than 12 weeks (defined by last access time) are removed. Please be sure to back up your important files (e.g., to HPSS). Instructions for HPSS are here.
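Since the purge is keyed on last access time, the standard "find -atime" test can preview which files are at risk. This sketch works on a throwaway /tmp directory (an assumption for illustration); on Edison you would point it at $SCRATCH. The htar invocation in the comment is NERSC's usual HPSS archiving tool, shown as a hedged example rather than a prescribed command.

```shell
# 12 weeks = 84 days; -atime +84 matches files not accessed in longer than that.
mkdir -p /tmp/purge-demo
touch -a -t 201601010000 /tmp/purge-demo/old.dat   # fake an old access time
touch /tmp/purge-demo/new.dat                      # freshly accessed file
find /tmp/purge-demo -type f -atime +84            # lists only old.dat
# Before the purge removes such files, archive them to HPSS, e.g.:
#   htar -cvf myrun.tar /tmp/purge-demo   # assumption: htar available on Edison
```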

The /scratch3 file system is subject to purging as well. Currently the same purging policy applies to /scratch3. Starting Feb 4, 2015, files that are older than 8 weeks will be deleted from the /scratch3 file system.

The newly mounted /cscratch1 is shared by Cori and Edison. More information can be found on Cori's storage page.

Scratch Filesystem Configuration

File System   Size (PB)   Aggregate Peak Performance   # I/O Servers (OSSs)   OSTs   File System Software   Disk Array Vendor
/scratch1     2.1         48 GB/sec                    24                     24     Lustre                 Cray
/scratch2     2.1         48 GB/sec                    24                     24     Lustre                 Cray
/scratch3     3.2         72 GB/sec                    36                     36     Lustre                 Cray
/cscratch1    30          744 GB/sec                   248                    248    Lustre                 Cray

The table shows the Edison scratch file system configurations. The /scratch1 and /scratch2 file systems each have 24 OSTs, the lowest I/O layer with which users need to interact; each OST has about 84.5 TB of disk space. The /scratch3 file system has 36 OSTs of 84.5 TB each. The default stripe count for all three file systems is m (under investigation, Sep. 8, 2016), meaning that when a file is created, it is "striped," or split, across m different OSTs by default. Striping is a technique to increase I/O performance: instead of writing to a single disk, striping across two disks allows the user to potentially double read and write bandwidth. Lustre file systems at other computing centers may set a different default based on their workload.
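Stripe settings can be inspected and changed with Lustre's standard "lfs setstripe" and "lfs getstripe" commands. The sketch below sets a stripe count of 8 on a directory so that files created there spread across 8 OSTs; the directory path and the count of 8 are illustrative assumptions, and the commands are guarded so the sketch is a no-op on hosts without a Lustre client.

```shell
# Set a default stripe count on a directory (applies to files created in it).
dir=${SCRATCH:-/tmp}/stripe-demo      # assumption: demo path, not NERSC policy
mkdir -p "$dir"
if command -v lfs >/dev/null 2>&1; then
  lfs setstripe -c 8 "$dir"           # new files here stripe over 8 OSTs
  lfs getstripe "$dir"                # verify the resulting layout
else
  echo "lfs not found: not a Lustre client"
fi
```

Larger stripe counts help large shared files read and written by many processes; small files are usually better left at a low stripe count to avoid extra metadata overhead.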

Edison /scratch3 Directory Request Form

Use this form to request /scratch3 directory space on Edison. The /scratch3 file system is reserved for users who need large I/O bandwidth. Please provide a few sentences describing the planned use of the /scratch3 file system, and explain why higher I/O bandwidth is needed. Please note that users have no quota on the /scratch3 file system, but files older than 8 weeks will be purged from it.