File Storage and I/O
Hopper File Systems
The Hopper system has five file systems mounted, which provide different levels of disk capacity, I/O performance, and file permanence. The table below describes the various Hopper file systems:
| File System | Home | Local Scratch | Global Scratch | Project |
|---|---|---|---|---|
| Environment Variable | $HOME | $SCRATCH and $SCRATCH2 | $GSCRATCH | None. Must use |
| Peak Performance | Low, ~100 MB/sec | 35 GB/sec for each | 80 GB/sec peak | 40 GB/sec |
| Purged? | No | Yes, files older than 12 weeks are purged | Yes, files older than 12 weeks are purged | No |
Hopper is configured with two distinct scratch file systems named /scratch and /scratch2. Each user has access to two scratch directories that should always be referenced using the environment variables $SCRATCH and $SCRATCH2. Both of these file systems are available from all nodes and are tuned for high performance. You may run using both scratch file systems but are encouraged to choose one or the other for your primary work.
There is a single (large) quota (space and inode) for each user that applies to the combined contents of $SCRATCH and $SCRATCH2. If your combined usage of $SCRATCH and $SCRATCH2 exceeds your quota, you will not be able to submit batch jobs until you reduce your combined usage.
The "myquota" command (with no options) will display your current usage and quota. NERSC sometimes grants temporary quota increases for legitimate purposes. To apply for such an increase, please use the Disk Quota Increase Form.
Purging of "old" files from $SCRATCH and $SCRATCH2 began on Wednesday, March 14. Files in your $SCRATCH and $SCRATCH2 that have not been accessed in 12 weeks (defined by last access time) are removed. Please make sure to back up your important files (e.g., to HPSS). Instructions for HPSS are here.
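To see which of your files would fall under the purge criterion, you can check last access times yourself. The sketch below is illustrative only (it is not the purge tool NERSC runs); file and directory names are hypothetical.

```python
import os
import tempfile
import time

# Files are purged when their last access time (atime) is more than
# 12 weeks in the past.
TWELVE_WEEKS = 12 * 7 * 24 * 3600

def stale_files(root, now=None):
    """Return paths under root not accessed in the last 12 weeks."""
    now = time.time() if now is None else now
    stale = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if now - os.stat(path).st_atime > TWELVE_WEEKS:
                stale.append(path)
    return stale

# Demo: create one file and backdate its access time by 13 weeks.
demo_dir = tempfile.mkdtemp()
old_file = os.path.join(demo_dir, "old_results.dat")
open(old_file, "w").close()
thirteen_weeks_ago = time.time() - 13 * 7 * 24 * 3600
os.utime(old_file, (thirteen_weeks_ago, thirteen_weeks_ago))
print(stale_files(demo_dir))  # the backdated file is reported
```

Note that reading a file updates its atime, so recently used data is not at risk; only files untouched for the full 12 weeks are candidates.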
All of NERSC's global file systems are available on Hopper. Additionally, Hopper has 2 PB of locally attached high-performance /scratch disk space. For information on the NERSC file systems, see the link at right.
Scratch Filesystem Configuration
| File System | Size | Aggregate Peak Performance | # of Disk Controllers | # of IO Servers (OSSs) | # of OSTs | File System Software | Disk Array Vendor |
|---|---|---|---|---|---|---|---|
| $SCRATCH | 1 PB | 35 GB/sec | 13 | 26 | 156 | Lustre | LSI |
| $SCRATCH2 | 1 PB | 35 GB/sec | 13 | 26 | 156 | Lustre | LSI |
$SCRATCH and $SCRATCH2 both have the same configuration:
- 13 LSI 7900 disk controllers
- Each disk controller is served by 2 I/O servers called OSSs (Object Storage Servers)
- Each OSS hosts 6 OSTs (Object Storage Targets), which a user can think of as software abstractions of physical disks
- Fibre Channel 8 connectivity from the OSSs to the LSI disk controllers
- InfiniBand connects the Lustre router nodes in the 3D torus through a QDR switch to the OSSs
In total, each /scratch file system has 156 OSTs, which form the lowest layer with which users need to interact. When a file is created in /scratch, it is "striped," or split, across two different OSTs by default. Lustre file systems at other computing centers may set a different default based on their workload. Striping is a technique to increase I/O performance: instead of writing to a single disk, striping across two disks allows the user to potentially double read and write bandwidth.
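Conceptually, striping assigns fixed-size chunks of a file round-robin across the chosen OSTs. The sketch below illustrates the idea only; the stripe size here is made up for readability, and the actual on-disk layout is managed entirely by Lustre.

```python
# Toy model of Lustre striping: fixed-size chunks are distributed
# round-robin across the OSTs. A stripe count of 2 matches the
# /scratch default described above; the 4-byte stripe size is
# purely illustrative.

def stripe(data: bytes, stripe_count: int = 2, stripe_size: int = 4):
    """Split data into stripe_size chunks, round-robin over OSTs."""
    osts = [bytearray() for _ in range(stripe_count)]
    for i in range(0, len(data), stripe_size):
        osts[(i // stripe_size) % stripe_count].extend(data[i:i + stripe_size])
    return [bytes(o) for o in osts]

print(stripe(b"ABCDEFGHIJKL"))  # → [b'ABCDIJKL', b'EFGH']
```

Because consecutive chunks land on different OSTs, reads and writes to a striped file can proceed on both storage targets in parallel, which is where the potential bandwidth doubling comes from.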
Do Not Use /tmp explicitly
WARNING: Do not attempt to explicitly use a file system named /tmp. Your job may fail or be deleted if it writes to /tmp. Some software tools (editors, compilers, etc.) use the location specified by the $TMPDIR environment variable to store temporary files. Additionally, Fortran codes which open files with status="scratch" will write those files into $TMPDIR. On many Unix systems, $TMPDIR is set to /tmp. NERSC has set $TMPDIR to be $SCRATCH. Please do not redefine $TMPDIR!
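As an illustration of why $TMPDIR matters, Python's standard tempfile module (like many compilers and editors) consults $TMPDIR when choosing where to place temporary files. The directory name below is a hypothetical stand-in for $SCRATCH; on Hopper the site-provided $TMPDIR already points there and should not be changed.

```python
import os
import tempfile

# Stand-in for the real $SCRATCH; on Hopper, $TMPDIR is already set
# to $SCRATCH by the site environment.
scratch = os.path.join(tempfile.gettempdir(), "my_scratch")
os.makedirs(scratch, exist_ok=True)

os.environ["TMPDIR"] = scratch
tempfile.tempdir = None  # drop the cached default so TMPDIR is re-read

with tempfile.NamedTemporaryFile(prefix="job_") as f:
    # The temporary file is created under the scratch directory,
    # not under /tmp.
    print(f.name)
```

A Fortran `open(..., status="scratch")` behaves analogously: the runtime places the unnamed scratch file in the directory named by $TMPDIR, which is why jobs transparently use $SCRATCH rather than /tmp.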