
I/O Resources for Scientific Applications at NERSC

Introduction

NERSC provides a range of online resources to assist users in developing, deploying, understanding, and tuning their scientific I/O workloads, supplemented by direct support from the NERSC Consultants and the Data Analytics Group. Here, we provide a consolidated summary of these resources, along with pointers to the relevant online documentation.

Getting started

One key resource, and a particularly good starting point for new users, is our general background discussion of the filesystems available at NERSC. It covers the intended use case for which each filesystem was designed, as well as the type of underlying filesystem technology.

Libraries and tools available at NERSC

NERSC provides a number of I/O programming libraries, as well as tools for profiling I/O performed by your jobs and monitoring system status. These resources include:

  • High-level I/O libraries available at NERSC, including the popular HDF5 and NetCDF libraries (a minimal parallel HDF5 sketch appears after this list)
  • Tools for monitoring I/O activity:
    • The Darshan job-level I/O profiling tool may be used to examine the I/O activity of your own jobs (available on Cori and Edison).
    • Real-time Cori scratch I/O activity provides an estimate of overall aggregate bandwidth utilization on Cori's scratch filesystem (click on any of your jobs, then click Lustre LMT).
  • Filesystem status reflected by timing of representative tasks (file creation, directory listing) can be found on MyNERSC.
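
To give a concrete flavor of the high-level library route, the sketch below writes a single shared HDF5 file from many MPI ranks using the library's MPI-IO (parallel HDF5) driver. It is a minimal illustration only: the file name, dataset name, and sizes are placeholders, and the module named in the comment (cray-hdf5-parallel) is simply the usual way to pick up parallel HDF5 on the Cray systems.

    /* Minimal sketch: each MPI rank writes one row of a 2-D dataset into a
     * single shared HDF5 file using collective MPI-IO.  Illustrative only;
     * the file name, dataset name, and sizes are placeholders.
     * Build on the Cray systems roughly as:
     *   module load cray-hdf5-parallel ; cc example.c
     */
    #include <mpi.h>
    #include <hdf5.h>

    #define NCOLS 1024

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Open the file for parallel access through the MPI-IO driver. */
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
        hid_t file = H5Fcreate("example.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

        /* One shared dataset: nprocs rows by NCOLS columns of doubles. */
        hsize_t dims[2] = { (hsize_t)nprocs, NCOLS };
        hid_t filespace = H5Screate_simple(2, dims, NULL);
        hid_t dset = H5Dcreate2(file, "data", H5T_NATIVE_DOUBLE, filespace,
                                H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

        /* Each rank selects its own row of the dataset. */
        hsize_t start[2] = { (hsize_t)rank, 0 };
        hsize_t count[2] = { 1, NCOLS };
        H5Sselect_hyperslab(filespace, H5S_SELECT_SET, start, NULL, count, NULL);
        hid_t memspace = H5Screate_simple(2, count, NULL);

        double row[NCOLS];
        for (int i = 0; i < NCOLS; i++) row[i] = rank + 0.001 * i;

        /* Collective transfer lets MPI-IO aggregate the small per-rank writes. */
        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
        H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
        H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace, dxpl, row);

        H5Pclose(dxpl);  H5Sclose(memspace);  H5Sclose(filespace);
        H5Dclose(dset);  H5Fclose(file);      H5Pclose(fapl);
        MPI_Finalize();
        return 0;
    }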

Please also refer to the resources below, which present more detailed introductions to some of these topics in tutorial form.

Best practices for scientific I/O

While there is clearly a wide range of I/O workloads associated with the many scientific applications deployed at NERSC, there are a number of general guidelines for achieving good performance when accessing our filesystems from parallel codes. Some of the most important guidelines include:

  • Use filesystems for their intended use case; for example, avoid your home directory for production I/O (more details on intended use cases may be found on our page providing general background on NERSC filesystems).
  • Know what fraction of your wall-clock time is spent in I/O; for example, with estimates provided by Darshan, profiling of critical I/O routines (such as with CrayPat's trace groups), or explicit timing / instrumentation (a minimal timing sketch appears after this list).
  • When algorithmically possible:
    • Avoid workflows that produce large numbers of small files (e.g. a "file-per-process" access model at high levels of concurrency).
    • Avoid random-access I/O workloads in favor of contiguous access.
    • Prefer I/O workloads that perform large transfers, comparable in size to or larger than, and aligned with, the underlying filesystem's storage granularity (e.g. block size on GPFS-based filesystems, stripe width on Lustre-based filesystems).
  • Use high-level libraries for data management and parallel I/O operations (as these will often apply optimizations in line with the above suggestions, such as MPI-IO collective buffering to improve aggregate transfer size, alignment, and contiguous access).
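
As a concrete example of the "know your I/O time" suggestion above, the sketch below brackets a write phase with MPI_Wtime() and reports the slowest rank's time; comparing that against total wall-clock time gives the I/O fraction. It is only a sketch: the file names, the 16 MiB transfer size, and the file-per-process pattern are placeholders for illustration, not recommendations.

    /* Minimal sketch of explicit I/O timing: bracket a write phase with
     * MPI_Wtime() and report the slowest rank's time, which is what the job
     * actually waits for.  Compare against total runtime for the I/O fraction.
     */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const size_t nbytes = 16 * 1024 * 1024;          /* 16 MiB per rank */
        char *buf = calloc(nbytes, 1);

        char path[64];
        snprintf(path, sizeof(path), "out.%06d", rank);  /* file per process, illustration only */

        MPI_Barrier(MPI_COMM_WORLD);                     /* start the timed phase together */
        double t0 = MPI_Wtime();

        FILE *fp = fopen(path, "w");
        fwrite(buf, 1, nbytes, fp);
        fclose(fp);

        double t_io = MPI_Wtime() - t0;

        /* The job is only as fast as its slowest rank. */
        double t_max;
        MPI_Reduce(&t_io, &t_max, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("write phase (max over ranks): %.3f s\n", t_max);

        free(buf);
        MPI_Finalize();
        return 0;
    }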

With these suggestions in mind, there are also filesystem-specific tuning parameters that may be used to enable high-performance I/O. These parameters can affect both the layout of your files on the underlying filesystem (as with Lustre striping) and the manner in which your I/O operations are routed to the storage system (as with GPFS over DVS on the Crays). They can broadly be classified by filesystem type.
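
One common way to apply such tuning from within a parallel code is to pass hints through MPI-IO when the shared file is created, as in the sketch below. The hint names used here (striping_factor, striping_unit, romio_cb_write) are ROMIO/Cray MPI-IO hints; which hints are honored, and what values are appropriate, depend on the MPI implementation and the target filesystem, so the values shown are illustrative starting points only. On Lustre, a striping layout can alternatively be set on a directory with the lfs setstripe command so that newly created files inherit it.

    /* Sketch of passing filesystem-tuning hints through MPI-IO when creating a
     * shared file.  Hint support varies by MPI implementation and filesystem;
     * the stripe count, stripe size, and file name below are illustrative.
     */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        MPI_Info info;
        MPI_Info_create(&info);
        /* Lustre layout: stripe the new file across 8 OSTs with 1 MiB stripes. */
        MPI_Info_set(info, "striping_factor", "8");
        MPI_Info_set(info, "striping_unit", "1048576");
        /* Ask MPI-IO to use collective buffering for writes. */
        MPI_Info_set(info, "romio_cb_write", "enable");

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "shared_output.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);

        /* ... collective writes, e.g. MPI_File_write_at_all(), would go here ... */

        MPI_File_close(&fh);
        MPI_Info_free(&info);
        MPI_Finalize();
        return 0;
    }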

Tutorials, support, and resource allocation

Here, we list additional support resources for NERSC users, along with pointers to previous and ongoing research projects by NERSC staff and LBL researchers that support high-performance scientific I/O.

  • Online tutorials at NERSC
  • User support at NERSC
  • Resource requests
    • Requests for quota increases (space or inodes) on NGF project and global scratch, as well as on the Lustre local scratch filesystems, may be submitted here
    • New NGF project directories may be requested here
    • Edison /scratch3 access may be requested here
  • Previous and ongoing I/O research projects contributed to by NERSC and LBL researchers
    • The ExaHDF5 group is working to develop next-generation I/O libraries and middleware to support scientific I/O (focused in particular on the HDF5 data format)