

Darshan is enabled by default on Edison and Cori


Darshan is a lightweight I/O profiling tool capable of profiling POSIX I/O, MPI-IO, and HDF5 I/O. We encourage all users to keep Darshan enabled for their applications running on Edison and Cori. Darshan will not only help users identify I/O bottlenecks and improve performance, but also help NERSC better understand the I/O usage of its users and shape its future plans.


The Darshan module is loaded by default for all Edison and Cori users. Simply recompile your code to allow Darshan to collect statistics. 

The default versions are 2.3.1 on Edison and 3.0.0-pre3 on Cori. Once 3.0.0 has a stable release, it will become the default on both Edison and Cori.

Examining the Darshan Results


Completed Jobs Page

On the Completed Jobs page, a Darshan summary, including an I/O summary for each job, is automatically generated for all jobs logged by Darshan.

Locating Logs

Edison: /scratch1/scratchdirs/darshanlogs/year/month/day/user_jobname_idSLURM_JOB_ID_xxx.darshan.gz

Cori: /global/cscratch1/sd/darshanlogs/year/month/day/user_jobname_idSLURM_JOB_ID_xxx.darshan

Note that each srun invocation produces a separate log file. The raw log files are kept for one month before being deleted from the central location. The log summary is stored in a database for 24/7 access via the web.
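Because the logs are filed under year/month/day subdirectories, you can build today's directory path with `date`. The sketch below uses the Cori base path from above; the unpadded month/day components are an assumption about how Darshan names its date subdirectories.

```shell
# Build today's Darshan log directory on Cori (base path from this page;
# unpadded month/day is an assumption about the directory naming).
base=/global/cscratch1/sd/darshanlogs
dir="$base/$(date +%Y)/$(date +%-m)/$(date +%-d)"
echo "$dir"

# On Cori itself you could then list your own logs for today, e.g.:
# ls "$dir"/${USER}_*.darshan
```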

Understanding Results

You can use the darshan-parser tool to analyze your log. To show how much data was read or written in a run:

darshan-parser --total logfile | grep BYTES_READ
darshan-parser --total logfile | grep BYTES_WRITTEN
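If you want the combined total in more readable units, you can post-process the `darshan-parser --total` output with awk. The counter names below follow the `total_CP_*` convention shown later on this page; the sample values are made up for illustration.

```shell
# Hypothetical sample of `darshan-parser --total` output
# (counter names assumed, values made up for illustration).
cat > totals.txt <<'EOF'
total_CP_BYTES_READ: 1048576
total_CP_BYTES_WRITTEN: 524288
EOF

# Sum bytes read + written and report the total in MiB.
awk -F': ' '/BYTES_(READ|WRITTEN)/ { sum += $2 }
            END { printf "total I/O: %.2f MiB\n", sum / 1048576 }' totals.txt
```

On a real log you would pipe `darshan-parser --total logfile` straight into the same awk command instead of using a sample file.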

To show the number of Read/Write operations in the run:

darshan-parser --total logfile | grep POSIX | grep READS
darshan-parser --total logfile | grep POSIX | grep WRITES
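A quick derived metric is the read-to-write ratio, which hints at whether a run is read- or write-dominated. The counter names and values below are a made-up sample for illustration:

```shell
# Hypothetical sample of the POSIX operation counters
# (names assumed, values made up for illustration).
cat > ops.txt <<'EOF'
total_CP_POSIX_READS: 300
total_CP_POSIX_WRITES: 100
EOF

# Compute the ratio of read to write operations.
awk -F': ' '/POSIX_READS/  { r = $2 }
            /POSIX_WRITES/ { w = $2 }
            END { printf "read:write = %.1f\n", r / w }' ops.txt
```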

You can also get a distribution of transaction sizes for POSIX reads/writes:

% darshan-parser --total logfile | grep SIZE_READ | grep -v AGG

total_CP_SIZE_READ_0_100: 44
total_CP_SIZE_READ_100_1K: 12
total_CP_SIZE_READ_1K_10K: 16
total_CP_SIZE_READ_10K_100K: 12
total_CP_SIZE_READ_100K_1M: 28
total_CP_SIZE_READ_1M_4M: 8
total_CP_SIZE_READ_4M_10M: 8
total_CP_SIZE_READ_10M_100M: 12
total_CP_SIZE_READ_100M_1G: 12
total_CP_SIZE_READ_1G_PLUS: 4 
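A quick way to see which bucket dominates such a histogram is to run it through awk. Here the sample output above is copied into a file; on a real log you would pipe `darshan-parser` directly:

```shell
# Copy of the sample POSIX read-size histogram shown above.
cat > read_hist.txt <<'EOF'
total_CP_SIZE_READ_0_100: 44
total_CP_SIZE_READ_100_1K: 12
total_CP_SIZE_READ_1K_10K: 16
total_CP_SIZE_READ_10K_100K: 12
total_CP_SIZE_READ_100K_1M: 28
total_CP_SIZE_READ_1M_4M: 8
total_CP_SIZE_READ_4M_10M: 8
total_CP_SIZE_READ_10M_100M: 12
total_CP_SIZE_READ_100M_1G: 12
total_CP_SIZE_READ_1G_PLUS: 4
EOF

# Report the bucket with the most reads.
awk -F': ' '$2 > max { max = $2; bucket = $1 }
            END { print bucket ": " max " reads" }' read_hist.txt
```

In this sample the 0–100 byte bucket dominates: many tiny reads like these often signal an access pattern that would benefit from buffering or collective I/O.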

Or for MPI-IO:

% darshan-parser --total logfile | grep SIZE_READ | grep AGG

total_CP_SIZE_READ_AGG_0_100: 4
total_CP_SIZE_READ_AGG_100_1K: 12
total_CP_SIZE_READ_AGG_1K_10K: 16
total_CP_SIZE_READ_AGG_10K_100K: 12
total_CP_SIZE_READ_AGG_100K_1M: 20
total_CP_SIZE_READ_AGG_1M_4M: 8
total_CP_SIZE_READ_AGG_4M_10M: 8
total_CP_SIZE_READ_AGG_10M_100M: 12
total_CP_SIZE_READ_AGG_100M_1G: 12

Case Studies/Frequent I/O Problems


Overhead of Darshan

The overhead of Darshan is negligible for most jobs. The plot below shows Darshan's overhead as a function of job concurrency. The blue markers show the overhead when each MPI task reads/writes its own file; the red markers show the overhead when all MPI tasks read/write the same shared file. Here, "overhead" is defined as the time Darshan spends communicating results and writing the log file. It should be unnoticeable for jobs with fewer than 10,000 MPI tasks. For large jobs, we suggest using shared-file I/O (e.g., MPI-IO) instead of file-per-process I/O.


If you have any questions about Darshan at NERSC, please email