Darshan is enabled by default on Edison and Cori
Darshan is a lightweight I/O profiling tool capable of profiling POSIX I/O, MPI-IO, and HDF5 I/O in MPI applications. We encourage all users to keep Darshan enabled for applications running on Edison and Cori. Darshan not only helps users identify I/O bottlenecks and improve performance, but also helps NERSC better understand its users' I/O workloads and shape its future plans.
The Darshan module is loaded by default for all Edison and Cori users. Simply recompile your code to allow Darshan to collect statistics. The default versions are 2.3.1 on Edison and 3.1.4 on Cori.
Examine the Darshan Results
Completed Jobs Page
On the Completed Jobs/IO Statistics page (available for Edison; Cori's page is under development), a Darshan summary is automatically generated for all jobs logged by Darshan. Below is an example of a job's I/O summary:
On Edison (Darshan 2.x), the log file name format is user_jobname_idSLURM_JOB_ID_xxx.darshan.gz; on Cori (Darshan 3.x), it is user_jobname_idSLURM_JOB_ID_xxx.darshan.
Note that each srun command produces a separate log file. Raw log files are kept for one month before being deleted from the central location; the log summaries are stored in a database for 24/7 access via the web.
Understanding the Results
You can use the darshan-parser tool to analyze your logs.
To show how much data was read or written in a run:
% darshan-parser --total logfile | grep BYTES_READ
% darshan-parser --total logfile | grep BYTES_WRITTEN
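The two grep commands above can also be combined into a short pipeline that reports the totals in MiB. The sketch below stands in for real darshan-parser output: the counter names follow Darshan 3's "total_&lt;COUNTER&gt;: &lt;value&gt;" convention, and the byte counts are invented for illustration.

```shell
# Sketch: summarize total bytes moved, assuming darshan-parser --total
# emits lines of the form "total_<COUNTER>: <value>" (Darshan 3 style).
# sample_output stands in for `darshan-parser --total logfile`;
# the numbers below are made up for illustration.
sample_output='total_POSIX_BYTES_READ: 1073741824
total_POSIX_BYTES_WRITTEN: 536870912'

echo "$sample_output" | awk -F': ' '
  /BYTES_READ/    { read += $2 }      # accumulate bytes read
  /BYTES_WRITTEN/ { written += $2 }   # accumulate bytes written
  END {
    printf "read:    %.1f MiB\n", read / (1024*1024)
    printf "written: %.1f MiB\n", written / (1024*1024)
  }'
```

On a real log, replace the sample text with `darshan-parser --total logfile`.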
To show the number of read/write operations in the run:
% darshan-parser --total logfile | grep POSIX | grep READS
% darshan-parser --total logfile | grep POSIX | grep WRITES
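Combining the operation counts with the byte totals gives the average request size, a quick indicator of whether an application is doing many small transfers. The sketch below assumes Darshan 3 counter names (total_POSIX_READS, total_POSIX_BYTES_READ, etc.); the sample values are invented.

```shell
# Sketch: derive the average POSIX request size from op counts and byte
# totals, assuming Darshan 3's "total_<COUNTER>: <value>" output format.
# sample_output stands in for `darshan-parser --total logfile`;
# the values are made up for illustration.
sample_output='total_POSIX_READS: 2048
total_POSIX_WRITES: 512
total_POSIX_BYTES_READ: 1073741824
total_POSIX_BYTES_WRITTEN: 536870912'

echo "$sample_output" | awk -F': ' '
  /total_POSIX_READS/         { reads  = $2 }
  /total_POSIX_WRITES/        { writes = $2 }
  /total_POSIX_BYTES_READ/    { rbytes = $2 }
  /total_POSIX_BYTES_WRITTEN/ { wbytes = $2 }
  END {
    printf "avg read size:  %d KiB\n", rbytes / reads / 1024
    printf "avg write size: %d KiB\n", wbytes / writes / 1024
  }'
```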
You can also get a distribution of transaction sizes for POSIX reads/writes:
% darshan-parser --total logfile | grep SIZE_READ | grep -v AGG
Or for MPI-IO:
% darshan-parser --total logfile | grep SIZE_READ | grep AGG
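The size-distribution counters are histogram bins, and most of them are usually zero, so it can help to print only the non-empty bins. The bin counter names below (e.g. POSIX_SIZE_READ_100K_1M) follow Darshan's size-bucket naming; the counts are invented for illustration.

```shell
# Sketch: print only the non-empty read-size histogram bins, assuming
# Darshan's "total_<COUNTER>: <value>" output format. sample_output
# stands in for the grep pipeline above; the counts are made up.
sample_output='total_POSIX_SIZE_READ_0_100: 0
total_POSIX_SIZE_READ_100_1K: 12
total_POSIX_SIZE_READ_1K_10K: 0
total_POSIX_SIZE_READ_100K_1M: 2048'

# Field 1 is the bin name, field 2 the op count; keep counts > 0.
echo "$sample_output" | awk -F': ' '$2 > 0 { print $1, "->", $2, "ops" }'
```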
Overhead of Darshan
Darshan's overhead is negligible for most jobs. The plot below shows Darshan's overhead as a function of job concurrency: the blue markers show the overhead when each MPI task reads/writes its own file, and the red markers show the overhead when all MPI tasks read/write a shared file. For large jobs, shared-file I/O (e.g., MPI-IO) is recommended over file-per-process I/O. Here "overhead" is defined as the time Darshan spends communicating results and writing the log file; it should be undetectable for jobs with fewer than 10,000 MPI tasks.
If you have any questions about Darshan at NERSC, please email email@example.com