TOKIO: Total Knowledge of I/O
The Total Knowledge of I/O (TOKIO) project is developing algorithms and a software framework to analyze I/O performance and workload data from production HPC resources at multiple system levels. This holistic I/O characterization framework gives application scientists, facility operators, and computer science researchers a clearer view of system behavior and of the causes of deleterious behavior. TOKIO is a collaboration between Lawrence Berkeley National Laboratory and Argonne National Laboratory, funded by the DOE Office of Science through the Office of Advanced Scientific Computing Research; its reference implementation is open for contributions and download on GitHub.
The framework combines a range of component-level I/O characterization utilities to continuously monitor I/O at multiple levels, from application profiling with Darshan to back-end storage server monitoring with file system-specific tools.
Data from these component-level monitoring tools are retained on disk in their native formats; TOKIO then normalizes and indexes the data across the different monitoring outputs, minimizing the need for expert understanding of how each tool expresses its view of the I/O subsystem. The complete TOKIO architecture is described in a paper presented at the 2018 Cray User Group meeting.
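The normalization step can be illustrated with a small sketch: different component-level tools report overlapping quantities (such as bytes read) under different native field names, and a per-tool mapping translates each record into a shared vocabulary. This is a hypothetical illustration, not the actual pytokio connector API; the field names, tool outputs, and `normalize` helper below are invented for the example.

```python
# Hypothetical sketch of TOKIO-style normalization: each component-level
# tool (e.g., Darshan on the application side, an LMT-like server-side
# monitor) reports overlapping metrics under different native field names.
# All field names and values here are invented for illustration.

# Native records as each monitoring tool might emit them
darshan_record = {"POSIX_BYTES_READ": 1048576, "POSIX_BYTES_WRITTEN": 2097152}
server_record = {"oss_read_bytes": 1048576, "oss_write_bytes": 2097152}

# Per-tool mappings from native field names to a normalized vocabulary
SCHEMAS = {
    "darshan": {"POSIX_BYTES_READ": "bytes_read",
                "POSIX_BYTES_WRITTEN": "bytes_written"},
    "server":  {"oss_read_bytes": "bytes_read",
                "oss_write_bytes": "bytes_written"},
}

def normalize(tool, record):
    """Translate a tool-native record into the shared schema."""
    mapping = SCHEMAS[tool]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

# Both tools' views now line up under the same keys and can be indexed
# and compared without knowing each tool's native format.
print(normalize("darshan", darshan_record))
print(normalize("server", server_record))
```

Once every tool's output is expressed in the same schema, cross-component analysis reduces to joining records on shared keys rather than writing per-tool parsing logic in every analysis script.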
TOKIO also provides a simple, portable API that allows sophisticated visualization and analysis tools to be built on top of these component-level monitors. For example, TOKIO includes the tools needed to create Unified Monitoring and Metrics Interfaces (UMAMI), which provide a simple visualization of how different components of the I/O subsystem were behaving on a day of interest.
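The idea behind a UMAMI-style view can be sketched in a few lines: for each I/O subsystem metric, compare the value on a day of interest against its recent history so that anomalous components stand out. The metric names, numbers, and z-score threshold below are invented for illustration; the real UMAMI renders this comparison as aligned time-series panels rather than a text table.

```python
# Hypothetical UMAMI-style summary: flag metrics whose day-of-interest
# value deviates strongly from recent history. All data are invented.
import statistics

history = {
    "app_read_GiBps":  [12.0, 11.5, 13.2, 12.8, 11.9],
    "server_cpu_load": [0.40, 0.38, 0.45, 0.41, 0.39],
    "coverage_factor": [0.85, 0.80, 0.88, 0.83, 0.86],
}
day_of_interest = {
    "app_read_GiBps": 6.1,
    "server_cpu_load": 0.91,
    "coverage_factor": 0.30,
}

def umami_row(values, today):
    """Return (historical mean, z-score of today's value)."""
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)
    return mu, (today - mu) / sigma

for metric, today in day_of_interest.items():
    mu, z = umami_row(history[metric], today)
    flag = " <-- anomalous" if abs(z) > 2.0 else ""
    print(f"{metric:16s} mean={mu:6.2f} today={today:6.2f} z={z:+6.1f}{flag}")
```

Laying several such rows side by side is what lets an analyst see at a glance, for example, that an application's read bandwidth dropped on the same day that server load spiked.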
Similar analyses can be built quickly on pytokio, the Python implementation of the TOKIO framework, which is available on GitHub. To give pytokio a try, you can download all of the code and data needed to reproduce "A Year in the Life of a Parallel File System," a paper presented at SC'18.
Project Team

- Nicholas J. Wright (LBNL) - Lead Principal Investigator
- Philip Carns (ANL) - Institutional Principal Investigator
- Suren Byna (LBNL) - Co-investigator
- Rob Ross (ANL) - External collaborator
- Prabhat (LBNL) - External collaborator
- Glenn K. Lockwood (LBNL)
- Shane Snyder (ANL)
- Teng Wang (LBNL)
Publications

- Glenn K. Lockwood, Shane Snyder, Teng Wang, Suren Byna, Philip Carns, Nicholas J. Wright. "A Year in the Life of a Parallel File System." In Proceedings of the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC'18). Dallas, TX. November 2018. (Slides)
- Jakob Luttgau, Shane Snyder, Philip Carns, Justin Wozniak, Julian Kunkel, and Thomas Ludwig. "Toward Understanding I/O Behavior in HPC Workflows." In Proceedings of the 3rd Joint International Workshop on Parallel Data Storage & Data Intensive Scalable Computing Systems (PDSW-DISCS'18). Dallas, TX. November 2018. (Slides)
- Teng Wang, Shane Snyder, Glenn K. Lockwood, Philip Carns, Nicholas J. Wright, and Suren Byna. "IOMiner: Large-Scale Analytics Framework for Gaining Knowledge from I/O Logs." In Proceedings of the 2018 IEEE International Conference on Cluster Computing (CLUSTER). Belfast, UK. September 2018.
- Glenn K. Lockwood, Shane Snyder, George Brown, Kevin Harms, Philip Carns, Nicholas J. Wright. "TOKIO on ClusterStor: Connecting Standard Tools to Enable Holistic I/O Performance Analysis." In Proceedings of the 2018 Cray User Group. Stockholm, SE. May 2018. (Slides)
- Glenn K. Lockwood, Wucherl Yoo, Suren Byna, Nicholas J. Wright, Shane Snyder, Kevin Harms, Zachary Nault, Philip Carns. "UMAMI: a recipe for generating meaningful metrics through holistic I/O performance analysis." In Proceedings of the 2nd Joint International Workshop on Parallel Data Storage & Data Intensive Scalable Computing Systems (PDSW-DISCS'17). Denver, CO. November 2017. (Slides)
- Shane Snyder, Philip Carns, Kevin Harms, Robert Ross, Glenn K. Lockwood, Nicholas J. Wright. "Modular HPC I/O Characterization with Darshan." In Proceedings of 5th Workshop on Extreme-scale Programming Tools (ESPT 2016). Salt Lake City, UT. November 2016.
- Cong Xu, Suren Byna, Vishwanath Venkatesan, Robert Sisneros, Omkar Kulkarni, Mohamad Chaarawi, and Kalyana Chadalavada, "LIOProf: Exposing Lustre File System Behavior for I/O Middleware." In Proceedings of the 2016 Cray User Group. London, UK. May 2016.
Presentations

- Philip Carns. "Understanding and tuning HPC I/O: How hard can it be?" 4th annual HPC I/O in the Data Center Workshop (HPC-IODC) and Workshop on Performance and Scalability of Storage Systems (WOPSSS), ISC 2018, Frankfurt, DE. June 2018.
- Philip Carns, Julian Kunkel, Glenn K. Lockwood, Ross Miller, Eugen Betke, Wolfgang Frings. "Analyzing Parallel I/O." Birds of a Feather session, International Conference for High Performance Computing, Networking, Storage and Analysis (SC17), Denver, CO. November 2017.
- Philip Carns. "Characterizing data-intensive scientific applications with Darshan." CS/NERSC Data Seminar, National Energy Research Scientific Computing Center. June 2017.
- Philip Carns. "Characterizing HPC I/O: from Applications to Systems." ZIH Colloquium at Technische Universität Dresden, Dresden, DE. April 2017.
- Philip Carns. "TOKIO: Using Lightweight Holistic Characterization to Understand, Model, and Improve HPC I/O Performance." SIAM Conference on Computational Science and Engineering, Atlanta GA. March 2017.
- Shane Snyder. "Leveraging Holistic Characterization for Insights into HPC I/O Behavior." 2017 Understanding I/O Performance Behavior (UIOP) Workshop, DKRZ, Hamburg. March 2017.
- Glenn K. Lockwood, Nicholas J. Wright. "Understanding I/O performance on burst buffers through holistic I/O characterization." MCS Seminar, Argonne National Laboratory. May 2016.
- Glenn K. Lockwood. "Developing a holistic understanding of I/O workloads on future architectures." 2016 SIAM Conference on Parallel Processing for Scientific Computing, Paris. April 2016.
- Julian Kunkel, Philip Carns, Shane Snyder, Huong Luu, Matthieu Dorier, Wolfgang Frings, and Glenn K. Lockwood. "Analyzing Parallel I/O." Birds of a Feather session, International Conference for High Performance Computing, Networking, Storage and Analysis (SC15), Austin, TX. November 2015.
Related Work

- Sandeep Madireddy, Prasanna Balaprakash, Philip Carns, Robert Latham, Robert Ross, Shane Snyder, and Stefan M. Wild. "Machine Learning Based Parallel I/O Predictive Modeling: A Case Study on Lustre File Systems." In Proceedings of the 33rd International Conference, ISC High Performance 2018. Frankfurt, DE. 2018.
- Wahid Bhimji, Debbie Bard, Melissa Romanus, et al. "Accelerating science with the NERSC burst buffer early user program." 2016 Cray User Group, London. May 2016.