
Lisa Gerhardt

Big Data Architect
Phone: (510) 486-4680
Fax: (510) 486-6459
1 Cyclotron Road
Mail Stop: 59R4010A
Berkeley, CA 94720 USA

Biographical Sketch

Lisa Gerhardt completed her Ph.D. in Physics at the University of California, Irvine, in 2007. Prior to coming to NERSC, she worked in neutrino astrophysics with the IceCube group at Lawrence Berkeley National Laboratory. She joined the NERSC DAS group to focus on big data and HPC analysis.

Journal Articles

M. G. Aartsen et al., "Flavor Ratio of Astrophysical Neutrinos above 35 TeV in IceCube", Physical Review Letters 114, 171102, February 11, 2015, doi: 10.1103/PhysRevLett.114.171102.

IceCube Collaboration: M. G. Aartsen et al., "Energy Reconstruction Methods in the IceCube Neutrino Telescope", Journal of Instrumentation 9, P03009, March 2014.

M. G. Aartsen et al., "Search for non-relativistic Magnetic Monopoles with IceCube", European Physical Journal C, February 14, 2014, doi: 10.1140/epjc/s10052-014-2938-8.

IceCube Collaboration: M. G. Aartsen et al., "Improvement in Fast Particle Track Reconstruction with Robust Statistics", Nuclear Instruments and Methods A 736, 143-149, February 2014.

IceCube Collaboration: M. G. Aartsen et al., "Search for Time-Independent Neutrino Emission from Astrophysical Sources with 3 yr of IceCube Data", Astrophysical Journal 779, 132, December 2013.

IceCube Collaboration: M. G. Aartsen et al., "Probing the Origin of Cosmic Rays with Extremely High Energy Neutrinos Using the IceCube Observatory", Physical Review D 88, 112008, December 2013.

IceCube Collaboration: M. G. Aartsen et al., "An IceCube Search for Dark Matter Annihilation in Nearby Galaxies and Galaxy Clusters", Physical Review D 88, 122001, December 2013.

IceCube Collaboration: M. G. Aartsen et al., "Evidence for High-Energy Extraterrestrial Neutrinos at the IceCube Detector", Science 342, 1242856, November 2013.

IceCube Collaboration: M. G. Aartsen et al., "South Pole Glacial Climate Reconstruction from Multi-Borehole Laser Particulate Stratigraphy", Journal of Glaciology 59, 1117-1128, October 2013.

IceCube Collaboration: M. G. Aartsen et al., "Measurement of the Cosmic Ray Energy Spectrum with IceTop-73", Physical Review D 88, 042004, August 28, 2013.

IceCube Collaboration: R. Abbasi et al., "Measurement of Atmospheric Neutrino Oscillations with IceCube", Physical Review Letters 111, 081801, August 2013.

IceCube Collaboration: R. Abbasi et al., "First Observation of PeV-energy Neutrinos with IceCube", Physical Review Letters 111, 021103, July 2013.

IceCube Collaboration: M. G. Aartsen et al., "Measurement of South Pole Ice Transparency with the IceCube LED Calibration System", Nuclear Instruments and Methods A 711, 73-89, May 2013.

Conference Papers

B. Enders, D. Bard, C. Snavely, L. Gerhardt, J. Lee, B. Totzke, K. Antypas, S. Byna, R. Cheema, S. Cholia, M. Day, A. Gaur, A. Greiner, T. Groves, M. Kiran, Q. Koziol, K. Rowland, C. Samuel, A. Selvarajan, A. Sim, D. Skinner, R. Thomas, G. Torok, "Cross-facility science with the Superfacility Project at LBNL", IEEE/ACM 2nd Annual Workshop on Extreme-scale Experiment-in-the-Loop Computing (XLOOP), November 12, 2020, pp. 1-7, doi: 10.1109/XLOOP51963.2020.00006.

Glenn K. Lockwood, Kirill Lozinskiy, Lisa Gerhardt, Ravi Cheema, Damian Hazen, Nicholas J. Wright, "Designing an All-Flash Lustre File System for the 2020 NERSC Perlmutter System", Proceedings of the 2019 Cray User Group, Montreal, January 1, 2019.

New experimental and AI-driven workloads are moving into the realm of extreme-scale HPC systems at the same time that high-performance flash is becoming cost-effective to deploy at scale. This confluence poses a number of new technical and economic challenges and opportunities in designing the next generation of HPC storage and I/O subsystems to achieve the right balance of bandwidth, latency, endurance, and cost. In this paper, we present the quantitative approach to requirements definition that resulted in the 30 PB all-flash Lustre file system that will be deployed with NERSC's upcoming Perlmutter system in 2020. By integrating analysis of current workloads and projections of future performance and throughput, we were able to constrain many critical design space parameters and quantitatively demonstrate that Perlmutter will not only deliver optimal performance, but effectively balance cost with capacity, endurance, and many modern features of Lustre.
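To give a concrete feel for this style of requirements analysis, the sketch below works through a simplified capacity estimate in Python. The inputs are illustrative placeholders, not the workload figures used to size Perlmutter's file system.

    # Simplified capacity sizing from assumed workload numbers (placeholders,
    # not the measured values from the paper).
    PB = 10**15

    daily_write_volume = 1.0 * PB   # assumed average data written per day
    retention_days = 30             # assumed scratch retention/purge window
    fill_target = 0.75              # keep the file system below 75% full

    # Usable capacity needed to hold the retained working set at the fill target.
    required_capacity = daily_write_volume * retention_days / fill_target
    print(f"Required usable capacity: {required_capacity / PB:.1f} PB")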

Wahid Bhimji, Debbie Bard, Kaylan Burleigh, Chris Daley, Steve Farrell, Markus Fasel, Brian Friesen, Lisa Gerhardt, Jialin Liu, Peter Nugent, Dave Paul, Jeff Porter, Vakho Tsulaia, "Extreme I/O on HPC for HEP using the Burst Buffer at NERSC", Journal of Physics: Conference Series 898, 082015, December 1, 2017.

Alex Gittens et al., "Matrix Factorization at Scale: a Comparison of Scientific Data Analytics in Spark and C+MPI Using Three Case Studies", 2016 IEEE International Conference on Big Data, July 1, 2017.

Jialin Liu, Evan Racah, Quincey Koziol, Richard Shane Canon, Alex Gittens, Lisa Gerhardt, Suren Byna, Mike F. Ringenburg, Prabhat, "H5Spark: Bridging the I/O Gap between Spark and Scientific Data Formats on HPC Systems", Cray User Group, May 13, 2016.

Book Chapters

Glenn K. Lockwood, Kirill Lozinskiy, Lisa Gerhardt, Ravi Cheema, Damian Hazen, Nicholas J. Wright, "A Quantitative Approach to Architecting All-Flash Lustre File Systems", ISC High Performance 2019: High Performance Computing, edited by Michele Weiland, Guido Juckeland, Sadaf Alam, Heike Jagode (Springer International Publishing, 2019), pp. 183-197, doi: 10.1007/978-3-030-34356-9_16.

New experimental and AI-driven workloads are moving into the realm of extreme-scale HPC systems at the same time that high-performance flash is becoming cost-effective to deploy at scale. This confluence poses a number of new technical and economic challenges and opportunities in designing the next generation of HPC storage and I/O subsystems to achieve the right balance of bandwidth, latency, endurance, and cost. In this work, we present quantitative models that use workload data from existing, disk-based file systems to project the architectural requirements of all-flash Lustre file systems. Using data from NERSC’s Cori I/O subsystem, we then demonstrate the minimum required capacity for data, capacity for metadata and data-on-MDT, and SSD endurance for a future all-flash Lustre file system.
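As a companion to the capacity sketch above, the following hedged example shows how an assumed daily write volume translates into an SSD endurance requirement, expressed as drive writes per day (DWPD). The inputs are again placeholders rather than the Cori-derived measurements used in the chapter.

    # Illustrative SSD endurance check: convert an assumed daily write volume
    # into fleet-level drive writes per day (DWPD) and lifetime write totals.
    PB = 10**15

    daily_write_volume = 1.0 * PB   # assumed writes absorbed per day
    usable_capacity = 30 * PB       # assumed all-flash file system capacity
    lifetime_years = 5              # assumed service life
    rated_dwpd = 1.0                # assumed vendor endurance rating

    fleet_dwpd = daily_write_volume / usable_capacity
    lifetime_writes_pb = daily_write_volume * 365 * lifetime_years / PB

    verdict = "within" if fleet_dwpd <= rated_dwpd else "exceeds"
    print(f"Fleet-level DWPD: {fleet_dwpd:.3f} ({verdict} the rated endurance)")
    print(f"Projected lifetime writes: {lifetime_writes_pb:.0f} PB")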

Sudip Dosanjh, Shane Canon, Jack Deslippe, Kjiersten Fagnan, Richard Gerber, Lisa Gerhardt, Jason Hick, Douglas Jacobsen, David Skinner, Nicholas J. Wright, "Extreme Data Science at the National Energy Research Scientific Computing (NERSC) Center", Proceedings of the International Conference on Parallel Programming – ParCo 2013 (March 26, 2014).

Presentations/Talks

Kirill Lozinskiy, Glenn K. Lockwood, Lisa Gerhardt, Ravi Cheema, Damian Hazen, Nicholas J. Wright, A Quantitative Approach to Architecting All-Flash Lustre File Systems, Lustre User Group (LUG) 2019, May 15, 2019.

Kirill Lozinskiy, Lisa Gerhardt, Annette Greiner, Ravi Cheema, Damian Hazen, Kristy Kallback-Rose, Rei Lee, User-Friendly Data Management for Scientific Computing Users, Cray User Group (CUG) 2019, May 9, 2019.

Wrangling data at a scientific computing center can be a major challenge for users, particularly when quotas may impact their ability to utilize resources. In such an environment, a task as simple as listing space usage for one's files can take hours. The National Energy Research Scientific Computing Center (NERSC) has roughly 50 PB of shared storage utilizing more than 4.6 billion inodes, and a 146 PB high-performance tape archive, all accessible from two supercomputers. As data volumes increase exponentially, managing data is becoming a larger burden on scientists. To ease the pain, we have designed and built a “Data Dashboard”. Here, in a web-enabled visual application, our 7,000 users can easily review their usage against quotas, discover patterns, and identify candidate files for archiving or deletion. We describe this system, the framework supporting it, and the challenges for such a framework moving into the exascale age.
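The kind of roll-up such a dashboard presents can be illustrated with a short, hypothetical sketch: aggregate space and inode usage per user from a nightly file-system scan and compare it against quotas. The file name, column names, and quota values below are assumptions for illustration, not the actual NERSC data pipeline.

    # Hypothetical per-user usage roll-up from a nightly scan (a CSV with
    # "owner" and "size_bytes" columns is an assumed input format).
    import csv
    from collections import defaultdict

    SPACE_QUOTA = 20 * 10**12   # illustrative 20 TB space quota
    INODE_QUOTA = 1_000_000     # illustrative 1M inode quota

    usage = defaultdict(lambda: {"bytes": 0, "inodes": 0})

    with open("filesystem_scan.csv") as f:
        for row in csv.DictReader(f):
            u = usage[row["owner"]]
            u["bytes"] += int(row["size_bytes"])
            u["inodes"] += 1

    # Largest consumers first, with the percent-of-quota figures a dashboard would plot.
    for owner, u in sorted(usage.items(), key=lambda kv: -kv[1]["bytes"]):
        print(f"{owner:12s} {u['bytes'] / 1e12:8.2f} TB "
              f"({100 * u['bytes'] / SPACE_QUOTA:5.1f}% of quota), "
              f"{u['inodes']:>9,d} inodes "
              f"({100 * u['inodes'] / INODE_QUOTA:5.1f}% of quota)")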

Lisa Gerhardt, Jeff Porter, Nick Balthaser, Lessons Learned from Running an HPSS Globus Endpoint, 2016 HPSS User Forum, September 1, 2016.

The NERSC division of LBNL has been running HPSS in production since 1998. The archive is heavily used, with roughly 100 TB of I/O every day from the ~6,000 scientists who use the NERSC facility. We maintain a Globus-HPSS endpoint that transfers over 1 PB per month of data into and out of HPSS. Getting Globus and HPSS to mesh well can be challenging; this talk gives an overview of some of the lessons learned.
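For readers unfamiliar with the user-facing side of such an endpoint, here is a minimal sketch of submitting a transfer into an HPSS-backed Globus endpoint with the globus-sdk Python package. The access token and endpoint UUIDs are placeholders; this is not NERSC's production tooling.

    # Minimal sketch: submit one file transfer into an HPSS-backed Globus
    # endpoint using globus-sdk. Token and endpoint IDs are placeholders.
    import globus_sdk

    TRANSFER_TOKEN = "PLACEHOLDER-ACCESS-TOKEN"   # obtained via a Globus login flow
    SRC_ENDPOINT = "SOURCE-ENDPOINT-UUID"         # e.g. a scratch file system DTN
    HPSS_ENDPOINT = "HPSS-ENDPOINT-UUID"          # the archive endpoint

    tc = globus_sdk.TransferClient(
        authorizer=globus_sdk.AccessTokenAuthorizer(TRANSFER_TOKEN)
    )

    # Describe the transfer: one file from scratch into the archive namespace.
    tdata = globus_sdk.TransferData(
        tc, SRC_ENDPOINT, HPSS_ENDPOINT, label="archive example"
    )
    tdata.add_item("/scratch/run001/output.h5", "/home/user/run001/output.h5")

    task = tc.submit_transfer(tdata)
    print("Submitted Globus task:", task["task_id"])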

Nicholas Balthaser, Lisa Gerhardt, NERSC Archival Storage: Best Practices, Joint Facilities User Forum on Data-Intensive Computing, June 18, 2014.

Nick Balthaser (NERSC), Lisa Gerhardt (NERSC), Introduction to NERSC Archival Storage: HPSS, February 3, 2014.

Reports

GK Lockwood, D Hazen, Q Koziol, RS Canon, K Antypas, J Balewski, N Balthaser, W Bhimji, J Botts, J Broughton, TL Butler, GF Butler, R Cheema, C Daley, T Declerck, L Gerhardt, WE Hurlbert, KA Kallback-Rose, S Leak, J Lee, R Lee, J Liu, K Lozinskiy, D Paul, Prabhat, C Snavely, J Srinivasan, T Stone Gibbins, NJ Wright, "Storage 2020: A Vision for the Future of HPC Storage", October 20, 2017, LBNL Technical Report LBNL-2001072.

As the DOE Office of Science's mission computing facility, NERSC will follow this roadmap and deploy these new storage technologies to continue delivering storage resources that meet the needs of its broad user community. NERSC's diversity of workflows encompasses significant portions of open science workloads as well, and the findings presented in this report are also intended to be a blueprint for how the evolving storage landscape can be best utilized by the greater HPC community. Executing the strategy presented here will ensure that emerging I/O technologies will be both applicable to and effective in enabling scientific discovery through extreme-scale simulation and data analysis in the coming decade.

Posters

Annette Greiner, Evan Racah, Shane Canon, Jialin Liu, Yunjie Liu, Debbie Bard, Lisa Gerhardt, Rollin Thomas, Shreyas Cholia, Jeff Porter, Wahid Bhimji, Quincey Koziol, Prabhat, "Data-Intensive Supercomputing for Science", Berkeley Institute for Data Science (BIDS) Data Science Faire, May 3, 2016.

Review of current DAS activities for a non-NERSC audience.