NERSC — Powering Scientific Discovery Since 1974

Greg Butler

Gregory (Greg) F. Butler
Storage Systems Group
National Energy Research Scientific Computing Center
Phone: (510) 486-8691
Fax: (510) 486-6459
Lawrence Berkeley National Laboratory
1 Cyclotron Road
Mailstop: 59R4010A
Berkeley, CA 94720 US

Biographical Sketch

Greg Butler has over 35 years' experience in the computing field, including over 25 years in high performance scientific computing at DOE, DOD, and US EPA installations. His management experience includes responsibility for technical groups and geographically remote software development groups; the development and implementation of project, strategic, and technological plans; the writing of white papers; and procurement activities. Greg has over 10 years' experience in OS development on multiple high performance scientific computing platforms and operating systems, and over 30 years' experience in system administration, software maintenance, and upgrades of high performance production scientific computing systems. His experience also includes technology assessment and recommendations for technical direction and implementation. Greg holds a B.A. in Astronomy from Northwestern University.

Greg led the investigation into the feasibility of deploying NERSC center-wide shared file systems beginning in 2000. He led the evaluation of various candidate file systems and the subsequent deployment of /PROJECT, the first NERSC Global File System (NGF), in 2005.


Glenn K. Lockwood, Damian Hazen, Quincey Koziol, Shane Canon, Katie Antypas, Jan Balewski, Nicholas Balthaser, Wahid Bhimji, James Botts, Jeff Broughton, Tina L. Butler, Gregory F. Butler, Ravi Cheema, Christopher Daley, Tina Declerck, Lisa Gerhardt, Wayne E. Hurlbert, Kristy A. Kallback-Rose, Stephen Leak, Jason Lee, Rei Lee, Jialin Liu, Kirill Lozinskiy, David Paul, Prabhat, Cory Snavely, Jay Srinivasan, Tavia Stone Gibbins, Nicholas J. Wright, "Storage 2020: A Vision for the Future of HPC Storage", October 20, 2017, LBNL-2001072.

As the DOE Office of Science's mission computing facility, NERSC will follow this roadmap and deploy these new storage technologies to continue delivering storage resources that meet the needs of its broad user community. NERSC's diversity of workflows also encompasses significant portions of open science workloads, and the findings presented in this report are intended as a blueprint for how the evolving storage landscape can best be utilized by the greater HPC community. Executing the strategy presented here will ensure that emerging I/O technologies will be both applicable to and effective in enabling scientific discovery through extreme-scale simulation and data analysis in the coming decade.