
Jeff Broughton
Deputy for Operations
Phone: (510) 486-5972
Fax: (510) 486-6459
1 Cyclotron Road
Mailstop: 59R4010A
Berkeley, CA 94720 US

Biographical Sketch

Jeff Broughton is the NERSC Deputy for Operations and is responsible for acquiring, installing, and operating all computational, networking, and storage equipment for NERSC and the Joint Genome Institute. Current projects include NERSC-7 (Edison), the Computational Research and Theory Facility (CRT), which will be NERSC's new home, and DesignForward. Jeff has 30 years of HPC and management experience, including positions at Lawrence Livermore National Laboratory, Amdahl, Sun Microsystems Laboratories, and the startups Key Research and PathScale. He has tackled projects in multiple disciplines as both an engineer and a manager, including networking, computer-aided design, processor design, compilers, and operating systems. Jeff holds multiple patents. His inventions include optimistic concurrency protocols, distributed cache coherence protocols, domain partitioning mechanisms, and software methods for cycle-based logic simulation.


Jeff Broughton, NERSC, "NERSC Update for NUG2014", February 6, 2014.


Glenn K. Lockwood, Damian Hazen, Quincey Koziol, Shane Canon, Katie Antypas, Jan Balewski, Nicholas Balthaser, Wahid Bhimji, James Botts, Jeff Broughton, Tina L. Butler, Gregory F. Butler, Ravi Cheema, Christopher Daley, Tina Declerck, Lisa Gerhardt, Wayne E. Hurlbert, Kristy A. Kallback-Rose, Stephen Leak, Jason Lee, Rei Lee, Jialin Liu, Kirill Lozinskiy, David Paul, Prabhat, Cory Snavely, Jay Srinivasan, Tavia Stone Gibbins, Nicholas J. Wright, "Storage 2020: A Vision for the Future of HPC Storage", October 20, 2017, LBNL-2001072.

As the DOE Office of Science's mission computing facility, NERSC will follow this roadmap and deploy these new storage technologies to continue delivering storage resources that meet the needs of its broad user community. NERSC's diverse workflows also encompass a significant portion of open science workloads, so the findings presented in this report are intended to serve as a blueprint for how the evolving storage landscape can best be utilized by the greater HPC community. Executing the strategy presented here will ensure that emerging I/O technologies are both applicable to and effective in enabling scientific discovery through extreme-scale simulation and data analysis in the coming decade.