
Katie Antypas

Katie Antypas
Division Deputy and NERSC-10 Project Director
Phone: (510) 486-5575
Fax: (510) 486-6459
1 Cyclotron Road
Mailstop: 59R4010A
Berkeley, CA 94720 US

Biographical Sketch

Katie Antypas is the NERSC Division Deputy and NERSC-10 Project Director. Previously she served as the Data Department Head at the National Energy Research Scientific Computing Center (NERSC), where she had oversight of the Data Science Engagement, Data and Analytics Services, Storage Systems, and Infrastructure Services groups.

Katie is the Director of the Hardware and Integration area of the Exascale Computing Project. She is also co-PI on an ASCR-funded research project called ScienceSearch: Enabling Automated Metadata through Machine Learning. Katie has expertise in system architectures, parallel I/O, application performance, and user science requirements.

From 2012 to 2017, Katie led the NERSC-8 system procurement, resulting in the deployment of the Cori system (named after Nobel Laureate Gerty Cori). The Cray XC system features 9,300 Intel Knights Landing processors, each with more than 60 cores, 4 hardware threads per core, and 512-bit-wide vector units. It is crucial that users exploit both threading and SIMD vectorization to achieve high performance on Cori. Additionally, the Knights Landing architecture features high-bandwidth on-package memory that is significantly faster than DRAM. The Cori system also features the Cray Aries interconnect, a 28 PB Lustre-based file system, and a "burst buffer" layer of NVRAM that sits between compute node memory and the file system to accelerate I/O. Cori debuted at #6 on the TOP500 list.
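The thread-plus-SIMD requirement can be sketched with a minimal OpenMP kernel (an illustrative example, not code from any NERSC project; the function name and problem size are ours):

```c
#include <stddef.h>

/* DAXPY (y = a*x + y) with both levels of parallelism a Knights
 * Landing node rewards: OpenMP threads split the loop across cores,
 * and the simd clause asks the compiler to fill the 512-bit vector
 * units (8 doubles per instruction). Compiled without OpenMP support
 * the pragma is ignored and the loop simply runs serially, producing
 * the same result. */
void daxpy(size_t n, double a, const double *x, double *y)
{
    #pragma omp parallel for simd
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```

On KNL-class hardware, a loop like this reaches peak throughput only when both the thread count and the vector lanes are saturated; relying on one level alone leaves most of the node idle.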

From 2010 to 2013, Katie was the group leader for User Services at NERSC, a team of consultants who work directly with scientists to help them apply NERSC resources effectively to their research and to optimize their applications.

Prior to becoming the Group Leader of USG, Katie was a consultant in the group from 2006-2010. She was the co-implementation team lead on the Hopper system. Hopper was NERSC's first petaflop system, a Cray XE6 with over 150,000 compute cores which delivered more than 3 million computing hours to scientists each day.

Before coming to NERSC, Katie worked at the ASC Flash Center at the University of Chicago as a parallel programmer developing the FLASH code, a parallel adaptive mesh refinement astrophysics application. She also spent 2 years as a management consultant at Cambridge Strategic Management Group building financial models and conducting market research. She has a Master's degree in Computer Science from the University of Chicago and a Bachelor's degree in Physics from Wellesley College.

Journal Articles

Vetter, Jeffrey S.; Brightwell, Ron; Gokhale, Maya; McCormick, Pat; Ross, Rob; Shalf, John; Antypas, Katie; Donofrio, David; Humble, Travis; Schuman, Catherine; Van Essen, Brian; Yoo, Shinjae; Aiken, Alex; Bernholdt, David; Byna, Suren; Cameron, Kirk; Cappello, Frank; Chapman, Barbara; Chien, Andrew; Hall, Mary; Hartman-Baker, Rebecca; Lan, Zhiling; Lang, Michael; Leidel, John; Li, Sherry; Lucas, Robert; Mellor-Crummey, John; Peltz Jr., Paul; Peterka, Thomas; Strout, Michelle; Wilke, Jeremiah, "Extreme Heterogeneity 2018 - Productive Computational Science in the Era of Extreme Heterogeneity: Report for DOE ASCR Workshop on Extreme Heterogeneity", December 2018, doi: 10.2172/1473756

A. Dubey, K. Antypas, A. C. Calder, C. Daley, B. Fryxell, J. B. Gallagher, D. Q. Lamb, D. Lee, K. Olson, L. B. Reid, P. Rich, P. M. Ricker, K. M. Riley, R. Rosner, A. Siegel, N. T. Taylor, K. Weide, F. X. Timmes, N. Vladimirova, and J. ZuHone, "Evolution of FLASH, a multi-physics scientific simulation code for high-performance computing", International Journal of High Performance Computing Applications, May 2014, doi: 10.1177/1094342013505656

A. Dubey, K. Antypas, M.K. Ganapathy, L.B. Reid, K.M. Riley, D. Sheeler, A. Siegel, K. Weide, "Extensible Component Based Architecture for FLASH: A Massively Parallel, Multiphysics Simulation Code", Parallel Computing, July 1, 2009, 35(10-11):512-522,

R. Fisher, S. Abarzhi, K. Antypas, S. M. Asida, A. C. Calder, F. Cattaneo, P. Constantin, A. Dubey, I. Foster, J. B. Gallagher, M. K. Ganapathy, C.C. Glendenin, L. Kadano, D.Q. Lamb, S. Needham, M. Papka, T. Plewa, L.B. Reid, P. Rich, K. Riley, and D. Sheeler, "Tera-scale Turbulence Computation on BG/L Using the FLASH3 Code", IBM Journal of Research and Development, March 1, 2008, Vol. 52:127-136,

K.B. Antypas, A. C. Calder, A. Dubey, J. B. Gallagher, J. Joshi, D. Q. Lamb, T. Linde, E. Lusk, O. E. B. Messer, A. Mignone, H. Pan, M. Papka, F. Peng, T. Plewa, P. M. Ricker, K. Riley, D. Sheeler, A. Siegel, N. Taylor, J. W. Truran, N. Vladimirova, G. Weirs, D. Yu, Z. Zhang, "FLASH: Applications and Future", Parallel Computational Fluid Dynamics 2005: Theory and Applications, edited by A. Deane, G. Brenner, A. Ecer, D. R. Emerson, J. McDonough, J. Periaux, N. Satofuka, D. Tromeur-Dervout, January 1, 2006, 325,

Conference Papers

Tina Declerck, Katie Antypas, Deborah Bard, Wahid Bhimji, Shane Canon, Shreyas Cholia, Helen (Yun) He, Douglas Jacobsen, Prabhat, Nicholas J. Wright, "Cori - A System to Support Data-Intensive Computing", Cray User Group Meeting 2016, London, England, May 2016,

W. Bhimji, D. Bard, M. Romanus, D. Paul, A. Ovsyannikov, B. Friesen, M. Bryson, J. Correa, G. K. Lockwood, V. Tsulaia, S. Byna, S. Farrell, D. Gursoy, C. Daley, V. Beckner, B. Van Straalen, D. Trebotich, C. Tull, G. Weber, N. J. Wright, K. Antypas, Prabhat, "Accelerating Science with the NERSC Burst Buffer Early User Program", Cray User Group, May 11, 2016, LBNL-1005736,

NVRAM-based Burst Buffers are an important part of the emerging HPC storage landscape. The National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory recently installed one of the first Burst Buffer systems as part of its new Cori supercomputer, collaborating with Cray on the development of the DataWarp software. NERSC has a diverse user base comprised of over 6500 users in 700 different projects spanning a wide variety of scientific computing applications. The use-cases of the Burst Buffer at NERSC are therefore also considerable and diverse. We describe here performance measurements and lessons learned from the Burst Buffer Early User Program at NERSC, which selected a number of research projects to gain early access to the Burst Buffer and exercise its capability to enable new scientific advancements. To the best of our knowledge this is the first time a Burst Buffer has been stressed at scale by diverse, real user workloads and therefore these lessons will be of considerable benefit to shaping the developing use of Burst Buffers at HPC centers.

Zhengji Zhao, Katie Antypas, Nicholas J Wright, "Effects of Hyper-Threading on the NERSC workload on Edison", 2013 Cray User Group Meeting, May 9, 2013,

Zhengji Zhao, Mike Davis, Katie Antypas, Yushu Yao, Rei Lee and Tina Butler, "Shared Library Performance on Hopper", A paper presented at the Cray User Group meeting, April 29-May 3, 2012, Stuttgart, Germany, May 3, 2012,

Zhengji Zhao, Yun (Helen) He and Katie Antypas, "Cray Cluster Compatibility Mode on Hopper", A paper presented at the Cray User Group meeting, April 29-May 3, 2012, Stuttgart, Germany, May 1, 2012,

Yun (Helen) He and Katie Antypas, "Running Large Jobs on a Cray XE6 System", Cray User Group 2012 Meeting, Stuttgart, Germany, April 30, 2012,

Katie Antypas, Tina Butler, Jonathan Carter, "The Hopper System: How the Largest XE6 in the World went from Requirements to Reality", Cray User Group Proceedings, May 31, 2011,

K. Antypas, Y. He, "Transitioning Users from the Franklin XT4 System to the Hopper XE6 System", Cray User Group 2011 Proceedings, Fairbanks, Alaska, May 2011,

The Hopper XE6 system, NERSC's first petaflop system with over 153,000 cores, has increased the computing hours available to the Department of Energy's Office of Science users by more than a factor of four. As NERSC users transition from the Franklin XT4 system with 4 cores per node to the Hopper XE6 system with 24 cores per node, they have had to adapt to less memory per core and on-node I/O performance that does not scale linearly with the number of cores per node. This paper discusses Hopper's usage during the "early user period" and examines the practical implications of running on a system with 24 cores per node, exploring advanced aprun and memory affinity options for typical NERSC applications as well as strategies to improve I/O performance.

A. Uselton, K. Antypas, D. M. Ushizima, J. Sukharev, "File System Monitoring as a Window into User I/O Requirements", Proceedings of the 2010 Cray User Group Meeting, Edinburgh, Scotland, May 24, 2010,

K. Antypas and A. Uselton, "MPI-I/O on Franklin XT4 System at NERSC", CUG Proceedings, Atlanta, GA, May 28, 2009,

H. Shan, K. Antypas, J. Shalf, "Characterizing and Predicting the I/O Performance of HPC Applications Using a Parameterized Synthetic Benchmark", Supercomputing, Reno, NV, November 17, 2008,

A. C. Calder, N. T. Taylor, K. Antypas, and D. Sheeler, "A Case Study of Verifying and Validating an Astrophysical Simulation Code", Astronomical Society of the Pacific, March 26, 2006, 119,


Presentation/Talks

Tina Declerck, Katie Antypas, Deborah Bard, Wahid Bhimji, Shane Canon, Shreyas Cholia, Helen (Yun) He, Douglas Jacobsen, Prabhat, Nicholas J. Wright, Cori - A System to Support Data-Intensive Computing, Cray User Group Meeting 2016, London, England, May 12, 2016,

Richard A. Gerber, Katie Antypas, Sudip Dosanjh, Jack Deslippe, Nick Wright, Jay Srinivasan, Systems Roadmap and Plans for Supporting Extreme Data Science, December 10, 2015,

Yun (Helen) He, Alice Koniges, Richard Gerber, Katie Antypas, Using OpenMP at NERSC, OpenMPCon 2015, invited talk, September 30, 2015,

Katie Antypas, The Cori System, February 24, 2015,

Katie Antypas, Best Practices for Reading and Writing Data on HPC Systems, NUG Meeting 2013, February 14, 2013,

Katie Antypas, NERSC-8 Project, NUG Meeting, February 12, 2013,

NERSC-8 Project Overview

Zhengji Zhao, Mike Davis, Katie Antypas, Yushu Yao, Rei Lee and Tina Butler, Shared Library Performance on Hopper, A talk at the Cray User Group meeting, April 29-May 3, 2012, Stuttgart, Germany, May 3, 2012,

Zhengji Zhao, Yun (Helen) He and Katie Antypas, Cray Cluster Compatibility Mode on Hopper, A talk at the Cray User Group meeting, April 29-May 3, 2012, Stuttgart, Germany, May 1, 2012,

K. Antypas, Parallel I/O From a User's Perspective, HPC Advisory Council, December 6, 2011,

Zhengji Zhao, Mike Davis, Katie Antypas, Rei Lee and Tina Butler, Shared Library Performance on Hopper, Cray Quarterly Meeting, St. Paul, MN, October 26, 2011,

Yun (Helen) He and Katie Antypas, Mysterious Error Messages on Hopper, NERSC/Cray Quarterly Meeting, July 25, 2011,

K. Antypas, The Hopper XE6 System: Delivering High End Computing to the Nation’s Science and Research Community, Cray Quarterly Review, April 1, 2011,

K. Antypas, Introduction to Parallel I/O, ASTROSIM 2010 Workshop, July 19, 2010,

K. Antypas, NERSC: Delivering High End Scientific Computing to the Nation's Research Community, November 5, 2009,

J. Shalf, K. Antypas, H.J. Wasserman, Recent Workload Characterization Activities at NERSC, Santa Fe Workshop, January 1, 2008,


Reports

Gerber, Richard; Hack, James; Riley, Katherine; Antypas, Katie; Coffey, Richard; Dart, Eli; Straatsma, Tjerk; Wells, Jack; Bard, Deborah; Dosanjh, Sudip, et al., "Crosscut report: Exascale Requirements Reviews", January 22, 2018,

Glenn K. Lockwood, Damian Hazen, Quincey Koziol, Shane Canon, Katie Antypas, Jan Balewski, Nicholas Balthaser, Wahid Bhimji, James Botts, Jeff Broughton, Tina L. Butler, Gregory F. Butler, Ravi Cheema, Christopher Daley, Tina Declerck, Lisa Gerhardt, Wayne E. Hurlbert, Kristy A. Kallback-Rose, Stephen Leak, Jason Lee, Rei Lee, Jialin Liu, Kirill Lozinskiy, David Paul, Prabhat, Cory Snavely, Jay Srinivasan, Tavia Stone Gibbins, Nicholas J. Wright, "Storage 2020: A Vision for the Future of HPC Storage", October 20, 2017, LBNL-2001072,

As the DOE Office of Science's mission computing facility, NERSC will follow this roadmap and deploy these new storage technologies to continue delivering storage resources that meet the needs of its broad user community. NERSC's diversity of workflows encompass significant portions of open science workloads as well, and the findings presented in this report are also intended to be a blueprint for how the evolving storage landscape can be best utilized by the greater HPC community. Executing the strategy presented here will ensure that emerging I/O technologies will be both applicable to and effective in enabling scientific discovery through extreme-scale simulation and data analysis in the coming decade.

K. Antypas, B.A. Austin, T.L. Butler, R.A. Gerber, C.L. Whitney, N.J. Wright, W. Yang, Z. Zhao, "NERSC Workload Analysis on Hopper", Report, October 17, 2014, LBNL 6804E,

Antypas, K., Shalf, J., Wasserman, H., "NERSC-6 Workload Analysis and Benchmark Selection Process", LBNL Technical Report, August 13, 2008, LBNL 1014E,

Science drivers for NERSC-6


Debbie Bard, Wahid Bhimji, David Paul, Glenn K. Lockwood, Nicholas J Wright, Katie Antypas, Prabhat, Steve Farrell, Andrey Ovsyannikov, Melissa Romanus, Brian Van Straalen, David Trebotich, Guenter Weber, "Experiences with the Burst Buffer at NERSC", Supercomputing Conference, November 16, 2016,


John Shalf, Hongzhang Shan, Katie Antypas, I/O Requirements for HPC Applications, talk, January 1, 2008,