Katie is the NERSC Division Deputy and Data Department Head at the National Energy Research Scientific Computing (NERSC) Center, where she oversees the Data Science Engagement, Data and Analytics Services, Storage Systems, and Infrastructure Services groups.
Katie is the Project Director for NERSC-9, the project to deploy NERSC's next supercomputer in 2020; the system is expected to be announced in the fall of 2018. She is also the PI on an ASCR-funded research project, ScienceSearch: Enabling Automated Metadata through Machine Learning. Katie has expertise in system architectures, parallel I/O, application performance, and user science requirements.
From 2012-2017, Katie led the NERSC-8 system procurement, resulting in the deployment of the Cori system (named after Nobel Laureate Gerty Cori). The Cray XC system features 9,300 Intel Knights Landing processors, each with over 60 cores, 4 hardware threads per core, and 512-bit vector units. It is crucial that users exploit both threading and SIMD vectorization to achieve high performance on Cori. Additionally, the Knights Landing architecture features high-bandwidth on-package memory that is significantly faster than DRAM. The system also features the Cray Aries interconnect, a 28 PB Lustre-based file system, and a "burst buffer", a layer of NVRAM between compute-node memory and the file system that serves to accelerate I/O. Cori debuted at #6 on the Top500 list.
From 2010 to 2013 Katie was the group leader for User Services at NERSC, a team of consultants who work directly with scientists to help them use the NERSC resources effectively and optimize applications.
Prior to becoming the Group Leader of USG, Katie was a consultant in the group from 2006-2010. She was the co-implementation team lead on the Hopper system. Hopper was NERSC's first petaflop system, a Cray XE6 with over 150,000 compute cores which delivered more than 3 million computing hours to scientists each day.
Before coming to NERSC, Katie worked at the ASC Flash Center at the University of Chicago as a parallel programmer developing the FLASH code, a parallel adaptive mesh refinement astrophysics application. She also spent two years as a management consultant at Cambridge Strategic Management Group, building financial models and conducting market research. She has a Master's degree in Computer Science from the University of Chicago and a Bachelor's degree in Physics from Wellesley College.
A. Dubey, K. Antypas, A. C. Calder, C. Daley, B. Fryxell, J. B. Gallagher, D. Q. Lamb, D. Lee, K. Olson, L. B. Reid, P. Rich, P. M. Ricker, K. M. Riley, R. Rosner, A. Siegel, N. T. Taylor, K. Weide, F. X. Timmes, N. Vladimirova, and J. ZuHone, "Evolution of FLASH, a multi-physics scientific simulation code for high-performance computing", International Journal of High Performance Computing Applications, May 2014, doi: 10.1177/1094342013505656
A. Dubey, K. Antypas, C. Daley, "Parallel algorithms for moving Lagrangian data on block structured Eulerian meshes", Parallel Computing, 2011, 37:101--113, doi: 10.1016/j.parco.2011.01.001
A. Dubey, K. Antypas, M.K. Ganapathy, L.B. Reid, K.M. Riley, D. Sheeler, A. Siegel, K. Weide, "Extensible Component Based Architecture for FLASH: A Massively Parallel, Multiphysics Simulation Code", Parallel Computing, July 1, 2009, 35(10-11):512-522,
R. Fisher, S. Abarzhi, K. Antypas, S. M. Asida, A. C. Calder, F. Cattaneo, P. Constantin, A. Dubey, I. Foster, J. B. Gallagher, M. K. Ganapathy, C.C. Glendenin, L. Kadanoff, D.Q. Lamb, S. Needham, M. Papka, T. Plewa, L.B. Reid, P. Rich, K. Riley, and D. Sheeler, "Tera-scale Turbulence Computation on BG/L Using the FLASH3 Code", IBM Journal of Research and Development, March 1, 2008, Vol. 52:127-136,
K.B. Antypas, A. C. Calder, A. Dubey, J. B. Gallagher, J. Joshi, D. Q. Lamb, T. Linde, E. Lusk, O. E. B. Messer, A. Mignone, H. Pan, M. Papka, F. Peng, T. Plewa, P. M. Ricker, K. Riley, D. Sheeler, A. Siegel, N. Taylor, J. W. Truran, N. Vladimirova, G. Weirs, D. Yu, Z. Zhang, "FLASH: Applications and Future", Parallel Computational Fluid Dynamics 2005: Theory and Applications, edited by A. Deane, G. Brenner, A. Ecer, D. R. Emerson, J. McDonough, J. Periaux, N. Satofuka, D. Tromeur-Dervout, January 1, 2006, p. 325,
Tina Declerck, Katie Antypas, Deborah Bard, Wahid Bhimji, Shane Canon, Shreyas Cholia, Helen (Yun) He, Douglas Jacobsen, Prabhat, Nicholas J. Wright, "Cori - A System to Support Data-Intensive Computing", Cray User Group Meeting 2016, London, England, May 2016,
- Download File: Cori-CUG2016.pdf (pdf: 4.4 MB)
W. Bhimji, D. Bard, M. Romanus, D. Paul, A. Ovsyannikov, B. Friesen, M. Bryson, J. Correa, G. K. Lockwood, V. Tsulaia, S. Byna, S. Farrell, D. Gursoy, C. Daley, V. Beckner, B. Van Straalen, D. Trebotich, C. Tull, G. Weber, N. J. Wright, K. Antypas, Prabhat, "Accelerating Science with the NERSC Burst Buffer Early User Program", Cray User Group, May 11, 2016, LBNL LBNL-1005736,
NVRAM-based Burst Buffers are an important part of the emerging HPC storage landscape. The National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory recently installed one of the first Burst Buffer systems as part of its new Cori supercomputer, collaborating with Cray on the development of the DataWarp software. NERSC has a diverse user base comprised of over 6500 users in 700 different projects spanning a wide variety of scientific computing applications. The use-cases of the Burst Buffer at NERSC are therefore also considerable and diverse. We describe here performance measurements and lessons learned from the Burst Buffer Early User Program at NERSC, which selected a number of research projects to gain early access to the Burst Buffer and exercise its capability to enable new scientific advancements. To the best of our knowledge this is the first time a Burst Buffer has been stressed at scale by diverse, real user workloads and therefore these lessons will be of considerable benefit to shaping the developing use of Burst Buffers at HPC centers.
Zhengji Zhao, Katie Antypas, Nicholas J Wright, "Effects of Hyper-Threading on the NERSC workload on Edison", 2013 Cray User Group Meeting, May 9, 2013,
- Download File: CUG13HTpaper.pdf (pdf: 2.3 MB)
Zhengji Zhao, Mike Davis, Katie Antypas, Yushu Yao, Rei Lee and Tina Butler, "Shared Library Performance on Hopper", A paper presented at the Cray User Group meeting, April 29-May 3, 2012, Stuttgart, Germany, May 3, 2012,
Zhengji Zhao, Yun (Helen) He and Katie Antypas, "Cray Cluster Compatibility Mode on Hopper", A paper presented at the Cray User Group meeting, April 29-May 3, 2012, Stuttgart, Germany, May 1, 2012,
Yun (Helen) He and Katie Antypas, "Running Large Jobs on a Cray XE6 System", Cray User Group 2012 Meeting, Stuttgart, Germany, April 30, 2012,
A.C. Uselton, K.B. Antypas, D. Ushizima, J. Sukharev, "File System Monitoring as a Window into User I/O Requirements", CUG Proceedings, Edinburgh, Scotland, March 1, 2012,
Katie Antypas, Tina Butler, Jonathan Carter, "The Hopper System: How the Largest XE6 in the World went from Requirements to Reality", Cray User Group Proceedings, May 31, 2011,
K. Antypas, Y. He, "Transitioning Users from the Franklin XT4 System to the Hopper XE6 System", Cray User Group 2011 Proceedings, Fairbanks, Alaska, May 2011,
- Download File: CUG2011Hopperpaper.pdf (pdf: 1.5 MB)
The Hopper XE6 system, NERSC's first petaflop system with over 153,000 cores, has increased the computing hours available to the Department of Energy's Office of Science users by more than a factor of 4. As NERSC users transition from the Franklin XT4 system with 4 cores per node to the Hopper XE6 system with 24 cores per node, they have had to adapt to less memory per core and to on-node I/O performance that does not scale linearly with the number of cores per node. This paper discusses Hopper's usage during the "early user period" and examines the practical implications of running on a system with 24 cores per node, exploring advanced aprun and memory affinity options for typical NERSC applications as well as strategies to improve I/O performance.
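The aprun and memory affinity options the abstract alludes to can be sketched as follows (illustrative values for a 24-core Hopper XE6 node with four 6-core NUMA dies; the executable name is hypothetical):

```shell
# Fully packed node: 24 MPI ranks, one per core, bound to cores (-cc cpu)
# with memory kept on the local NUMA node (-ss)
aprun -n 24 -N 24 -cc cpu -ss ./myapp.x

# Memory-hungry code: half-packed node with 12 ranks, spread evenly
# across the four NUMA dies (-S 3 ranks per die), doubling the memory
# available to each rank
aprun -n 12 -N 12 -S 3 -cc cpu -ss ./myapp.x
```

Spreading ranks across NUMA dies with `-S` avoids leaving memory bandwidth on some dies idle when a node is intentionally underpopulated.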
A. Uselton, K. Antypas, D. M. Ushizima, J. Sukharev, "File System Monitoring as a Window into User I/O Requirements", Proceedings of the 2010 Cray User Group Meeting, Edinburgh, Scotland, Edinburgh, Scotland, May 24, 2010,
K. Antypas and A. Uselton, "MPI-I/O on Franklin XT4 System at NERSC", CUG Proceedings, Atlanta, GA, May 28, 2009,
H. Shan, K. Antypas, J.Shalf., "Characterizing and Predicting the I/O Performance of HPC Applications Using a Parameterized Synthetic Benchmark.", Supercomputing, Reno, NV, November 17, 2008,
A. C. Calder, N. T. Taylor, K. Antypas, and D. Sheeler, "A Case Study of Verifying and Validating an Astrophysical Simulation Code", Astronomical Society of the Pacific, March 26, 2006, 119,
Tina Declerck, Katie Antypas, Deborah Bard, Wahid Bhimji, Shane Canon, Shreyas Cholia, Helen (Yun) He, Douglas Jacobsen, Prabhat, Nicholas J. Wright, Cori - A System to Support Data-Intensive Computing, Cray User Group Meeting 2016, London, England, May 12, 2016,
Richard A. Gerber, Katie Antypas, Sudip Dosanjh, Jack Deslippe, Nick Wright, Jay Srinivasan, Systems Roadmap and Plans for Supporting Extreme Data Science, December 10, 2015,
Yun (Helen) He, Alice Koniges, Richard Gerber, Katie Antypas, Using OpenMP at NERSC, OpenMPCon 2015, invited talk, September 30, 2015,
- Download File: HelenHe-OpenMPCon-2015.pdf (pdf: 7.1 MB)
Katie Antypas, The Cori System, February 24, 2015,
- Download File: CoriforNUGFeb2015.pdf (pdf: 7.2 MB)
Katie Antypas, Best Practices for Reading and Writing Data on HPC Systems, NUG Meeting 2013, February 14, 2013,
- Download File: Readingand-WritingDataNUGFeb2013.pdf (pdf: 4.1 MB)
Katie Antypas, NERSC-8 Project, NUG Meeting, February 12, 2013,
- Download File: NERSC8-NUG-Feb-2013.pdf (pdf: 9.9 MB)
NERSC-8 Project Overview
Zhengji Zhao, Mike Davis, Katie Antypas, Yushu Yao, Rei Lee and Tina Butler, Shared Library Performance on Hopper, A talk at the Cray User Group meeting, April 29-May 3, 2012, Stuttgart, Germany, May 3, 2012,
Zhengji Zhao, Yun (Helen) He and Katie Antypas, Cray Cluster Compatibility Mode on Hopper, A talk at the Cray User Group meeting, April 29-May 3, 2012, Stuttgart, Germany, May 1, 2012,
K. Antypas, Best Practices for Reading and Writing Data on HPC Systems, February 1, 2012,
- Download File: Readingand-WritingDataNUGFeb2012.ppt (ppt: 5.1 MB)
K. Antypas, Parallel I/O From a User's Perspective, HPC Advisory Council, December 6, 2011,
- Download File: HPCAdvisoryCouncilIOTalkDec2011.pptx (pptx: 10 MB)
Zhengji Zhao, Mike Davis, Katie Antypas, Rei Lee and Tina Butler, Shared Library Performance on Hopper, Cray Quarterly Meeting, St. Paul, MN, October 26, 2011,
Yun (Helen) He and Katie Antypas, Mysterious Error Messages on Hopper, NERSC/Cray Quarterly Meeting, July 25, 2011,
K. Antypas, NERSC: National Energy Research Scientific Computing Center, July 18, 2011,
K. Antypas, The Hopper XE6 System: Delivering High End Computing to the Nation’s Science and Research Community, Cray Quarterly Review, April 1, 2011,
K. Antypas, Introduction to Parallel I/O, ASTROSIM 2010 Workshop, July 19, 2010,
- Download File: ASTROSIMParallelIO2010.pdf (pdf: 45 MB)
K. Antypas, NERSC: Delivering High End Scientific Computing to the Nation's Research Community, November 5, 2009,
- Download File: EducauseNov2009antypas.pdf (pdf: 17 MB)
J. Shalf, K. Antypas, H.J. Wasserman, Recent Workload Characterization Activities at NERSC, Santa Fe Workshop, January 1, 2008,
Glenn K. Lockwood, Damian Hazen, Quincey Koziol, Shane Canon, Katie Antypas, Jan Balewski, Nicholas Balthaser, Wahid Bhimji, James Botts, Jeff Broughton, Tina L. Butler, Gregory F. Butler, Ravi Cheema, Christopher Daley, Tina Declerck, Lisa Gerhardt, Wayne E. Hurlbert, Kristy A. Kallback-Rose, Stephen Leak, Jason Lee, Rei Lee, Jialin Liu, Kirill Lozinskiy, David Paul, Prabhat, Cory Snavely, Jay Srinivasan, Tavia Stone Gibbins, Nicholas J. Wright, "Storage 2020: A Vision for the Future of HPC Storage", October 20, 2017,
- Download File: Storage-2020-A-Vision-for-the-Future-of-HPC-Storage.pdf (pdf: 3.6 MB)
K. Antypas, B.A Austin, T.L. Butler, R.A. Gerber, C.L Whitney, N.J. Wright, W. Yang, Z Zhao, "NERSC Workload Analysis on Hopper", Report, October 17, 2014, LBNL 6804E,
Antypas, K., Shalf, J., Wasserman, H., "NERSC-6 Workload Analysis and Benchmark Selection Process", LBNL Technical Report, August 13, 2008, LBNL 1014E,
- Download File: NERSCWorkload.pdf (pdf: 5 MB)
Science drivers for NERSC-6