NERSC: Powering Scientific Discovery Since 1974

David Skinner

David E. Skinner, Ph.D.
Strategic Partnerships Lead
Phone: (510) 486-4748
Mobile: (510) 847-2946
Fax: (510) 486-6459
1 Cyclotron Road
Mailstop: 59R4010A
Berkeley, CA 94720 US

Biographical Sketch

David Skinner earned his Ph.D. from UC Berkeley, where his research focused on quantum and semiclassical approaches to chemical reaction dynamics and kinetics. At NERSC, David leads strategic partnerships between NERSC and research communities, instrument/experiment data science teams, and the private sector. Previously at NERSC, David led efforts in HPC software, portals for science data, and outreach to communities newly interested in scientific computing. David was the lead technical advisor for the first two rounds of INCITE projects, led the SciDAC Outreach Center, and is an author of the Integrated Performance Monitoring (IPM) framework. David's publications while at NERSC have focused on the performance analysis of HPC science applications and on broadening the impact of HPC through Science Gateways.

Recently David has been involved in connecting light-source beamline instruments with HPC to accelerate data analysis. The ExaFEL project combines LCLS, ESnet, and NERSC for x-ray beamline experimenters who need all three resources at once. Bursts of data are processed at NERSC, giving experimenters greater insight into both their sample and the instrument. ExaFEL aims to shorten the time between data collection and completed analysis.

Published works via Google Scholar

Journal Articles

Abe Singer, Shane Canon, Rebecca Hartman-Baker, Kelly L. Rowland, David Skinner, Craig Lant, "What Deploying MFA Taught Us About Changing Infrastructure", HPCSYSPROS19: HPC System Professionals Workshop, November 2019, doi: 10.5281/zenodo.3525375

NERSC is not the first organization to implement multi-factor authentication (MFA) for its users. We had seen multiple talks by other supercomputing facilities that had deployed MFA, but as we planned and deployed our MFA implementation, we found that nobody had talked about the more interesting and difficult challenges, which were largely social rather than technical. Our MFA deployment was a success, but, more importantly, much of what we learned could apply to any infrastructure change. Additionally, we developed the sshproxy service, a key piece of infrastructure technology that lessens user and staff burden and has made our MFA implementation more amenable to scientific workflows. We found great value in using robust open-source components where we could and developing tailored solutions where necessary.

Anubhav Jain, Shyue Ping Ong, Geoffroy Hautier, Wei Chen, William Davidson Richards, Stephen Dacek, Shreyas Cholia, Dan Gunter, David Skinner, Gerbrand Ceder, Kristin A. Persson, "The Materials Project: A materials genome approach to accelerating materials innovation", APL Materials 1, 011002 (2013), July 2013, doi: 10.1063/1.4812323

M. Di Pierro, J. Hetrick, S. Cholia, D. Skinner, "Making QCD Lattice Data Accessible and Organized through Advanced Web Interfaces", arXiv preprint arXiv:1112.2193, December 1, 2011,

J. Dongarra, P. Beckman, T. Moore, P. Aerts, G. Aloisio, J.C. Andre, D. Barkai, J.Y. Berthou, T. Boku, B. Braunschweig, others, "The international exascale software project roadmap", International Journal of High Performance Computing Applications, January 2011, 25:3--60,

K. Fuerlinger, N.J. Wright, D. Skinner, "Performance analysis and workload characterization with IPM", Tools for High Performance Computing 2009, January 1, 2010, 31--38,

K. Fuerlinger, N.J. Wright, D. Skinner, C. Klausecker, D. Kranzlmueller, "Effective Holistic Performance Measurement at Petascale Using IPM", Competence in High Performance Computing 2010, January 1, 2010, 15--26,

K. Fürlinger, D. Skinner, "Performance profiling for OpenMP tasks", Evolving OpenMP in an Age of Extreme Parallelism, January 1, 2009, 132--139,

W. Kramer, D. Skinner, "Consistent Application Performance at the Exascale", International Journal of High Performance Computing Applications, January 1, 2009, 23:392,

W. Kramer, D. Skinner, "An Exascale Approach to Software and Hardware Design", International Journal of High Performance Computing Applications, January 1, 2009, 23:389,

D. Skinner, A. Choudary, "On the Importance of End-to-End Application Performance Monitoring and Workload Analysis at the Exascale", International Journal of High Performance Computing Applications, January 1, 2009, 23:357--360,

W.T.C. Kramer, H. Walter, G. New, T. Engle, R. Pennington, B. Comes, B. Bland, B. Tomlison, J. Kasdorf, D. Skinner, others, "Report of the workshop on petascale systems integration for large scale facilities", January 1, 2007,

H. Wang, D.E. Skinner, M. Thoss, "Calculation of reactive flux correlation functions for systems in a condensed phase environment: A multilayer multiconfiguration time-dependent Hartree approach", The Journal of chemical physics, January 1, 2006, 125:174502,

A. Aspuru-Guzik, R. Salomón-Ferrer, B. Austin, R. Perusquía-Flores, M.A. Griffin, R.A. Oliva, D. Skinner, D. Domin, W.A. Lester Jr, "Zori 1.0: A parallel quantum Monte Carlo electronic structure package", Journal of Computational Chemistry, January 1, 2005, 26:856--862,

D. Skinner, "Performance monitoring of parallel scientific applications", January 1, 2005,

Leonid Oliker, Canning, Carter, Shalf, Skinner, Ethier, Biswas, Jahed Djomehri, Rob F. Van der Wijngaart, "Performance evaluation of the SX-6 vector architecture for scientific computations", Concurrency - Practice and Experience, January 1, 2005, 17:69-93,

D. Skinner, "Scaling up parallel scientific applications on the IBM SP", January 1, 2004,

D. Skinner, N. Cardo, "An Analysis of Node Asymmetries on seaborg.nersc.gov", January 1, 2003,

D.E. Skinner, W.H. Miller, "Application of the semiclassical initial value representation and its linearized approximation to inelastic scattering", Chemical physics letters, January 1, 1999, 300:20--26,

D.E. Skinner, T.C. Germann, W.H. Miller, "Quantum Mechanical Rate Constants for O + OH ⇌ H + O2 for Total Angular Momentum J > 0", The Journal of Physical Chemistry A, January 1, 1998, 102:3828--3834,

D.E. Skinner, D.P. Colombo Jr, J.J. Cavaleri, R.M. Bowman, "Femtosecond investigation of electron trapping in semiconductor nanoclusters", The Journal of Physical Chemistry, January 1, 1995, 99:7853--7856,

Conference Papers

S. Parete-Koon, B. Caldwell, S. Canon, E. Dart, J. Hick, J. Hill, C. Layton, D. Pelfrey, G. Shipman, D. Skinner, J. Wells, J. Zurawski, "HPC's Pivot to Data", Conference, May 5, 2014,


Computer centers such as NERSC and OLCF have traditionally focused on delivering computational capability that enables breakthrough innovation in a wide range of science domains. Accessing that computational power has required services and tools to move data between input, output, computation, and storage. A pivot to data is occurring in HPC. Data transfer tools and services that were previously peripheral are becoming integral to scientific workflows. Emerging requirements from high-bandwidth detectors, high-throughput screening techniques, highly concurrent simulations, increased focus on uncertainty quantification, and an emerging open-data policy posture toward published research are among the data drivers shaping the networks, file systems, databases, and overall HPC environment. In this paper we explain the pivot to data in HPC through user requirements and the changing resources provided by HPC, with particular focus on data movement. For WAN data transfers we present the results of a study of network performance between centers.


K. Fürlinger, N.J. Wright, D. Skinner, "Comprehensive Performance Monitoring for GPU Cluster Systems", Parallel and Distributed Processing Workshops and Phd Forum (IPDPSW), 2011 IEEE International Symposium on, 2011, 1377--1386,

Jack Dongarra, John Shalf, David Skinner, Kathy Yelick, "International Exascale Software Project (IESP) Roadmap, version 1.1", October 18, 2010,

Andrew Uselton, Howison, J. Wright, Skinner, Keen, Shalf, L. Karavanic, Leonid Oliker, "Parallel I/O performance: From events to ensembles", IPDPS, 2010, 1-11,

S. Cholia, D. Skinner, J. Boverhof, "NEWT: A RESTful service for building High Performance Computing web applications", Gateway Computing Environments Workshop (GCE), 2010, January 1, 2010, 1--11,

Karl Fürlinger, J. Wright, David Skinner, "Effective Performance Measurement at Petascale Using IPM", ICPADS, January 1, 2010, 373-380,

John Shalf, Kamil, Oliker, David Skinner, "Analyzing Ultra-Scale Application Communication Requirements for a Reconfigurable Hybrid Interconnect", SC, January 1, 2005, 17,

J. Borrill, J. Carter, L. Oliker, D. Skinner, R. Biswas, "Integrated performance monitoring of a cosmology application on leading HEC platforms", Parallel Processing, 2005. ICPP 2005. International Conference on, January 1, 2005, 119--128,

S. Kamil, J. Shalf, L. Oliker, D. Skinner, "Understanding ultra-scale application communication requirements", Workload Characterization Symposium, 2005. Proceedings of the IEEE International, January 1, 2005, 178--187,

Leonid Oliker, Canning, Carter, Shalf, Skinner, Ethier, Biswas, Jahed Djomehri, Rob F. Van der Wijngaart, "Evaluation of Cache-based Superscalar and Cacheless Vector Architectures for Scientific Computations", SC, January 1, 2003, 38,

Book Chapters

Sudip Dosanjh, Shane Canon, Jack Deslippe, Kjiersten Fagnan, Richard Gerber, Lisa Gerhardt, Jason Hick, Douglas Jacobsen, David Skinner, Nicholas J. Wright, "Extreme Data Science at the National Energy Research Scientific Computing (NERSC) Center", Proceedings of International Conference on Parallel Programming – ParCo 2013, ( March 26, 2014)


David Skinner and Shane Canon, NERSC and High Throughput Computing, February 12, 2013,

Tools for Performance Debugging HPC Applications, February 16, 2012,


Richard A. Gerber et al., "High Performance Computing Operational Review: Enabling Data-Driven Scientific Discovery at DOE HPC Facilities", November 7, 2014,

Fox W., Correa J., Cholia S., Skinner D., Ophus C., "NCEM Hub, A Science Gateway for Electron Microscopy in Materials Science", LBNL Tech Report on NCEMhub, May 1, 2014,

Electron microscopy (EM) instrumentation is making a detector-driven transition to Big Data. High-capability cameras bring new resolving power but also an exponentially increasing demand for bandwidth and data analysis. In practical terms this means that users of advanced microscopes find it increasingly challenging to take data with them and instead need an integrated data processing pipeline. In 2013, NERSC and NCEM staff embarked on a pilot to prototype data services that provide such a pipeline. This tech report details the NCEM Hub pilot as it concluded in May 2014.

Gabrielle Allen (LSU/CCT), Gene Allen (MSC Inc.), Kenneth Alvin (SNL), Matt Drahzal (IBM), David Fisher (DoD-Mod), Robert Graybill (USC/ISI), Bob Lucas (USC/ISI), Tim Mattson (Intel), Hal Morgan (SNL), Erik Schnetter (LSU/CCT), Brian Schott (USC/ISI), Edward Seidel (LSU/CCT), John Shalf (LBNL/NERSC), Shawn Shamsian (MSC Inc.), David Skinner (LBNL/NERSC), Siu Tong (Engeneous), "Frameworks for Multiphysics Simulation : HPC Application Software Consortium Summit Concept Paper", January 1, 2008,

J. Levesque, J. Larkin, M. Foster, J. Glenski, G. Geissler, S. Whalen, B. Waldecker, J. Carter, D. Skinner, H. He, H. Wasserman, J. Shalf, H. Shan, "Understanding and mitigating multicore performance issues on the AMD opteron architecture", March 1, 2007, LBNL 62500,

Over the past 15 years, microprocessor performance has doubled approximately every 18 months through increased clock rates and processing efficiency. In the past few years, clock frequency growth has stalled, and microprocessor manufacturers such as AMD have moved towards doubling the number of cores every 18 months in order to maintain historical growth rates in chip performance. This document investigates the ramifications of multicore processor technology on the new Cray XT4 systems based on AMD processor technology. We begin by walking through the AMD single-core, dual-core, and upcoming quad-core processor architectures. This is followed by a discussion of methods for collecting performance counter data to understand code performance on the Cray XT3 and XT4 systems. We then use the performance counter data to analyze the impact of multicore processors on the performance of microbenchmarks such as STREAM, application kernels such as the NAS Parallel Benchmarks, and full application codes that comprise the NERSC-5 SSP benchmark suite. We explore compiler options and software optimization techniques that can mitigate the memory bandwidth contention that can reduce computing efficiency on multicore processors. The last section provides a case study of applying the dual-core optimizations to the NAS Parallel Benchmarks to dramatically improve their performance.


Jonathan Carter, Tony Drummond, Parry Husbands, Paul Hargrove, Bill Kramer, Osni Marques, Esmond Ng, Lenny Oliker, John Shalf, David Skinner, Kathy Yelick, "Software Roadmap to Plug and Play Petaflop/s", Lawrence Berkeley National Laboratory Technical Report, #59999, July 31, 2006,

L. Oliker, S. Kamil, A. Canning, J. Carter, C. Iancu, J. Shalf, H. Shan, D. Skinner, E. Strohmaier, T. Goodale, "Application Scalability and Communication Signatures on Leading Supercomputing Platforms", January 1, 2006,


Joaquin Correa, David Skinner, "BIG DATA BIOIMAGING: Advances in Analysis, Integration, and Dissemination", Keystone Symposia on Molecular and Cellular Biology, March 24, 2014,

A fundamental problem for the biological community is to adapt computational solutions known broadly in data-centric science to the specific challenges of data scaling in bioimaging. In this work we target software solutions fit for these tasks, leveraging successes in large-scale data-centric science outside of bioimaging.

A. Koniges, R. Gerber, D. Skinner, Y. Yao, Y. He, D. Grote, J-L Vay, H. Kaiser, and T. Sterling, "Plasma Physics Simulations on Next Generation Platforms", 55th Annual Meeting of the APS Division of Plasma Physics, Volume 58, Number 16, November 11, 2013,

The current high-performance computing revolution provides opportunity for major increases in computational power over the next several years, if it can be harnessed. The transition from simply increasing single-processor and network performance to different architectural paradigms forces application programmers to rethink the basic models of parallel programming from both the language and problem-division standpoints. One of the major computing facilities available to researchers in fusion energy is the National Energy Research Scientific Computing Center. As the mission computing center for the DOE Office of Science, NERSC is tasked with helping users overcome the challenges of this revolution, both through the use of new parallel constructs and languages and by enabling a broader user community to take advantage of multi-core performance. We discuss the programming model challenges facing researchers in fusion and plasma physics for a variety of simulations ranging from particle-in-cell to fluid-gyrokinetic and MHD models.


John Shalf, Shoaib Kamil, David Skinner, Leonid Oliker, Interconnect Requirements for HPC Applications, talk, January 1, 2007,