
New NESAP Teams Start Prepping Applications for Next-Generation Perlmutter Architecture

Focus is on simulations, data analysis, and machine learning

March 27, 2019

Contact: Kathy Kincade, kkincade@lbl.gov, +1 510 495 2124

The National Energy Research Scientific Computing Center (NERSC) has announced the latest round of NERSC Exascale Science Application Program (NESAP) teams, which will focus on simulation, data analysis, and machine learning applications to prepare workloads for NERSC’s next supercomputer, Perlmutter.

Perlmutter, a pre-exascale Cray Shasta system slated to be delivered in 2020, will feature a number of new hardware and software innovations and is the first supercomputing system designed with both data analysis and simulations in mind.

“It is crucial that our broad user base can effectively use the Perlmutter system to run applications and complex workflows,” said Katie Antypas, NERSC Division Deputy and project director for Perlmutter. “We will have a large user engagement, training and readiness effort for simulation, data and learning applications. In addition, new software developed by the Exascale Computing Project will be deployed and supported on the new system.”

NESAP provides researchers with an opportunity to prepare application codes for new architectures and to help advance the mission of the Department of Energy's Office of Science. NESAP partnerships allow projects to collaborate with NERSC and HPC vendors, providing access to early hardware, prototype software tools for performance analysis and optimization, specialized training, and exclusive hack-a-thon events with vendor and NERSC staff.

Through NESAP, the participating teams will focus on applications in three primary areas:

  • NESAP for Simulations (N4S): Cutting-edge simulation of complex physical phenomena requires increasing amounts of computational resources due to factors such as larger model sizes, additional physics, and parameter-space searches. N4S enables simulations to make effective use of modern high-performance computing platforms by focusing on algorithm and data structure development and implementation on new architectures such as GPUs, exposing additional parallelism and improving scalability (a brief illustrative sketch of this kind of GPU porting follows this list).
  • NESAP for Data (N4D): To answer today’s most complex experimental challenges, scientists are collecting exponentially more data and analyzing it with new computationally intensive algorithms. N4D addresses data-analysis science pipelines that process massive datasets from experimental and observational science (EOS) facilities like synchrotron light sources, telescopes, microscopes, particle accelerators, or genome sequencers. The goal is seamless integration and data flow between EOS facilities and Perlmutter to enable scalable, real-time data analytics utilizing the GPU architecture on Perlmutter.
  • NESAP for Learning (N4L): Machine learning and deep learning are powerful approaches to solving complicated classification, regression, and pattern recognition problems. N4L focuses on developing and implementing cutting-edge machine/deep learning solutions to improve the potential for scientific discovery arising from experimental or simulation data, or in HPC applications by replacing parts of the software stack or algorithms with machine/deep learning solutions optimized for the Perlmutter system and GPU architecture.

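To make concrete what "exposing additional parallelism" on a GPU means in practice, here is a minimal, purely illustrative CUDA sketch (it is not drawn from any NESAP application): a serial loop that updates each element of an array is rewritten as a kernel in which every element is handled by its own GPU thread.

    // saxpy.cu -- illustrative only: each iteration of y[i] = a*x[i] + y[i]
    // becomes an independent GPU thread, the basic pattern exposed when
    // porting loop-based simulation kernels to GPUs.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));  // unified memory keeps the sketch short
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // ~4,096 blocks of 256 threads
        cudaDeviceSynchronize();

        printf("y[0] = %f\n", y[0]);  // expect 4.0
        cudaFree(x);
        cudaFree(y);
        return 0;
    }

Production codes apply this same pattern at far larger scale while also managing data movement and keeping the code base portable, which is where the program's profiling assistance, hack-a-thons, and vendor engagement come in.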
The accepted teams are being paired with resources at NERSC, Cray, and NVIDIA, including access to:

  • NERSC Application Readiness staff assistance with code profiling and optimization
  • Collaboration with and assistance from NVIDIA and Cray engineers
  • Training sessions and hack-a-thons
  • Early access to GPU nodes on Cori
  • Early access to Perlmutter
  • Opportunity for a postdoctoral researcher to be placed within the participating application team (NERSC will fund up to 17 positions)

“We’ve built up a team of application-performance experts at NERSC through the NESAP process for Cori, and we are all really excited to engage a new set of application teams around preparing and optimizing codes for Perlmutter,” said Jack Deslippe, NERSC’s application performance group lead. “As the HPC community transitions to exascale-like, energy-efficient architectures, the goal of NESAP is to make sure that our user community is poised to make the most of the opportunities that come with new systems like Perlmutter.”

Brandon Cook, an application performance specialist at NERSC, described some of the opportunities and challenges in optimizing applications for Perlmutter. “Perlmutter is a really exciting system that offers the opportunity to accelerate scientific discovery,” he said. “However, taking advantage of all the new features Perlmutter has to offer while maintaining a productive and portable code base is a big challenge for the scientific community. At NERSC, we’re building a strategy to help users move their codes forward portably and productively to make the most out of Perlmutter and to position them for the coming exascale systems and beyond.”

Below are the NESAP teams and the applications they will focus on. Tier 1 teams will have access to the full list of resources described above, while Tier 2 teams will have access to the listed resources with the exception of eligibility for a postdoctoral researcher and a commitment of NERSC application readiness staff time.

Tier 1

PI | Institution | Project Name | Project Category
Dirk Hufnagel (Jim Kowalkowski) | FNAL/CMS | CMS Codes | Data
Doga Gursoy | ANL | TomoPy | Data
Julian Borrill | LBNL/CMB-S4 | CMB S4/TOAST | Data
Kjiersten Fagnan | JGI | JGI-NERSC-KBase FICUS Project | Data
Maria Elena Monzani | SLAC | NextGen Software Libraries for LZ | Data
Paolo Calafiura | LBNL/ATLAS | ATLAS Codes | Data
Perazzo | SLAC | ExaFEL | Data
Stephen Bailey | LBNL/DESI | DESI Spectroscopic Pipeline Codes | Data
Yelick | LBNL | ExaBiome | Data
Benjamin Nachman and Jean-Roch Vlimant | LBNL; Caltech | Accelerating High Energy Physics Simulation with Machine Learning | Learning
Christine Sweeney | LANL | ExaLearn Light Source Application | Learning
Kris Bouchard | LBNL | Union of Intersections | Learning
Marc Day | LBNL | FlowGAN | Learning
Shinjae Yoo | BNL; Columbia | Extreme Scale Spatio-Temporal Learning | Learning
Zachary Ulissi | CMU | Deep Learning Thermochemistry for Catalyst Composition Discovery/Optimization | Learning
Annabella Selloni, Robert DiStasio and Roberto Car | Princeton; Cornell | Quantum ESPRESSO | Simulation
Art Voter | LANL | EXAALT (LAMMPS) | Simulation
Bhattacharjee | PPPL | XGC1, GENE | Simulation
Carleton DeTar, Balint Joo | Utah; JLAB | USQCD | Simulation
David Green | ORNL | ASGarD (Adaptive Sparse Grid Discretization) | Simulation
David Trebotich | LBNL | Chombo-Crunch | Simulation
Emad Tajkhorshid | UIUC | NAMD | Simulation
Hubertus van Dam | BNL | NWChemEx | Simulation
Josh Meyers | LLNL | ImSim | Simulation
Marco Govoni | ANL | WEST | Simulation
Mauro Del Ben | LBNL | BerkeleyGW | Simulation
Noel Keen | SNL | E3SM | Simulation
Pieter Maris | Iowa State | MFDN | Simulation
Vay, Almgren | LBNL | WarpX, AMReX | Simulation

Tier 2

PI | Institution | Project Name | Project Category
Andrew J. Norman | FNAL | Neutrino Science with NOvA and DUNE | Data
Stefano Marchesini | LBNL | Exascale Computational Imaging for Next Generation X-ray and Electron Sciences | Data
Harinarayan Krishnan | LBNL | Streaming X-ray Photon Correlation Spectroscopy for Next Generation Light Sources | Data
Chuck Yoon | SLAC/Stanford University | Real-time Unsupervised Learning at Scale | Learning
Daniel Jacobson | ORNL | CoMet | Learning
Frank S. Tsung | UCLA | KRR-PIC: A data-centric approach for the simulation of fast electron transport in IFE plasmas | Learning
Hector Garcia Martin | LBNL | Protein design through variational autoencoders | Learning
Paolo Calafiura | LBNL | HEP.TrkX | Learning
Ravi Prasher | LBNL, UC Berkeley | Generation of Optical Metamaterial Designs using Generative Adversarial Networks (metaGAN) | Learning
William Tang | Princeton | Fusion Recurrent Neural Networks (FRNN) Code | Learning
Choong-Seock Chang | PPPL | XGC1 | Simulation
Christopher J. Mundy | PNNL | CP2K | Simulation
Colin Ophus | LBNL | Very large scale image simulation for scanning transmission electron microscopy | Simulation
Eddie Baron | University of Oklahoma | PHOENIX/3D | Simulation
Francois Gygi | UC Davis | Qbox | Simulation
Haixuan Xu | UT Knoxville | High Dimensional Energy Landscape for Complex Systems | Simulation
Zhenyu (Henry) Huang | PNNL; ANL; NREL | ExaSGD | Simulation
James Elliott | SNL | MiniEM | Simulation
James R. Chelikowsky | UT Austin | PARSEC | Simulation
John Dennis | NCAR | CESM | Simulation
Mark S. Gordon | Ames Laboratory | GAMESS | Simulation
Martijn Marsman | University of Vienna | VASP | Simulation
Noel Keen | LBNL | WRF (Weather Research and Forecasting model) | Simulation
Salman Habib | ANL | HACC | Simulation
Stephen Jardin | PPPL | M3D-C1 | Simulation
Vikram Gavini | University of Michigan | Large-scale electronic structure studies on extended defects | Simulation
Weiming An, Warren Mori, Viktor K. Decyk | UCLA | QuickPIC: A unique tool for plasma based linear collider designs and real time steering of FACET II experiments | Simulation
Weixing Wang | PPPL | GTS (Gyrokinetic Tokamak Simulation code) | Simulation
William Detmold | MIT | qua | Simulation


About NERSC and Berkeley Lab
The National Energy Research Scientific Computing Center (NERSC) is a U.S. Department of Energy Office of Science User Facility that serves as the primary high performance computing center for scientific research sponsored by the Office of Science. Located at Lawrence Berkeley National Laboratory, NERSC serves almost 10,000 scientists at national laboratories and universities researching a wide range of problems in climate, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines. Berkeley Lab is a DOE national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the U.S. Department of Energy. Learn more about computing sciences at Berkeley Lab.