NERSC: Powering Scientific Discovery Since 1974

Early Career Achievement Award Seminar Series

Overview

NERSC is hosting an online seminar series featuring talks from, and discussions with, the recipients of the NERSC Achievement Awards for early career scientists. The speakers will give a description of their research and significant results, describe their computational methods and/or strategies, and relate notable HPC challenges or successes at NERSC. They will also share their thoughts on what it’s like to be an early career computational scientist in today's environment.

The talks are open to anyone; see "Connection Information" below.

Schedule

| Date | Presenter | Title | Time (Pacific) |
|---|---|---|---|
| September 22 | Antonio Sierra Villarreal, Argonne National Laboratory | LSST DESC Second Data Challenge (DC2) Image Simulation Campaign with Parallel Python Workflows | 12:00 |
| September 29 | David Vartanyan, University of California, Berkeley | Revival of the Fittest: Exploding Core-Collapse Supernova | 12:00 |
| October 6 | Miha Muskinja, Lawrence Berkeley National Laboratory | Raythena: A Massively Parallel Data Processing Framework for the ATLAS Geant4 Simulation | 12:00 |
| October 13 | Grant Johnson, Princeton University & Lawrence Livermore National Laboratory | The Inverted Plasma Sheath, from Formation to Applications | 12:00 |
| Postponed to November 17 | | | |
| October 27 | Hsin-Yu Ko, Cornell University | Towards an Accurate and Efficient Order-N HPC Framework for Large-Scale Condensed-Phase Hybrid Density Functional Theory | 12:00 |
| November 3 | Abigail Polin, Caltech & Carnegie Observatories | Modeling Sub-Chandrasekhar Mass White Dwarf Explosions as Type Ia Supernovae | 12:00 |
| November 10 | Samuel Kachuck, University of Michigan | Implementing novel physics in ice sheet models for improved sea-level projections | 12:00 |
| November 17 | Quentin Riffard, Lawrence Berkeley National Laboratory | Direct dark matter searches with the LZ experiment | |

Connection Information

  • Berkeley Lab employees and affiliates: the Zoom info is on the "NERSC Public Events" calendar.
  • NERSC users: see your NERSC weekly email or this page.
  • General public: please register.

Abstracts


LSST DESC Second Data Challenge (DC2) Image Simulation Campaign with Parallel Python Workflows

Antonio Villarreal, Argonne National Laboratory

September 22, 2021
12:00-1:00 Pacific Time

The Vera Rubin Observatory LSST will provide the astrophysics community with an unprecedented amount of survey data with which to constrain the evolution of the universe through time. To leverage this dataset, we will ultimately require extensive simulations to validate scientific pipelines before the survey sees first light. The LSST Dark Energy Science Collaboration (DESC) Second Data Challenge (DC2) represents the largest simulated sky survey of its complexity. Generating such a simulation required managing a complicated and rapidly changing workflow across multiple compute resources. We demonstrate how we use containerization and the Parsl parallel scripting library to create a portable and scalable workflow that meets the challenges of this computational task. With this workflow we generated a simulated survey volume covering 300 square degrees and five years of image depth, using 100M hours of compute and up to 2,000 Cori KNL nodes at a time. We discuss possible improvements to the workflow for future survey simulation, both from the standpoint of utilizing the increasingly common workflow nodes at high-performance computing (HPC) centers and of how the underlying image simulation code could be altered to benefit more from computing at these scales.
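
The core pattern here, fanning out many independent image-simulation tasks and gathering the results, can be sketched with the standard library. This is a hedged analogue, not the DC2 workflow itself: Parsl's real API uses `@python_app` decorators and executor configurations, and the task body and names below are placeholders.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed


def simulate_sensor(visit_id: int, sensor_id: int) -> dict:
    """Placeholder for one expensive image-simulation task
    (one sensor image for one telescope visit)."""
    return {"visit": visit_id, "sensor": sensor_id, "status": "done"}


def run_campaign(n_visits: int, n_sensors: int) -> list:
    """Fan out every (visit, sensor) task to a worker pool and
    gather the results as they complete."""
    results = []
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [
            pool.submit(simulate_sensor, v, s)
            for v in range(n_visits)
            for s in range(n_sensors)
        ]
        for future in as_completed(futures):
            results.append(future.result())
    return results
```

In the real campaign the pool is not local threads but thousands of KNL cores reached through Parsl executors, and the tasks are containerized image-simulation runs; the fan-out/gather structure is the same.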


Revival of the Fittest: Exploding Core-Collapse Supernova

David Vartanyan, University of California, Berkeley

September 29, 2021
12:00-1:00 Pacific Time

The explosion mechanism of core-collapse supernovae – the vibrant neutrino-driven explosions of massive stars – has remained an unsolved problem for over half a century, since the earliest simulations by Stirling Colgate in the 1960s. Recent improvements in high-performance computing and neutrino physics have enabled a new generation of multi-dimensional simulations of core-collapse supernovae that produce robust explosions. We present results from the largest multidimensional suite of simulations to date, probing the global behavior of stellar explosions and their dependence on massive-star progenitors. We discuss the joint detectability of correlated neutrinos and gravitational waves from such events, which will illustrate the dynamics of the remnant neutron star, the morphology of the explosion, and global stellar instabilities. These results galvanize synergistic observational and theoretical forays into core-collapse supernovae as Nature's astrophysical laboratories.


Raythena: A Massively Parallel Data Processing Framework for the ATLAS Geant4 Simulation

Miha Muskinja, Lawrence Berkeley National Laboratory

October 6, 2021
12:00-1:00 Pacific Time

Raythena utilizes the Ray software (a high-performance distributed execution framework) to distribute the highly intensive ATLAS Geant4 simulation workflow across a few hundred HPC nodes. Geant4 simulation is the most computationally expensive step of the ATLAS Monte Carlo simulation chain and represents about 50% of the ATLAS computing budget. Conventionally, it is run on 'grid' sites, and each simulation campaign takes a few months to simulate the desired quantity of proton-proton collision events. Raythena is a solution for running the ATLAS Geant4 simulation efficiently on HPCs, and it could significantly reduce the duration of simulation campaigns in the future. The goal of Raythena is to process as many events as possible, as quickly as possible, with a given CPU-hour allocation on an HPC. An effective mode of operation at NERSC's Cori was found to be running 100-200 Cori KNL node jobs in the flex queue. Raythena is a central application that orchestrates the workload management across all nodes using the Ray API. On Cori KNL nodes, 132 Geant4 processes were spawned on each compute node, amounting to more than 25,000 Geant4 processes running in parallel. Raythena handles communication with the ATLAS central PanDA database, from which it retrieves the input events and feeds them to the Geant4 processes. The Raythena framework was found to scale very well up to 100 to 200 nodes on Cori KNL with virtually no delay between consecutive processed events.
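
The driver/worker shape of this design can be sketched with the standard library. This is an illustrative analogue only: the real framework uses Ray actors spread over hundreds of nodes and pulls events from PanDA, and every name and number below is an assumption.

```python
import queue
import threading


def simulate_events(event_range) -> list:
    """Stand-in for a Geant4 simulation of a range of events."""
    return [f"event-{i}-simulated" for i in event_range]


def worker(tasks: queue.Queue, results: list, lock: threading.Lock):
    """Worker loop: pull event ranges until the queue is empty."""
    while True:
        try:
            event_range = tasks.get_nowait()
        except queue.Empty:
            return
        out = simulate_events(event_range)
        with lock:
            results.extend(out)


def run_driver(n_events: int, chunk: int, n_workers: int) -> list:
    """Central driver: split events into ranges and hand them to
    workers, mirroring how a driver feeds simulation processes."""
    tasks: queue.Queue = queue.Queue()
    for start in range(0, n_events, chunk):
        tasks.put(range(start, min(start + chunk, n_events)))
    results: list = []
    lock = threading.Lock()
    threads = [
        threading.Thread(target=worker, args=(tasks, results, lock))
        for _ in range(n_workers)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

The pull-based queue is what keeps workers busy with virtually no delay between events: a worker asks for the next range the moment it finishes the previous one.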


The Inverted Plasma Sheath, from Formation to Applications

Grant Johnson, Princeton University & Lawrence Livermore National Laboratory

October 13, 2021
12:00-1:00 Pacific Time

When an electron-emitting surface is in contact with a collisional plasma, a unique plasma sheath regime, the inverse sheath, may form. The inverse sheath regime differs from the classical Debye sheath most notably in having a floating potential above the plasma potential, which leads to a restructuring of the plasma flows. The ubiquity of emitting surfaces in laboratory plasmas means there are a number of applications of the inverse sheath, such as extending hot-cathode lifetimes, cooling the local plasma, and modifying emissive-probe theory. Due to the collisional nature of the sheath and its trapped ion population, kinetic simulations are required to answer the outstanding questions about the sheath's formation and transport properties. To address these questions, we have developed 1D-1V and 2D-2V continuum kinetic codes that include collisions and allow us to explore the inverse sheath in relevant configurations. In this talk I will introduce the fundamentals of the inverse sheath, the codes we have developed to solve these problems, and the applications of the inverse sheath.
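
A continuum kinetic (1D-1V) code evolves a distribution function f(x, v) on a phase-space grid rather than tracking particles. As a minimal sketch of the free-streaming term only, here is an upwind step for df/dt + v df/dx = 0 on a periodic grid; the scheme, grid, and boundary choice are illustrative assumptions, and the collisions, fields, and emitting-wall boundaries of the actual codes are omitted.

```python
def advect_step(f, v, dt, dx):
    """One upwind finite-difference step of df/dt + v*df/dx = 0
    on a periodic grid: the streaming term of a continuum
    kinetic (Vlasov-type) solver for a single velocity v."""
    n = len(f)
    c = v * dt / dx  # CFL number; |c| < 1 for stability
    if v >= 0:
        return [f[i] - c * (f[i] - f[i - 1]) for i in range(n)]
    return [f[i] - c * (f[(i + 1) % n] - f[i]) for i in range(n)]


def step_phase_space(f, velocities, dt, dx):
    """Advance every velocity slice of the (x, v) phase-space
    grid; f[k] holds the spatial profile for velocities[k]."""
    return [advect_step(f_v, v, dt, dx) for f_v, v in zip(f, velocities)]
```

With periodic boundaries this step conserves the total particle number exactly; sheath simulations replace the periodic boundary with absorbing and emitting wall conditions, which is where the inverse-sheath physics enters.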


Towards an Accurate and Efficient Order-N HPC Framework for Large-Scale Condensed-Phase Hybrid Density Functional Theory

Hsin-Yu Ko, Cornell University

October 27, 2021
12:00-1:00 Pacific Time

By including a fraction of exact exchange (EXX), hybrid functionals reduce the self-interaction error in semi-local density functional theory (DFT), thereby providing a more accurate and reliable description of the electronic structure of systems throughout chemistry, physics, and materials science. However, the high computational cost associated with hybrid DFT limits its applicability when treating large-scale and complex condensed-phase systems in many practical applications (e.g., design of fuel cells). To overcome this limitation, we have devised a highly accurate and linear-scaling (order-N) approach based on a local (e.g., MLWF) representation of the occupied space that exploits sparsity when evaluating the EXX interaction in real space. Powered by NERSC resources over the past several years, our development has evolved into a general-purpose algorithmic framework (exx) capable of efficiently using several modern HPC architectures [1]. The exx code has already enabled several large-scale hybrid DFT applications in its pilot forms, e.g., unraveling the qualitative and substantial differences between the structural diffusion of H3O+ and OH- in aqueous solutions. Recently, we further extended exx into a black-box solver by eliminating its system-dependent parameters. With this new extension, exx brings us one step closer to the routine use of more reliable hybrid DFT for studying large-scale condensed-phase systems relevant to energy sciences and beyond.
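
The combinatorial heart of the order-N argument can be shown with a toy sketch (the centers, cutoff, and pair criterion below are illustrative, not the exx algorithm): when orbitals are localized in real space, each one overlaps only a bounded number of neighbors, so the list of exchange pairs that must actually be evaluated grows linearly with system size instead of quadratically.

```python
def exchange_pairs(centers, cutoff):
    """Return the orbital pairs whose localization centers lie
    within the interaction cutoff: the only pairs a local EXX
    evaluation must touch. Centers are 1D positions here for
    simplicity."""
    pairs = []
    n = len(centers)
    for i in range(n):
        for j in range(i + 1, n):
            if abs(centers[i] - centers[j]) <= cutoff:
                pairs.append((i, j))
    return pairs
```

For evenly spaced localized orbitals, doubling the system size roughly doubles the pair count (O(N)), whereas a delocalized (e.g., plane-wave) representation would couple all N(N-1)/2 pairs (O(N^2)); this is the sparsity that the real-space EXX evaluation exploits.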

[1] J Chem Theory Comput 16, 3757 (2020).


Modeling Sub-Chandrasekhar Mass White Dwarf Explosions as Type Ia Supernovae

Abigail Polin, Caltech & Carnegie Observatories

November 3, 2021
12:00-1:00 Pacific Time

Type Ia supernovae (SNe) are some of the most common cosmic transients, yet their progenitors are still not known. I will discuss a specific pathway to these explosions, known as the double detonation scenario, where a White Dwarf (WD) is able to explode below the Chandrasekhar mass limit through the aid of an accreted helium shell. An ignition of this helium can send a shock wave into the center of the WD which, upon convergence, can ignite the core causing a thermonuclear runaway resulting in a Type Ia-like explosion. I will describe the hydrodynamic techniques we use to simulate these explosions at NERSC as well as the radiation transport methods we use to translate the hydrodynamical output into synthetic light curves and spectra. Using these methods, we have calculated some distinct observational signatures that should be exhibited by double detonation explosions which have aided in our discovery of these events in nature.


Implementing novel physics in ice sheet models for improved sea-level projections

Samuel Kachuck, University of Michigan

November 10, 2021
12:00-1:00 Pacific Time

The Antarctic Ice Sheet is losing mass at an accelerating pace, mass which is entering the ocean and increasing sea levels across the globe. About half of this mass loss is due to ice flowing out over the ocean, where contact with the warm water melts it directly. The other half of this mass is lost as broken ice - icebergs that calve into the ocean, float away, and then melt. Though both of these processes have been identified as possible sources of dynamic instabilities in ice sheet evolution - the Marine Ice Sheet and Marine Ice Cliff Instabilities - most large-scale numerical models have yet to reliably reproduce changes of observed calving-front positions, resulting in significant uncertainties in projections of sea level into the next century. Furthermore, when ice enters the ocean, the Earth's viscoelastic mantle responds to the redistribution of mass, affecting the drivers of ice flow and the progression of these instabilities. I will discuss the challenges associated with incorporating solid-earth feedback and representations of mechanical failure of ice into numerical models of continental ice sheets, some solutions to them, and the effect these processes have on projections of sea levels.


Direct dark matter searches with the LZ experiment

Quentin Riffard, Lawrence Berkeley National Laboratory

November 17, 2021
12:00-1:00 Pacific Time

Many astrophysical observations support the existence of a dark matter component in our universe. However, after decades of active research, the nature of dark matter remains elusive. In this context, the LZ experiment aims to detect dark matter using a multi-tonne detector filled with liquid xenon and located at the SURF underground laboratory. In such a detector we expect a handful of signal events over a significant background during the five years of operation, and observing a deviation from the background model would signal a dark matter detection. The LZ collaboration is therefore engaged in a major effort to develop robust signal and background models using Monte Carlo simulation. Simulating such a detector requires all the computing available, so we developed a framework to take advantage of both the HTC (PDSF) and HPC (Edison and Cori) resources at NERSC for the second and third mock data challenges. A centralized job submission system, containerization technologies, and the CVMFS software distribution service made this possible.
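
The statistical logic behind "a handful of signal events over a significant background" can be illustrated with a toy Poisson counting experiment (a stand-in only; real LZ analyses use far richer simulated signal and background models, and the numbers below are illustrative):

```python
import math


def poisson_pmf(k: int, mean: float) -> float:
    """Probability of observing exactly k events when the
    expected count is `mean`."""
    return math.exp(-mean) * mean ** k / math.factorial(k)


def excess_p_value(n_observed: int, background: float) -> float:
    """P(N >= n_observed) under a background-only Poisson model:
    how unlikely the observed count is if there is no signal.
    A very small value would hint at events beyond background."""
    p_below = sum(poisson_pmf(k, background) for k in range(n_observed))
    return 1.0 - p_below
```

This is why the background model matters so much: the significance of any observed excess is computed against the simulated background expectation, so the Monte Carlo campaigns that produce that expectation are central to the search.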