NESAP for Doudna

NESAP for Doudna teams are selected to explore how new HPC technologies, or novel combinations of existing ones, can accelerate scientific workflows. Chosen through a competitive proposal process, these teams engage in a three-year partnership with NERSC staff and postdocs to optimize their workflows specifically for Doudna.
NESAP for Doudna teams are eligible for a NERSC project allocation of 2,000 GPU node hours in 2025 and 10,000 GPU node hours in 2026. In addition, NESAP teams receive priority for attending NERSC-led hackathons and other engagement events and are given early access to the Doudna system once it is commissioned.
NESAP strategic partners
Project | Science area | PI | Summary |
---|---|---|---|
HPC4EIC | Nuclear (theory) | Felix Ringer, Stony Brook University | Developing and deploying ML and generative models for physics analyses at the Electron-Ion Collider (EIC), focusing on scalability, I/O performance, and managing complex software stacks. |
USQCD | HEP (theory), Nuclear (theory) | Peter Boyle, Brookhaven National Laboratory | Optimizing lattice QCD Monte Carlo evaluations on Doudna through improved communication bandwidth, mixed-precision methods, and accelerated solution of the Dirac equation. |
Schrödinger’s Devs | Quantum | Patrick Diehl, Los Alamos National Laboratory | Developing and evaluating a distributed quantum simulator leveraging CUDA-Q on Doudna for scalable execution of quantum circuits, and exploring performance portability with iTensors. |
SciGPT | AI/Applied-Math | Dmitriy Morozov, Lawrence Berkeley National Laboratory | Training a large autoregressive transformer model (SciGPT) on high-resolution spatio-temporal datasets, including online extreme-scale training with HPC simulations for a foundation model. |
Reactantanigans | Computer Science | William S. Moses, University of Illinois | Enhancing Oceananigans for simulating oceanic fluid dynamics by leveraging MLIR compiler technology for performance, automatic differentiation, and optimal floating-point precision. |
ML4S2D: Machine Learning for Seasonal to Decadal Predictability (E3SM) | Earth Science | Mark Taylor, Sandia National Laboratories | Optimizing the E3SM project's workflow for seasonal to decadal Earth system prediction, specifically accelerating physics-based simulations and ML training of model emulators. |
MLCP | Materials/Chemistry | Tucker Carrington, Queen’s University | Developing simulation methods for molecular spectroscopy and dynamics by solving the vibrational Schrödinger equation, focused on load balancing and solver performance. |
Deep Underground Neutrino Experiment (DUNE) | High-energy physics (experiment) | Matt Kramer, Lawrence Berkeley National Laboratory | Optimizing computational workflows, including rapid supernova prompt processing, Far Detector simulation/reconstruction with inference-as-a-service, and CPU/GPU co-scheduling. |
HACC/OpenCosmo | Cosmology | Salman Habib, Argonne National Laboratory | Optimizing CRK-HACC, an extreme-scale cosmological hydrodynamics code, by investigating mixed-precision techniques and integrating AI methods to achieve a 10x throughput gain. |
Beam, pLasma & Accelerator Simulation Toolkit (BLAST): WarpX/ImpactX/HiPACE++ | High-energy physics, Fusion | Axel Huebl, Lawrence Berkeley National Laboratory | |
NIFTEA: NERSC Integrated Fusion Tokamak Edge Analysis | Fusion | Robert Hager, Princeton Plasma Physics Laboratory | Simulating edge physics in tokamak fusion devices by coupling and optimizing XGC, M3D-C1, and DEGAS2 (with OpenMC), incorporating AI surrogates and automating mesh generation. |
DFDM | Geo Sciences | Barbara Romanowicz, UC Berkeley | Implementing and optimizing a novel large-scale solver for elastic wave propagation in the Earth, the Distributional Finite Difference Method (DFDM), for applications in geophysics. |
DIII-D CSS | Fusion | Sterling Smith, General Atomics | Porting and optimizing time-sensitive computational workflows like CAKE and IONORB to Doudna, with new features supporting real-time experimental feedback. |
HMMER-GPU | Biology | Kjiersten Fagnan, Lawrence Berkeley National Laboratory & DOE Joint Genome Institute | Accelerating the HMMER software suite, a critical bottleneck in JGI's genome annotation workflows, by porting and optimizing its performance on GPUs. |
SeparationML | Materials/Chemistry | Ping Yang, Los Alamos National Laboratory | Identifying new f-element selective molecules through hypothesis-driven generative AI and high-throughput multi-fidelity simulations, integrating LLMs and HPC tools. |
High-Throughput Design of Materials/Chemistry with Tailored Thermal Properties | Materials/Chemistry | Anubhav Jain, Lawrence Berkeley National Laboratory | Scaling up high-throughput phonon calculations for materials design to 10-100 GPU nodes and overcoming memory and I/O bottlenecks for complex materials on the Doudna system. |
RCSB Protein Data Bank | Biology | Jeremy Henry, Rutgers University & UC San Diego | Scaling up an ETL workflow for protein sequence, structure, and annotation data; integrating AI/ML for structural and text-based search; and transitioning to a Kubernetes-native environment on NERSC. |
NCEM | Materials/Chemistry | Peter Ercius, Lawrence Berkeley National Laboratory | Enhancing a cross-facility HPC-driven ecosystem for real-time electron microscopy feedback, building on previous work in streaming data, enabling more computationally intensive analysis. |
Materials Intelligence Research | Materials/Chemistry | Chuck Witt, Harvard University | Developing and applying methods for ML-accelerated materials science, optimizing iterative workflows that combine electronic structure calculations and ML interatomic potentials. |
Representation Learning | Biology | Petrus Zwart, Lawrence Berkeley National Laboratory | Scaling workflows for learning task-specific representations from diverse scientific imaging datasets, leveraging foundation models and addressing challenges across various scales. |
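Several teams above (USQCD, HACC/OpenCosmo) plan to exploit mixed-precision methods to raise throughput. As context for that theme, below is a minimal sketch of mixed-precision iterative refinement, a standard technique of this kind: the expensive solve runs in float32 while residual corrections are accumulated in float64. This is a generic illustration, not code from any of the projects listed.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    """Solve Ax = b with float32 inner solves and float64 residual refinement.

    Generic sketch of mixed-precision iterative refinement; not drawn from
    the USQCD or CRK-HACC code bases.
    """
    A32 = A.astype(np.float32)
    # Initial solve entirely in low precision.
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x  # residual computed in full (float64) precision
        # Correction solved cheaply in float32, then applied in float64.
        dx = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += dx
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100)) + 100 * np.eye(100)  # well-conditioned
b = rng.standard_normal(100)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b))  # residual approaches float64 accuracy
```

For well-conditioned systems, a handful of refinement steps recovers near-double-precision accuracy while the dominant cost (the factorization/solve) stays in the faster, lower-precision path, which is what makes the approach attractive on GPU hardware.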