
ASCR Leadership Computing Challenge (ALCC) Projects at NERSC

Overview

The mission of the ASCR Leadership Computing Challenge (ALCC) is to provide an allocation program for projects of interest to the Department of Energy (DOE) with an emphasis on high-risk, high-payoff simulations in areas directly related to the DOE mission and for broadening the community of researchers capable of using leadership computing resources.

Open to scientists from the research community in industry, academia, and national laboratories, the ALCC program allocates time on computational resources at ASCR’s supercomputing facilities. ASCR supercomputing facilities include NERSC at Lawrence Berkeley National Laboratory and the Leadership Computing Facilities at Argonne and Oak Ridge National Laboratories. These resources represent some of the world’s fastest and most powerful supercomputers. 

Allocations of computer time are awarded through a competitive process by the DOE Office of Advanced Scientific Computing Research. For more information about the program and instructions on how to apply, see the DOE ALCC page.

Computer Usage Charging 

When a job runs on NERSC's supercomputer, Cori, charges accrue against one of the user's repository allocations. The unit of accounting for these charges is the "NERSC Hour" (equivalent to the previous NERSC "MPP Hour").

A parallel job is charged for exclusive use of each multi-core node allocated to the job.  The base charge for each node used by the job is given in the following table.

| System | Node Architecture      | Base Charge per Node Hour (NERSC Hours) | System Size (Nodes) | Cores per Node |
|--------|------------------------|-----------------------------------------|---------------------|----------------|
| Cori   | Intel Xeon Phi (KNL)   | 80                                      | 9,304               | 68             |
| Cori   | Intel Xeon (Haswell)   | 140                                     | 2,004               | 32             |

 

Examples:

  • A parallel job that uses 10 Cori Haswell nodes for 5 hours accrues 140 x 10 x 5 = 7,000 NERSC Hours.
  • A similar job that runs on 10 Cori KNL nodes for 5 hours accrues 80 x 10 x 5 = 4,000 NERSC Hours.

NERSC Hours are designed to allow a common currency to be used across different NERSC systems and architectures. Historically, 1 NERSC hour is approximately equivalent to the charge for 1 core hour on the retired Hopper system (1 AMD "Magny-Cours" processor core hour).

The base charge can be modified by a number of factors. Please see How Usage is Charged for details.
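The base charge calculation above (base rate x nodes x wall-clock hours) can be sketched in a few lines of Python. This is an illustrative sketch only: the `nersc_hours` function name and the `BASE_CHARGE` table are made up for this example (the rates simply mirror the table above), and it omits the modifying factors described in How Usage is Charged.

```python
# Base charge per node-hour, in NERSC Hours, taken from the table above.
BASE_CHARGE = {
    "knl": 80,       # Cori Intel Xeon Phi (KNL)
    "haswell": 140,  # Cori Intel Xeon (Haswell)
}

def nersc_hours(arch: str, nodes: int, hours: float) -> float:
    """Charge for exclusive use of `nodes` nodes for `hours` wall-clock hours.

    Hypothetical helper for illustration; ignores charge-factor adjustments.
    """
    return BASE_CHARGE[arch] * nodes * hours

# The two examples above:
print(nersc_hours("haswell", 10, 5))  # 7000
print(nersc_hours("knl", 10, 5))      # 4000
```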

 

Allocation Period

The ALCC year runs from July through June. This is shifted six months from the NERSC allocation cycle, so the full ALCC award for each project is allocated 50% in one NERSC allocation year and 50% in the next. Any or all of this time can be shifted from one year to the next upon request. Unused time from July through December is automatically transferred into the following year; however, ALCC-awarded time does not carry over past June 30.

NERSC ALCC Projects for 2020-2021

For the ALCC 2020-2021 campaign, ASCR received 96 proposals and, through a competitive review process, chose 60 proposals to receive allocations totaling 16.4 million node-hours. At NERSC, 16 research teams received a total of 4.45 million node-hours to use on the Cori supercomputer:

  • Douglas Benjamin (Argonne National Laboratory), Dirk Hufnagel (Fermi National Accelerator Laboratory, co-PI), Paolo Calafiura (Lawrence Berkeley National Laboratory), Eric Lancon (Brookhaven National Laboratory), Oliver Gutsche (Fermi National Accelerator Laboratory), Tulika Bose (University of Wisconsin-Madison), James Letts (University of California San Diego) received 250,000 node-hours for "High Luminosity LHC detector upgrade studies by the ATLAS and CMS collaborations".
  • Rajan Gupta (Los Alamos National Laboratory), T. Bhattacharya (Los Alamos National Laboratory), V. Cirigliano (Los Alamos National Laboratory), B. Yoon (Los Alamos National Laboratory), B. Joo (Jefferson Laboratory), H-W Lin (Michigan State University) received 670,000 node-hours for "Nucleon Matrix Elements: Probes of New Physics".
  • Andreas Kronfeld (Fermi National Accelerator Laboratory), Steven Gottlieb (Indiana University), Aaron Meyer (Brookhaven National Laboratory), James Simone (Fermi National Accelerator Laboratory) received 870,000 node-hours for "Nucleon Axial Charge with All-Staggered Lattice QCD".
  • Huey-Wen Lin (Michigan State University), Simonetta Liuti (University of Virginia) received 660,000 node-hours for "Unveiling the 3D Structure of Nucleons with Machine Learning and Lattice QCD".
  • Brian Wirth (University of Tennessee), David Bernholdt (Oak Ridge National Laboratory), Aidan Thompson (Sandia National Laboratories), Karl Hammond (University of Missouri), Wahyu Setyawan (Pacific Northwest National Laboratory), Ilon Joseph (Lawrence Livermore National Laboratory) received 30,000 node-hours for "Plasma Surface Interaction Modeling".
  • David Hatch (University of Texas at Austin) received 30,000 node-hours for "Gyrokinetic Simulations of Multi-Scale Plasma Turbulence in Tokamaks".
  • Adam Stanier (Los Alamos National Laboratory), Jonathan Jara-Almonte (Princeton Plasma Physics Laboratory), William Daughton (Los Alamos National Laboratory), Ari Le (Los Alamos National Laboratory), Robert Bird (Los Alamos National Laboratory) received 30,000 node-hours for "High-Fidelity Kinetic Modeling of Magnetic Reconnection in Laboratory Plasmas".
  • Stephen Jardin (Princeton Plasma Physics Laboratory), Nate Ferraro (Princeton Plasma Physics Laboratory), and Brendan Lyons (General Atomics) received 30,000 node-hours for "Study of a Disrupting Plasma in ITER".
  • Jason TenBarge (Princeton University) received 24,000 node-hours for "Heating and Particle Energization in Quasi-Perpendicular Shocks".
  • Sean Dettrick (TAE Technologies Inc.), Francesco Ceccherini (TAE Technologies, Inc.), Calvin Lau (TAE Technologies, Inc.), Toshiki Tajima (University of California Irvine) received 26,000 node-hours for "Field-Reversed Configuration Stability and Transport".
  • Hussein Aluie (University of Rochester) received 30,000 node-hours for "Scale-Aware Modeling of Instabilities and Mixing in HED Flows".
  • Gary S. Grest (Sandia National Laboratories), Shengfeng Cheng (Virginia Polytechnic Institute and State University), Sanat Kumar (Columbia University), Dvora Perahia (Clemson University), and Michael Rubinstein (Duke University) received 150,000 node-hours for "Large Scale Numerical Simulations of Polymer Nanocomposites".
  • Andrew Roberts (Los Alamos National Laboratory), Joel Rowland (Los Alamos National Laboratory), Luke Van Roekel (Los Alamos National Laboratory), Mathew Maltrud (Los Alamos National Laboratory), Nicole Jeffery (Los Alamos National Laboratory), Ethan Coon (Oak Ridge National Laboratory) received 650,000 node-hours for "Earth System Simulations for Arctic Coastal Research".
  • Darin Comeau (Los Alamos National Laboratory), Xylar Asay-Davis (Los Alamos National Laboratory), Matthew Hoffman (Los Alamos National Laboratory), Wuyin Lin (Brookhaven National Laboratory), Mark Petersen (Los Alamos National Laboratory), Stephen Price (Los Alamos National Laboratory), and Andrew Roberts (Los Alamos National Laboratory) received 500,000 node-hours for "Variable Resolution Earth System Modeling of the Cryosphere with E3SM".
  • Po-Lun Ma (Pacific Northwest National Laboratory), Colleen Kaul (Pacific Northwest National Laboratory), Kyle Pressel (Pacific Northwest National Laboratory) received 300,000 node-hours for "Large eddy and convection-permitting simulations of aerosol-cloud interactions".
  • Julie McClean (Scripps Institution of Oceanography, UC San Diego), Sarah T. Gille (Scripps Institution of Oceanography, UC San Diego), Mathew E. Maltrud (Los Alamos National Laboratory), Detelina P. Ivanova (Scripps Institution of Oceanography, UC San Diego) received 200,000 node-hours for "Influence of Antarctic and Greenland continental shelf circulation on high-latitude oceans in E3SM".