
ASCR Leadership Computing Challenge (ALCC) Projects at NERSC

Overview

The mission of the ASCR Leadership Computing Challenge (ALCC) is to provide an allocation program for projects of interest to the Department of Energy (DOE), with an emphasis on high-risk, high-payoff simulations in areas directly related to the DOE mission, and to broaden the community of researchers capable of using leadership computing resources.

Open to scientists from the research community in industry, academia, and national laboratories, the ALCC program allocates time on computational resources at ASCR's supercomputing facilities: NERSC at Lawrence Berkeley National Laboratory and the Leadership Computing Facilities at Argonne and Oak Ridge National Laboratories. These resources represent some of the world's fastest and most powerful supercomputers.

Allocations of computer time are awarded through a competitive process by the DOE Office of Advanced Scientific Computing Research. For more information about the program and instructions on how to apply, see the DOE ALCC page.

Computer Usage Charging 

When a job runs on one of NERSC's supercomputers, Edison or Cori, charges accrue against one of the user's repository allocations. The unit of accounting for these charges is the "NERSC Hour," which is equivalent to the previous NERSC "MPP Hour."

A parallel job is charged for exclusive use of each multi-core node allocated to the job.  The base charge for each node used by the job is given in the following table.

System | Node Architecture        | Base Charge per Node Hour (NERSC Hours) | System Size (Nodes) | Cores per Node
Cori   | Intel Xeon Phi (KNL)     | 96*                                     | 9,304               | 68
Cori   | Intel Xeon (Haswell)     | 80                                      | 2,004               | 32
Edison | Intel Xeon (Ivy Bridge)  | 48                                      | 5,576               | 24

*Tentative value. Production computing value still to be determined. 

Example: A parallel job that uses 10 Cori Haswell nodes that runs for 5 hours accrues 80 x 10 x 5 = 4,000 NERSC Hours.
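
This arithmetic is simple enough to express as a short sketch. The code below is purely illustrative (the dictionary keys and function name are not NERSC tools); the base charges come from the table above, the KNL value is tentative per the footnote, and the modifying factors discussed below are omitted.

    # Minimal sketch of the base NERSC Hour charge described above.
    # Base charges per node-hour come from the table; the KNL value is
    # tentative. Modifying factors (see "How Usage is Charged") are omitted.
    BASE_CHARGE = {
        "cori-knl": 96,      # Cori Intel Xeon Phi (KNL), tentative
        "cori-haswell": 80,  # Cori Intel Xeon (Haswell)
        "edison": 48,        # Edison Intel Xeon (Ivy Bridge)
    }

    def nersc_hours(system: str, nodes: int, hours: float) -> float:
        """Base charge for exclusive use of `nodes` nodes for `hours` wall-clock hours."""
        return BASE_CHARGE[system] * nodes * hours

    # The worked example above: 10 Cori Haswell nodes running for 5 hours.
    assert nersc_hours("cori-haswell", 10, 5) == 4_000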

NERSC Hours are designed to provide a common currency across NERSC's different systems and architectures. Historically, 1 NERSC Hour is approximately equivalent to the charge for 1 core-hour on the retired Hopper system (one AMD "Magny-Cours" processor core-hour).

The base charge can be modified by a number of factors. Please see How Usage is Charged for details.

Hours Available to ALCC Projects

For ALCC 2016-17, 900 million NERSC Hours of computing time are available on NERSC's Edison and Cori supercomputers.

Most of the time available at NERSC for ALCC is provided by Cori's Xeon Phi (KNL) nodes. Of the 900 million available hours, up to 300 million may be awarded by the ALCC program for use on the Edison/Cori Intel Xeon processors.

When you apply for ALCC time at NERSC, you will be asked to choose one or both of the following:

  • Cori (Cray XC40 Intel Xeon Phi KNL nodes)
  • Cori/Edison (Cray XC40/30 Intel Xeon nodes, aka "traditional x86" nodes)

When hours are awarded by the ALCC program, they will be in units of "NERSC Hours" for each type of node separately. However, NERSC Hours are fungible between both kinds of nodes, which means that awarded projects can run on either. NERSC will monitor usage to ensure that actual usage on the Xeon nodes (traditional x86 nodes) is not excessive relative to the awarded hours. 
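
To make the fungibility concrete, here is an illustrative sketch (the dictionary keys and function name are hypothetical; the base charges are from the table above) of converting an award in NERSC Hours into node-hours on each architecture:

    # Illustrative conversion of an award in NERSC Hours into node-hours.
    # Because NERSC Hours are fungible between node types, the same award
    # buys a different number of node-hours on each architecture.
    BASE_CHARGE = {"cori-knl": 96, "cori-haswell": 80, "edison": 48}

    def node_hours(award: float, system: str) -> float:
        """Node-hours an award in NERSC Hours buys on a given system."""
        return award / BASE_CHARGE[system]

    # A 60,000,000 NERSC-Hour award corresponds to:
    print(node_hours(60_000_000, "cori-knl"))      # 625000.0 KNL node-hours
    print(node_hours(60_000_000, "cori-haswell"))  # 750000.0 Haswell node-hours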

Allocation Period

The ALCC year runs from July to June. This is shifted six months from the NERSC allocation cycle, so the full ALCC award for each project is allocated 50% in one NERSC allocation year and 50% in the next. Any or all of this time can be shifted from one year to the next upon request. Unused time from July through December is automatically transferred into the following year; however, ALCC-awarded time does not carry over past June 30.
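
As a purely illustrative sketch of the default calendar split (the function name is hypothetical, and shifting requests are ignored):

    # Illustrative 50/50 split of an ALCC award across NERSC allocation
    # years. The ALCC year (July-June) straddles two NERSC allocation
    # years (January-December), so the award is allocated half in each.
    def split_award(total: float) -> tuple[float, float]:
        """Return (July-December allocation, January-June allocation)."""
        return total / 2, total / 2

    # Unused July-December time transfers automatically into the next
    # NERSC allocation year, but no ALCC time carries past June 30.
    first_half, second_half = split_award(13_000_000)
    print(first_half, second_half)  # 6500000.0 6500000.0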

NERSC ALCC Projects for 2016-2017

The ALCC allocation year for 2016-2017 runs from July 1, 2016 to June 30, 2017. Time allocated to ALCC projects at NERSC expires on July 1, 2017.

Project | PI | 2016-17 ALCC Award (NERSC Hours)
An End-Station for Intensity and Energy Frontier Experiments and Calculations | Taylor Childers, Argonne National Laboratory | 13,000,000
Multiscale Gyrokinetic Simulation of Reactor Relevant Tokamak Discharges: Understanding the Implications of Cross-Scale Turbulence Coupling in ITER and Beyond | Chris Holland, UC-San Diego | 60,000,000
Portable Application Development for Next Generation Supercomputer Architectures | Tjerk Straatsma, Oak Ridge National Laboratory | 10,000,000
Multi-scale, high-resolution integrated terrestrial and lower atmosphere simulation of the contiguous United States (CONUS) | Reed Maxwell, Colorado School of Mines | 15,000,000
Wall-Resolved Large Eddy Simulations of Transonic Shock-Induced Flow Separation | Mujeeb Malik, NASA | 66,000,000
Accurate Predictions of Properties of Energy Materials with High Throughput Hybrid Functional Calculations | Christopher Wolverton, Northwestern University | 45,000,000
Chombo-Crunch: Modeling Pore Scale Reactive Transport Processes Associated with Carbon Sequestration | David Trebotich, Lawrence Berkeley National Laboratory | 40,000,000
Computational Design of Interfaces for Photovoltaics | Noa Marom, Tulane University | 3,000,000
Predictive Simulations of Complex Flow in Wind Farms | Matthew Barone, Sandia National Laboratories | 10,700,000
Nuclear structure for tests of fundamental symmetries and astroparticle physics | Calvin Johnson, San Diego State University | 24,000,000
High Performance Computing for Manufacturing | Peter Nugent, Lawrence Berkeley National Laboratory | 10,000,000