NERSC: Powering Scientific Discovery for 50 Years

NERSC Initiative for Scientific Exploration (NISE) 2010 Awards

Overview

NISE is a mechanism for allocating the NERSC reserve (10% of the total allocation). It is a competitive allocation administered by NERSC staff and management. The criteria used in 2010 were:

  • A new research area not covered by the existing ERCAP proposal: this could be a tangential research project or a tightly coupled supplemental research initiative.
  • New programming techniques that take advantage of multicore compute nodes by using OpenMP, Threads, UPC or CAF: this could include modifying existing codes, creating new applications or testing the performance and scalability of multicore programming techniques.
  • Developing new algorithms that increase your ability to do your science (e.g., running at higher scale or incorporating new physics).

Awarded projects are listed on the next pages.

NISE 2010 Research Accomplishments (PDF)

Modeling the Energy Flow in the Ozone Recombination Reaction

Dmitri Babikov, Marquette University

Associated NERSC Project: Coherent Control of the Ground Electronic State Molecular Dynamics (m409), Principal Investigator: Dmitri Babikov

NISE Award: 260,000 Hours
Award Date: April 2010

By resolving the remaining mysteries of what determines the isotopic composition of atmospheric ozone, and by investigating unusual isotope effects in other atmospheric species, this work will make a large impact on the fields of atmospheric chemistry, chemical physics, climate science, and geochemistry. Explaining the anomalous isotope effects in O3, NO2 and CO2 will significantly improve our understanding of their production, chemistry, lifetime, and loss in the atmosphere. That knowledge will help to identify and remove pollution sources as well as monitor the ozone hole, potentially enhancing the security of all life on the planet. It will allow the isotopic composition of oxygen to be used as a reliable probe of its source and history, and it will provide information for studying atmospheric chemistry, global climate change, the atmospheres of other planets, and the history of the solar system.

To do this, we will develop a massively parallel code that should allow us to efficiently model the flow of ro-vibrational energy in atom-molecule collisions typical of many recombination reactions. Such chemical processes proceed through the formation of an intermediate long-lived metastable state (a scattering resonance).

We are developing an efficient theoretical framework that should make this problem computationally treatable, with an emphasis on massive scaling. Our approach is to retain quantum mechanics for the description of the vibrational motion in O3* (using a 3D wavepacket formalism), but to treat the overall rotation of O3* and the M + O3* collisional motion with classical trajectories. In such a mixed quantum-classical approach, the quantum physics of the process (zero-point energy, scattering resonances, and symmetry rules for state-to-state transitions) is captured by the vibrational wavepacket, while the classical-trajectory part of the system allows initial conditions to be sampled efficiently by running a set of independent calculations on different processors. A typical calculation requires between 10,000 and 100,000 classical trajectories, which can be propagated on different processors, so the approach can easily employ thousands of processors simultaneously.
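Because each trajectory is independent, the decomposition described above is embarrassingly parallel. The following minimal sketch illustrates that structure only; the `propagate` function is a mock stand-in (a seeded random draw) for the real mixed quantum-classical propagation, which is of course far more involved:

```python
import random

def propagate(seed):
    """Mock stand-in for one mixed quantum-classical trajectory.

    The real calculation couples a 3D vibrational wavepacket for O3*
    to classical trajectories for rotation and the M + O3* collision.
    """
    rng = random.Random(seed)
    return rng.random() < 0.1  # mock outcome: did this trajectory stabilize?

def trajectories_for_rank(rank, n_ranks, n_traj):
    """Round-robin decomposition: trajectory i is assigned to rank i % n_ranks."""
    return range(rank, n_traj, n_ranks)

def run_ensemble(n_traj, n_ranks):
    """Propagate every trajectory, rank by rank (in the real code each
    rank is a separate processor working concurrently), and return the
    fraction of stabilized trajectories."""
    hits = 0
    for rank in range(n_ranks):
        for seed in trajectories_for_rank(rank, n_ranks, n_traj):
            hits += propagate(seed)
    return hits / n_traj
```

Since the set of trajectories is the same for any processor count, the ensemble result is identical whether one processor or thousands are used; only the wall-clock time changes.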

A fully quantum treatment of such processes is unaffordable due to the poor scalability of intrinsically global quantum mechanics, while a fully classical treatment loses all the quantum physics of the process. It has been shown, however, that quantum effects in the recombination reaction given above are responsible for a famous mystery: the anomalous isotope effect in ozone formation. This effect remains poorly understood, mainly due to the computational difficulties of treating the quantum dynamics of polyatomic systems. Our mixed quantum-classical approach should help to solve this problem.

 

Bridging the Gaps Between Fluid and Kinetic Magnetic Reconnection Simulations in Large Systems

Amitava Bhattacharjee, University of New Hampshire

Associated NERSC Project: Center for Integrated Computation and Analysis of Reconnection and Turbulence (m148), Principal Investigator: Amitava Bhattacharjee

NISE Award: 1,000,000 Hours
Award Date: April 2010

Magnetic reconnection drives the most dramatic, explosive energy-releasing processes in the solar system: solar flares and coronal mass ejections. These violent events rapidly convert huge amounts of stored magnetic energy into heat and kinetic energy. An X-class solar flare can release up to 6 × 10^25 joules of energy in less than an hour, comparable to tens of millions of atomic bombs exploding simultaneously. The same process occurs, through current disruptions on smaller scales, in laboratory devices that seek to magnetically confine plasma for nuclear fusion energy.

The main question driving reconnection research is, "How does reconnection happen so rapidly?" If magnetic energy were dissipated only by collisional plasma resistivity (the way a copper wire dissipates the energy stored in a battery), a solar flare would take years to release its energy, rather than the sub-hour time scale that is actually observed. This is because collisions between the electrons and ions in astrophysical plasmas are exceedingly rare. Simulations of magnetic reconnection generally employ one of two basic strategies. In the kinetic (or particle-in-cell, PIC) approach, one follows the individual particles in the plasma subject to self-consistent electromagnetic fields; the other approach models the plasma as a fluid, or as multiple co-existing fluids. Particle-based simulations include the most realistic dissipative (energy-releasing) physics, but they are too computationally expensive for the large systems relevant to space and astrophysical plasmas. Fluid models, by contrast, excel at modeling the gross features of very large systems, although they show significant deviations from the predictions of kinetic models in the dissipation regions.

Particle simulations have shown that the dominant effect responsible for energy release in nearly collisionless plasmas is the electron pressure tensor in the generalized Ohm's law governing weakly collisional plasmas. In recent years, interesting analytic closure approximations that represent the electron pressure tensor in terms of multi-fluid variables have been developed, but they have not been tested sufficiently in global multi-fluid codes. The principal objective of our proposed research is to implement these closure approximations in our global multi-fluid codes, and to test their validity by comparing the predictions of our global codes with results from smaller-scale kinetic simulations as well as with observations. One of the most challenging aspects of this work is that the simulations must resolve the smallest physical scales (and the associated short time scales) in large systems, necessitating large computational grids and small simulation time steps. The simulations are further complicated by the discovery, by the PI and his collaborators (as well as a few others), of the tendency of large, thin current sheets embedded in reconnection layers to break up into a copious number of magnetic islands, or plasmoids, which provide an additional mechanism for passage to fast reconnection.

This research will make it possible to simulate explosive eruptions driven by magnetic reconnection in weakly collisional astrophysical, space, and laboratory plasma systems.

 

Decadal Predictability in CCSM4

Grant Branstator and Haiyan Teng, National Center for Atmospheric Research

Associated NERSC Project: Climate Change Simulations with CCSM: Moderate and High Resolution Studies (mp9), Principal Investigator: Warren Washington, National Center for Atmospheric Research

NISE Award: 1,600,000 Hours
Award Date: April 2010

We propose to estimate the initial-value decadal predictability in CCSM4 using a 40-member ensemble run with each of two perturbed atmospheric initial condition strategies. The runs will be integrated from year 2005 to 2025 under the IPCC RCP4.5 scenario external forcing. We have done similar experiments using CCSM3 and now plan to examine whether and how much the predictability property has changed in CCSM4.

The experiments will help to determine to what extent the ocean initial states can contribute to predictive skills on the decadal time scale. Whether there is initial-value decadal predictability must be addressed before the climate modeling community carries out decadal prediction experiments using initialized ocean observations.

 

3-D Radiation/Hydrodynamic Modeling on Parallel Architectures

Adam Burrows, Princeton University

Associated NERSC Project: Computational Astrophysics Consortium (m106), Principal Investigator: Stan Woosley, University of California, Santa Cruz

NISE Award: 5,000,000 Hours
Award Date: February 2010

Radiation transport in a multi-dimensional context is a difficult problem that nevertheless finds ubiquitous application in astrophysics. The supernova explosion phenomenon is one dramatic example, wherein 3-D effects may be crucial to the outcome and a key to the mechanism of explosion. However, such simulations over the many timesteps necessary to illuminate the problem are very computationally expensive. Hence, the development and testing of efficient algorithms that scale to many processors is an important desideratum for the efficient computational exploration of the supernova phenomenon. This proposal addresses that computational issue and will help improve and test a new 3-D radiation algorithm and its implementation.

 

 

Dependence of Secondary Islands in Magnetic Reconnection on Dissipation Model

Paul Cassak, West Virginia University

Associated NERSC Project: Three Dimensional and Diamagnetic Effects on the Onset and Evolution of Magnetic Reconnection Dimensions (m866), Principal Investigator: Paul Cassak

NISE Award: 200,000 Hours
Award Date: February 2010

The physics of magnetic reconnection is a key component of determining how energy is released during solar eruptions. Magnetic reconnection is a fundamental plasma physics process that occurs in solar flares, the Earth's magnetosphere, and fusion devices, which makes understanding it important both for predicting space weather and for producing a safe source of fusion energy. Reconnection allows magnetic field lines to break, slingshot outward, and release large amounts of stored energy. Determining the rate at which magnetic reconnection releases energy is of considerable importance to understanding how energy is stored and released in such systems.

The original model of magnetic reconnection is called collisional, or Sweet-Parker, reconnection. Upon its discovery, it was realized that it is very slow. However, this prediction assumed that the current sheets that form during reconnection are structurally stable under the very extreme conditions of the solar corona. It has been known for some time that these current sheets are not structurally stable for coronal parameters and produce so-called secondary islands, but their role in the reconnection process has not been fully appreciated until recently. The question that needs to be answered is: "What is the effect of secondary islands on Sweet-Parker reconnection?"

Previous studies of secondary islands used simplified dissipation physics (a constant and uniform plasma resistivity), and a careful study of the effect of this assumption on secondary island generation has not been carried out. We propose to test the dependence of secondary island generation, and its effect on reconnection, on the resistivity model. In particular, we will use a Spitzer resistivity with and without self-consistent Ohmic heating and with and without viscosity, and compare the results of these simulations. These simulations will help us determine the extent to which what has been learned so far is applicable to real physical systems. They are challenging because the computational domain must be resolved down to very small length scales.

 

Simulation of elastic properties and deformation behavior of light-weight protection materials under high pressure

Wai-Yim Ching, University of Missouri - Kansas City

Associated NERSC Project: Electronic Structures and Properties of Complex Ceramic Crystals and Novel Materials (mp250), Principal Investigator: Wai-Yim Ching

NISE Award: 1,725,000 Hours
Award Date: June 2010

This research addresses materials development under extreme conditions, urgently needed in a critical area of national interest, and as such it will have a large impact. Supercomputing time is needed because many of the required data, and their proper understanding, cannot be obtained from laboratory tests alone. The methods and procedures developed and the knowledge generated in this project can be further expanded and applied to other areas of materials development, including those related to energy science and technology.

The development of light-weight protection materials for military applications is urgently needed for a nation fighting unconventional war. Boron-rich boron carbide (B4+xC) is a leading candidate for such materials. However, in spite of many years of research and development, the full potential of boron carbide as a protection material has not been realized. Ballistic tests indicate that this material loses its shear strength when the pressure exceeds its Hugoniot elastic limit (HEL) of about 20 GPa. Postmortem analysis of dynamically deformed samples indicates that boron carbide undergoes shock-induced amorphization when subjected to high-velocity impact pressure. The mechanism of amorphization and the concomitant plastic behavior of B4+xC under ballistic loading and unloading are not understood.

We plan to investigate the structural deformation and the changes in elastic properties of boron carbide using large-scale ab initio simulations at the atomic level based on density functional theory. Boron carbide has a unique structure consisting of an icosahedral B11C unit and a three-atom C-B-C chain along the axial direction of the rhombohedral cell. It is characterized by strong intra-icosahedral and inter-icosahedral covalent and three-center bonds. To understand the amorphization and deformation behavior of boron carbide under pressure, extensive simulations using large supercells of at least several hundred atoms per cell are required, which is computationally very demanding. The simulations entail theoretical experiments that apply uniaxial pressure, step by step, along the axial direction of the rhombohedral supercells, up to 50% volume reduction. At each level of strain, the atomic structural evolution, elastic modulus, stress level, electronic structure, and inhomogeneous localization of the amorphous zone will be investigated, in order to understand the amorphization process and to find ways to mitigate structural softening of the material beyond the HEL.
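The stepwise loading protocol described above can be sketched as a simple driver loop. This is an illustrative outline only: the number of steps and the evaluator are placeholders (the abstract does not state the strain increment, and the real property evaluation is a DFT relaxation, not shown here):

```python
def strain_schedule(max_volume_reduction=0.50, n_steps=10):
    """Step-by-step compression levels up to 50% volume reduction.

    For uniaxial strain along the rhombohedral axis with the
    cross-section held fixed, fractional volume reduction equals
    fractional axial compression. n_steps=10 is an assumed value.
    """
    return [max_volume_reduction * (i + 1) / n_steps for i in range(n_steps)]

def run_protocol(evaluate):
    """At each strain level, call a caller-supplied evaluator (standing
    in for a DFT structural relaxation) and collect the returned
    properties (stress, elastic modulus, electronic structure, ...)."""
    return {strain: evaluate(strain) for strain in strain_schedule()}
```

A driver like this makes each strain level an independent record, so the evolution of the computed properties can be traced across the full loading path.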

The methods and computational codes developed at the NERSC supercomputing facility under the NISE program will also be applied to other candidate light-weight protection materials for both military and civilian use. Based on systematic simulations and the extensive data collected, a comprehensive database of the mechanical properties of boron-rich and other compounds under extreme conditions will be generated for use in future modeling at the macro-scale.

 

Molecular Mechanisms of the Enzymatic Decomposition of Cellulose Microfibrils

Jhih-Wei Chu, University of California, Berkeley

Associated NERSC Project: Molecular Simulation of the Intra- and Inter-molecular Communications through Protein Matrix (m787), Principal Investigator: Jhih-Wei Chu

NISE Award: 2,000,000 Hours
Award Date: February 2010

To develop viable technologies for the enzymatic decomposition of cellulosic materials, mechanistic knowledge of elementary steps is indispensable. To establish such knowledge, we propose to model and simulate in silico (a) the structural deformation of a cellulose microfibril by mechanical and thermophysical means, (b) the cleavage of 1,4-glycosidic bonds on the surfaces of a microfibril by cellulases, and (c) the detachment of broken glucose chains from the microfibril into solution. For (a) and (c), we will perform simulations in aqueous solutions and in solutions of selected ionic liquids (IL). In particular, we will focus on imidazolium ILs with chloride and acetate anions. The objective is to quantify the interactions between the chemical moieties of IL molecules and the microfibril and how IL molecules modulate the free-energy barriers of deconstruction. Such analyses will be conducted in aqueous solutions as well as in water/IL mixtures to develop the molecular basis for designing better pretreatment solvents, as well as develop fundamental knowledge to facilitate the conversion of biomass into biofuels.

 

Developing an Ocean-Atmosphere Reanalysis for Climate Applications (OARCA) for the period 1850 to present

Gil Compo, University of Colorado at Boulder

Associated NERSC Project: Surface Input Reanalysis for Climate Applications (SIRCA) 1850 - 2011 (m958), Principal Investigator: Gil Compo

NISE Award: 2,000,000 Hours
Award Date: April 2010

By determining the strength of the variations in the atmosphere and ocean during the strong 1918/1919 El Niño event, we will have a dataset that can be used to study the event and its devastating socio-economic impacts (such as a severe famine in India), and to compare with coupled climate model representations of El Niño during the early 20th century. Importantly, with this development project we will have determined whether the computationally costly coupled ocean-atmosphere assimilation can be expected to improve our ability to determine historical weather and climate variations from the 19th to the 21st centuries. A complete record of these variations is an essential dataset for assessing the strengths and weaknesses of the next generation of coupled climate models being used to project the effects of anthropogenic greenhouse gas emissions for the upcoming Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR5).

 

Surface Input Reanalysis for Climate Applications 1850 - 2011

Gil Compo, University of Colorado at Boulder

Associated NERSC Project: Surface Input Reanalysis for Climate Applications (SIRCA) 1850 - 2011 (m958), Principal Investigator: Gil Compo

NISE Award: 1,000,000 Hours
Award Date: January 2010

High-quality six-hourly tropospheric circulation datasets for the period 1850 to present are urgently needed to validate the climate model simulations being generated to predict anthropogenic effects on climate. Prior to our work, no such circulation analyses were available before 1948. We propose to use a newly developed Kalman filter-based technique to produce a global tropospheric circulation dataset at four-times-daily resolution back to 1850, building on our successful production of a dataset back to 1891. The goal is to develop improvements to the Surface Input Reanalysis for Climate Applications (SIRCA) 1850-2011 system prior to production in 2011. The timely production of these data will provide an important check on the climate models that will be used to make 21st-century climate projections in the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC).

We will improve the Ensemble Filter data assimilation by incorporating new observation types and using a higher-resolution system. In the Ensemble Filter system of our previous INCITE allocation, the 20th Century Reanalysis Project, only surface and sea level pressure observations were assimilated, at 2-degree latitude by longitude resolution with 28 vertical levels in the atmosphere. While that system was extremely successful, several other surface data types are available in the historical observations: positions of storms, near-surface temperature data, and near-surface wind data. All three of these candidates will be tested. Additionally, computing power has increased, making it feasible to test a doubling of the horizontal resolution to 1 degree latitude by longitude and an increase in the vertical resolution to 64 levels. This may improve the representation of the near-surface atmospheric boundary layer and the upper-troposphere-to-stratosphere transition, since many of the new levels lie in these regions.
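The proposed resolution increase can be put in rough numbers. The sketch below estimates the grid-point growth for an idealized global lat/lon grid (an approximation; the actual model grid is not a plain lat/lon box):

```python
def grid_points(dlat_deg, dlon_deg, nlev):
    """Approximate grid size for an idealized global lat/lon model."""
    nlat = round(180 / dlat_deg)   # latitude rows
    nlon = round(360 / dlon_deg)   # longitude columns
    return nlat * nlon * nlev

current  = grid_points(2, 2, 28)   # 2-degree, 28-level configuration
proposed = grid_points(1, 1, 64)   # 1-degree, 64-level configuration
growth = proposed / current        # about 9x more grid points
```

Halving the horizontal spacing quadruples the horizontal point count, and the vertical levels grow by 64/28, so the proposed configuration carries roughly nine times as many grid points.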

 

Pushing the limits of the GW/BSE method for excited-state properties of molecules and nanostructures

Jack Deslippe, University of California, Berkeley

Associated NERSC Project: Computation of Electronic Properties of Materials from First Principles and on surfaces (mp149), Principal Investigator: Steven Louie, Berkeley Lab

NISE Award: 1,000,000 Hours
Award Date: April 2010

Accurate description of the interaction of light with matter is important for developing new materials for photovoltaic applications. This research focuses on improving the description of the interaction of light with molecules and nano-systems that are likely to be the building blocks of future photovoltaic devices. Additionally, this research will quantitatively test the applicability of many-body physics approaches to systems traditionally described using quantum chemistry methods.


 

Reaction pathways in methanol steam reformation

Hua Guo, University of New Mexico

Associated NERSC Project: Quantum dynamics of elementary reactions in the gas phase and on surfaces (m627), Principal Investigator: Hua Guo

NISE Award: 200,000 Hours
Award Date: February 2010

Methanol steam reforming is a promising process capable of generating hydrogen fuel for mobile fuel cell applications. The current catalyst (Cu/ZnO) has a few undesirable properties such as sintering at high temperatures. A more thermally stable catalyst (PdZn/ZnO) has been discovered recently. This research project is exploring the reaction mechanisms for the steam reforming of methanol on metal surfaces, in collaboration with an experimentalist (A. Datye at UNM). In particular, we are focusing on the initial steps of dissociative adsorption of both H2O and CH3OH on a new alloy (PdZn) catalyst.


 

Warm Dense Matter simulations using the ALE-AMR code

Enrique Henestroza, Berkeley Lab

Associated NERSC Project: Simulation of intense beams for heavy-ion-fusion science (mp42), Principal Investigator: Alex Friedman, Berkeley Lab

NISE Award: 450,000 Hours
Award Date: April 2010

Simulations of the poorly understood regime of Warm Dense Matter (WDM) are important in designing experiments at the planned Neutralized Drift Compression Experiment II (NDCX-II) facility. This regime is found naturally in, e.g., the interiors of giant planets. In terrestrial settings it generally appears transiently, e.g., in the early stages of inertial-fusion experiments. As a subset of High Energy Density Physics, WDM is the subject of considerable emerging interest both as basic science and for its importance to inertial-fusion power production.

We propose to model WDM experiments using the ALE-AMR code and to incorporate new physics into the code, specifically a surface tension model, in order to make the code better suited for NDCX-II, which is under construction with completion planned for 2012. It will be very important to have suitable computational models for comparing with experimental results. ALE-AMR is the most capable code platform for such studies, and has proven itself on a range of applications, including studies of damage to NIF target supports and the basic physics of multiscale flows. A surface tension model is needed before the code can reliably model the full spectrum of planned experiments on the new facility and the existing NDCX-I facility. However, simulations of WDM can be done now, to test the code's existing physics models (e.g., ion deposition) for this new application area, while we develop the surface tension model.

In the proposed research, we will examine a family of emerging approaches to surface tension modeling, with an eye toward selecting an approach that scales well to tens of thousands of processors. This is somewhat challenging since surface tension is not an entirely local effect: the curvature and topology of the surface come into play. However, we believe that preserving quasi-locality will be sufficient for both the physics and the required computational efficiency.

We will use the NISE allocation to enable the development of the required scalable models, and to simulate both planned experiments on and synthetic diagnostic results from NDCX-II. This will position us so that we are able to properly plan and analyze actual experiments when NDCX-II becomes available, thereby significantly enlarging the value of DOE's investment in the new facility.

 

Modeling the dynamics of catalysts with the adaptive kinetic Monte Carlo method

Graeme Henkelman, University of Texas at Austin

Associated NERSC Project: Correlation of Theory and Function in Well-defined Bimetallic Electrocatalysts (m1069), Principal Investigator: Graeme Henkelman

NISE Award: 1,700,000 Hours
Award Date: February 2010

A mission of the Department of Energy is to develop alternative energy sources. In order to store energy in a transportable chemical form and most efficiently extract the energy again, better catalysts are needed. The proposed research is a computational effort to identify new nanoparticle catalysts and correlate their electronic structure to measured catalytic function. In this supplementary request, a new computational methodology will be used to model the reaction dynamics at the catalysts directly using density functional theory. In these adaptive kinetic Monte Carlo calculations, reaction pathways are determined without bias from the modeler so that unexpected mechanisms can be discovered. It is likely that catalytic nanoparticles will be more dynamic than previously assumed and that the dynamics of the nanoparticle surfaces will play an important role in determining their catalytic activity and stability.

Our current DOE sponsored research at NERSC uses density functional theory (DFT) to model nanoparticle catalysts and to understand correlations between their structure and catalytic function. The computational approach for this is to identify reactivity descriptors, such as the binding of reactant molecules to the surface or the average energy of the d-electrons in the metal surface. These descriptors have been shown to correlate well with experimental reactivities for some reactions, such as the oxygen reduction reaction on nanoparticles made from alloys of Pt-group metals.

What is missing from this approach, and what a new computational methodology can provide, is the possibility of discovering reaction mechanisms that we do not anticipate. One strategy for modeling reaction dynamics using DFT without having to anticipate the reaction mechanisms is called adaptive kinetic Monte Carlo (AKMC) [1]. Here, reaction mechanisms are determined by searching for saddle points on the potential energy surface leading from an initial state to any final state. The rate of each mechanism is calculated using a harmonic approximation to transition state theory, and the state-to-state dynamics is calculated using the kinetic Monte Carlo algorithm. This approach avoids several major limitations of standard kinetic Monte Carlo: reaction mechanisms are a result of the simulation instead of an input; these mechanisms can be complex, unexpected, and involve collective motions of many atoms; and there is no need to base the simulation on a lattice.
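The rate and event-selection steps described above can be sketched compactly. The barriers, prefactor, and temperature below are illustrative placeholders, not values from the actual saddle-point searches:

```python
import math
import random

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def htst_rate(barrier_eV, prefactor_hz=1e12, temperature_K=300.0):
    """Harmonic transition-state-theory rate: k = nu * exp(-Ea / (kB T))."""
    return prefactor_hz * math.exp(-barrier_eV / (K_B * temperature_K))

def kmc_step(barriers, temperature_K=300.0, rng=random):
    """One kinetic Monte Carlo step over the mechanisms found so far.

    barriers maps a mechanism label to its barrier in eV, as would be
    obtained from saddle-point searches. Returns the chosen mechanism
    and the stochastic time increment.
    """
    rates = {m: htst_rate(b, temperature_K=temperature_K)
             for m, b in barriers.items()}
    total = sum(rates.values())
    # Select a mechanism with probability proportional to its rate.
    r = rng.random() * total
    acc = 0.0
    for mechanism, k in rates.items():
        acc += k
        if r <= acc:
            break
    # Advance the clock by an exponentially distributed residence time.
    dt = -math.log(rng.random()) / total
    return mechanism, dt
```

With two hypothetical mechanisms of 0.45 eV and 0.80 eV barriers at 300 K, the lower-barrier event is selected almost every step, since its rate is several orders of magnitude larger; the AKMC machinery adds the on-the-fly saddle-point searches that supply the barrier table in the first place.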

What makes this new methodology particularly relevant for nanoparticle catalysts is the mounting experimental evidence that the particles are not static and can dramatically rearrange under reaction conditions. One recent example of this is the inversion of core-shell nanoparticles under oxidizing/reducing conditions; another is the local oxidation of the Au(111) surface in the presence of atomic oxygen. In both cases, a theoretical model in which reactions are assumed to take place on a static metal surface will be qualitatively wrong. As we investigate and try to discover new nanoparticle catalysts, it will be important to model the actual dynamics and not just the dynamics that we assume should happen.

The AKMC method is, however, more computationally expensive than modeling assumed reaction mechanisms. Massively parallel resources are just now making these calculations tractable, since the searches for reaction mechanisms are independent of one another. In this supplementary request for computer time, I am asking for a million node hours, which will allow several AKMC simulations of the oxygen reduction reaction at mono- and bi-metallic nanoparticles, to investigate the reaction mechanism(s) directly from DFT. Modeling the (possibly) dynamic role of the nanoparticle structure will be a significant step forward in predicting the activity of new nanoparticle catalysts.

[1] L. Xu, D. Mei, and G. Henkelman. Adaptive kinetic Monte Carlo simulation of methanol decomposition on Cu(100). J. Chem. Phys. 131, 244520 (2009).

 

First-Principles Study of the Role of Native Defects in the Kinetics of Hydrogen Storage Materials

Khang Hoang, University of California, Santa Barbara

Associated NERSC Project: Computational Studies of Hydrogen Interactions with Storage Materials (m892), Principal Investigator: Chris Van de Walle, University of California, Santa Barbara

NISE Award: 600,000 Hours
Award Date: April 2010

 

ITER rapid shut-down simulation

Valerie Izzo, General Atomics

Associated NERSC Project: Disruption and disruption mitigation simulations (m455), Principal Investigator: Valerie Izzo

NISE Award: 1,200,000 Hours
Award Date: April 2010

This research will help develop scenarios to quickly shut down fusion reactor plasmas in case of unexpected occurrences that may damage machine components.

Research on safe rapid termination of tokamak discharges is primarily targeted toward implementation on ITER; however, simulating ITER is considerably more challenging than simulating smaller tokamaks. Rapid termination simulations of the DIII-D and C-Mod tokamaks have been carried out through the entire current quench phase with NIMROD (a time scale of 1-5 ms). These simulations include the interaction between impurity radiation and MHD, and a fast electron orbit model for understanding runaway electron confinement. These tools can be applied directly to ITER given sufficient computational resources.

Previous NIMROD simulations of an ITER rapid shutdown have been limited to simplified scenarios and only the early current quench phase. The entire current quench on ITER will last tens of milliseconds, while the spatial scale is 3 times larger in linear dimension than DIII-D. With similar toroidal resolution, 9 times more grid points in the poloidal plane, and several times longer simulation times, a complete ITER rapid termination scenario will consume about 1,000,000 CPU hours on Franklin using several thousand processors, compared with a DIII-D simulation consuming 20,000 hours on roughly 1,000 processors.
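The estimate above can be checked with back-of-envelope arithmetic, assuming (our assumption, not stated in the text) that cost scales linearly with the number of poloidal grid points and with the simulated time at fixed toroidal resolution:

```python
# DIII-D baseline and ITER scaling factors quoted in the text.
diii_d_cpu_hours = 20_000   # DIII-D current-quench simulation
grid_factor = 9             # 3x linear size -> 9x poloidal grid points
time_factor = 5             # "several times longer" simulated time (assumed ~5x)

iter_cpu_hours = diii_d_cpu_hours * grid_factor * time_factor
print(iter_cpu_hours)  # 900000
```

which is consistent with the roughly 1,000,000 CPU hours requested.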

From the ITER rapid shutdown simulation we will better understand the MHD fluctuation spectrum of an ITER plasma cooled by high-Z impurities, evolution of the electric field, confinement time of high energy runaway electrons, and strike points of escaping runaway electrons. Some of these characteristics may differ greatly from DIII-D and C-Mod.

 

First-principles Modeling of Charged-Defect-Assisted Loss Mechanisms in Nitride Light Emitters

Emmanouil Kioupakis, University of California, Santa Barbara

Associated NERSC Project: First principles modeling of group-III-nitride compounds and alloys for applications in electronic and optoelectronic devices (m934), Principal Investigator: Patrick Rinke, University of California, Santa Barbara

NISE Award: 900,000 Hours
Award Date: February 2010

Nitride light-emitting devices already have a wide range of applications, such as the lasers in Blu-ray players and the white LEDs of bicycle lights. In the future, they may also be used as general white light sources, replacing existing incandescent and fluorescent light bulbs, or in tiny laser projectors that can fit inside a cell phone. At present, however, these devices are not efficient enough at the high intensities required for such applications. We will investigate why nitride light-emitting devices lose their efficiency at high intensities and suggest ways to fix this problem.

Nitride-based light emitters in the UV to green part of the optical spectrum have revolutionized the field of solid-state lighting and hold great promise for applications in general illumination and laser projectors. The performance of these devices at high drive currents, however, is limited by a prominent efficiency loss, the origin of which is not fully understood. One mechanism that has been blamed for the efficiency droop of nitride light-emitting diodes is Auger recombination, a 3-particle non-radiative recombination process that becomes dominant at high injected-carrier densities. The Auger process may occur either in a direct way, or mediated by a scattering mechanism, such as the electron-phonon coupling and alloy-scattering. An additional loss mechanism in nitride lasers is the absorption of the generated light by free and impurity-bound carriers in the active region and surrounding material. Our preliminary results indicate that phonon-mediated absorption due to non-ionized acceptor atoms in the p-type material is substantial and a source of concern for nitride lasers.
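The competition described above between radiative recombination and the three-particle Auger process is often summarized with the standard ABC rate model, in which radiative recombination scales as n^2 while Auger recombination scales as n^3 and therefore dominates at high carrier density. The model is standard in the droop literature but is not taken from this proposal, and the coefficient values below are purely illustrative:

```python
def internal_quantum_efficiency(n, a=1e7, b=1e-11, c=1e-30):
    """ABC model: IQE = B*n^2 / (A*n + B*n^2 + C*n^3).

    n: carrier density (cm^-3); A (s^-1) is Shockley-Read-Hall,
    B (cm^3/s) is radiative, C (cm^6/s) is Auger recombination.
    Coefficients are illustrative placeholders, not measured values.
    """
    radiative = b * n * n
    total = a * n + radiative + c * n ** 3
    return radiative / total

# The cubic Auger term makes the efficiency droop at high injection:
for n in (1e16, 1e18, 1e20):
    print(f"n = {n:.0e} cm^-3, IQE = {internal_quantum_efficiency(n):.2f}")
```

With these placeholder coefficients the efficiency peaks at intermediate density and falls off at high injection, qualitatively reproducing the droop behavior.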

An additional carrier scattering mechanism in materials is scattering by charged defects. Lattice imperfections and impurity atoms are omnipresent in materials, and in certain cases they acquire an overall charge, rendering them efficient scattering centers via their Coulomb interaction with the charge carriers. This is particularly true for nitride devices, whose n- and p-type layers are intentionally doped. Charged-defect scattering has profound effects on, for example, the carrier mobility in semiconductors, and it may also provide the momentum required for indirect transitions. As a supplemental research initiative related to our existing ERCAP proposal, we plan to investigate charged-defect-mediated indirect Auger recombination and free-carrier absorption processes in nitride materials and evaluate their significance for nitride optoelectronic devices.

 

Thermodynamic, Transport, and Structural Analysis of Geosilicates Using the SIESTA First Principles Molecular Dynamics Software

Ben Martin, University of California, Santa Barbara

Associated NERSC Project: Large-Scale Molecular Dynamics of Geomaterials: The Glass Transition, Nanoscale Structure and Effects of Water on Molten Silicate Properties at High-Pressure, and Applications to Earth and Planets (m178), Principal Investigator: Frank Spera, University of California, Santa Barbara

NISE Award: 110,000 Hours
Award Date: February 2010

This study will improve our understanding of how rocky planets form and how magmas in volcanoes can be studied theoretically, before they erupt into lavas on the Earth's surface.

First Principles Molecular Dynamics (FPMD) investigations of geoliquids at high temperatures and pressures have recently emerged as a valuable technique for studying the chemistry of materials present in the Earth's mantle, where experimental results are sparse. FPMD is also useful for investigating the interaction between H2O and silicates, because classical pair-potential Molecular Dynamics models that include water are difficult to develop. Previous FPMD studies have used simulations of only ~100 atoms, far too few for a robust statistical analysis of the melt structure. SIESTA is a software package (available as a module on Franklin) whose order-N technique allows simulations of up to N~1000 atoms. We propose a full thermodynamic, transport, and structural study of silicates + H2O using SIESTA, with a relatively high number of atoms, for comparison with other Molecular Dynamics and experimental studies.

 

Computational Prediction of Protein-Protein Interactions

Harley McAdams, Stanford University

Associated NERSC Project: Computational Prediction of Transcription Factor Binding Sites (m926), Principal Investigator: Harley McAdams

NISE Award: 1,880,000 Hours
Award Date: February 2010

Interactions between proteins underlie much of the biology of the cell. For example, neurons in mammalian brains use protein-protein interactions to transmit signals, and drugs disrupt harmful bacteria by interfering with the function of their proteins. Experimentally determining whether two proteins interact is often labor-intensive and expensive, making computational methods that screen for proteins likely to interact highly desirable.

My group is developing novel algorithms to predict interactions between proteins and their cognate DNA binding sites. The computational work we propose here would allow us to extend our algorithm to predict protein-protein interactions.

We predict protein-DNA interactions using a physics-based calculation of the electrostatics of binding pockets between proteins (specifically, DNA-binding regulatory proteins) and DNA. We use a machine learning procedure to predict the structural hotspots within the binding pocket. We now propose to use a similar technique to infer the affinity of interaction between two protein surfaces, for example FtsZ and FtsA, two proteins that are part of the bacterial cell division machinery. This problem differs from the case of protein-DNA interactions, where the region of interaction is known from experiment. In the more general case of protein-protein interactions, any exposed patch on the protein surface is a potential contact region for interacting with other proteins.

To address this problem, we will generalize our protein-DNA interaction algorithm with a new statistical module for predicting regions within proteins that are likely candidates for protein-protein interactions. Since interacting proteins depend on the mutual compatibility of their surface patches to mediate the interaction, such patches are evolutionarily co-conserved. To detect these co-conserved patches, we will search for overrepresented fragments in protein structures, identifying the surface patches of the protein that are the likely candidates for protein-protein interactions. After conserved surface patches have been identified, we will be able to apply an algorithmic procedure similar to the one we now use for protein-DNA interactions, except that many more possible interactions mediated by the surface patches must be considered, leading to a greater computational requirement. In the protein-DNA case, only a single patch on the DNA and protein surface is considered, while in the more general protein-protein case, multiple patches on each interacting partner must be considered. This results in a combinatoric increase in the computation that is particularly well suited to parallelization.
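The combinatoric growth noted above can be made concrete with a small sketch. The patch counts and names here are hypothetical, purely to illustrate why the protein-protein case parallelizes so well:

```python
from itertools import product

def patch_pairings(patches_a, patches_b):
    """Every candidate surface-patch pairing between two molecules."""
    return list(product(patches_a, patches_b))

# Protein-DNA case: one experimentally known region on each side.
protein_dna = patch_pairings(["binding_pocket"], ["dna_site"])
print(len(protein_dna))  # 1 pairing to score

# Protein-protein case: every exposed patch is a candidate, so the
# number of independent scoring tasks grows multiplicatively, and each
# pairing can be evaluated on a separate core.
ftsz_patches = [f"ftsz_patch_{i}" for i in range(8)]  # hypothetical count
ftsa_patches = [f"ftsa_patch_{j}" for j in range(6)]  # hypothetical count
print(len(patch_pairings(ftsz_patches, ftsa_patches)))  # 48 independent tasks
```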

 

Light propagation in nanophotonic structures designed for high-brightness photocathodes

Karoly Nemeth and Katherine Harkay, Argonne National Lab

Associated NERSC Project: Solid state physics calculations for the modeling and design of low transverse emittance photocathodes (m878), Principal Investigator: Katherine Harkay

NISE Award: 1,300,000 Hours
Award Date: April 2010

This research builds on our recent paper, K. Nemeth, K. C. Harkay, et al., Physical Review Letters 104, 046801 (2010), which reflects the current status of our research on the design and development of high-brightness photocathodes for applications in future x-ray sources.

We propose to model the propagation of light in the silver nanorod/nanohole array described in the paper cited above, using the VORPAL code. We expect this simulation to lead to an improved, more robust design of our photocathode, in which nanophotonic processes help generate high-brightness electron beams.

The silver nanorods/nanoholes will be modeled as structures of very high density nanoarchitectured plasma, with a rigid nanoarchitectured positive continuum background, similar to the structured-density electron sources of laser-plasma simulations. We will investigate the geometric effects of the nanoarchitecture and the dependence of the properties of the produced electrons on the intensity, incidence angle, and polarization of the applied laser.

The goal of this research is a practical nanostructure design that can be realized with well established nanotechnology techniques and can lead to future robust high brightness electron sources.

 

ILC Damping Ring beam instabilities

Mauro Pivi, Stanford Linear Accelerator Center

Associated NERSC Project: Beam dynamics issues in the new high-intensity particle colliders and storage rings (mp93), Principal Investigator: Miguel Furman, Berkeley Lab

NISE Award: 250,000 Hours
Award Date: April 2010

We propose to run simulations that will be used to predict the impact of the electron cloud effect on the International Linear Collider (ILC) damping ring and to assess the feasibility of a ring with a reduced circumference.

Simulations of the electron cloud effect will provide a tool to predict the general features of the electron cloud and provide information on possible cures for its instabilities. The code CMAD, among others, will be used to predict the effect of the electron cloud on the beam in the ILC damping rings. In the next round of simulations we need to refine our models in order to make a recommendation on reducing the damping ring circumference from 6.4 km to 3.2 km, a change that would represent a very important cost reduction, on the order of $200M. This is an important savings for the ILC project that could help make the future machine a reality.

 

Thermonuclear Explosions from White Dwarf Mergers

Tomasz Plewa, Florida State University

Associated NERSC Project: Supernova Explosions in Three Dimensions (m461), Principal Investigator: Tomasz Plewa

NISE Award: 500,000 Hours
Award Date: April 2010

We propose to study the formation and evolution of Type Ia supernovae originating from binary white dwarfs. This is an alternative evolutionary channel producing (super-)Chandrasekhar-mass white dwarfs believed to be progenitors of SNe Ia. The importance of this alternative double-degenerate channel has recently been highlighted by Gilfanov and Bogdan (2010, Nature, 463, 924), who found that, contrary to the commonly accepted single-degenerate scenario, double-degenerates may account for as much as 95% of all SN Ia events. This is a rather unexpected and potentially critically important finding. The proposed study is aimed at increasing our understanding of double-degenerates and thus improving the accuracy of supernova-based cosmological predictions. This is of particular importance in the context of the upcoming NASA-DOE Joint Dark Energy Mission (NASA's Beyond Einstein Program, National Research Council, 2007), which will measure thousands of high-redshift supernovae, helping us to understand the physics and origins of the Universe.

We will consider two binary system configurations: the first with 0.6 and 0.9 solar mass components, and the second comprised of two identical 0.9 solar mass white dwarfs. We modeled the evolution of the unequal-mass system in 2008 thanks to the NERSC Scaling Reimbursement program. We had to stop our investigations short of obtaining a complete set of results due to insufficient computational resources. Since then we have improved the scalability of our code, especially of the self-gravity multigrid solver, and are eager to bring our preliminary study to a conclusion.

The second binary configuration we wish to investigate was recently studied by Pakmor et al. (2010, Nature, 463, 61), who claimed that the merger produced a prompt explosion. This result, however, like all white dwarf merger models computed to date, was obtained with the help of the Smoothed Particle Hydrodynamics (SPH) method. SPH is known to be very diffusive and is usually considered to provide a low-cost "first look" at the evolution of complex physical systems. This is especially true in the case of hydrodynamics involving steep gradients (i.e., shocks and contact discontinuities). Such gradients are indeed characteristic of stellar mergers, with the density showing extreme variation at the stellar surface (especially in the case of essentially envelope-free white dwarfs). The hydrodynamic flow also features an accretion shock responsible for compressing and heating stellar material (by converting the kinetic energy of the accretion stream into internal energy). SPH stabilizes shocks with the help of artificial viscosity, resulting in a severe loss of numerical resolution at shocks. This is much less of a problem for our high-order Godunov hydro solver.

Thanks to our participation in the NERSC Scaling Reimbursement program, we were able to design and develop a simulation setup and conduct a preliminary study of a binary white dwarf merger in the 0.6+0.9 solar mass configuration. Our preliminary series of 128/64/32 km resolution models established the presence of several interesting flow features. We observe the formation of a hot and turbulent boundary layer at the surface of the more massive component. In this region temperatures approach 1x10^9 K and densities exceed 1x10^5 g/cc. We were not able to confirm the higher temperatures reported in models obtained with SPH, even though the resolution in our models is perhaps 2 orders of magnitude (in mass) better than in the SPH models. Such discrepancies between numerical solutions deserve close scrutiny. Also, there is no indication of the Kelvin-Helmholtz instability developing in the boundary layer in SPH models. This stands in clear contrast with our preliminary results and further motivates a careful numerical investigation of the dynamics of the boundary layer.

 

Beam Delivery System Optimization of Next Generation X-Ray Light Sources

Ji Qiang, Robert Ryne, and Xiaoye Li, Berkeley Lab

Associated NERSC Project: Frontiers in Accelerator Design: Advanced Modeling for Next-Generation BES Accelerators (m669), Principal Investigator: Robert Ryne

NISE Award: 1,000,000 Hours
Award Date: April 2010

Next generation x-ray light sources provide coherent X-ray beams with unprecedented brightness and intensity. The performance of these light sources depends heavily on the quality of the electron beam coming out of the accelerator beam delivery system. In this study, we will make use of the high performance computing power at NERSC to optimize the design of such a beam delivery system. We will explore and implement two-level parallelism in our parallel beam dynamics simulation codes; this will allow us to take advantage of the large number of cores at NERSC for accelerator design through parallel parameter scans and parallel optimization. We will also explore a mixed parallel programming paradigm using both MPI and OpenMP to take advantage of the Cray XT multicore architecture. Such a hybrid paradigm will help improve the parallel performance of the beam dynamics simulation.

The local-loop portions of the IMPACT code (such as applying external map kicks to all local particles) are well suited to mixed MPI-OpenMP: MPI handles communication between nodes, while OpenMP uses four threads to share the local loop across the four cores of each node.
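The two-level decomposition described above can be sketched as follows. IMPACT itself is not a Python code, and real runs use MPI ranks plus OpenMP threads; in this illustrative sketch the MPI level is reduced to a serial loop over simulated nodes and the OpenMP level is played by a four-worker thread pool. The map-kick function and particle layout are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def apply_map_kick(coordinate, kick=0.01):
    """Hypothetical stand-in for an external map kick on one particle."""
    return coordinate + kick

def node_update(local_particles, threads=4):
    """OpenMP-like level: four threads share the node-local particle loop."""
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return list(pool.map(apply_map_kick, local_particles))

def two_level_update(all_particles, nodes=2):
    """MPI-like level: particles block-distributed across nodes.

    In the real code each block lives on its own MPI rank; here the
    ranks are iterated serially purely for illustration.
    """
    chunk = (len(all_particles) + nodes - 1) // nodes
    updated = []
    for start in range(0, len(all_particles), chunk):
        updated.extend(node_update(all_particles[start:start + chunk]))
    return updated
```

Calling two_level_update on eight particles applies the kick to every particle while splitting the work into node-sized blocks, each processed by its own small thread pool.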

The research could lower the cost/risk and optimize the performance of electron beams for future high brightness coherent X-ray light sources.

 

Electron Transport in Presence of Lattice Vibrations for Electronic, Spintronic and Photovoltaic Devices

Sayeef Salahuddin, University of California, Berkeley

Associated NERSC Project: Massively Parallel Quantum Transport Simulation of Nano-Scale Electronic Devices for Ultra-Low Power Computing (m946), Principal Investigator: Sayeef Salahuddin, University of California, Berkeley

NISE Award: 600,000 Hours
Award Date: February 2010

There are two important aspects of the problem that we seek to solve. Non-equilibrium transport is a generic formulation of any quasi-particle transport phenomenon. As a result, the same non-equilibrium formalism is necessary to understand transport at the nanometer scale, whether in a solar cell where electrons and holes propagate, in a thermoelectric device where phonons propagate, or in a solid-state lighting device where one must account for photon propagation. Thus the simulation platform that we are developing will be a general one that can be applied to a diverse set of problems.

If successful, this will be the first calculation of electronic transport in the presence of lattice vibrations in realistically sized devices with atomistic detail.

 

High Resolution Climate Simulations with CCSM

Warren Washington, Jerry Meehl, and Tom Bettge, National Center for Atmospheric Research

Associated NERSC Project: Climate Change Simulations with CCSM: Moderate and High Resolution Studies (mp9), Principal Investigator: Warren Washington

NISE Award: 3,525,000 Hours
Award Date: April 2010

The high resolution version of CCSM -- with a 0.5 degree atmospheric model coupled to a 1.0 degree ocean model as well as state-of-the-art sea ice and land surface schemes -- had previously encountered a numerically unstable polar night jet near the poles. The instability has now been treated, and the model must next be tuned to a zero radiative balance to allow climate change simulations. The tuning will require approximately 40 simulated years. This model tuning exercise was not included in our original ERCAP proposal and should be considered a supplemental research initiative. We then intend to run the model for a 100 year control run with all constituents (e.g., greenhouse gases) held constant at 1850 values. That simulation will be followed by a historical climate simulation from 1850 to 2005 with time-evolving natural and anthropogenic forcings to test the model's ability to simulate a recently observed period in climate history. These simulations will provide the baseline comparison for future decadal prediction experiments with this model.

 

Abrupt Climate Change and the Atlantic Meridional Overturning Circulation

Peter Winsor, University of Alaska Fairbanks

Associated NERSC Project: Abrupt Climate Change and the Atlantic Meridional Overturning Circulation - Sensitivity and Non-Linear Response to Arctic/Sub-Arctic Freshwater Pulses (m1068), Principal Investigator: Peter Winsor

NISE Award: 2,000,000 Hours
Award Date: February 2010

We will use NERSC computer resources to investigate Earth's climate sensitivity to abrupt releases of freshwater originating from ice melt (glaciers, dammed glacial lakes, and sea ice). This has direct societal implications, as increased freshwater forcing may disrupt or partly stop the ocean circulation and change the atmospheric circulation.

This research project aims to address the sensitivity of Earth's climate to excess freshwater forcing from melting glaciers, sea ice, and dammed glacial lakes. We will use forward integrations of a high-resolution global ocean-ice model over long time scales (>50 years) to examine changes in the Meridional Overturning Circulation (MOC) that drives much of Earth's climate.

 

Modeling plasma surface interactions for materials under extreme environments

Brian Wirth, University of California, Berkeley

Associated NERSC Project: Ab-initio modeling of the energetics and structure of nanoscale Y-Ti-O cluster precipitates in ferritic alloys (m916), Principal Investigator: Brian Wirth

NISE Award: 500,000 Hours
Award Date: April 2010

A detailed atomistic understanding of plasma-surface interactions will provide a way to predict material performance and, further, enable the design of new plasma-facing materials with extraordinary tolerance of extreme fusion environments, advancing the realization of nuclear fusion energy.

Plasma - surface interactions (PSI) pose an immense scientific challenge in magnetic confinement fusion. These challenges are related to the large-scale modification of plasma facing surfaces via erosion, fuel retention, and mixing, which will begin to limit the operational viability and availability of the device.

Despite the vastly different physical length scales for the surface (~nm) and plasma processes (~mm), the plasma and material surface are strongly coupled to each other, mediated by an electrostatic and magnetic sheath. Also the intense radiation environment (ions, neutrons, photons) ensures that the material properties are modified and dynamically coupled to the PSI processes.

To capture such complex and fast kinetics associated with low-energy ion impingement and entrainment at or near the surface, in combination with point defect mediated evolution from higher energy ion and neutron irradiation, we propose to employ large scale molecular dynamics (MD) and molecular statics (MS) simulations using the parameters (e.g., atomic forces, displacement fields, defect energetics, and electronic bonding character) provided by density functional theory (DFT) in the W-He-H system. These studies, which will incorporate energetics and semi-empirical potentials developed from DFT calculations, will be able to investigate the structure and fast kinetic structural evolution of much larger scale systems and topological complexity.

 

Models for the Explosion of Type Ia Supernovae

Stan Woosley, University of California, Santa Cruz

Associated NERSC Project: Computational Astrophysics Consortium (m106), Principal Investigator: Stan Woosley

NISE Award: 2,500,000 Hours
Award Date: April 2010

Using the 3D AMR code CASTRO, we will model the deflagration phase of Type Ia supernovae. For 40 years this has been a forefront problem in computational astrophysics, but only recently have the codes and machines reached the point where a first-principles simulation can be attempted. Starting from a carbon-oxygen white dwarf that has reached a critical mass and ignited a thermonuclear runaway in its core, we will follow the carbon fusion flame as it propagates through the star and erupts from its surface. Quite different results have been obtained by the two groups who have so far attempted this problem in 3D (Chicago FLASH and Munich MPA). We hope to resolve the discrepancy and determine which outcome is more likely to occur in nature. Ignition conditions will be taken from other studies by our group at ORNL using the MAESTRO code, which can follow the initial, highly subsonic stages of the runaway. Resolution should be sufficient to at least approximately resolve the integral scale of the highly turbulent burning (10 km).
