New Computer Codes Unlock the Secrets of Cleaner Burning Coal
Linda Vu, email@example.com, +1 510 495 2402
Approximately half of all electricity used in the United States comes from coal. When burned, this fossil fuel emits pollutants that contribute to smog, acid rain and increased greenhouse gases in the atmosphere. But as the demand for power increases, and affordable renewable energy sources remain years away from offsetting this need, one fact is clear: Coal is here to stay, at least for the foreseeable future.
Recognizing this reality, researchers supported by the Department of Energy are investigating cleaner methods for extracting energy from coal, like gasification. Using supercomputers at the National Energy Research Scientific Computing Center (NERSC), scientists at the University of Utah have developed tools to model the complex processes of coal gasification, as well as methods to validate these models. A detailed understanding of coal gasification will help engineers retrofit existing gasification plants and improve the design of future gasification plants for efficiently producing electricity.
“Coal is an extremely complex organic fuel. Although it has been used as an energy source for centuries, humanity is only now beginning to understand the science behind how it works,” says Charles Martin Reid, who is currently a postdoctoral researcher at the Lawrence Livermore National Laboratory (LLNL).
For his doctoral thesis at the University of Utah, Reid retrofitted a massively parallel combustion code called ARCHES to simulate the complex processes occurring inside coal gasifiers. He then combined experimental data and mathematical models to validate the computer code. He used about 3 million processor hours on NERSC’s Hopper and Franklin systems to build the code and validate its predictions.
“The scale of computing provided by NERSC was absolutely key in providing me with the resources I needed to tackle a complex, multi-scale, multi-physics problem like coal gasification,” says Reid. “Having game-changing resources like Franklin and Hopper allowed me, and allows all scientists, to extend the capabilities of scientific induction and explore much more deeply the implications of a model.”
Cleaner Coal through Gasification
In a traditional coal-fired power plant, coal undergoes combustion in a boiler to turn water into steam. This steam pressure then drives turbine generators to create electricity. Burning coal also creates and releases harmful compounds into the air, including nitrogen oxides, sulfur oxides, and mercury compounds, in addition to carbon dioxide (CO2).
Unlike a traditional coal-fired boiler, a gasifier uses only a limited amount of oxygen and a tremendous amount of pressure to kick-start a chemical chain reaction that results in a gaseous mixture called “syngas.”
Because gasification occurs in a relatively oxygen-deficient environment—where coal is chemically broken down by heat and pressure, not oxidation—the nitrogen oxides that contribute to acid rain do not form.
The gasification process also makes it easier to “scrub” the syngas of pollutants like ammonia, mercury, sulfur and CO2, rather than releasing them into the atmosphere. In turn, these elements can be used as industrial raw materials or in products like fertilizer. Even slag, a glasslike byproduct of gasification, can be used as roofing or roadbed material.
Once stripped of pollutants, syngas is primarily composed of hydrogen and carbon monoxide, which can be cleanly combusted in existing gas turbines to generate electricity.
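The composition described above follows from the textbook reactions at the heart of gasification (the full chemistry inside a real gasifier involves many more species and pathways, but these two show why syngas is rich in hydrogen and carbon monoxide):

```latex
% Steam gasification (endothermic): carbon + steam -> carbon monoxide + hydrogen
C + H_2O \rightarrow CO + H_2

% Partial oxidation (exothermic): supplies the heat that drives gasification
C + \tfrac{1}{2}\,O_2 \rightarrow CO
```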
Capturing the Complexity of Coal Gasification in Computer Models
Coal gasification is complex. Not only does it involve a multitude of phenomena—including turbulence, chemical reactions, and heat transfer processes—but these processes all happen in large spaces and over relatively long spans of time. Researchers need simulation tools that can handle the complexity and enormous size of this problem and methods to quantify the accuracy of their models.
“The complexity of fossil fuel combustion makes it fertile ground for using computer models to better understand what is going on,” says Reid. “It is really difficult to observe the process of coal gasification because the process occurs in extremely harsh environments that are not accessible to data-collecting sensors and instruments. Computer models give us a window into these environments and allow us to formulate hypotheses, which we can then test with what little experimental data we are able to collect.”
According to Reid, a wealth of modeling approaches for exploring turbulence, radiative heat transfer, multiphase flows, and kinetics of fossil fuel combustion has already been developed. But in order to successfully design an electricity-producing coal gasification plant and make existing gasification plants more efficient, researchers need an all-encompassing model that incorporates a number of these sub-models.
Reid and others at the University of Utah developed such a model to accurately simulate the chemical breakdown of coal by combining an established mathematical model—the large eddy simulation method that models gas-phase turbulence—with another mathematical method called DQMOM (Direct Quadrature Method of Moments) within the ARCHES code, which was originally developed to simulate pool fires.
According to Reid, this approach proved successful, but it was also very computationally expensive, consuming on the order of 200,000 computer processor-hours per sample. So Reid developed a surrogate model, essentially a cheap approximation that mimics the more expensive simulation while using far fewer computational resources. He then validated his model using an approach called Data Collaboration.
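The surrogate-model idea can be sketched in a few lines of Python. Here the "expensive" simulation is a stand-in toy function, and the quadratic response surface is an illustrative assumption, not Reid's actual surrogate; the point is that a cheap fit, trained on a handful of expensive runs, is queried in place of rerunning the full code.

```python
import numpy as np

# Stand-in for the full ARCHES run, which costs hours per sample.
# This placeholder function just makes the sketch runnable.
def expensive_simulation(x):
    return 2.0 * x**2 - 3.0 * x + 1.0  # hypothetical gasifier response

# Run the expensive code at only a few design points...
design_points = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
samples = np.array([expensive_simulation(x) for x in design_points])

# ...and fit a cheap quadratic surrogate to those samples.
coeffs = np.polyfit(design_points, samples, deg=2)
surrogate = np.poly1d(coeffs)

# The surrogate can now be evaluated thousands of times at trivial cost.
print(surrogate(1.25))  # approximates expensive_simulation(1.25)
```

In practice a surrogate like this is what makes uncertainty studies affordable: sweeping thousands of input combinations through the surrogate costs seconds, versus hundreds of thousands of processor-hours through the full simulation.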
“With the Data Collaboration approach, we can use information from the experimental gasification system and from the mathematical models to determine how input values, like the amount of coal injected into the gasifier, affect the dominant mechanisms in each zone of the gasifier,” says Reid.
The Data Collaboration approach used time-averaged concentration data—measurements of the gasifier’s output taken every 30 minutes—to validate most of the code’s predictions. Reid notes that one of the most valuable results of this work was to create a case study for how one might let the validation process drive the development and progression of the code.
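One way to picture the time-averaging step is the sketch below: a noisy simulated concentration signal is collapsed to 30-minute averages, and each average is checked against a measurement within an uncertainty band. The signal, measured values, and tolerance are all invented for illustration; the actual Data Collaboration machinery is far more involved than a simple pass/fail check.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical simulated CO mole fraction, sampled once per minute for 2 hours.
minutes = 120
simulated = 0.30 + 0.02 * rng.standard_normal(minutes)

# Collapse to 30-minute averages, matching the cadence of the measurements.
window = 30
averages = simulated.reshape(-1, window).mean(axis=1)

# Hypothetical measured values at the same 30-minute intervals.
measured = np.array([0.31, 0.29, 0.30, 0.31])
tolerance = 0.03  # assumed experimental uncertainty band

# A prediction "passes" if its time average falls within the uncertainty band.
passes = np.abs(averages - measured) <= tolerance
print(passes)
```

The key point of the averaging is that sparse, slow measurements can only constrain time-averaged quantities, so that is the level at which the simulation must be compared against experiment.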
“Validation requires a probabilistic way of thinking, one with uncertainty built into our view of both models and reality,” says Reid. “One of the most significant outcomes of my doctoral dissertation was an appreciation for the immense challenges of validating massively parallel simulation codes with sparse data, especially as applied to multiphysics systems.”
The Department of Energy’s Coal Program supported this work. In addition to Reid, University of Utah research professor Jeremy Thornock and graduate student Julien Pedel also contributed to the development of the ARCHES code, and implemented code features that made the coal gasification simulations possible. NERSC resources were provided by an ASCR Leadership Computing Challenge (ALCC) allocation awarded to University of Utah Professor Phillip Smith.
About NERSC and Berkeley Lab
The National Energy Research Scientific Computing Center (NERSC) is the primary high-performance computing facility for scientific research sponsored by the U.S. Department of Energy's Office of Science. Located at Lawrence Berkeley National Laboratory, the NERSC Center serves more than 4,000 scientists at national laboratories and universities researching a wide range of problems in combustion, climate modeling, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines. Berkeley Lab is a U.S. Department of Energy national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the U.S. DOE Office of Science.