BES Requirements Worksheet

1.1. Project Information - Multiscale modeling of dissolution in rough fractures

Document Prepared By

Tony Ladd

Project Title

Multiscale modeling of dissolution in rough fractures

Principal Investigator

Tony Ladd

Participating Organizations

University of Florida 
University of Warsaw

Funding Agencies

 DOE SC  DOE NSA  NSF  NOAA  NIH  Other:

2. Project Summary & Scientific Objectives for the Next 5 Years

Please give a brief description of your project - highlighting its computational aspect - and outline its scientific objectives for the next 3-5 years. Please list one or two specific goals you hope to reach in 5 years.

A fundamental understanding of the role of fractures, and their effects on solute transport, is an essential component of theoretical models of geological systems. We have developed a state-of-the-art simulation that captures the complex topography of the fracture surface with unprecedented fidelity. We are developing a new pore-scale simulation of fracture dissolution, which could extend the range of scales that can be simulated by two or three orders of magnitude. Ultimately we would like to extend the simulations to field scales - for example a fracture 10 m x 1 m x 1 mm thick - roughly 10^10 grid points (a rough sizing is sketched after the goals below). 
 
Goals: 
1) An efficient implementation of the 3D code will form the computational basis for our proposed work. 
2) To span a wider range of spatial scales, we will couple a three-dimensional simulation of the reaction front with two-dimensional simulations of the much larger regions ahead of and behind the front. 
3) To increase the temporal range of the simulation, we will investigate methods to project out the small scale dynamics, allowing for much larger time steps. 
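
For orientation, the following back-of-the-envelope sizing shows what the field-scale target implies. The 0.1 mm grid spacing, the D3Q19 velocity set, double-precision storage, and two lattice copies are illustrative assumptions rather than parameters of the project codes; they reproduce the ~10^10 nodes quoted above and suggest a few TB of distribution data, i.e. a few thousand cores at the 1 GB/core budget mentioned in Section 4c.

/* Back-of-the-envelope sizing of the field-scale target (10 m x 1 m x 1 mm,
 * ~10^10 grid points).  The grid spacing, D3Q19 velocity set, double
 * precision, and two lattice copies are illustrative assumptions.        */
#include <stdio.h>

int main(void)
{
    const double dx = 1.0e-4;                       /* grid spacing: 0.1 mm (assumed) */
    const double Lx = 10.0, Ly = 1.0, Lz = 1.0e-3;  /* fracture dimensions in metres  */

    double nodes = (Lx / dx) * (Ly / dx) * (Lz / dx);   /* ~1e10, as quoted above     */

    const double q = 19.0;        /* distributions per node (assumed D3Q19) */
    const double bytes = 8.0;     /* double precision                       */
    const double copies = 2.0;    /* e.g. two lattices for streaming        */

    double mem_bytes = nodes * q * bytes * copies;
    double cores_1gb = mem_bytes / 1.0e9;           /* cores at 1 GB/core (Sec. 4c)   */

    printf("lattice nodes     : %.2e\n", nodes);                  /* ~1.0e10 */
    printf("distribution data : %.1f TB\n", mem_bytes / 1.0e12);  /* ~3 TB   */
    printf("cores at 1 GB/core: %.0f\n", cores_1gb);              /* ~3000   */
    return 0;
}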

3. Current HPC Usage and Methods

3a. Please list your current primary codes and their main mathematical methods and/or algorithms. Include quantities that characterize the size or scale of your simulations or numerical experiments; e.g., size of grid, number of particles, basis sets, etc. Also indicate how parallelism is expressed (e.g., MPI, OpenMP, MPI/OpenMP hybrid)

Codes are developed in-house for porous/fractured rocks, colloidal suspensions, and polymer solutions. The lattice-Boltzmann method is at the core of the fluid dynamics solver in each of these applications. The codes are parallelized via MPI using a uniform domain decomposition. In the future, a multithreaded version of the codes will be desirable to better exploit multicore processors. 
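
A minimal sketch of the kind of uniform domain decomposition described above is given below. It is illustrative only (a generic 3D Cartesian halo exchange); the sub-domain size, the single scalar field, and the pack/unpack handling are assumptions, not code from the in-house lattice-Boltzmann solvers.

/* Generic 3D Cartesian decomposition with one ghost plane exchanged per
 * axis, illustrating a uniform MPI domain decomposition.  The sub-domain
 * size, scalar field, and pack/unpack steps are placeholders.            */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int dims[3] = {0, 0, 0}, periods[3] = {1, 1, 0};
    int nprocs;
    MPI_Comm cart;

    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Dims_create(nprocs, 3, dims);                /* uniform process grid */
    MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 0, &cart);

    const int n = 64, ng = n + 2;                    /* interior + ghost layer */
    double *f       = calloc((size_t)ng * ng * ng, sizeof(double));
    double *sendbuf = calloc((size_t)ng * ng, sizeof(double));
    double *recvbuf = calloc((size_t)ng * ng, sizeof(double));

    for (int axis = 0; axis < 3; axis++) {
        int lo, hi;
        MPI_Cart_shift(cart, axis, 1, &lo, &hi);
        /* pack the outgoing boundary plane of f into sendbuf here ...      */
        MPI_Sendrecv(sendbuf, ng * ng, MPI_DOUBLE, hi, 0,
                     recvbuf, ng * ng, MPI_DOUBLE, lo, 0,
                     cart, MPI_STATUS_IGNORE);
        /* ... then unpack recvbuf into the ghost plane of f                */
    }

    free(f); free(sendbuf); free(recvbuf);
    MPI_Finalize();
    return 0;
}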
 
We use our own clusters - currently we have ~450 cores shared among 4 faculty. I maintain the system and we add new hardware as funds become available. 

3b. Please list known limitations, obstacles, and/or bottlenecks that currently limit your ability to perform simulations you would like to run. Is there anything specific to NERSC?

The main limitation is code development: eliminating unnecessary memory usage and improving the robustness and reliability of the code. 

3c. Please fill out the following table to the best of your ability. This table provides baseline data to help extrapolate to requirements for future years. If you are uncertain about any item, please use your best estimate to use as a starting point for discussions.

Facilities Used or Using

 NERSC  OLCF  ALCF  NSF Centers  Other:  Personal Cluster

Architectures Used

 Cray XT  IBM Power  BlueGene  Linux Cluster  Other:  

Total Computational Hours Used per Year

 300000 Core-Hours

NERSC Hours Used in 2009

 0 Core-Hours

Number of Cores Used in Typical Production Run

 32-96

Wallclock Hours of Single Typical Production Run

 100

Total Memory Used per Run

 10-100 GB

Minimum Memory Required per Core

 1 GB

Total Data Read & Written per Run

 10-100 GB

Size of Checkpoint File(s)

10-100 GB

Amount of Data Moved In/Out of NERSC

 GB per  

On-Line File Storage Required (For I/O from a Running Job)

 GB and  Files

Off-Line Archival Storage Required

 1 GB and  Files

Please list any required or important software, services, or infrastructure (beyond supercomputing and standard storage infrastructure) provided by HPC centers or system vendors.

 

4. HPC Requirements in 5 Years

4a. We are formulating the requirements for NERSC that will enable you to meet the goals you outlined in Section 2 above. Please fill out the following table to the best of your ability. If you are uncertain about any item, please use your best estimate to use as a starting point for discussions at the workshop.

Computational Hours Required per Year

 

Anticipated Number of Cores to be Used in a Typical Production Run

 10000

Anticipated Wallclock Hours to be Used in a Typical Production Run Using the Number of Cores Given Above

 100

Anticipated Total Memory Used per Run

 10000 GB

Anticipated Minimum Memory Required per Core

1 GB

Anticipated total data read & written per run

 10000 GB

Anticipated size of checkpoint file(s)

 10000 GB

Anticipated On-Line File Storage Required (For I/O from a Running Job)

 10 GB and  10000 Files

Anticipated Amount of Data Moved In/Out of NERSC

 GB per  

Anticipated Off-Line Archival Storage Required

 100 GB and 100000 Files

4b. What changes to codes, mathematical methods and/or algorithms do you anticipate will be needed to achieve this project's scientific objectives over the next 5 years?

Multithreading and load balancing.
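
One possible direction, sketched below purely for illustration, is to thread the collision step with OpenMP on top of the existing MPI decomposition; dynamic scheduling is a simple way to even out the load when some nodes (e.g. solid or reacting boundary nodes) cost more than others. The BGK relaxation, D3Q19 layout, and scheduling parameters are assumptions, not details of the project codes.

/* Illustrative hybrid MPI/OpenMP collision loop (not the project code).
 * The BGK relaxation, D3Q19 layout, and scheduling choice are assumptions
 * made to show where threading and load balancing enter.                 */
#define Q 19   /* velocity directions per node (assumed D3Q19) */

void collide(double *f, const double *feq, const int *is_fluid,
             int nnodes, double omega)
{
    /* schedule(dynamic) helps when per-node cost varies (e.g. reacting or
     * solid boundary nodes); schedule(static) is cheaper for uniform work. */
    #pragma omp parallel for schedule(dynamic, 1024)
    for (int n = 0; n < nnodes; n++) {
        if (!is_fluid[n]) continue;              /* skip solid nodes */
        for (int q = 0; q < Q; q++) {
            int i = n * Q + q;
            f[i] += omega * (feq[i] - f[i]);     /* BGK relaxation   */
        }
    }
}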

4c. Please list any known or anticipated architectural requirements (e.g., 2 GB memory/core, interconnect latency < 3 µs).

1-2 GB per core is reasonable. Interconnect latency is not so important - we typically use large messages. At present Gigabit Ethernet is fine (for up to 100 cores - 2 per node). For multicore systems we will need higher bandwidth (DDR InfiniBand). A flat network is important in my opinion.
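
A rough estimate of the halo traffic behind the bandwidth-over-latency statement is sketched below; the sub-domain size, the amount of data exchanged per face, and the nominal link bandwidths are illustrative assumptions, not measured values.

/* Rough halo-traffic estimate: why the exchanges are bandwidth- rather
 * than latency-bound.  Sub-domain size, distributions per face, and the
 * nominal link bandwidths are illustrative assumptions.                 */
#include <stdio.h>

int main(void)
{
    const double n = 100.0;              /* 100^3 nodes per rank (assumed)     */
    const double per_node = 5.0 * 8.0;   /* ~5 doubles cross each face, 8 B ea */
    const double faces = 6.0;

    double bytes_per_step = faces * n * n * per_node;   /* ~2.4 MB per rank    */

    const double gige   = 125e6;         /* ~1 Gbit/s Ethernet, bytes/s        */
    const double ddr_ib = 2e9;           /* ~DDR InfiniBand, bytes/s (nominal) */

    printf("halo per step : %.1f MB\n", bytes_per_step / 1e6);
    printf("GigE          : %.1f ms/step\n", 1e3 * bytes_per_step / gige);
    printf("DDR IB        : %.2f ms/step\n", 1e3 * bytes_per_step / ddr_ib);
    return 0;
}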

4d. Please list any new software, services, or infrastructure support you will need over the next 5 years.

Algorithm development and code development together are difficult with the resources we have. We could use help with code development, which tends to get sidelined at present. 

4e. It is believed that the dominant HPC architecture in the next 3-5 years will incorporate processing elements composed of 10s-1,000s of individual cores, perhaps GPUs or other accelerators. It is unlikely that a programming model based solely on MPI will be effective, or even supported, on these machines. Do you have a strategy for computing in such an environment? If so, please briefly describe it.

 

New Science With New Resources

To help us get a better understanding of the quantitative requirements we've asked for above, please tell us: What significant scientific progress could you achieve over the next 5 years with access to 50X the HPC resources you currently have access to at NERSC? What would be the benefits to your research field if you were given access to these kinds of resources?

Please explain what aspects of "expanded HPC resources" are important for your project (e.g., more CPU hours, more memory, more storage, more throughput for small jobs, ability to handle very large jobs).

Our main issues at present have to do with algorithms. Unless extensive HPC resources are available, it is typically not worth the time invested, in my opinion (based on past experience). It is difficult to develop codes for large numbers of processors on smaller facilities, since many of the issues do not come up at those scales. This is why I have preferred to work with my own systems.