miniGhost is a Finite Difference mini-application that applies a difference stencil across a homogeneous three-dimensional domain.
Thus the kernels that it contains are:
- computation of stencil operations,
- inter-process boundary (halo, ghost) exchange,
- global summation of grid values.
Computation simulates heat diffusion across a homogeneous domain, with Dirichlet boundary conditions. It does not currently solve a specific problem that can be checked for correctness. However, it can be run in a mode that does check correctness in a limited sense: the domain is initialized to 0, a heat source is applied to a single cell in the middle of the global domain, and the grid values are summed and compared with the initial source term.
This reference version is self-contained and includes serial, MPI-parallel, and hybrid (OpenMP intra-node with MPI inter-node) implementations.
How to Build
$ cd miniGhost
$ make
This will build the MPI-parallel miniGhost code, assuming you have the MPI compiler wrappers mpicxx and mpicc in your path.
If successful, this will create an executable called miniGhost.x.
Special builds can be done for things like:
- GNU compilers (g++, gcc), no MPI:
  type 'make -f makefile.gnu.serial'
Note: the main program is in main.c, but all the rest of the code is in Fortran, so it may be necessary to explicitly link in the Fortran libraries.
How to Run
To test, run miniGhost using the default settings:
$ <mpi-run-command> ./miniGhost.x
where <mpi-run-command> varies from system to system but usually looks something like 'mpirun -np 4'.
Execution is then driven entirely by the default settings, as configured in default-settings.h. Options may be listed using
$ ./miniGhost.x --help
There are scripts for running the small, medium, and extra large problems as defined in the Trinity/NERSC-8 run rules document. The scripts are called run-<problem size>.sh for the respective size problems. The small problem is sized to use approximately 90 GB of main memory, the medium problem is sized for ~46 TB, and the extra large problem is ~900 TB. The scripts are configured for running on NERSC's Hopper machine; the number of MPI ranks can be modified to suit the Offeror's target architecture.
Results are written to a YAML file called results.yaml.
Report the "Total_time:" and "GFLOPS_Total:" values.
Written by Richard F. Barrett and Michael A. Heroux of Sandia National Laboratories
- Initial MiniGhostv0.9.tar file and web page