
UMT

Description

The UMT benchmark is a 3D, deterministic, multigroup, photon transport code for unstructured meshes. 

Download

Download the UMT_v1.3.tar file (May 20, 2013)

How to Build

Two steps are required:

  • cd to src; make
  • cd to Teton; make SuOlsonTest

The build produces the SuOlsonTest executable in the Teton directory. MPI is required to build, and the g++ compiler is also needed for preprocessing.
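
For example, assuming the tar file unpacks into a top-level UMT directory (the directory name below is illustrative, and the make files are assumed to already point at the system's MPI compilers), the build sequence is:

# Unpack the distribution (top-level directory name assumed)
tar xf UMT_v1.3.tar
cd UMT
# Step 1
cd src; make
# Step 2
cd ../Teton; make SuOlsonTest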

How to Run

The 'run' directory contains scripts for the small, large, and extra-large problems as defined by the Trinity/NERSC-8 run rules, named run-<problem size>.sh. The small problem is sized to use approximately 144 GB of aggregate memory, the large problem is scaled to use 512 times more memory, and the extra-large problem is scaled up by a factor of 10,000. The scripts are set up for the Cray runtime, and for the small and large problems they correspond to how the baseline data was collected on Hopper.
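
As an illustration only, a small-problem run might be submitted as sketched below on a Cray system such as Hopper; the script name, batch interface, and SuOlsonTest arguments are assumptions, and the provided run-<problem size>.sh scripts are authoritative:

# Submit the small-problem script through a PBS-style batch system (as used on Hopper)
qsub run-small.sh
# Inside the script, the Cray aprun launcher starts the MPI job, roughly:
#   aprun -n 96 ./SuOlsonTest <input deck>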

To change the number of MPI ranks for a given problem size, the input file must also be modified. The number of MPI ranks must equal the product of the values specified by the sms(x,y,z) parameter. For example, the small problem was baselined on the NERSC Hopper platform using 96 MPI ranks. To recast the problem to use 32 MPI ranks, the following modifications can be made (a quick consistency check follows the example).

# The spatial decomposition is reduced by a factor of 3 in the X dimension
sms(2,4,4)
# The blk definition needs to match the sms() definition
blk(on,0:1,0:3,0:3)
# The tagged boundary faces also need to be changed in the X dimension
tag("xMinFaces",face,(0:0,0:4,0:4))
tag("xMaxFaces",face,(2:2,0:4,0:4))
tag("yMinFaces",face,(0:2,0:0,0:4))
tag("yMaxFaces",face,(0:2,4:4,0:4))
tag("zMinFaces",face,(0:2,0:4,0:0))
tag("zMaxFaces",face,(0:2,0:4,4:4))
# To keep the total number of zones the same, the zone count in the X
# dimension is increased by a factor of 3.
numzones(39,13,13)
# Hex subdivisions remain the same
sub(10%,0:0, 0:1,0:0,(7,0,0,0)) #7 hex
seed(10) 
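
A quick arithmetic check of the recast configuration, assuming the baseline small-problem input uses sms(6,4,4) with numzones(13,13,13) and that numzones() specifies zones per sms() block (both assumptions; the baseline deck is not reproduced here):

# MPI ranks must equal the product of the sms() values
echo $((2*4*4))                 # 32 ranks for sms(2,4,4)
# Total zones = number of blocks x zones per block, unchanged by the recast
echo $((6*4*4 * 13*13*13))      # 210912 zones with the assumed 96-rank baseline
echo $((2*4*4 * 39*13*13))      # 210912 zones with the 32-rank recast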

Reporting

Results are written to stdout. Report cumulativeWorkTime and cumulativeIterationCount.
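
For example, if stdout is captured to a file, the two reported values can be pulled out with grep; the log file name here is illustrative, and the exact label formatting in the output may differ:

grep -E "cumulativeWorkTime|cumulativeIterationCount" run-large.out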

Change Log

  • 03/29/2013
    • Initial version added to this web site 
  • 05/06/2013
    • README updated to describe how to change the number of MPI ranks for a given problem size; instruction to report cumulativeIterationCount added to the README
  • 05/20/2013
    • Sample output for a run of the 'large' problem on Hopper added; Benchmark time for reference SSP updated