
SMB

Description

There are two benchmarks included here.  

The msg_rate test measures the sustained MPI message rate using a communication pattern found in many real applications. For a complete explanation of the routine, refer to http://www.cs.sandia.gov/smb/msgrate.html.
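
The following is a minimal sketch of the message-rate idea, not the SMB implementation itself: two ranks exchange a burst of non-blocking messages per iteration (receives pre-posted, then all sends started) and report a sustained rate. The constants NMSGS, NITERS, and NBYTES are illustrative; the real benchmark exercises several patterns (single direction, pair-based, pre-post, all-start) across multiple peers, as described on the page linked above.

/* Sketch only: per-rank sustained message rate between two ranks. */
#include <mpi.h>
#include <stdio.h>

#define NMSGS  128     /* messages per iteration (illustrative) */
#define NITERS 1000    /* timing iterations (illustrative)      */
#define NBYTES 8       /* message size in bytes (illustrative)  */

int main(int argc, char **argv)
{
    int rank, size;
    char sbuf[NBYTES] = {0};
    char rbuf[NMSGS][NBYTES];        /* one buffer per pending receive */
    MPI_Request reqs[2 * NMSGS];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2)
        MPI_Abort(MPI_COMM_WORLD, 1);

    int peer = 1 - rank;
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int it = 0; it < NITERS; it++) {
        for (int i = 0; i < NMSGS; i++)          /* pre-post receives */
            MPI_Irecv(rbuf[i], NBYTES, MPI_CHAR, peer, i,
                      MPI_COMM_WORLD, &reqs[i]);
        for (int i = 0; i < NMSGS; i++)          /* start all sends   */
            MPI_Isend(sbuf, NBYTES, MPI_CHAR, peer, i,
                      MPI_COMM_WORLD, &reqs[NMSGS + i]);
        MPI_Waitall(2 * NMSGS, reqs, MPI_STATUSES_IGNORE);
    }
    double t = MPI_Wtime() - t0;

    if (rank == 0)
        printf("rate: %.2f msgs/s per rank\n",
               (double)NITERS * NMSGS / t);
    MPI_Finalize();
    return 0;
}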

The mpi_overhead test uses a post-work-wait method with MPI non-blocking send and receive calls to measure the user-level overhead of the respective MPI calls. For a complete explanation of the routine, refer to http://www.cs.sandia.gov/smb/overhead.html.
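
The following is a minimal sketch of the post-work-wait method, not the SMB implementation: post a non-blocking send, run a fixed compute loop that could overlap with the transfer, then wait. The overhead is the time the post and wait steal from the compute loop. WORK, NITERS, and NBYTES are illustrative; the real benchmark also measures a baseline message time and sweeps the amount of work, as described on the page linked above.

/* Sketch only: post-work-wait overhead for a non-blocking send. */
#include <mpi.h>
#include <stdio.h>

#define NITERS 1000
#define NBYTES 1024
#define WORK   200000L  /* dummy work per message (illustrative) */

static volatile double sink;          /* defeats dead-code elimination */
static void work(long n)
{
    double x = 0.0;
    for (long i = 0; i < n; i++)
        x += 1e-9 * (double)i;
    sink = x;
}

int main(int argc, char **argv)
{
    char buf[NBYTES] = {0};
    int rank, size;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2)
        MPI_Abort(MPI_COMM_WORLD, 1);

    /* work_t: the compute loop alone */
    double t0 = MPI_Wtime();
    for (int i = 0; i < NITERS; i++)
        work(WORK);
    double work_t = (MPI_Wtime() - t0) / NITERS;

    /* iter_t: post + work + wait */
    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (int i = 0; i < NITERS; i++) {
        if (rank == 0) {
            MPI_Isend(buf, NBYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &req);
            work(WORK);                          /* overlap window */
            MPI_Wait(&req, MPI_STATUS_IGNORE);
        } else {
            MPI_Recv(buf, NBYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }
    }
    double iter_t = (MPI_Wtime() - t0) / NITERS;

    if (rank == 0)
        printf("work_t %.3f us, iter_t %.3f us, overhead %.3f us\n",
               1e6 * work_t, 1e6 * iter_t, 1e6 * (iter_t - work_t));
    MPI_Finalize();
    return 0;
}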

Download

SMB tar file

How to Build

The source files are in the src directory, in smb_1.0-1/src/mpi_overhead and smb_1.0-1/src/msgrate. Edit the Makefile in each directory for your environment and type 'make'.
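
For example, on a system with MPI compiler wrappers, a typical session might look like the following (the Makefile variable names may differ in your copy; check each Makefile before overriding them):

$ cd smb_1.0-1/src/mpi_overhead
$ make CC=mpicc
$ cd ../msgrate
$ make CC=mpicc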

How to Run

Each executable tests a single message size per invocation. The "run_script" scripts are used to collect results for a series of message sizes; e.g., for mpi_overhead:

$ ./run_script
msgsize iterations  iter_t      work_t      overhead    base_t      avail(%)
0       1000        3.054       2.656       0.398       2.035       80.4
2       1000        4.916       4.443       0.473       2.180       78.3
4       1000        4.913       4.449       0.464       2.165       78.6
8       1000        4.935       4.451       0.484       2.176       77.8
16      1000        4.955       4.455       0.500       2.205       77.3
32      1000        4.939       4.450       0.489       2.211       77.9
64      1000        4.955       4.448       0.507       2.187       76.8
128     1000        5.044       4.432       0.612       2.249       72.8
256     1000        5.086       4.425       0.661       2.241       70.5
512     1000        5.180       4.432       0.748       2.340       68.0
1024    1000        5.393       4.428       0.965       2.616       63.1
2048    1000        5.855       4.432       1.423       3.335       57.3
4096    1000        10.134      7.941       2.193       4.815       54.5
8192    1000        5.536       4.437       1.099       3.178       65.4
16384   1000        5.686       4.438       1.248       3.733       66.6
32768   1000        9.107       7.930       1.177       5.486       78.5
65536   100         29.860      28.701      1.159       12.528      90.8
131072  100         57.631      56.422      1.209       23.783      94.9
262144  100         113.089     111.959     1.130       50.570      97.8
524288  100         224.860     223.010     1.850       139.149     98.7
1048576 100         447.218     445.030     2.189       295.571     99.3
2097152 100         891.130     888.932     2.198       593.447     99.6
4194304 100         1779.258    1777.520    1.738       1139.772    99.8
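
Reading the output (the rows above are consistent with these relationships, with times apparently in microseconds): overhead = iter_t - work_t, where iter_t is the time of one post-work-wait iteration and work_t is the time of the work interval alone; base_t is the baseline time for the message with no overlapped work; and avail = (1 - overhead/base_t) * 100, the fraction of the baseline message time still available to the application. For the first row: 3.054 - 2.656 = 0.398, and (1 - 0.398/2.035) * 100 = 80.4.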

and for the msg_rate test:

$ ./run_script
Message size is 8
job size:   192
npeers:     6
niters:     4096
nmsgs:      128
nbytes:     8
cache size: 8388608
ppn:        24
single direction: 86432.43
pair-based: 112546.58
  pre-post: 79560.22
 all-start: 82540.99
Message size is 1024
job size:   192
npeers:     6
niters:     4096
nmsgs:      128
nbytes:     1024
cache size: 8388608
ppn:        24
single direction: 18679.41
pair-based: 29312.91
  pre-post: 17854.26
 all-start: 17997.65 
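
The echoed parameters (job size, npeers, niters, nmsgs, nbytes, cache size, ppn) are the msgrate options in effect; the last four lines report the sustained message rate, in messages per second, for each communication pattern the benchmark exercises (single direction, pair-based, pre-post, and all-start). See the msgrate page linked above for precise definitions of the patterns.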

Required Runs

For the mpi_overhead test:

run_script shall be executed using a total of two nodes, with one MPI process per node. Data shall be collected for both send (default) and receive (--recv) modes, as follows:
$ ./run_script
<capture output>
$ ./run_script --recv
<capture output>

For the msg_rate test:

run_script shall be executed using a total of eight nodes. You may need to use the -c option to set the size of the last-level cache (e.g., L3) of your node's processor, e.g., -c 33554432 for a 32 MB cache. Aside from that, the msgrate option values defined in run_script shall be used.
$ ./run_script
<capture output>