NERSC: Powering Scientific Discovery Since 1974



Pynamic is a benchmark designed to test a system's ability to handle the Dynamic Linking and Loading requirements of Python-based scientific applications.

Pynamic is based on pyMPI, an MPI extension to the Python programming language.


NERSC-8 pynamic tar file

How to Build 

It is recommended to use the GNU C compiler to build Pynamic.

Before executing the configure script, it may be necessary to set a number of environment variables or paths. For example, if your system has a cc wrapper for the C compiler, you should set the following variable:

export CC=cc

Otherwise, the script will invoke the gcc compiler directly. If the wrapper points to the necessary MPI libraries, that should be sufficient; otherwise, you may need to point to them using environment variables or flags passed to the configure script (consult ./configure --help).

NB: In a few Python scripts, it may be necessary to manually fix references to the gcc compiler. It may also be necessary to set an LDFLAG environment variable to enable dynamic linking to shared object libraries (e.g., export LDFLAG="-dynamic").
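The pre-configure setup described above can be sketched as follows. The cc wrapper name and the -dynamic linker flag are system-dependent assumptions; adjust both for your machine:

```shell
# Sketch of the environment setup before running configure, assuming a
# Cray-style MPI-aware compiler wrapper named "cc" and a linker that
# accepts -dynamic for shared-object linking. Adjust for your system.
export CC=cc                 # use the system's C compiler wrapper
export LDFLAG="-dynamic"     # enable dynamic linking to shared libraries
./configure                  # see ./configure --help for MPI library flags
```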

In the file get-symtab-sizes, it may be necessary to change the following line

sharedlib=`ldd $1 | gawk '{ print $3}' | xargs ap`

to

sharedlib=`ldd $1 | gawk '{ print $3}'`

as not all systems have the ap command available. This change still allows the script to extract the data to be reported in the Excel spreadsheet.
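If you prefer to script this edit, a sed one-liner along the following lines should work; it assumes the line in get-symtab-sizes matches the exact form shown above:

```shell
# Drop the "| xargs ap" stage from the sharedlib pipeline in
# get-symtab-sizes; a backup copy is kept as get-symtab-sizes.bak.
sed -i.bak 's/ | xargs ap`/`/' get-symtab-sizes
```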

The configuration for this benchmark is as follows. In the pynamic-pyMPI-2.6a1/ directory, execute the following command:

./ 495 1850 -e -u 215 1850 -n 100

At the end of the build and configure, the following data will be reported.

Size of aggregate total of shared libraries: XXX
Size of aggregate texts of shared libraries: XXX
Size of aggregate data of shared libraries: XXX
Size of aggregate debug sections of shared libraries: XXX
Size of aggregate symbol tables of shared libraries: XXX
Size of aggregate string table size of shared libraries: XXX

This must be reported in the Excel spreadsheet provided for this procurement.

How to Run 

In order to run the benchmark on a batch system, launch the pynamic-pyMPI executable from the above directory, i.e.

time mpirun -n NMPI ./pynamic-pyMPI `date +"%s"`

where NMPI is the number of MPI tasks. Passing the output of the date command as an argument to the Python script allows the necessary timings to be reported.

In the output file for any reported runs, you can find the following execution time data:

Startup time
Module import time
Module visit time

Also, please report the execution time (i.e., the result of time mpirun).
An example job launching script (runit) is provided.
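As a rough sketch only (the supplied runit script remains the reference), a minimal Slurm-style batch script could drive a run as follows; the node count, tasks per node, and wall time are placeholder assumptions:

```shell
#!/bin/bash
#SBATCH --nodes=4             # placeholder: set to your run's node count
#SBATCH --ntasks-per-node=1   # 1 or 8 tasks per node for the required runs
#SBATCH --time=00:30:00       # placeholder wall time

# Launch from the pynamic-pyMPI-2.6a1/ build directory, passing the
# current epoch time so the script can report the required timings.
cd "$SLURM_SUBMIT_DIR"
time mpirun -n "$SLURM_NTASKS" ./pynamic-pyMPI "$(date +%s)"
```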

For the benchmarks below, please copy these values to the supplied Excel spreadsheet.

Required Runs 

Please provide execution time data (see above) for the following runs:

(1) The average and standard deviation of three runs, with one (1) MPI task per node on all system computational nodes.
(2) The average and standard deviation of three runs, with eight (8) MPI tasks per node on all system computational nodes.
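The average and sample standard deviation of the three runs can be computed with a small awk snippet; the three times below are placeholder values to be replaced with your measured results (in seconds):

```shell
# Compute the mean and sample standard deviation of three execution
# times. Replace the example values with your measured times.
echo "12.1 11.8 12.4" | awk '{
    n = NF
    for (i = 1; i <= n; i++) sum += $i
    mean = sum / n
    for (i = 1; i <= n; i++) ss += ($i - mean)^2
    printf "mean=%.2f stddev=%.2f\n", mean, sqrt(ss / (n - 1))
}'
```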


A successful serial run of Pynamic (i.e., no errors) is sufficient verification of functionality. 


The code was written by Gregory L. Lee, Dong H. Ahn, Bronis R. de Supinski, John Gyllenhaal, and Patrick Miller of Lawrence Livermore National Laboratory.  For more information see the Pynamic web site.