
MOLPRO

Description

MOLPRO is a complete system of ab initio programs for molecular electronic structure calculations, written and maintained by H.-J. Werner and P. J. Knowles, with contributions from several other authors. As distinct from other commonly used quantum chemistry packages, the emphasis is on highly accurate computations, with extensive treatment of the electron correlation problem through the multiconfiguration-reference CI, coupled cluster, and associated methods. Using recently developed integral-direct local electron correlation methods, which significantly reduce the increase of the computational cost with molecular size, accurate ab initio calculations can be performed for much larger molecules than with most other programs.

The heart of the program consists of the multiconfiguration SCF, multireference CI, and coupled-cluster routines, and these are accompanied by a full set of supporting features.

Accessing MOLPRO

NERSC uses modules to manage access to software. To use the default version of MOLPRO, type:

% module load molpro

To use a specific version, e.g., version 2010.1.26 on Edison, use:

% module load molpro/2010.1.26

To see all the available versions, use:

% module avail molpro

To see where the MOLPRO executables reside (the bin directory) and what environment variables the modulefile defines, use:

% module show molpro

e.g., on Edison,

% module show molpro
-------------------------------------------------------------------
/usr/common/usg/Modules/modulefiles/molpro/2012.1.21:
module-whatis a rather complete system of ab initio programs for molecular electronic structure calculations
prepend-path PATH /usr/common/usg/molpro/2012.1.21/bin
setenv TMPDIR /scratch1/scratchdirs/JOE_USER
setenv SHMEM_SWAP_BACKOFF 150
-------------------------------------------------------------------

Using MOLPRO on Cori 

You must use the batch system to run MOLPRO on the compute nodes. You can do this interactively or you can use a script. Examples of both are below.

Note that the file molpros is a script to run a serial (one-processor) version of the code and molprop is a script to run a parallel version. The script molpro is linked to molprop on Cori. 
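For example, a serial run inside a batch session would look like the following (the input file name is just a placeholder):

% molpros your_molpro_inputfile_name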

To run a parallel job interactively, use the salloc -N # -p debug -t 30:00 command to request an interactive batch session, where "#" is the number of nodes you want. Here is an example, requesting 1 node for 30 minutes to run jobs interactively:

% salloc -N 1 -p debug -t 30:00

When this command is successful a new batch session will start in the window where you typed the command. Then, issue commands similar to the following:

%  module load molpro 
% molprop -n 32 your_molpro_inputfile_name

Note that there are 32 cores (or 64 logical cores with Hyperthreading) per node on Cori. You can run up to 32-way (or 64-way with Hyperthreading) parallel MOLPRO jobs on a single node. Note that when the time limit (30 minutes is the maximum for the debug partition) is reached, the job will end and the session will exit.
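For example, to use all 64 logical cores of a Cori node with Hyperthreading (the input file name is a placeholder, and this assumes the molprop launcher accepts the logical-core count directly):

% molprop -n 64 your_molpro_inputfile_name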

To run a batch job on Cori, use a job script similar to this one:

#!/bin/bash -l
#SBATCH -J test_molpro
#SBATCH -p debug
#SBATCH -N 1
#SBATCH -t 00:30:00
#SBATCH -o test_molpro.o%j
module load molpro
molprop -n 32 h2o_opt_dflmp2.test

Put those commands or similar ones in a file, say, run.slurm and then use the sbatch command to submit the job:

% sbatch run.slurm

If your job requires more memory than is available per core on Cori (4.0 GB), you can run with a reduced number of cores per node:

#!/bin/bash -l
#SBATCH -J test_molpro
#SBATCH -p regular
#SBATCH -N 1
#SBATCH -t 06:00:00
#SBATCH -o test_molpro.o%j
module load molpro
molprop -n 8 pentane_dflccsd.test

In this example, the job runs with only 8 cores on the node (out of the 32 cores available on a Cori node), so each task can use up to four times as much memory (4 x 4.0 GB = 16 GB on Cori). Note that your repo will still be charged for the full node (all 32 cores) even though you use only 8 of them.

If you run small parallel jobs using fewer than 32 cores, you can use the shared partition, where jobs are charged only for the cores actually used instead of the full node (all 32 cores). The shared partition also allows a much higher submit limit than the regular partition. Here is a sample job script:

#!/bin/bash -l
#SBATCH -J test_molpro
#SBATCH -p shared
#SBATCH -n 8
#SBATCH -t 06:00:00
#SBATCH -o test_molpro.o%j
module load molpro
molprop -n 8 pentane_dflccsd.test

You can also run short jobs interactively using the shared partition. Note that the shared partition has a longer wall limit. For example, the following command requests 8 cores in the shared partition for 1 hour:

% salloc -n 8 -p shared -t 1:00:00

When the batch shell prompt appears, do:

%  module load molpro 
% molprop -n 8 your_molpro_inputfile_name

Using MOLPRO on Edison 

You must use the batch system to run MOLPRO on the compute nodes. You can do this interactively or you can use a script. Examples of both are below.

Note that the file molpros is a script to run a serial (one-processor) version of the code and molprop is a script to run a parallel version.

To run a parallel job interactively, use the salloc -N # -p debug -t 30:00 command to request an interactive batch session, where "#" is the number of nodes you want. Here is an example, requesting 1 node for 30 minutes to run jobs interactively:

% salloc -N 1 -p debug -t 30:00

When this command is successful a new batch session will start in the window where you typed the command. Then, issue commands similar to the following (only the "module load ..." command may need to be different):

%  module load molpro 
% molprop -n 24 your_molpro_inputfile_name 

Note that when the time limit (30 minutes is the maximum) is reached, the job will end and the session will exit.

To run a batch job on Edison, use a job script similar to this one: 

#!/bin/bash -l
#SBATCH -J test_molpro
#SBATCH -p debug
#SBATCH -N 1
#SBATCH -t 00:30:00
#SBATCH -o test_molpro.o%j
module load molpro
# on Edison there are 24 cores per node
molprop -n 24 h2o_opt_dflmp2.test

Put those commands or similar ones in a file, say, run.slurm and then use the sbatch command to submit the job:

% sbatch run.slurm

If your job requires more memory than is available per core on Edison (about 2.67 GB), you can run with a reduced number of cores per node, although your repo will be charged for the full number of nodes that you use. Here is an example job script using 8 cores on 1 Edison node:

#!/bin/bash -l
#SBATCH -J test_molpro
#SBATCH -p debug
#SBATCH -N 1
#SBATCH -t 00:30:00
#SBATCH -o test_molpro.o%j
module load molpro
molprop -n 8 pentane_dflccsd.test

In this example, the job runs with only 8 cores on the node (out of the 24 cores available on an Edison node), so each task can use up to three times as much memory (3 x 2.67 GB = 8.0 GB on Edison).

The shared partition is not available on Edison.

Restart Capabilities

By default, the job is run so that all MOLPRO files are generated in $TMPDIR. This is fine if the calculation finishes in one job, but does not provide for restarts. This section describes techniques which can be used to restart calculations.
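To see where $TMPDIR points for your account, you can check it after loading the module (the path differs per user and per system):

% module load molpro
% echo $TMPDIR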

MOLPRO has three main files that contain information usable for a restart: file 1 is the main file, holding the basis set, geometry, and the one- and two-electron integrals; file 2 is the dump file, used to store the wavefunction information, i.e., orbitals, CI coefficients, and density matrices; file 3 is an auxiliary file that can be used in addition to file 2 for restart purposes. File 1 is usually too large to be saved in permanent storage.

By putting the following lines in the input file, the wavefunction file (file number 2) can be saved as file "h2.wfu", and the auxiliary file (file number 3) saved as "h2.aux". By default, the files are saved to the subdirectory "wfu" of your home directory if the job runs out of time.

***,H2
file,2,h2.wfu,new
file,3,h2.aux,new
basis=vdz;
geometry={angstrom;h1;h2,h1,.74}
optscf;

The directory where the files are saved may be changed using the "-W" command line option.
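For example, to direct the restart files to a directory of your choosing (the directory and input file names here are only illustrations, and this assumes the molprop wrapper passes the option through to MOLPRO):

% molprop -n 32 -W $HOME/molpro_wfu h2.inp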

These files enable some restarts to be performed, as they provide snapshots of the calculation as each module finishes. Unfortunately, restarting an incomplete SCF or CI calculation is not possible. To use the files in a restart, remove the "new" qualifier from the "file" command:

***,H2
file,2,h2.wfu
file,3,h2.aux
basis=vdz;
geometry={angstrom;h1;h2,h1,.74}
optscf;

Documentation

MOLPRO User's manual.

On Carver you can find the full manual and a quickstart version in MOLPRO's doc subdirectory.

Availability

Package   Platform   Category                 Version     Module             Install Date   Date Made Default
molpro    cori       applications/chemistry   2012.1.21   molpro/2012.1.21   2015-12-21     2015-12-21
molpro    cori       applications/chemistry   2015.1      molpro/2015.1      2016-03-28     2016-06-30
molpro    edison     applications/chemistry   2010.1.26   molpro/2010.1.26   2013-02-07
molpro    edison     applications/chemistry   2010.1.40   molpro/2010.1.40   2014-05-16
molpro    edison     applications/chemistry   2012.1.21   molpro/2012.1.21   2015-02-26     2016-02-01
molpro    edison     applications/chemistry   2015.1      molpro/2015.1      2016-03-28