Amber Molecular Dynamics Package

There are several versions of the Amber MD package available on Gemini. The older Amber 12 is still installed but should generally not be used; Amber 14, the current version, is preferred. In addition to the standard Amber build (including AmberTools), there are versions of the main MD programs (sander and pmemd) built for MPI, CUDA, and CUDA+MPI.

We also have a specially built version of Amber that implements the Rotatable Accelerated Molecular Dynamics Dual Boost (RaMD-db) technique developed by Donald Hamelberg at Georgia State University. This version supports only pmemd, in its MPI, CUDA, and CUDA+MPI forms.

Loading Amber 14

The standard version of Amber 14 can be loaded from the module file using the command:

module load amber/amber14

This will also load the other modules needed to run Amber (i.e. OpenMPI and CUDA).
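As a quick sanity check (optional), you can confirm that the module and its dependencies loaded and that the Amber executables are on your PATH:

```shell
module load amber/amber14
module list                      # should show amber/amber14 plus the OpenMPI and CUDA modules it pulls in
which pmemd.MPI pmemd.cuda.MPI   # confirm the MPI and CUDA executables are on your PATH
```

The `module` command is only available on the cluster itself, so run this in a login-node shell.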

Additionally, the custom RaMD-db implementation can be loaded using the command:

module load amber/amber14_ramddb

The complete manual for Amber 14 and AmberTools is available on the right.

Standard Amber Runs

The detailed usage of Amber is beyond the scope of this document. However, some general guidance for scheduling runs and using the RaMD-db features is provided below.

Running Amber on Gemini should be done via the job scheduler. From the command line, you would issue a command similar to the following (after loading the appropriate Amber 14 module):

32 CPU Run
sbatch -n 32 --wrap="mpirun pmemd.MPI -O -i md.in -o output.out -p md.prmtop -c md.inpcrd -r md.rst -x md.mdcrd"

1 GPU Run
sbatch -n 1 --gres=gpu:1 --wrap="mpirun pmemd.cuda.MPI -O -i md.in -o output.out -p md.prmtop -c md.inpcrd -r md.rst -x md.mdcrd"

4 GPU Run
sbatch -n 4 -N 1 --gres=gpu:4 --wrap="mpirun pmemd.cuda.MPI -O -i md.in -o output.out -p md.prmtop -c md.inpcrd -r md.rst -x md.mdcrd"

Important: For standard GPU runs you must specify a GPU reservation using the --gres=gpu:N flag (where N is the number of GPUs and equals the number of tasks, n). You must also constrain those runs to a single node using -N 1.
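As a further worked instance of that rule (with hypothetical input filenames), a 2-GPU run sets the task count, the GPU reservation, and the single-node constraint together:

```shell
# 2 tasks (-n 2) matched by 2 GPUs (--gres=gpu:2), confined to one node (-N 1)
sbatch -n 2 -N 1 --gres=gpu:2 --wrap="mpirun pmemd.cuda.MPI -O -i md.in -o output.out -p md.prmtop -c md.inpcrd -r md.rst -x md.mdcrd"
```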

You can also place these commands in a batch script and use that for submission:

Sample Batch Script (4 GPUs)
#!/bin/bash
#SBATCH --job-name=my_job
#SBATCH --gres=gpu:4
#SBATCH -n 4
#SBATCH -N 1

module load amber/amber14

mpirun pmemd.cuda.MPI -O -i md.in -o output.out -p md.prmtop -c md.inpcrd -r md.rst -x md.mdcrd

exit 0

After making the script executable (e.g. with chmod +x), issue the following command:

4 GPU Run
sbatch <script_name>

RaMD-db Runs

RaMD-db runs are similar to standard aMD runs (using the iamd flag and its associated options). In the case of RaMD-db you select the special case by setting iamd equal to 4 in your input file:

RaMD-db Input File
&cntrl
 imin=0, irest=1, ntx=5,
 ntpr=1000, ntwx=5000, ntwr=50000,
 cut=9.0,
 iwrap=1, nstlim=50000000, dt=0.002,
 ntt=1, temp0=300.00, tautp=10.0, ig=-1,
 ntb=1, ntf=2, ntc=2, ntp=0,
 iamd=4,
 alphaD=247.8, EthreshD=4946.1530,
 alphaP=6956.8, EthreshP=-120311.5793,
/

The rest of the process for running the job is the same as for standard Amber runs.

Important: The calculation of aMD parameters is described in the Amber manual. For RaMD-db runs, however, the calculations differ slightly. The script linked on the left can be used to calculate reasonable values for the boost parameters.
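For orientation only, the standard aMD dual-boost heuristic from the Amber manual and tutorials can be sketched as below. This is NOT the RaMD-db script referenced above (whose calculations differ slightly); the function name, the example energies, and the residue/atom counts are all hypothetical:

```shell
# Standard aMD dual-boost heuristic (Amber manual/tutorials), sketched in awk.
# Inputs: average dihedral energy, average total potential energy (both kcal/mol),
# number of solute residues, total number of atoms.
amd_params() {
  awk -v d="$1" -v t="$2" -v r="$3" -v a="$4" 'BEGIN {
    ethreshd = d + 4.0 * r        # dihedral threshold: avg + 4 kcal/mol per residue
    alphad   = (4.0 * r) / 5.0    # dihedral boost factor: one fifth of that offset
    ethreshp = t + 0.16 * a       # total-energy threshold: avg + 0.16 kcal/mol per atom
    alphap   = 0.16 * a           # total-energy boost factor
    printf "EthreshD=%.4f, alphaD=%.4f,\nEthreshP=%.4f, alphaP=%.4f\n",
           ethreshd, alphad, ethreshp, alphap
  }'
}

# Example with made-up averages for a 300-residue, 30000-atom system:
amd_params 1000.0 -120000.0 300 30000
```

The four printed values drop directly into the alphaD/EthreshD/alphaP/EthreshP fields of the &cntrl namelist shown above; for RaMD-db production runs, use the linked script instead.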