NAMD
NAMD is a parallel, object-oriented molecular dynamics code designed for high-performance simulation of large biomolecular systems. It is file-compatible with AMBER, CHARMM, and X-PLOR.
NAMD is licensed software from the University of Illinois. Please read and follow the terms of the license agreement.
Sample scripts for NAMD use are available in directory /opt/packages/examples/namd on Bridges-2.
Documentation
- NAMD web site: http://www.ks.uiuc.edu/Research/namd/
- Example scripts: For best performance, see the optimized example scripts for NAMD on Bridges-2 in directory /opt/packages/examples/namd.
Citations
Any reports or published results obtained with NAMD must acknowledge its use by the appropriate citation as follows:
“NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign.”
Any published work which utilizes NAMD shall include the following reference:
“James C. Phillips, David J. Hardy, Julio D. C. Maia, John E. Stone, Joao V. Ribeiro, Rafael C. Bernardi, Ronak Buch, Giacomo Fiorin, Jerome Henin, Wei Jiang, Ryan McGreevy, Marcelo C. R. Melo, Brian K. Radak, Robert D. Skeel, Abhishek Singharoy, Yi Wang, Benoit Roux, Aleksei Aksimentiev, Zaida Luthey-Schulten, Laxmikant V. Kale, Klaus Schulten, Christophe Chipot, and Emad Tajkhorshid. Scalable molecular dynamics on CPU and GPU architectures with NAMD. Journal of Chemical Physics, 153:044130, 2020. doi:10.1063/5.0014475”
Electronic documents will include a direct link to the official NAMD page at http://www.ks.uiuc.edu/Research/namd/
Usage on Bridges-2
To see which versions of NAMD are available, which one is the default if more than one is installed, and some usage help, type:
module spider namd
To use NAMD, include a command like this in your batch script or interactive session to load the NAMD module (note that 'module load' is case-sensitive):
module load namd
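If more than one version is installed, you can load a specific build by giving the full module name. For example, to load the GPU build used in the example scripts below:

module load namd/2.13-gpu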
Running NAMD in the GPU partition
Here is an example script to run NAMD in the GPU partition. This script is also available in directory /opt/packages/examples/namd/GPU on Bridges-2.
#!/bin/bash
#SBATCH -N 1
#SBATCH --ntasks-per-node=40
#SBATCH -p GPU
#SBATCH -t 10:00
#SBATCH --gres=gpu:v100:8

# set echo
set -x

# Load the needed modules
module load namd/2.13-gpu

cd $SLURM_SUBMIT_DIR
echo $SLURM_NTASKS

$BINDIR/namd2 +setcpuaffinity +p 40 +devices 0,1,2,3,4,5,6,7 /opt/packages/examples/namd/stmv/stmv.namd
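In this script, +p matches the 40 tasks requested with --ntasks-per-node, and +devices lists the indices of the 8 GPUs requested with --gres. As a minimal sketch, you could avoid hard-coding the core count by reusing the SLURM_NTASKS variable the script already echoes; your_input.namd below is a placeholder for your own input file:

$BINDIR/namd2 +setcpuaffinity +p $SLURM_NTASKS +devices 0,1,2,3,4,5,6,7 your_input.namd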
Running NAMD in the GPU-shared partition
When running a job in the GPU-shared partition, you must pass the numbers of the CPUs assigned to your job to NAMD; if you do not, the job will fail. See the example job below for the correct commands. This script is also available in directory /opt/packages/examples/namd/GPU-shared on Bridges-2.
#!/bin/bash
#SBATCH -N 1
#SBATCH --ntasks-per-node=20
#SBATCH -p GPU-shared
#SBATCH -t 05:00
#SBATCH --gres=gpu:v100:4

# set echo
set -x

# This example is for 4 GPUs in the GPU-shared partition. To use a different
# number of GPUs (1 to 4 in the GPU-shared partition), modify the 4 above
# (v100:4) and set ntasks-per-node to 5x the number of GPUs requested.

# Load the needed modules
module load namd/2.13-gpu

cd $SLURM_SUBMIT_DIR
echo $SLURM_NTASKS

# In the line below, change only the input file path
# (/opt/packages/examples/namd/stmv/stmv.namd) to point to your input file.
# Leave everything else as is, including the line break after "{printf \".
# numactl passes the CPU numbers assigned to your job to NAMD so that
# +setcpuaffinity works.
$BINDIR/namd2 +setcpuaffinity `numactl --show | awk '/^physcpubind/ {printf \
"+p%d +pemap %d",(NF-1),$2; for(i=3;i<=NF;++i){printf ",%d",$i}}'` /opt/packages/examples/namd/stmv/stmv.namd
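As a sketch of what the embedded awk program does: numactl --show prints a physcpubind line listing the CPU numbers bound to your job, for example (for a hypothetical 1-GPU job assigned CPUs 0 through 4):

physcpubind: 0 1 2 3 4

The awk program converts that line into the matching NAMD options

+p5 +pemap 0,1,2,3,4

so namd2 starts one thread per assigned CPU and pins each thread to its own core.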