Simulating CHARMM-GUI systems on HPCC
Alex Dickson, Michigan State University
BMB 961, Machine Learning for Molecular Dynamics
This notebook assumes that you have already used CHARMM-GUI to set up your system and that you have selected openmm in the Simulation Script section.
This notebook contains a set of commands that you should execute in a command shell, such as Terminal for macOS or WSL for Windows.
This notebook will use one of the development nodes to run your simulations. These are only meant for short-duration jobs (e.g. less than 2 hours). If your job lasts longer than this, it might be killed by a system administrator.
For longer simulations you will need to write a SLURM script to submit your job to the queue. An example of this can be found here: https://wiki.hpcc.msu.edu/display/ITH/Job+Script+and+Job+Submission.
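For reference, a minimal sketch of such a SLURM script is shown below (save it as, e.g., run_charmm.sb; the resource requests, file name, and directory path are illustrative assumptions, and the path assumes you have already completed the unpacking steps in the Instructions below, so check the wiki page above for site-specific settings):
#!/bin/bash
#SBATCH --job-name=charmm-gui
#SBATCH --time=24:00:00
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --gres=gpu:1
#SBATCH --mem=8G
module load CUDA/10.0.130
cd ~/bmb961_sims/charmm-gui-4908284510/openmm
./README > charmm-gui.out
Submit it with sbatch run_charmm.sb and check its status with squeue -u [USERNAME].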
Instructions:
Navigate to the directory containing your charmm-gui.tgz tarball, then upload it to HPCC:
cd ~/Downloads
scp charmm-gui.tgz [USERNAME]@hpcc.msu.edu:~
Connect to HPCC and ssh to a development node with GPU support:
ssh [USERNAME]@hpcc.msu.edu
ssh dev-intel16-k80
(If you haven’t already) Install Conda in the home directory of your HPCC account:
wget https://repo.anaconda.com/miniconda/Miniconda3-py37_4.10.3-Linux-x86_64.sh
bash Miniconda3-py37_4.10.3-Linux-x86_64.sh -bfp ~
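Depending on your shell configuration, the conda command may not be found after the installer finishes. If that happens, one possible fix (assuming a bash shell, since the installer above used your home directory as its prefix) is:
~/bin/conda init bash
source ~/.bashrc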
(If you haven’t already) Install openmm in your conda distribution.
You could install this to a specific environment (see the sketch after the install command below), but here we are installing in the base environment. Note that the install step could take a while.
conda config --add channels conda-forge
conda install python=3.7 cudatoolkit=10.0 git jupyterlab numpy pandas scipy matplotlib ipympl rdkit openbabel openmm mdtraj pymbar pdbfixer parmed openff-toolkit openmoltools openmmforcefields
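If you would rather keep these packages out of your base environment, a minimal sketch of using a dedicated environment instead (the environment name bmb961 is just an example) is:
conda create -n bmb961 python=3.7
conda activate bmb961
After activating it, run the same conda install command shown above inside that environment.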
Load the CUDA module:
module load CUDA/10.0.130
Look at the current jobs being run on the GPUs.
At the bottom, this will list the GPUs that are currently occupied. By default, jobs will run on GPU 0. If GPU 0 is busy, you can change which device you will use in the next step.
nvidia-smi
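If you just want a compact summary of which GPUs are busy, a query such as the following should also work (it prints each GPU index, its utilization, and the memory in use):
nvidia-smi --query-gpu=index,utilization.gpu,memory.used --format=csv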
(Optional) Change the default GPU index.
Choose X to be a GPU index that is unoccupied (e.g. 5):
export CUDA_VISIBLE_DEVICES=X
Test your openmm installation:
You should see 4 platforms available, including CUDA.
python -m openmm.testInstallation
Make a work directory, then move and unpack the charmm-gui.tgz file.
mkdir bmb961_sims
mv charmm-gui.tgz bmb961_sims
cd bmb961_sims
tar xzf charmm-gui.tgz
List the files in this directory with the ls command. You should see a folder named charmm-gui-XXXXXXXXXX, where XXXXXXXXXX is your job ID.
Change to the openmm directory.
Note: you will need to change the job ID below from “4908284510” to whatever yours is.
cd charmm-gui-4908284510/openmm
Make the README file executable.
The README file is a csh script that contains all of the commands for heating up our system and running some preliminary trajectories.
chmod +x README
Run the README!
./README > charmm-gui.out &
Note that this will take a while. You can get updates by listing the files in your directory with ls -ltr. The first output file is step5_equilibration.out in the openmm directory. (Note that the step indices (e.g. 5, 6) might differ depending on which CHARMM-GUI tool you used for setup.)
A helpful way to view this file is:
tail -f step5_equilibration.out
The last column gives you estimates for how long this will take.
After the equilibration there will be 10 trajectory segments (step6_*). The trajectories are stored in .dcd files. You can copy these back to your local machine and analyze them after they are generated.
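For example, to copy all of the production trajectories back in one step, you could run something like the following from your local machine (adjusting the job ID and directory to match yours):
scp "[USERNAME]@hpcc.msu.edu:~/bmb961_sims/charmm-gui-4908284510/openmm/step6_*.dcd" .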
You can also watch your job run by viewing nvidia-smi again, which will confirm that you have targeted the correct GPU.
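For a live view that refreshes every few seconds, you can wrap it in watch (press Ctrl-C to exit):
watch -n 5 nvidia-smi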
To run longer (or shorter) simulations, you could modify the nstep parameter in the step5_production.inp file. This can be done with a text editor such as Emacs, Vim, or Nano.
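If you prefer to make the change from the command line, and assuming the parameter appears in the input file as a line like nstep = 500000, a sed one-liner such as the following should work (the value 1000000 here is just an example); use grep afterwards to confirm the change:
sed -i 's/^nstep *=.*/nstep = 1000000/' step5_production.inp
grep nstep step5_production.inp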