LAMMPS
LAMMPS is available on the following servers.
Large-Scale Parallel Computing Server
Available executables
Version | Path |
---|---|
31 Mar 17 | /work/app/LAMMPS/current/src/lmp_intel_omp |
22 Aug 18 | /work/app/LAMMPS/lammps-22Aug18/src/lmp_intel_omp |
12 Dec 18 | /work/app/LAMMPS/lammps-12Dec18/src/lmp_intel_omp |
5 Jun 19 | /work/app/LAMMPS/lammps-5Jun19/src/lmp_intel_omp |
7 Aug 19 | /work/app/LAMMPS/lammps-7Aug19/src/lmp_intel_omp |
3 Mar 20 | /work/app/LAMMPS/lammps-3Mar2020/src/lmp_intel_omp |
29 Oct 20 | /work/app/LAMMPS/lammps-29Oct20/src/lmp_intel_omp |
29 Sep 21 | /work/app/LAMMPS/lammps-29Sep21/src/lmp_intel_omp |
23 Jun 22 | /work/app/LAMMPS/lammps-23Jun22/src/lmp_intel_omp |
2 Aug 23 | /work/app/LAMMPS/lammps-2Aug23/src/lmp_intel_omp |
Create a job script in advance, using the following template.
```sh
#!/bin/sh
#PBS -l select=nodes
#PBS -q queue
#PBS -N jobname

DIRNAME=`basename $PBS_O_WORKDIR`
WORKDIR=/work/$USER/$PBS_JOBID
mkdir -p $WORKDIR
cp -raf $PBS_O_WORKDIR $WORKDIR
cd $WORKDIR/$DIRNAME

aprun [ -n MPI total tasks ] [ -N MPI tasks per node ] -j 1 /work/app/LAMMPS/current/src/lmp_intel_omp < input file > output file 2> error file

cd
if cp -raf $WORKDIR/$DIRNAME $PBS_O_WORKDIR/.. ; then rm -rf $WORKDIR; fi
```
(Example)
```sh
#!/bin/sh
#PBS -l select=1
#PBS -q P_016
#PBS -N lammps

DIRNAME=`basename $PBS_O_WORKDIR`
WORKDIR=/work/$USER/$PBS_JOBID
mkdir -p $WORKDIR
cp -raf $PBS_O_WORKDIR $WORKDIR
cd $WORKDIR/$DIRNAME

aprun -n 36 -N 36 -j 1 /work/app/LAMMPS/current/src/lmp_intel_omp < in.ij > lammps.out 2> lammps.err

cd
if cp -raf $WORKDIR/$DIRNAME $PBS_O_WORKDIR/.. ; then rm -rf $WORKDIR; fi
```
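Once the script is saved, submit it to the batch system. A minimal sketch, assuming the script above is saved as run_lammps.sh (a hypothetical file name) and that the site uses the standard PBS submission commands:

```sh
qsub run_lammps.sh   # submit the job; prints the job ID on success
qstat -u $USER       # check the job state (Q = queued, R = running)
```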
Accelerator Server
Available executables
*Note: When using 7 Aug 19 or 3 Mar 20 on the accelerator server, switch the module to "CUDA 10.1.243".
*Note: When using 29 Oct 20 or 29 Sep 21 on the accelerator server, switch the module to "CUDA 10.2.89".
*Note: When using 23 Jun 22 on the accelerator server, switch the modules to "CUDA 10.2.89", "intel 21.5.0", and "gcc".
*Note: When using 2 Aug 23 on the accelerator server, switch the modules to "CUDA 11.6.2", "intel 22.3.1", and "gcc".
Version | Path | Queue |
---|---|---|
31 Mar 17 | /usr/local/app/LAMMPS/current/src/lmp_gpu | A_004 CA_001 CA_001g |
12 Dec 18 | /usr/local/app/LAMMPS/lammps-12Dec18/src/lmp_gpu | A_004 CA_001 CA_001g |
5 Jun 19 | /usr/local/app/LAMMPS/lammps-5Jun19/src/lmp_gpu | A_004 CA_001 CA_001g |
5 Jun 19 -DFFT_SINGLE OFF | /usr/local/app/LAMMPS/lammps-5Jun19_wo_single/src/lmp_gpu | A_004 CA_001 CA_001g |
7 Aug 19 | /usr/local/app/LAMMPS/lammps-7Aug19/src/lmp_gpu *Switch the module to “CUDA 10.1.243”. | A_004 CA_001 CA_001g |
3 Mar 20 | /usr/local/app/LAMMPS/lammps-3Mar20/src/lmp_gpu *Switch the module to “CUDA 10.1.243”. | A_004 CA_001 CA_001g |
29 Oct 20 | /usr/local/app/LAMMPS/lammps-29Oct20/src/lmp_gpu *Switch the module to “CUDA 10.2.89”. | A_004 CA_001 CA_001g |
29 Sep 21 | /usr/local/app/LAMMPS/lammps-29Sep21/src/lmp_gpu *Switch the module to “CUDA 10.2.89”. | A_004 CA_001 CA_001g |
23 Jun 22 | /usr/local/app/LAMMPS/lammps-23Jun22/src/lmp_gpu *Switch the module to “CUDA 10.2.89”, “intel 21.5.0”, and “gcc”. | A_004 CA_001 CA_001g |
2 Aug 23 | /usr/local/app/LAMMPS/lammps-2Aug23/src/lmp_gpu *Switch the module to “CUDA 11.6.2”, “intel 22.3.1”, and “gcc”. | A_004 CA_001 CA_001g |
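The module switches called for in the notes above go into the job script before the mpirun line; the 7 Aug 19 example below shows the exact invocation for the CUDA case. A sketch for the 2 Aug 23 build: the currently loaded cudatoolkit/9.0.176 follows the 7 Aug 19 example, while the intel and gcc module names are assumptions to verify with `module avail`:

```sh
# Sketch: module setup for the 2 Aug 23 GPU build (place before the mpirun line).
module switch cudatoolkit/9.0.176 cudatoolkit/11.6.2   # "CUDA 11.6.2"
module switch intel intel/22.3.1                       # assumed module name for "intel 22.3.1"
module load gcc                                        # load gcc alongside the Intel toolchain
```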
Create a job script in advance, using the following template.
```sh
#!/bin/sh
#PBS -l select=nodes
#PBS -q queue
#PBS -N jobname

DIRNAME=`basename $PBS_O_WORKDIR`
WORKDIR=/work/$USER/$PBS_JOBID
mkdir -p $WORKDIR
cp -raf $PBS_O_WORKDIR $WORKDIR
cd $WORKDIR/$DIRNAME

mpirun [ -np MPI total tasks ] [ -ppn MPI tasks per node ] -hostfile $PBS_NODEFILE /usr/local/app/LAMMPS/current/src/lmp_gpu -sf gpu -pk gpu GPUs per node < input file > output file 2> error file

cd
if cp -raf $WORKDIR/$DIRNAME $PBS_O_WORKDIR/.. ; then rm -rf $WORKDIR; fi
```
(Example) Accelerator Server
```sh
#!/bin/sh
#PBS -l select=1
#PBS -q A_004
#PBS -N lammps

DIRNAME=`basename $PBS_O_WORKDIR`
WORKDIR=/work/$USER/$PBS_JOBID
mkdir -p $WORKDIR
cp -raf $PBS_O_WORKDIR $WORKDIR
cd $WORKDIR/$DIRNAME

mpirun -np 30 -ppn 30 -hostfile $PBS_NODEFILE /usr/local/app/LAMMPS/current/src/lmp_gpu -sf gpu -pk gpu 10 < in.ij > lammps.out 2> lammps.err

cd
if cp -raf $WORKDIR/$DIRNAME $PBS_O_WORKDIR/.. ; then rm -rf $WORKDIR; fi
```
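For reference, the LAMMPS-specific switches in the run line above: `-sf gpu` appends the gpu suffix so that GPU-accelerated variants of styles are used where available, and `-pk gpu 10` passes settings to the GPU package, here 10 GPUs per node. The same command as in the example, annotated:

```sh
#   -np 30       30 MPI tasks in total
#   -ppn 30      all 30 tasks on the single requested node (select=1)
#   -sf gpu      apply the gpu suffix (use GPU-accelerated styles)
#   -pk gpu 10   use 10 GPUs per node; with 30 tasks, 3 MPI tasks share each GPU
mpirun -np 30 -ppn 30 -hostfile $PBS_NODEFILE \
    /usr/local/app/LAMMPS/current/src/lmp_gpu \
    -sf gpu -pk gpu 10 < in.ij > lammps.out 2> lammps.err
```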
(Example) Accelerator Server (7 Aug 19)
```sh
#!/bin/sh
#PBS -l select=1
#PBS -q A_004
#PBS -N lammps

module switch cudatoolkit/9.0.176 cudatoolkit/10.1.243

DIRNAME=`basename $PBS_O_WORKDIR`
WORKDIR=/work/$USER/$PBS_JOBID
mkdir -p $WORKDIR
cp -raf $PBS_O_WORKDIR $WORKDIR
cd $WORKDIR/$DIRNAME

mpirun -np 30 -ppn 30 -hostfile $PBS_NODEFILE /usr/local/app/LAMMPS/lammps-7Aug19/src/lmp_gpu -sf gpu -pk gpu 10 < in.ij > lammps.out 2> lammps.err

cd
if cp -raf $WORKDIR/$DIRNAME $PBS_O_WORKDIR/.. ; then rm -rf $WORKDIR; fi
```
Parallel Computing and Informatics Server
Available executables
Version | Path | Queue |
---|---|---|
31 Mar 17 | /usr/local/app/LAMMPS/current/src/lmp_intel_cpu_intelmpi | C_002 C_004 |
5 Jun 19 | /usr/local/app/LAMMPS/lammps-5Jun19/src/lmp_intel_cpu_intelmpi | C_002 C_004 |
Create a job script in advance, using the following template.
```sh
#!/bin/sh
#PBS -l select=nodes
#PBS -q queue
#PBS -N jobname

DIRNAME=`basename $PBS_O_WORKDIR`
WORKDIR=/work/$USER/$PBS_JOBID
mkdir -p $WORKDIR
cp -raf $PBS_O_WORKDIR $WORKDIR
cd $WORKDIR/$DIRNAME

mpirun [ -np MPI total tasks ] [ -ppn MPI tasks per node ] -hostfile $PBS_NODEFILE /usr/local/app/LAMMPS/current/src/lmp_intel_cpu_intelmpi < input file > output file 2> error file

cd
if cp -raf $WORKDIR/$DIRNAME $PBS_O_WORKDIR/.. ; then rm -rf $WORKDIR; fi
```
(Example)
```sh
#!/bin/sh
#PBS -l select=1
#PBS -q C_002
#PBS -N lammps

DIRNAME=`basename $PBS_O_WORKDIR`
WORKDIR=/work/$USER/$PBS_JOBID
mkdir -p $WORKDIR
cp -raf $PBS_O_WORKDIR $WORKDIR
cd $WORKDIR/$DIRNAME

mpirun -np 36 -ppn 36 -hostfile $PBS_NODEFILE /usr/local/app/LAMMPS/current/src/lmp_intel_cpu_intelmpi < in.ij > lammps.out 2> lammps.err

cd
if cp -raf $WORKDIR/$DIRNAME $PBS_O_WORKDIR/.. ; then rm -rf $WORKDIR; fi
```
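Note that `select`, `-np`, and `-ppn` must stay consistent: the total task count equals nodes x tasks per node. A sketch of the lines that would change for a hypothetical two-node run, assuming the chosen queue permits two nodes and keeping 36 tasks per node as in the example:

```sh
#PBS -l select=2   # request two nodes instead of one

# 72 total MPI tasks = 2 nodes x 36 tasks per node
mpirun -np 72 -ppn 36 -hostfile $PBS_NODEFILE \
    /usr/local/app/LAMMPS/current/src/lmp_intel_cpu_intelmpi \
    < in.ij > lammps.out 2> lammps.err
```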
Virtual Server
Available executables
Version | Path |
---|---|
31 Mar 17 | /usr/local/app/LAMMPS/current/src/lmp_intel_cpu_intelmpi |
5 Jun 19 | /usr/local/app/LAMMPS/lammps-5Jun19/src/lmp_intel_cpu_intelmpi |
Execute the command as follows.
```sh
mpirun [ -np MPI total tasks ] [ -ppn MPI tasks per node ] -hostfile hostfile /usr/local/app/LAMMPS/current/src/lmp_intel_cpu_intelmpi < input file > output file
```
(Example)
```sh
mpirun -np 2 -hostfile hostfile /usr/local/app/LAMMPS/current/src/lmp_intel_cpu_intelmpi < in.ij > lammps.out
```
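Here the hostfile is written by hand rather than taken from $PBS_NODEFILE. A minimal sketch, assuming Intel MPI's plain hostfile format (one hostname per line) and the hypothetical hostnames node01 and node02:

```sh
# Hypothetical hostfile: one target hostname per line.
cat > hostfile <<'EOF'
node01
node02
EOF

# Run 2 MPI tasks across the hosts listed above.
mpirun -np 2 -hostfile hostfile \
    /usr/local/app/LAMMPS/current/src/lmp_intel_cpu_intelmpi \
    < in.ij > lammps.out
```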