5.16. LAMMPS
The following versions are available.

CPU versions (Large-scale Parallel Computing Server):
| version | executable path | execution queue | 
|---|---|---|
| 29 AUG 2024 | /work/app/LAMMPS/lammps-29Aug2024_cpu/src/lmp_mpi | P_030 TP_002 MP_001 CP_001 DP_002 S_001 CS_001 | 
| 29 AUG 2024 Package Add-on Version | /work/app/LAMMPS/lammps-29Aug2024v2_cpu/src/lmp_intel_omp | Same as above | 
| 22 JUL 2025 | /work/app/LAMMPS/lammps-22Jul2025_cpu/src/lmp_intel_omp | Same as above | 
Attention
Execute the following commands in advance.
29 AUG 2024:
module load oneapi/2025.0.1
29 AUG 2024 Package Add-on Version and 22 JUL 2025:
module load oneapi/2023.2.0
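After loading, you can confirm that the required module is active (a minimal check, assuming the standard module command is available):
module list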
Attention
You can check the installed packages using the make package-status command.
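For example, to list the package status of the 29 AUG 2024 CPU build (a sketch, assuming read access to the install tree):
cd /work/app/LAMMPS/lammps-29Aug2024_cpu/src
make package-status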
GPU versions (Accelerator server):

| version | executable path | execution queue | 
|---|---|---|
| 29 AUG 2024 | /work/app/LAMMPS/lammps-29Aug2024_gpu/build/lmp_gpu | A_002 CA_001 DA_002 | 
| 29 AUG 2024 Package Add-on Version | /work/app/LAMMPS/lammps-29Aug2024v2_gpu/src/lmp_gpu | Same as above | 
| 29 AUG 2024 kokkos Version | /work/app/LAMMPS/lammps-29Aug2024kokkos_gpu/src/lmp_kokkos_ompi | Same as above | 
| 22 JUL 2025 | /work/app/LAMMPS/lammps-22Jul2025_gpu/src/lmp_gpu | Same as above | 
| 22 JUL 2025 kokkos Version | /work/app/LAMMPS/lammps-22Jul2025kokkos_gpu/src/lmp_kokkos_ompi | Same as above | 
Attention
Execute the following commands in advance.
29 AUG 2024:
module load oneapi/2025.0.1
module load cuda/12.8
29 AUG 2024 Package Add-on Version and 22 JUL 2025:
module load nvhpc-nompi/25.3
module load oneapi/2024.2.1
29 AUG 2024 kokkos Version and 22 JUL 2025 kokkos Version:
module load openmpi/4.1.8_gpu
Attention
You can check the installed packages using the make package-status command.
- Job Submission Script 
・Large-scale Parallel Computing Server
#!/bin/sh
#PBS -l select=number of nodes
#PBS -q queue
#PBS -N jobname
module load oneapi/2025.0.1
cd ${PBS_O_WORKDIR}
mpirun [ -np MPI total tasks ] [ -ppn MPI tasks per node ] -hostfile $PBS_NODEFILE /work/app/LAMMPS/lammps-29Aug2024_cpu/src/lmp_mpi < input file > output file 2> error file
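Save the script (for example as run.sh, a name used here only for illustration) and submit it with the standard PBS commands:
qsub run.sh        # submit the job
qstat -u $USER     # check the job status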
・Accelerator server
#!/bin/sh
#PBS -l select=1[:ncpus=number of CPUs][:ngpus=number of GPUs][:mem=amount of memory]
#PBS -q CA_001
#PBS -N jobname
module load oneapi/2025.0.1
module load cuda/12.8
cd ${PBS_O_WORKDIR}
mpirun [ -np MPI total tasks ] [ -ppn MPI tasks per node ] -hostfile $PBS_NODEFILE /work/app/LAMMPS/lammps-29Aug2024_gpu/build/lmp_gpu -sf gpu -pk gpu number of GPUs per node < input file > output file 2> error file
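Here -sf gpu applies the gpu suffix to supported styles, and -pk gpu N sets the number of GPUs used per node. The same settings can instead be written at the top of the LAMMPS input file; a minimal sketch, assuming two GPUs per node as in the example below:
package gpu 2
suffix gpu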
- Example 
・Large-scale Parallel Computing Server
#!/bin/sh
#PBS -l select=1
#PBS -q P_030
#PBS -N lammps
module load oneapi/2025.0.1
cd ${PBS_O_WORKDIR}
mpirun -np 112 -ppn 112 -hostfile $PBS_NODEFILE /work/app/LAMMPS/lammps-29Aug2024_cpu/src/lmp_mpi < in.lj > lammps.out 2> lammps.err
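The example reads a Lennard-Jones input deck named in.lj. If you need a test input, the following is a minimal sketch modeled on the in.lj benchmark shipped with LAMMPS (system size and step count here are illustrative):
units lj
atom_style atomic
lattice fcc 0.8442
region box block 0 10 0 10 0 10
create_box 1 box
create_atoms 1 box
mass 1 1.0
velocity all create 1.44 87287
pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5
fix 1 all nve
run 100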
・Accelerator server
#!/bin/sh
#PBS -l select=1:ncpus=16:ngpus=2:mem=160gb
#PBS -q CA_001
#PBS -N lammps
module load oneapi/2025.0.1
module load cuda/12.8
cd ${PBS_O_WORKDIR}
mpirun -np 16 -ppn 16 -hostfile $PBS_NODEFILE /work/app/LAMMPS/lammps-29Aug2024_gpu/build/lmp_gpu -sf gpu -pk gpu 2 < in.lj > lammps.out 2> lammps.err
・Accelerator server (kokkos Version)
#!/bin/sh
#PBS -l select=1:ncpus=2:ngpus=2:mem=32gb
#PBS -q CA_001
#PBS -N lammps
module load openmpi/4.1.8_gpu
cd ${PBS_O_WORKDIR}
mpirun -np 2 -N 2 -hostfile $PBS_NODEFILE --oversubscribe /work/app/LAMMPS/lammps-29Aug2024kokkos_gpu/src/lmp_kokkos_cuda_ompi -k on g 2 -sf kk < in.lj > lammps.out 2> lammps.err
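In the kokkos run, -k on g 2 enables the KOKKOS package with two GPUs, and -sf kk applies the kk suffix to supported styles. To also use OpenMP host threads, the thread count can be appended (a variant, assuming the build was configured with OpenMP support):
mpirun -np 2 -N 2 -hostfile $PBS_NODEFILE --oversubscribe /work/app/LAMMPS/lammps-29Aug2024kokkos_gpu/src/lmp_kokkos_cuda_ompi -k on g 2 t 2 -sf kk < in.lj > lammps.out 2> lammps.err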