CP2K
===================================

The following versions are available.

.. csv-table:: Large-scale Parallel Computing Server
   :header: "version", "module", "execution queue"
   :widths: 20, 79, 1

   "2025.2", "/work/app/CP2K/cp2k-2025.2_cpu/exe/Linux-intel-xd220v/cp2k.psmp", "P_030 TP_002 MP_001 CP_001 DP_002 S_001 CS_001"

.. attention::

   Execute the following command in advance.

   module load oneapi/2024.2.1

.. csv-table:: Accelerator server
   :header: "version", "module", "execution queue"
   :widths: 20, 79, 1

   "2025.2", "/work/app/CP2K/cp2k-2025.2_gpu/exe/Linux-intel-xd670_cuda/cp2k.psmp", "A_002 CA_001 DA_002"

.. attention::

   Execute the following commands in advance.

   module load oneapi/2024.2.1

   module load cuda/12.8

- Job Submission Script

・Large-scale Parallel Computing Server

.. code-block:: none

   #!/bin/sh
   #PBS -l select=number of nodes
   #PBS -q queue
   #PBS -N jobname
   module load oneapi/2024.2.1 2> /dev/null
   cd ${PBS_O_WORKDIR}
   export CP2K_DATA_DIR=/work/app/CP2K/cp2k-2025.2_cpu/data
   . /work/app/CP2K/cp2k-2025.2_cpu/tools/toolchain/install/setup
   mpirun [ -np MPI total tasks ][ -ppn MPI tasks per node ] -hostfile $PBS_NODEFILE /work/app/CP2K/cp2k-2025.2_cpu/exe/Linux-intel-xd220v/cp2k.psmp input file > output file 2> error file

・Accelerator server

.. code-block:: none

   #!/bin/sh
   #PBS -l select=1[:ncpus=number of CPUs][:ngpus=number of GPUs][:mem=amount of memory]
   #PBS -q CA_001
   #PBS -N jobname
   module load oneapi/2024.2.1 2> /dev/null
   module load cuda/12.8 2> /dev/null
   cd ${PBS_O_WORKDIR}
   export CP2K_DATA_DIR=/work/app/CP2K/cp2k-2025.2_gpu/data
   . /work/app/CP2K/cp2k-2025.2_gpu/tools/toolchain/install/setup
   mpirun [ -np MPI total tasks ][ -ppn MPI tasks per node ] -hostfile $PBS_NODEFILE /work/app/CP2K/cp2k-2025.2_gpu/exe/Linux-intel-xd670_cuda/cp2k.psmp input file > output file 2> error file

- Example

・Large-scale Parallel Computing Server

.. code-block:: none

   #!/bin/sh
   #PBS -l select=1
   #PBS -q P_030
   #PBS -N cp2k
   module load oneapi/2024.2.1 2> /dev/null
   cd ${PBS_O_WORKDIR}
   export CP2K_DATA_DIR=/work/app/CP2K/cp2k-2025.2_cpu/data
   . /work/app/CP2K/cp2k-2025.2_cpu/tools/toolchain/install/setup
   mpirun -np 112 -ppn 112 -hostfile $PBS_NODEFILE /work/app/CP2K/cp2k-2025.2_cpu/exe/Linux-intel-xd220v/cp2k.psmp input.inp > cp2k.out 2> cp2k.err

・Accelerator server

.. code-block:: none

   #!/bin/sh
   #PBS -l select=1:ncpus=16:ngpus=2:mem=160gb
   #PBS -q CA_001
   #PBS -N cp2k
   module load oneapi/2024.2.1 2> /dev/null
   module load cuda/12.8 2> /dev/null
   cd ${PBS_O_WORKDIR}
   export CP2K_DATA_DIR=/work/app/CP2K/cp2k-2025.2_gpu/data
   . /work/app/CP2K/cp2k-2025.2_gpu/tools/toolchain/install/setup
   mpirun -np 16 -ppn 16 -hostfile $PBS_NODEFILE /work/app/CP2K/cp2k-2025.2_gpu/exe/Linux-intel-xd670_cuda/cp2k.psmp input.inp > cp2k.out 2> cp2k.err
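
The job script is submitted to the batch system with the standard PBS ``qsub`` command. A minimal sketch, assuming the example above has been saved as ``run_cp2k.sh`` (the file name is illustrative):

.. code-block:: none

   # submit the job script to the queue specified by #PBS -q
   qsub run_cp2k.sh

   # check the state of your own jobs (standard PBS option)
   qstat -u $USER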