3.2. Format of an execution script

This section describes the format of the execution script files used to run programs on the supercomputer. If an application requires an execution script file, create the file before submitting the job.

3.2.1. Execute a non-MPI program

#!/bin/sh
#PBS -l select=1
#PBS -q [queue name]
#PBS -N [job name]

cd $PBS_O_WORKDIR

program  > output file 2> error file

・Example: To execute a program ‘a.out’.

#!/bin/sh
#PBS -l select=1
#PBS -q CP_001
#PBS -N sample

cd $PBS_O_WORKDIR

./a.out > result.out 2> result.err
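The `> output file 2> error file` redirection in the template sends standard output and standard error to separate files. A minimal sketch of what that produces (the subshell below is a stand-in for a.out; the file names match the example above):

```shell
# Stand-in for ./a.out: writes one line to stdout and one to stderr.
( echo "normal output"; echo "an error message" >&2 ) > result.out 2> result.err

out=$(cat result.out)   # contains stdout only
err=$(cat result.err)   # contains stderr only
echo "result.out: $out"
echo "result.err: $err"
rm -f result.out result.err
```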

3.2.2. Execute an OpenMP program

#!/bin/sh
#PBS -l select=1:ncpus=[number of CPUs]:ompthreads=[number of OpenMP threads]:mem=[amount of memory]gb
#PBS -q [queue name]
#PBS -N [job name]

cd $PBS_O_WORKDIR

program  > output file 2> error file

・Example: To execute a program ‘a.out’.

#!/bin/sh
#PBS -l select=1:ncpus=4:ompthreads=4:mem=10gb
#PBS -q CP_001
#PBS -N sample

cd $PBS_O_WORKDIR

./a.out > result.out 2> result.err
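With PBS Professional, the `OMP_NUM_THREADS` environment variable is normally set in the job from the `ompthreads` resource, so the program needs no extra setup (this assumes standard PBS Pro behaviour; the export below only simulates it outside a job):

```shell
# Simulate what PBS would set from "ompthreads=4" (assumed behaviour;
# inside a real job this variable is already exported by PBS).
export OMP_NUM_THREADS=4
echo "the program would run with $OMP_NUM_THREADS OpenMP threads"
```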

3.2.3. Execute an MPI program

#!/bin/sh
#PBS -l select=nodes:ncpus=[number of CPUs per node]:mpiprocs=[number of MPI procs per node]:mem=[amount of memory per node]gb
#PBS -l walltime=HH:MM:SS
#PBS -q [queue name]
#PBS -N [job name]

cd $PBS_O_WORKDIR

mpirun [-np [total number of MPI tasks]] [-ppn [MPI tasks per node]] -hostfile $PBS_NODEFILE program > output file 2> error file

・Example: To run a program on a shared node using Intel MPI, with 8 MPI processes, 32 GB of memory, and a walltime of 1 hour.

#!/bin/sh
#PBS -l select=1:ncpus=8:mpiprocs=8:mem=32gb
#PBS -l walltime=01:00:00
#PBS -q CP_001
#PBS -N mpi

module -s load oneapi
cd $PBS_O_WORKDIR

mpirun -np 8 -hostfile $PBS_NODEFILE ./a.out > result.out 2> result.err

・Example: To run a program on 2 nodes using Intel MPI, with 112 MPI processes per node.

#!/bin/sh
#PBS -l select=2
#PBS -q P_030
#PBS -N mpi

module -s load oneapi
cd $PBS_O_WORKDIR

mpirun -np 224 -ppn 112 -hostfile $PBS_NODEFILE ./a.out > result.out 2> result.err
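`$PBS_NODEFILE` is generated by PBS at job start and lists one entry per MPI slot, repeating each host name `mpiprocs` times. A sketch with a hand-made mock file (the node names are hypothetical) shows how the host and slot counts used by mpirun fall out of it:

```shell
# Mock nodefile; the real $PBS_NODEFILE is created by PBS, not by the user.
nodefile=$(mktemp)
printf 'node01\nnode01\nnode02\nnode02\n' > "$nodefile"

hosts=$(sort -u "$nodefile" | wc -l)   # number of distinct hosts
slots=$(wc -l < "$nodefile")           # total entries = total MPI slots
echo "hosts: $hosts  slots: $slots"
rm -f "$nodefile"
```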

3.2.4. Execute an MPI + OpenMP program

#!/bin/sh
#PBS -l select=nodes:ncpus=[number of CPUs per node]:mpiprocs=[number of MPI procs per node]:ompthreads=[number of OpenMP threads per process]:mem=[amount of memory per node]gb
#PBS -l walltime=HH:MM:SS
#PBS -q [queue name]
#PBS -N [job name]

cd $PBS_O_WORKDIR

mpirun [-np [total number of MPI tasks]] [-ppn [MPI tasks per node]] -hostfile $PBS_NODEFILE program > output file 2> error file

Attention

Set the value so that ncpus = mpiprocs * ompthreads.
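The rule can be checked with shell arithmetic before submitting; the values below are taken from the 1-node example in this section:

```shell
# Values from the 1-node MPI + OpenMP example: 112 = 56 * 2.
ncpus=112; mpiprocs=56; ompthreads=2

if [ $((mpiprocs * ompthreads)) -eq "$ncpus" ]; then
    echo "OK: ncpus = mpiprocs * ompthreads"
else
    echo "error: ncpus ($ncpus) != mpiprocs * ompthreads ($((mpiprocs * ompthreads)))" >&2
fi
```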

・Example: To run a program on 1 node using Intel MPI, with 56 MPI processes and 2 OpenMP threads per process.

#!/bin/sh
#PBS -l select=1:ncpus=112:mpiprocs=56:ompthreads=2:mem=460gb
#PBS -q P_030
#PBS -N mpi_omp

module -s load oneapi
cd $PBS_O_WORKDIR

mpirun -np 56 -ppn 56 -hostfile $PBS_NODEFILE ./a.out > result.out 2> result.err

・Example: To run a program on 1 node using Open MPI, with 56 MPI processes and 2 OpenMP threads per process.

#!/bin/sh
#PBS -l select=1:ncpus=112:mpiprocs=56:ompthreads=2:mem=460gb
#PBS -q P_030
#PBS -N mpi_omp

module -s load openmpi/4.1.8_cpu
cd $PBS_O_WORKDIR

mpirun -np 56 -N 56 -hostfile $PBS_NODEFILE ./a.out > result.out 2> result.err

3.2.5. Execute a GPU program

#!/bin/sh
#PBS -l select=nodes:ncpus=[number of CPUs per node]:ngpus=[number of GPUs per node]:mem=[amount of memory per node]gb
#PBS -l walltime=HH:MM:SS
#PBS -q [queue name]
#PBS -N [job name]

cd $PBS_O_WORKDIR

mpirun [-np [total number of MPI tasks]] [-ppn [MPI tasks per node]] -hostfile $PBS_NODEFILE program > output file 2> error file

・Example: To run a program on a shared node using the NVIDIA HPC SDK, with 2 MPI processes, 2 GPUs, 64 GB of memory, and a walltime of 30 minutes.

#!/bin/sh
#PBS -l select=1:ncpus=2:ngpus=2:mem=64gb
#PBS -l walltime=00:30:00
#PBS -q CA_001
#PBS -N mpi

module -s load nvhpc
cd $PBS_O_WORKDIR

mpirun -np 2 -hostfile $PBS_NODEFILE -x LD_LIBRARY_PATH ./a.out > result.out 2> result.err