
Submitting Jobs

Specification

Introduction to job templates

The slurm job template

Take the vasp job as an example:

#!/bin/bash
#SBATCH -p i64m512u
#SBATCH -J myjob
#SBATCH --ntasks-per-node=64
#SBATCH -n 128
#SBATCH -o job.%j.out
#SBATCH -e job.%j.err

module load vasp/6.3.2
MPIEXEC=`which mpirun`
$MPIEXEC -np 128 vasp_std

# Template introduction
# Line 1 is the fixed shebang line for shell scripts
# Line 2 specifies the partition as i64m512u. Slurm currently has the partitions i64m512u, i64m512r, a128m512u, i64m1tga800u and i96m3tu, divided according to machine model: i means Intel CPU, a means AMD CPU, the number after it (64, 96, 128) is the number of CPU cores per server, and m512 means 512 GB of memory (refer to the hardware resources chapter for the specific machine models)
# Line 3 specifies the job name, which can be customized
# Line 4 specifies how many tasks run per node (this parameter must be set for cross-node computation)
# Line 5 specifies the total number of cores
# Line 6 specifies the standard output file
# Line 7 specifies the error output file
# Lines 9, 10, and 11 are the vasp job commands
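
For reference, the commands below are a minimal sketch of how to inspect the available partitions and then submit and monitor a script written from this template; job.sh and the job ID 12345 are placeholders.

module load slurm
# list partitions, their node counts and states
sinfo
# submit the script; sbatch prints the job ID
sbatch job.sh
# check your queued and running jobs
squeue -u $USER
# cancel a job if needed (replace 12345 with the real job ID)
scancel 12345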

The unischeduler job template

Take the vasp job as an example:

#!/bin/bash
#BSUB -J VASP
#BSUB -q i64m512u
#BSUB -n 128
#BSUB -o out.%J
#BSUB -e err.%J

module load vasp/6.3.2
MPIEXEC=`which mpirun`
$MPIEXEC -genvall -machinefile $MPI_HOSTFILE vasp_std

# Template introduction
# Line 1 is the fixed shebang line for shell scripts
# Line 2 specifies the job name, which can be customized
# Line 3 specifies the queue, here i64m512u
# Line 4 specifies the total number of cores
# Line 5 specifies the standard output file
# Line 6 specifies the error output file
# Lines 8, 9, and 10 are the vasp job commands. $MPI_HOSTFILE is an environment variable provided by the scheduler and must be passed for cross-node runs; Intel MPI uses $MPI_HOSTFILE, while OpenMPI uses $OMPI_HOSTFILE.
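
To make the hostfile point above concrete, here is a minimal sketch (based on the commands used elsewhere on this page) of how a cross-node launch passes the scheduler-provided hostfile for each MPI flavor:

# Intel MPI: pass the hostfile exported by the scheduler as $MPI_HOSTFILE
mpirun -genvall -machinefile $MPI_HOSTFILE vasp_std
# OpenMPI: pass the hostfile exported as $OMPI_HOSTFILE instead
mpirun -np 128 -machinefile $OMPI_HOSTFILE vasp_std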

Serial Jobs

Submitting serial jobs via slurm scheduling

Single-core, single-threaded submission script, using lammps as an example

vim lammps.sh

#!/bin/bash
#SBATCH -p i64m512u
#SBATCH -J myjob
#SBATCH --ntasks-per-node=1
#SBATCH -n 1
#SBATCH -o job.%j.out
#SBATCH -e job.%j.err

module load lammps/2022-serial
lmp < in.crack

Submit the job

module load slurm
sbatch lammps.sh
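
After submission, the job can be monitored with standard slurm commands; the job ID below is a placeholder for the ID printed by sbatch.

# show your pending and running jobs
squeue -u $USER
# follow the standard output file once the job starts (12345 is a placeholder job ID)
tail -f job.12345.out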

Submitting serial tasks via unischeduler scheduling

Single-core, single-threaded submission script, using lammps as an example.

vim lammps.sh

#!/bin/bash
#BSUB -J lammps-test
#BSUB -n 1
#BSUB -e err.%J
#BSUB -o out.%J

module load lammps/2022-serial
lmp=`which lmp`
$lmp < in.crack

Submit the job

source  /opt/jhinno/unischeduler/etc/profile.unischeduler
jsub < lammps.sh
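
Job status can then be checked with the scheduler's query command; jjobs is the usual unischeduler counterpart of jsub, but it is not shown elsewhere on this page, so treat the command name as an assumption and adjust to your installation.

# list your unischeduler jobs (command name assumed; verify on your system)
jjobs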

Parallel jobs

Submitting parallel jobs via slurm scheduling

Write a submission script, using lammps as an example.

vim lammps.sh

#!/bin/bash
#SBATCH -o job.%j.out
#SBATCH -e job.%j.err
#SBATCH -p i64m512u
#SBATCH -J myjob
#SBATCH --ntasks-per-node=64
#SBATCH -n 64

module load mpi/openmpi-4.1.5
module load lammps/2022-parallel
export OMPI_MCA_btl=^openib
mpirun -np 64 /hpc2ssd/softwares/lammps_parallel/bin/lmp <in.crack

Submit the job

module load slurm
sbatch lammps.sh
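
The script above fills a single 64-core node. As a sketch of how the resource lines scale for a cross-node run (matching the slurm template at the top of this page), a two-node 128-core job keeps -n equal to nodes x ntasks-per-node and passes the same count to mpirun:

#SBATCH --ntasks-per-node=64   # 64 MPI ranks per node
#SBATCH -N 2                   # 2 nodes
#SBATCH -n 128                 # total ranks = 2 x 64

mpirun -np 128 /hpc2ssd/softwares/lammps_parallel/bin/lmp < in.crack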

Submitting parallel jobs via unischeduler scheduling

Write a submission script, using lammps as an example.

vim lammps.sh

#!/bin/bash
#BSUB -J lammps-test
#BSUB -n 128
#BSUB -e err.%J
#BSUB -o out.%J

module load mpi/openmpi-4.1.5
module load lammps/2022-parallel
########################################################
# $JH_HOSTFILE: hostfile listing the allocated compute hosts
# $OMPI_HOSTFILE: hostfile listing the allocated compute hosts, in OpenMPI format
########################################################
export OMPI_MCA_btl=^openib
cd /hpc2ssd/home/hpcadmin/examples/crack
mpirun -np 128 -machinefile $OMPI_HOSTFILE /hpc2ssd/softwares/lammps_parallel/bin/lmp <in.crack


Submit the job

source /opt/jhinno/unischeduler/etc/profile.unischeduler
jsub < lammps.sh

VASP Software Tasks

Submitting via slurm scheduling

  1. Create a working directory
mkdir vasp
cd vasp
  2. Upload the relevant files needed to run vasp to that folder

  3. Write the job script in that folder and name it vasp.sh. The script looks like the following:

#!/bin/bash
#SBATCH -p i64m512u
#SBATCH -J myjob
#SBATCH --ntasks-per-node=64
#SBATCH -N 2
#SBATCH -o job.%j.out
#SBATCH -e job.%j.err
# Load the runtime environment
ulimit -s unlimited
ulimit -l unlimited
source /opt/intel/oneapi/setvars.sh --force
module load vasp/6.3.2
# Run MPI across nodes
/opt/intel/oneapi/mpi/2021.9.0/bin/mpirun -np 128 vasp_std

  4. Submit the job
sbatch vasp.sh
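
As an optional variant (a sketch using standard slurm environment variables, assuming they are exported as usual on this cluster), the rank count can be derived from the allocation instead of being hard-coded:

# 2 nodes x 64 tasks per node = 128 ranks
NP=$(( SLURM_JOB_NUM_NODES * SLURM_NTASKS_PER_NODE ))
/opt/intel/oneapi/mpi/2021.9.0/bin/mpirun -np $NP vasp_std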

LAMMPS software tasks

Submission via slurm scheduling

  1. Create the working directory
mkdir lammps
cd lammps
  2. Upload the relevant files needed to run.
  3. Write a job script in that folder and name it lammps.sh, with the following script content.
#!/bin/bash
#SBATCH -o job.%j.out
#SBATCH -e job.%j.err
#SBATCH -p i64m512u
#SBATCH -J myjob
#SBATCH --ntasks-per-node=64
#SBATCH -N 1

module load mpi/openmpi-4.1.5
module load lammps/2022-parallel
export OMPI_MCA_btl=^openib

mpirun -np 64 /hpc2ssd/softwares/lammps_parallel/bin/lmp <in.crack > in.crack.log
  4. Submit the job.
sbatch lammps.sh
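
Equivalently, here is a sketch using LAMMPS's own command-line options instead of shell redirection (the file names follow the example above):

mpirun -np 64 /hpc2ssd/softwares/lammps_parallel/bin/lmp -in in.crack -log in.crack.log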

GROMACS

CPU Jobs

Submission via slurm scheduling

  1. Create the working directory.
mkdir gromacs
cd gromacs
  2. Upload the relevant files needed to run gromacs to that folder.

  3. Write a job script in that folder and name it gromacs.sh, with the following script content.

#!/bin/bash
#SBATCH -J gromacs
#SBATCH --cpus-per-task=16  # 16 cores per rank
#SBATCH -n 4                # 4 MPI ranks
#SBATCH -N 1                # 1 node required
#SBATCH -p i64m512u
#SBATCH -o out.%J
#SBATCH -e err.%J

module load mpi/openmpi-4.1.5
module load gromacs/2023.2

MPIRUN=`which mpirun`
GMX_MPI=`which gmx_mpi`
# i64m512u compute nodes each have two sockets
NP=$SLURM_NTASKS
# note: no need to specify -ntomp in $GROMACS_OPTS
GROMACS_OPTS=""  # GROMACS args
$MPIRUN -np $NP $GMX_MPI mdrun $GROMACS_OPTS
  4. Submit the job
sbatch gromacs.sh
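
GROMACS_OPTS is left empty in the template above; as an example (the file names are hypothetical), a typical production run points mdrun at a run input file:

# pass the run input file explicitly (topol.tpr is a placeholder name)
GROMACS_OPTS="-s topol.tpr"
# or use a common default file name prefix for all inputs/outputs
GROMACS_OPTS="-deffnm md"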

GPU Jobs

Submission via slurm scheduling

  1. Create the working directory.
mkdir gromacs_gpu
cd gromacs_gpu
  2. Upload the relevant files needed to run gromacs to that folder.

  3. Write a job script in that folder and name it gromacs.sh, with the following script content.

#!/bin/bash
#SBATCH -J gromacs
#SBATCH --cpus-per-task=8
#SBATCH -n 4                # 4 MPI ranks
#SBATCH -N 1
#SBATCH -p i64m1tga800u
#SBATCH -o out.%J
#SBATCH -e err.%J
#SBATCH --gres=gpu:A800:4

module load mpi/openmpi-4.1.5_cuda12
module load gromacs/2023.2_gpu

MPIRUN=`which mpirun`
GMX_MPI=`which gmx_mpi`
### each compute node has two sockets
NP=$SLURM_NTASKS
NT=$SLURM_CPUS_PER_TASK
### note: no need to specify -ntomp in $GROMACS_OPTS (it is passed explicitly below)
GROMACS_OPTS=""  # GROMACS args
### gpu part: Slurm exposes the allocated GPUs via $CUDA_VISIBLE_DEVICES
GROMACS_GPU_OPTS="-gpu_id $CUDA_VISIBLE_DEVICES"
$MPIRUN -np $NP $GMX_MPI mdrun -ntomp $NT $GROMACS_GPU_OPTS $GROMACS_OPTS
  4. Submit the job
sbatch gromacs.sh
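
mdrun also accepts explicit GPU offload flags. The sketch below uses standard mdrun options; note that with several MPI ranks, PME offload to a GPU requires a single separate PME rank, hence -npme 1.

# offload non-bonded work to the GPUs and run PME on one dedicated GPU rank
GROMACS_GPU_OPTS="-gpu_id $CUDA_VISIBLE_DEVICES -nb gpu -pme gpu -npme 1"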

TensorFlow software tasks

Submit via slurm scheduling

  1. Create the working directory.
mkdir tensorflow
cd tensorflow
  2. Upload the relevant files needed to run.
  3. Write a job script in that folder and name it tensorflow.sh, with the following script content.
#!/bin/bash
#SBATCH -o job.%j.out
#SBATCH --partition=i64m1tga800u
#SBATCH -J tensorflow
#SBATCH -N 1
#SBATCH --ntasks-per-node=2
#SBATCH --gres=gpu:1
#SBATCH --qos=low

source /hpc2ssd/softwares/anaconda3/bin/activate tensorflow-gpu
python test.py
  4. Submit the job.
sbatch tensorflow.sh
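
Before submitting a long run it can help to confirm that the environment actually sees a GPU, for example in an interactive session on a GPU node or at the top of the job script; the one-liner below is a sketch (test.py itself is your own script and is not shown here).

source /hpc2ssd/softwares/anaconda3/bin/activate tensorflow-gpu
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"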

PyTorch software tasks

Scheduling submission via slurm

  1. Create the working directory.
mkdir pytorch
cd pytorch
  2. Upload the relevant files needed to run.
  3. Write a job script in this folder and name it pytorch.sh, with the following content.
#!/bin/bash
#SBATCH -o job.%j.out
#SBATCH --partition=i64m1tga800u
#SBATCH -J pytorch
#SBATCH -N 1
#SBATCH --ntasks-per-node=2
#SBATCH --gres=gpu:1
#SBATCH --qos=low

source /hpc2ssd/softwares/anaconda3/bin/activate pytorch_gpu_2.0.1
python test.py
  4. Submit the job.
sbatch pytorch.sh
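
The same kind of check applies to PyTorch; the one-liner below is a sketch to confirm GPU visibility inside the allocated environment.

source /hpc2ssd/softwares/anaconda3/bin/activate pytorch_gpu_2.0.1
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"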