SGE vs SLURM comparison

A guide comparing common commands in slurm and sge.

Some common commands and flags in slurm and sge:

sge                          slurm
qstat                        squeue
qstat -u username            squeue -u username
qstat -f                     squeue -al
qsub                         sbatch
qsub -N jobname              sbatch -J jobname
qsub -m beas                 sbatch --mail-type=ALL
qsub -M user@host            sbatch --mail-user=user@host
qsub -l h_rt=24:00:00        sbatch -t 24:00:00
qsub -pe dmp4 16             sbatch -p node -n 16
qsub -l mem=4G               # Do not use mem specifications!
qsub -P projectname          sbatch -A projectname
qsub -o filename             sbatch -o filename
qsub -e filename             sbatch -e filename
qsub -l scratch_free=20G     sbatch --tmp=20480

# Interactive run, one core   # Interactive run, one core
qrsh -l h_rt=8:00:00          salloc -t 8:00:00
                              interactive -p core -n 1 -t 8:00:00

qdel                          scancel
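As a quick illustration of the flag mapping in the table above, here is a minimal, hypothetical shell helper (not part of either scheduler) that rewrites a few common qsub flags into their sbatch equivalents:

```shell
#!/bin/sh
# qsub2sbatch: hypothetical helper illustrating the flag mapping above.
# Only a few flags are handled; this is a sketch, not a full translator.
qsub2sbatch() {
    out="sbatch"
    while [ "$#" -gt 0 ]; do
        case "$1" in
            -N) out="$out -J $2"; shift 2 ;;          # job name
            -P) out="$out -A $2"; shift 2 ;;          # project -> account
            -M) out="$out --mail-user=$2"; shift 2 ;; # mail address
            -l) case "$2" in
                    h_rt=*) out="$out -t ${2#h_rt=}" ;; # wall time
                    *) echo "unhandled resource: $2" >&2 ;;
                esac
                shift 2 ;;
            -o|-e) out="$out $1 $2"; shift 2 ;;       # stdout/stderr files
            *) echo "unhandled flag: $1" >&2; shift ;;
        esac
    done
    printf '%s\n' "$out"
}

qsub2sbatch -N test -P myproj -l h_rt=24:00:00
# prints: sbatch -J test -A myproj -t 24:00:00
```

Note that memory resources are deliberately left unhandled, matching the table's advice not to use mem specifications with sbatch here.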

A comparison of job scripts in slurm and sge:

sge for an MPI application:
#!/bin/bash
#
#
#$ -N test
#$ -j y
#$ -o test.output
#$ -cwd
#$ -M username@domain.tld
#$ -m bea
# Request 5 hours run time
#$ -l h_rt=5:0:0
#$ -P your_project_id_here
#$ -R y
# for Isis with 4 cores/node:
#$ -pe dmp4 16
# (for grad, with 8 cores, use dmp8)
#$ -l mem=2G
# memory is counted per process on node
# (dmp4 and mem=2G requires 8GB per node)
 
module load pgi openmpi
 
mpirun <put your app here>

slurm for an MPI application:
#!/bin/bash -l
# NOTE the -l flag!
#
#SBATCH -J test
#SBATCH -o test.output
#SBATCH -e test.output
# Default in slurm
#SBATCH --mail-user=username@domain.tld
#SBATCH --mail-type=ALL
# Request 5 hours run time
#SBATCH -t 5:0:0
#SBATCH -A your_project_id_here
#
#SBATCH -p node -n 16
# NOTE Each Kalkyl node has eight cores
#
 
module load pgi openmpi
 
mpirun <put your app here>
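The memory comments in the sge script follow from per-process accounting: with dmp4 (four processes per node) and -l mem=2G, each node must provide 4 × 2 = 8 GB. A quick sanity check of that arithmetic:

```shell
# Per-node memory for the sge request above (memory is counted per process).
cores_per_node=4       # dmp4: four processes per node (Isis)
mem_per_process_gb=2   # -l mem=2G
echo "$((cores_per_node * mem_per_process_gb)) GB needed per node"
# prints: 8 GB needed per node
```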

sge for a single-core application:
#!/bin/bash
#
#
#$ -N test
#$ -j y
#$ -o test.output
#$ -cwd
#$ -M username@domain.tld
#$ -m bea
# Request 5 hours run time
#$ -l h_rt=5:0:0
#$ -P your_project_id_here
#
#$ -l mem=4G
#
 
<call your app here>

slurm for a single-core application:
#!/bin/bash -l
# NOTE the -l flag!
#
#SBATCH -J test
#SBATCH -o test.output
#SBATCH -e test.output
# Default in slurm
#SBATCH --mail-user=username@domain.tld
#SBATCH --mail-type=ALL
# Request 5 hours run time
#SBATCH -t 5:0:0
#SBATCH -A your_project_id_here
#
#SBATCH -p core -n 1
# NOTE: You must not use more than 3GB of memory
 
<call your app here>

A comparison of some environment variables set by sge and slurm:

sge        slurm
$JOB_ID    $SLURM_JOB_ID
$NSLOTS    $SLURM_NPROCS
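A minimal sketch of using these variables inside a slurm job script; the fallback values after `:-` are assumptions added only so the snippet also runs outside a scheduler (under sge you would read $JOB_ID and $NSLOTS instead):

```shell
#!/bin/bash -l
# Sketch: use the slurm variables from the table above inside a job script.
# The fallbacks after :- exist only so this runs outside a real job.
job_id="${SLURM_JOB_ID:-12345}"
nprocs="${SLURM_NPROCS:-16}"

scratch="/tmp/job_${job_id}"   # hypothetical per-job scratch directory
mkdir -p "$scratch"
echo "job $job_id: running $nprocs tasks, scratch in $scratch"
```

Newer slurm versions also set $SLURM_NTASKS, which carries the same value as $SLURM_NPROCS and is the preferred name.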