Moab/PBS to Slurm translation

Moab/PBS to Slurm commands

| Action | Moab/Torque | Slurm |
|---|---|---|
| Job submission | `msub` / `qsub` | `sbatch` |
| Job deletion | `canceljob` / `qdel` | `scancel` |
| List all jobs in queue | `showq` / `qstat` | `squeue` |
| List all nodes | | `sinfo` |
| Show information about nodes | `mdiag -n` / `pbsnodes` | `scontrol show nodes` |
| Job start time | `showstart` | `squeue --start` |
| Job information | `checkjob` | `scontrol show job <jobid>` |
| Reservation information | `showres` | `scontrol show res` (shows details)<br>`sinfo -T` |
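To illustrate the translations above, here is a sketch of a typical session on the Slurm side (the script name and job ID are placeholders):

```bash
# Submit a batch script; sbatch prints the ID of the new job
sbatch myjob.sh

# List your queued and running jobs, and the estimated start time of job 12345
squeue -u $USER
squeue --start -j 12345

# Show detailed information about the job, then cancel it
scontrol show job 12345
scancel 12345
```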

Moab/PBS to Slurm environment variables

| Description | Moab/Torque | Slurm |
|---|---|---|
| Job ID | `$PBS_JOBID` | `$SLURM_JOBID` |
| Node list | `$PBS_NODEFILE` | No direct equivalent. Generate a listing of one node per line: `srun hostname \| sort -u > nodefile.$SLURM_JOBID`<br>Generate a listing of one core per line: `srun hostname \| sort > nodefile.$SLURM_JOBID` |
| Submit directory | `$PBS_O_WORKDIR` | `$SLURM_SUBMIT_DIR` |
| Number of nodes | | `$SLURM_NNODES` |
| Number of processors (tasks) | | `$SLURM_NTASKS` (`$SLURM_NPROCS` for backward compatibility) |
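Putting these together, a minimal sketch of a job script that uses the Slurm variables above (the resource requests are examples only):

```bash
#!/bin/bash
#SBATCH -N 2
#SBATCH -n 24
#SBATCH -t 1:00:00

# Run from the directory the job was submitted from
cd $SLURM_SUBMIT_DIR

# Report the job layout
echo "Job $SLURM_JOBID is using $SLURM_NNODES nodes and $SLURM_NTASKS tasks"

# Build a $PBS_NODEFILE-style machine file, one node per line
srun hostname | sort -u > nodefile.$SLURM_JOBID
```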

Moab/PBS to Slurm job script modifiers

| Description | Moab/Torque | Slurm |
|---|---|---|
| Walltime | `#PBS -l walltime=1:00:00` | `#SBATCH -t 1:00:00` (or `--time=1:00:00`) |
| Process count | `#PBS -l nodes=2:ppn=12` | `#SBATCH -n 24` (or `--ntasks=24`)<br>`#SBATCH -N 2` (or `--nodes=2`)<br>For threaded MPI jobs, use the number of MPI tasks for `--ntasks`, not the number of cores. See the example script above for how to determine the number of threads per MPI task. |
| Memory | `#PBS -l nodes=2:ppn=12:m24576` | `#SBATCH --mem=24576`<br>It is also possible to specify memory per task with `--mem-per-cpu`; also see the constraint section above for additional information on its use. |
| Mail options | `#PBS -m abe` | `#SBATCH --mail-type=FAIL,BEGIN,END`<br>There are other options such as REQUEUE, TIME_LIMIT_90, and others. |
| Mail user | `#PBS -M user@mail.com` | `#SBATCH --mail-user=user@mail.com` |
| Job name and STDOUT/STDERR | `#PBS -N myjob` | `#SBATCH -o myjob-%j.out-%N`<br>`#SBATCH -e myjob-%j.err-%N`<br>NOTE: `%j` and `%N` are replaced by the job number and the node name (the first node in a multi-node job). This gives the stderr and stdout a unique name for each job. |
| Account | `#PBS -A owner-guest`<br>(optional in Torque/Moab) | `#SBATCH -A owner-guest` (or `--account=owner-guest`)<br>(required in Slurm) |
| Dependency | `#PBS -W depend=afterok:12345`<br>(run after job 12345 finishes correctly) | `#SBATCH -d afterok:12345` (or `--dependency=afterok:12345`)<br>As in Moab, other modifiers include `after`, `afterany`, and `afternotok`. Note that running out of walltime does not constitute an OK exit; to start a job after the specified job finishes regardless of its exit state, use `afterany`. For details on job exit codes see http://slurm.schedmd.com/job_exit_code.html. A chained-submission sketch follows the table. |
| Reservation | `#PBS -l advres=u0123456_1` | `#SBATCH -R u0123456_1` (or `--reservation=u0123456_1`) |
| Partition | No direct equivalent | `#SBATCH -p lonepeak` (or `--partition=lonepeak`) |
| Propagate all environment variables from terminal | `#PBS -V` | All environment variables are propagated by default, except for modules, which are purged at job start to prevent possible inconsistencies. Either load the needed modules in the job script, or have them in your `.custom.[sh,csh]` file. |
| Propagate specific environment variable | `#PBS -v myvar` | `#SBATCH --export=myvar`<br>Use with caution, as this exports ONLY the variable `myvar`. |
| Target specific owner nodes as guest | `#PBS -l nodes=1:ppn=24:ucgd -A owner-guest` | `#SBATCH -A owner-guest -p kingspeak-guest -C "ucgd"` |
| Target specific nodes | | `#SBATCH -w notch001,notch002` (or `--nodelist=notch001,notch002`) |
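The dependency modifiers from the table can also be used at submission time to chain jobs from the shell. A minimal sketch, assuming two job scripts named first_step.sh and second_step.sh; sbatch's --parsable option prints just the job ID so it can be captured:

```bash
# Submit the first job and capture its job ID
jobid=$(sbatch --parsable first_step.sh)

# The second job starts only if the first one exits OK;
# use afterany instead to run it regardless of exit state
sbatch --dependency=afterok:$jobid second_step.sh
```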

 
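Putting the rows above together, here is a sketch of a Slurm job script with the corresponding Torque/Moab directives noted in comments (the account, partition, e-mail address, module names, and program name are placeholders):

```bash
#!/bin/bash
#SBATCH -t 1:00:00                  # was: #PBS -l walltime=1:00:00
#SBATCH -N 2                        # was: #PBS -l nodes=2:ppn=12
#SBATCH -n 24
#SBATCH -A owner-guest              # was: #PBS -A owner-guest (required in Slurm)
#SBATCH -p lonepeak                 # partition; no direct Torque/Moab equivalent
#SBATCH --mail-type=FAIL,BEGIN,END  # was: #PBS -m abe
#SBATCH --mail-user=user@mail.com   # was: #PBS -M user@mail.com
#SBATCH -o myjob-%j.out-%N          # was: #PBS -N myjob
#SBATCH -e myjob-%j.err-%N

cd $SLURM_SUBMIT_DIR                # was: cd $PBS_O_WORKDIR

# Modules are purged at job start; load what the job needs here
# (placeholder module names)
module load intel impi

mpirun -np $SLURM_NTASKS ./myprogram
```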

More Slurm Information

For more information on using Slurm at the CHPC, please see the CHPC Slurm documentation.

Last Updated: 8/28/24