Using - Sample Scripts
Jobs are submitted to SLURM as a job script, which lists the commands to run and gives instructions to SLURM on how to treat the job. These SLURM instructions are lines beginning with #SBATCH. All #SBATCH lines must be at the top of your script, before any other commands, or they will be ignored.
To submit a job to SLURM, adapt one of the example job scripts below, save your job script with a suitable name, and type:
sbatch myjobscriptname
The sbatch man page has information about the #SBATCH options available. Always try to specify your job's run-time and/or memory needs, rather than accepting the defaults or the maximum allowed. If you ask for too much, SLURM may take longer to schedule your jobs and will overstate your usage, affecting the priority it gives to your future jobs. However, if you ask for too little, your job may crash or be stopped.
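For example, run time and memory can be requested with options like these (the values shown are only placeholders; check the sbatch man page for the full set of options on your system):

#SBATCH --time=02:00:00   # maximum run time (hh:mm:ss)
#SBATCH --mem=4G          # memory required per node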
Serial (non-parallel) job
#!/bin/bash
# Example SLURM job script for serial (non-parallel) jobs
#
# SLURM defaults to the directory you were working in when you submitted the job.
# Output files are also put in this directory. To set a different working directory add:
# #SBATCH --workdir=/my_path_to/my_directory
#
# Tell SLURM if you want to be emailed when your job starts, ends, etc.
# #SBATCH --mail-type=ALL
#
echo Starting job in directory `pwd`
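#
# run your program here; the program name below is only a placeholder
./my_serial_program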
Job array
If you need to submit a set of very similar jobs, use a job array. Job arrays make use of environment variables, including ${SLURM_ARRAY_TASK_ID}, which identifies each job in the array. In output and error file names you can also use %A to refer to the overall job ID and %a to refer to the array index. Array indices can be set as needed, e.g. #SBATCH --array=1-9:2,12,19 would run an array with elements 1, 3, 5, 7, 9, 12 and 19.
#!/bin/bash
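#
# A minimal sketch of the rest of an array job script, using the index range
# described above; the program name is only a placeholder.
#SBATCH --array=1-9:2,12,19
#
echo Starting array task ${SLURM_ARRAY_TASK_ID} in directory `pwd`
./my_array_program ${SLURM_ARRAY_TASK_ID}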
Multi-threaded/OpenMP job
#!/bin/bash
module load OpenMP
#
./my_omp_program
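The script above relies on the default core allocation. To request a specific number of cores and match the OpenMP thread count to it, a sketch along these lines (the core count of 8 is only a placeholder) follows the same pattern as the hybrid example below:

#!/bin/bash
# request 8 cores on one node (placeholder value)
#SBATCH -c 8
module load OpenMP
# match the OpenMP thread count to the SLURM allocation
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./my_omp_program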
MPI job
#!/bin/bash
# example MPI job script for SLURM
#
# SLURM defaults to the directory you were working in when you submitted the job.
# Output files are also put in this directory. To set a different working directory add:
# #SBATCH --workdir=/nobackup/my_directory
#
# Tell SLURM if you want to be emailed when your job starts, ends, etc.
# Currently mail can only be sent to addresses @ncl.ac.uk
# #SBATCH --mail-type=ALL
#
# number of tasks to use
#SBATCH --ntasks=88
#
# use Intel programming tools
module load intel
#
# SLURM recommends using srun instead of mpirun for better job control.
srun ./mpi_program
Hybrid OpenMP/MPI job
#!/bin/bash
# example MPI+OpenMP job script for SLURM
#
# SLURM defaults to the directory you were working in when you submitted the job.
# Output files are also put in this directory. To set a different working directory add:
# #SBATCH --workdir=/nobackup/my_directory
#
# Tell SLURM if you want to be emailed when your job starts, ends, etc.
# Currently mail can only be sent to addresses @ncl.ac.uk
# #SBATCH --mail-type=ALL
#
# This example has 4 MPI tasks, each with 22 cores
#
# number of tasks
#SBATCH --ntasks=4
#
# number of cores per task
#SBATCH -c 22
#
# use Intel programming tools
module load intel
#
# set the $OMP_NUM_THREADS variable
ompthreads=$SLURM_CPUS_PER_TASK
export OMP_NUM_THREADS=$ompthreads
#
# SLURM recommends using srun instead of mpirun for better job control.
srun ./omp_mpi_program
GPU job - batch
#!/bin/bash
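#
# A sketch of the rest of a GPU batch script. The partition and GPU request
# follow the interactive example below; the number of GPUs, the module name
# and the program name are only placeholders.
#SBATCH -p power
#SBATCH --gres=gpu:1
#
module load CUDA
./my_gpu_program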
GPU job - interactive
For an interactive shell on the GPU node (e.g. to compile natively for Power9), type:
srun --pty -p power --gres=gpu:4 -c 128 bash
Matlab job - batch
#!/bin/bash
module load MATLAB
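#
# run your Matlab script non-interactively; "my_script.m" is only a placeholder
matlab -nodisplay -nosplash -r "run('my_script.m'); exit"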
Note the two issues below before running Matlab:
(1) Matlab has a default location for its internal temporary files, which can cause issues if you run multiple Matlab sessions at once. Set a unique location for each job:
- for compiled Matlab code, add the following line to your job script:
export MCR_CACHE_ROOT=$TMPDIR
- for parallel jobs, set a unique storage location for each job’s temporary files in the .m file. For example:
pcluster = parcluster('local');
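% a sketch of pointing the cluster's temporary job files at a per-job location;
% this assumes $TMPDIR is set for the job, as in the MCR example above
pcluster.JobStorageLocation = getenv('TMPDIR');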
(2) The Matlab licence allows parallelisation within a node (e.g. via the Parallel Computing Toolbox) but not parallelisation across multiple nodes. Use code similar to this snippet to match the number of parallel workers to the SLURM allocation.
cluster.NumWorkers = str2num(getenv('SLURM_CPUS_PER_TASK'));
parpool(cluster, str2num(getenv('SLURM_CPUS_PER_TASK')))
Matlab job - interactive
Matlab can cause problems on the login nodes because of its multi-threading and high memory use. If you need to use Matlab interactively, run it through a SLURM interactive session. To run a single-core session, type:
module load MATLAB
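# a sketch of launching the single-core session, following the srun --pty
# pattern used for the GPU node above; the Matlab options may differ on your system
srun --pty matlab -nodisplay -nosplash -singleCompThread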
To start an interactive session on multiple cores, add the line maxNumCompThreads(str2num(getenv('SLURM_CPUS_PER_TASK'))); at the beginning of your Matlab session or script and type, e.g.:
module load MATLAB
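# a sketch of launching a multi-core session; the core count of 8 is only a
# placeholder, and the Matlab options may differ on your system
srun --pty -c 8 matlab -nodisplay -nosplash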