
Mox uses a scheduler called Slurm. It is similar to, but different from, the PBS-based scheduler used on hyak classic.

Below xyz is your hyak group name and abc is your UW netid.

To logon:
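The logon command itself is not shown in this version of the page. A typical invocation, assuming the login host for mox is mox.hyak.uw.edu (not confirmed by this page), would be:

```shell
# Assumed hostname; substitute your UW netid for abc
ssh abc@mox.hyak.uw.edu
```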


The above command gives you access to the login node of mox. The login node is only for logging in and submitting jobs; computational work is done on a compute node. As shown below, you can either get an interactive compute node or submit a batch job. The build node is a special compute node that can connect to the internet.

To see the various partitions (aka allocations):
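The command is not shown in this version of the page; the standard Slurm command for listing partitions is:

```shell
# List all partitions and the state of their nodes
sinfo
```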


The mox-specific command below shows the number of nodes, etc., of all allocations.
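The mox-specific command is not reproduced in this old page version. As a generic Slurm alternative (an assumption, not the command the page refers to), a summarized per-partition node count is available with:

```shell
# One line per partition, with node counts in A/I/O/T
# (allocated/idle/other/total) form
sinfo -s
```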


Interactive Single Node Usage:

The build node can connect to outside mox. It is useful for using git, transferring files to or from mox, installing packages in R or Python, etc.

To get an interactive build node for 2 hours:

srun -p build --time=2:00:00 --pty /bin/bash

An interactive node in your own group cannot connect to outside mox.

To get an interactive node in your own group for 2 hours:

srun -p xyz -A xyz --time=2:00:00 --pty /bin/bash

Issue the command below at an interactive node prompt to see the list of SLURM environment variables:

export | grep SLURM

Interactive Multiple Node Usage:

To get 2 nodes for interactive use:

salloc -N 2 -p xyz -A xyz  --time=2:00:00

When the above command runs, you will have been allocated 2 nodes but will still be on the mox login node. To find the names of the nodes that you have been allocated, issue the command below:

export | grep SLURM_JOB_NODELIST

Once you know the node names, then you can use them for your work.
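The nodelist variable holds Slurm's compressed form (e.g. n[2048-2049]). One way to expand it into individual hostnames, which you can then ssh to or pass to srun, is:

```shell
# Expand the compressed nodelist into one hostname per line
scontrol show hostnames "$SLURM_JOB_NODELIST"
```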

The command below lists the other SLURM environment variables.

export | grep SLURM

Batch usage:

To submit a batch job:

sbatch -p xyz -A xyz myscript.slurm

The script myscript.slurm is similar to myscript.pbs used on hyak classic. Below is an example slurm script.


#!/bin/bash

## Job Name
#SBATCH --job-name=myjob

## Resources
## Nodes
#SBATCH --nodes=1

## Walltime (3 hours)
#SBATCH --time=3:00:00

## Memory per node
#SBATCH --mem=30G

## Specify the working directory for this job
#SBATCH --workdir=/gscratch/xyz/abc/myjobdir

## Your commands go here



srun vs salloc

If no nodes have been allocated then (1) and (2) are equivalent.

(1) srun -p xyz -A xyz --time=4:00:00  --pty /bin/bash

(2) salloc -p xyz -A xyz --time=4:00:00    (this allocates the nodes)

    srun --pty /bin/bash                   (this uses those allocated nodes)

If nodes have already been allocated by using salloc then srun just uses those nodes.


More details are here:

Hyak mox Overview
