Mox uses a scheduler called Slurm. It is similar to, but different from, the PBS-based scheduler used on hyak classic.
Below, xyz is your hyak group name and abc is your UW NetID.
To log on:
ssh abc@mox.hyak.uw.edu
To see the various partitions (aka allocations):
sinfo
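To see only your own group's partition (with xyz as above):
sinfo -p xyz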
Interactive Usage:
The build node can connect to machines outside mox. It is useful for tasks that need network access, such as using git, transferring files to or from mox, and installing R or Python packages.
To get an interactive build node for 2 hours:
srun -p build --time=2:00:00 --pty /bin/bash
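Once on the build node you can reach the Internet; for example, to fetch a repository with git (the URL below is only a placeholder):
git clone https://github.com/example/project.git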
To get an interactive node in your own group for 2 hours:
srun -p xyz -A xyz --time=2:00:00 --pty /bin/bash
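You can also request specific resources on that node; for example (the core and memory counts below are illustrative, not necessarily mox's actual node sizes):
srun -p xyz -A xyz --nodes=1 --ntasks-per-node=28 --mem=30G --time=2:00:00 --pty /bin/bash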
Batch Usage:
To submit a batch job:
sbatch -p xyz -A xyz myscript.slurm
The script myscript.slurm is similar to the myscript.pbs used on hyak classic. Below is an example Slurm script.
#!/bin/bash
## Job Name
#SBATCH --job-name=myjob
## Resources
## Nodes
#SBATCH --nodes=1
## Walltime (3 hours)
#SBATCH --time=3:00:00
## Memory per node
#SBATCH --mem=30G
## Specify the working directory for this job
#SBATCH --workdir=/gscratch/xyz/abc/myjobdir
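## (newer Slurm versions name this option --chdir)
## The program or commands to run: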
myprogram
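After submitting, you can monitor the job with standard Slurm commands. To see your queued and running jobs:
squeue -u abc
To cancel a job (the job id is printed by sbatch at submission):
scancel jobid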
srun vs. salloc:
If no nodes have been allocated, then (1) and (2) below are equivalent.
(1) srun -p xyz -A xyz --time=4:00:00 --pty /bin/bash
(2) salloc -p xyz -A xyz --time=4:00:00
This allocates the nodes. Then:
srun --pty /bin/bash
This uses those same nodes.
If nodes have already been allocated by using salloc, then srun just uses those nodes.
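For example, a typical salloc session looks like this (the first exit leaves the compute node shell, the second ends the allocation and releases the nodes):
salloc -p xyz -A xyz --time=4:00:00
srun --pty /bin/bash
exit
exit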
More details are here: