
Mox uses a scheduler called Slurm. It is similar to, but different from, the PBS-based scheduler used on hyak classic.

In the examples below, xyz is your Hyak group name and abc is your UW NetID.

To log on:

ssh abc@mox.hyak.uw.edu

To see the various partitions (aka allocations):

sinfo
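
To list only a single partition (using xyz as a placeholder for your group's partition name), sinfo accepts a -p option:

sinfo -p xyz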

Interactive usage:

To get an interactive build node for 2 hours:

srun -p build --time=2:00:00 --pty /bin/bash

To get an interactive node in your own group for 2 hours:

srun -p xyz -A xyz --time=2:00:00 --pty /bin/bash
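
If the interactive session needs specific resources, they can be requested with the same options used in batch scripts; for example (the node count and memory below are placeholders, not recommendations):

srun -p xyz -A xyz --nodes=1 --mem=20G --time=2:00:00 --pty /bin/bash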

Batch usage:

To submit a batch job:

sbatch -p xyz -A xyz myscript.slurm

The script myscript.slurm plays the same role as myscript.pbs on hyak classic. Below is an example Slurm script.

#!/bin/bash
## Job Name
#SBATCH --job-name=myjob
## Resources
## Nodes
#SBATCH --nodes=1
## Walltime (3 hours)
#SBATCH --time=3:00:00
## Memory per node
#SBATCH --mem=30G
## Specify the working directory for this job
#SBATCH --workdir=/gscratch/xyz/abc/myjobdir

myprogram
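
Once a job has been submitted, it can be monitored and, if needed, cancelled with standard Slurm commands. To check the status of your jobs (abc is again your NetID):

squeue -u abc

To cancel a job (where 1234567 is a placeholder for the job ID reported by sbatch or squeue):

scancel 1234567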

 

srun vs salloc

If no nodes have been allocated, then (1) and (2) below are equivalent.

(1) srun -p xyz -A xyz --time=4:00:00 --pty /bin/bash

(2) salloc -p xyz -A xyz --time=4:00:00    (this allocates the nodes)
    srun --pty /bin/bash                   (this uses those same nodes)

If nodes have already been allocated with salloc, then srun simply uses those nodes.
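
A complete salloc session therefore looks roughly like the sketch below (the 4-hour limit is just an example). Exiting the inner shell returns you to the salloc shell, and exiting that shell releases the allocation:

salloc -p xyz -A xyz --time=4:00:00
srun --pty /bin/bash
exit    (leave the compute-node shell)
exit    (leave the salloc shell and release the allocation)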

 

More details are here:

Hyak mox Overview

https://slurm.schedmd.com/quickstart.html

https://slurm.schedmd.com/documentation.html
