
This article is for both mox.hyak (hyak nextgen) and for ikt.hyak (hyak classic).

(For historical reasons, the title of this page is Mox_scheduler.)

Mox and ikt use a scheduler called slurm.

Below, xyz is your hyak group name and abc is your UW netid.

Find out from your group members whether you should logon to mox or to ikt. Some groups have nodes on both mox and ikt; some groups have nodes only on ikt or only on mox.

To logon to mox.hyak:
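A logon command of the usual form (assuming the standard hostname mox.hyak.uw.edu; abc is your UW netid, as above):

```shell
# Log on to the mox login node; abc is your UW netid.
ssh abc@mox.hyak.uw.edu
```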


To logon to ikt:
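Assuming the hostname follows the same pattern as mox (an assumption; check with your group):

```shell
# Log on to the ikt login node; abc is your UW netid.
ssh abc@ikt.hyak.uw.edu
```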


The above command gives you access to the login node of mox or ikt.hyak. The login node is only for logging in and submitting jobs. The computational work is done on a compute node. As shown below, you can get either an interactive compute node or submit a batch job. The build node is a special compute node which can connect to the internet.

For mox.hyak:

srun -p build --time=2:00:00 --mem=20G --pty /bin/bash

For ikt.hyak:

srun -p build --time=2:00:00 --mem=10G --pty /bin/bash

(Notes: (a) --pty /bin/bash must be the last option in the above command. (b) For GNU parallel, use the "-j 4" option.)

For ikt, use --mem=58G.

On mox and ikt, the -p and -A options are usually the same.

However, when you are using the build node with "-p build", you do not need to give the -A option.
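As a sketch of the usual -p and -A usage (xyz stands for your group name, as above; the time and memory values are arbitrary), an interactive session on your group's own nodes could look like:

```shell
# Request an interactive shell on one of your group's compute nodes.
# xyz is your hyak group name; adjust --time and --mem to your needs.
srun -p xyz -A xyz --time=2:00:00 --mem=20G --pty /bin/bash
```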


See the link below for using the mox and ikt ckpt queue:
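As an unverified sketch (the account name xyz-ckpt is an assumption; confirm the correct account for your group), a submission to the ckpt queue might look like:

```shell
# Hypothetical: submit a batch script to the checkpoint (ckpt) partition.
# The account name xyz-ckpt is an assumption; check your group's settings.
sbatch -p ckpt -A xyz-ckpt myscript.slurm
```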



Batch usage with multiple nodes:

If you use multiple nodes in one batch job, then your program should know how to use all the nodes. For example, your program should be an MPI program.


The value of the --ntasks-per-node option should be no greater than 40, as it represents the maximum number of cores per node you are requesting and no node type has more than this. (Typical values: 16 for ikt, 28 for older mox nodes, and 40 for newer mox nodes. Do not increase these values; you can decrease them if your program is running out of memory on a node.) However, the nodes you are able to access may not have 40 cores, so your job could pend indefinitely. Please check which resources are available to you and their limits when determining what you can or should request.

For ikt:

#SBATCH --nodes=4
#SBATCH --ntasks-per-node=16
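Following the core counts above, a corresponding sketch for older mox nodes (28 cores per node; the node count is illustrative) would be:

```shell
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=28
```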


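An interactive multi-node allocation can be obtained with salloc; a sketch that allocates 2 nodes (the partition, account, and time values are assumptions; xyz is your group name):

```shell
# Allocate 2 nodes interactively; you remain on the login node afterward.
salloc -N 2 -p xyz -A xyz --time=1:00:00
```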
When the above command runs, you will have been allocated 2 nodes but will still be on the mox login node.

If you issue a command like the one below, then srun will run the command hostname on each node:
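A minimal form (assuming you are inside the multi-node allocation, with one task per node) would be:

```shell
# srun runs the given command once per allocated task; with one task
# per node, this prints each allocated node's hostname.
srun hostname
```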