
High-Level Differences from ikt.hyak, the first-generation Hyak system (retired March 2020).

  1. Mox is an entirely separate cluster from ikt; the two share nothing with one another.
  2. You only get what you ask for, regardless of the resources available on the node. If you ask for 1 CPU, you'll only get one; if you ask for 1 GB of RAM, you'll only get 1 GB (see the sketch after this list).
  3. An allocation won't get the same set of nodes every time, just access to the number of nodes to which it is entitled.
  4. For the moment, there is no occasional preemption in ckpt (formerly the bf queue).
  5. Preempted jobs get 10 seconds to do something smart before being killed and requeued.
  6. Please report any problems to help@uw.edu with "Hyak" as the first word of the subject line. Please also let us know that you are using mox, not ikt.
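
As a quick sketch of point 2 (the partition, account, and resource values below are illustrative placeholders), an interactive job that explicitly asks for 4 CPUs and 10 GB of RAM will be limited to exactly those resources:

srun -p <my short group name> -A <my short group name> --nodes=1 --cpus-per-task=4 --mem=10G --time=1:00:00 --pty bash -l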

...

BBCP = mox1.hyak.uw.edu or mox2.hyak.uw.edu
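
As an illustrative sketch only (the file name and target path are placeholders), a bbcp copy from a local machine to mox through one of these hosts might look like:

bbcp mydata.tar <UWNetID>@mox1.hyak.uw.edu:/gscratch/<my short group>/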

File Transfers

Internal to Hyak systems

You can copy files at high speed without a password between the Hyak systems using commands like the ones below.

...
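
As one generic illustration (the hostname and paths are placeholders, not the specific commands from this page), an scp copy between Hyak login nodes could look like:

scp -r /gscratch/<my short group>/mydata <UWNetID>@<other Hyak login node>:/gscratch/<my short group>/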

cat /gscratch/<my short group>/usage_report.txt

Slurm Primer

Show Queue

All Jobs

squeue

Jobs in Allocation

...
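
squeue also accepts the usual Slurm filters; as a generic example (not Hyak-specific), listing only your own jobs:

squeue -u <your UW NetID>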

Build Allocation - usage limited by core count and time

srun -p build --mem=100G --time=2:00:00 --pty bash -l
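
Before requesting a build session, you can check the state of the build partition with standard Slurm tooling (a generic sketch):

sinfo -p build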

Own Allocation

srun -p <my short group name> -A <my short group name> --mem=100G --time=2:00:00 --pty bash -l
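
If you want an entire node for the interactive session, one possible variant (the task count is an assumption; match it to the node type you are allocated) is:

srun -p <my short group name> -A <my short group name> --nodes=1 --ntasks-per-node=28 --mem=100G --time=2:00:00 --pty bash -l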

Show Allocation Information

...
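
One general-purpose way to inspect a partition's limits in Slurm (a generic sketch, not necessarily the Hyak-specific report elided above) is scontrol:

scontrol show partition <my short group name>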

#!/bin/bash
## Job Name
#SBATCH --job-name=test-job
## Allocation Definition
#SBATCH --account=MYSHORTGROUP
#SBATCH --partition=MYSHORTGROUP
## Resources

## Nodes
#SBATCH --nodes=2      
## Tasks per node (Slurm assumes you want to run 28 tasks; remove two of the leading # characters and adjust the parameter if needed)
###SBATCH --ntasks-per-node=28
## Walltime (two hours)
#SBATCH --time=2:00:00
## E-mail notification, see man sbatch for options
## Turn on e-mail notification
#SBATCH --mail-type=ALL
#SBATCH --mail-user=your_email_address


## Memory per node
#SBATCH --mem=100G
## Specify the working directory for this job
#SBATCH --chdir=/gscratch/MYGROUP/MYUSER/MYRUN

module load icc_<version>-impi_<version>
mpirun /gscratch/MYGROUP/MYMODEL/MYMODEL-BIN
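
Assuming the script above is saved as, for example, my_job.slurm (the file name is a placeholder), it is submitted and monitored with standard Slurm commands:

sbatch my_job.slurm
squeue -u <your UW NetID>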

...

See the articles on mox at the link below (scroll down to the links with the prefix mox):

Hyak HOWTO

GNU parallel:

https://wiki.cac.washington.edu/display/hyakusers/Hyak+Serial+Job+Scripts

Hyak parallel-sql:

Hyak parallel-sql

Ikt Documentation

You can find additional documentation that applies to both mox and ikt on the main Hyak User Wiki.