Warning

Ikt Hyak Cluster Decommissioning

The older Ikt cluster will be completely decommissioned in June 2020.  Ikt users and groups, please begin making preparations now.  Details are available HERE.


...

System Status

Hyak consists of two independent clusters: ikt.hyak (hyak classic) and mox.hyak (hyak next-gen, the 2nd generation Hyak system).

Current Status

Tip

ikt.Hyak is online.

A maintenance reservation is in place for Feb 11th, 2020.*

Tip

mox.Hyak is online.

A maintenance reservation is in place for Feb 11th, 2020.*

* Please subscribe to the hyak-users mailing list for updates on system outages and maintenance.

* When a maintenance reservation is in place, jobs whose time limit (sbatch --time parameter) could allow the job to extend into the maintenance period will not be allowed to start by the scheduler. The squeue command will show the reason as 'ReqNodeNotAvail'.
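For example, shortly before a maintenance window you can still run jobs whose requested time limit ends before the window opens, and you can check the scheduler's reason for any pending job. The script name and output format below are illustrative, not prescribed:

    # Request a 4-hour limit so the job finishes before maintenance begins
    sbatch --time=4:00:00 my_job.sh

    # List your jobs with the scheduler's reason column; jobs blocked by a
    # maintenance reservation appear with reason 'ReqNodeNotAvail'
    squeue -u $USER -o "%.10i %.9P %.20j %.8T %R"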

Hyak will be offline from 9:00 am to 5:00 pm for scheduled maintenance on the second Tuesday of every month. Every third month (Feb, May, Aug, Nov), the maintenance window lasts from 9:00 am until 9:00 am the following morning. More on the Maintenance schedule.

...

RC Club office hours: https://depts.washington.edu/uwrcc/calendar/

Hyak consists of two independent clusters: ikt.hyak (hyak classic) and mox.hyak (hyak next-gen). All new nodes are added to mox.hyak.

ikt

ikt.hyak has hundreds of nodes, each comparable to a high-end server. A typical node on ikt.hyak has 16 processor cores and at least 64GB of memory. All the ikt.hyak nodes run CentOS 6 Linux, and they are tied together by the Slurm cluster software. User tasks are submitted through the Slurm scheduler. ikt.hyak is made up of several generations of hardware with different levels of performance; see Hyak Node Hardware for more details.

mox

mox.hyak contains hundreds of nodes, each comparable to a high-end server. A typical node on mox.hyak has at least 28 processor cores and at least 128GB of memory. All the mox.hyak nodes run CentOS 7 Linux, and they are tied together by the Slurm cluster software. User tasks are submitted through the Slurm scheduler.
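To see the actual core and memory configuration of the nodes, a standard Slurm query such as the one below works on any Slurm cluster; run it from a login node (shown as a sketch, not the only format):

    # Show each node name with its CPU count and memory in megabytes
    sinfo -N -o "%N %c %m"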

...

CPU and storage options are available for purchase by Hyak users associated with sponsored campus units. New nodes will be part of mox.hyak. Non-sponsored campus units are able to purchase dedicated compute or storage capacity with an added annual maintenance cost.

This information is provided for preliminary planning purposes only. Please use your UW e-mail to contact help@uw.edu for assistance in preparing actual Hyak hardware configurations.

https://itconnect.uw.edu/service/shared-scalable-compute-cluster-for-research-hyak/

Any faculty or PI can send an e-mail to help@uw.edu and request a "Welcome to Hyak Tutorial" to be held at their lab or department.

Getting Started

Please review everything in this section before contacting Hyak support for help with new accounts. This section also contains useful information on life with two-factor authentication, including setting up SSH tunnels to reduce hassles.
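For example, OpenSSH connection multiplexing can cut down on repeated two-factor prompts by reusing one authenticated connection for later sessions and file transfers. This is a minimal sketch for your local machine; the host alias is arbitrary and the hostname assumes the mox login node:

    # ~/.ssh/config on your local machine
    # "mox" is an arbitrary alias; replace your_uw_netid with your UW NetID.
    Host mox
        HostName mox.hyak.uw.edu
        User your_uw_netid
        # Share one authenticated connection across sessions
        ControlMaster auto
        ControlPath ~/.ssh/%r@%h:%p
        ControlPersist yes

After the first "ssh mox" login (with two-factor), additional sessions and scp transfers reuse the open connection without a new prompt.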

...

All Hyak use is mediated by the system scheduler. This section provides details on using the scheduler to run interactive, batch, and parallel jobs. It also includes instructions for using the scheduler to monitor your jobs and the cluster status.

ikt.hyak uses the Slurm scheduler. Jobs must be submitted via the login nodes.

mox.hyak uses the Slurm scheduler. Jobs must be submitted via the login nodes.

The page below is for both ikt.hyak and mox.hyak.

Mox_scheduler
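As a minimal sketch of a batch submission, the script below shows the common sbatch directives; the account and partition names are placeholders for your group's actual allocation:

    #!/bin/bash
    #SBATCH --job-name=example
    #SBATCH --account=mygroup        # placeholder: your group's account
    #SBATCH --partition=mygroup      # placeholder: your group's partition
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=28
    #SBATCH --time=1:00:00
    #SBATCH --mem=100G

    # Commands executed on the compute node
    ./my_program

Submit it with "sbatch example.sh". An interactive session can be requested instead with srun, for example: srun -p mygroup -A mygroup --nodes=1 --time=1:00:00 --mem=10G --pty /bin/bash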


Getting Help

If you use the STF (Student Technology Fee) allocation through the Research Computing Club (RCC), please join the mailing list for the RCC (https://depts.washington.edu/uwrcc/) and direct all questions there. Do not contact UW-IT for support.

The principal means of support we provide is this Wiki - a comprehensive set of documentation covering all basic Hyak functions. If you encounter problems with one of Hyak's basic functions, use your UW e-mail to send an e-mail to help@uw.edu with 'hyak' as the first word in the subject and a brief description of your issue. If you do not use your UW-provided e-mail account, please include your UW NetID in your e-mail. If you're reporting a problem with a job or the job scheduler, please include at least one affected jobid as well as paths to the job script and job stdout.
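For example, the following standard Slurm commands gather the details worth attaching to a report (the job ID shown is a placeholder):

    # Full scheduler record for the affected job
    scontrol show job 1234567

    # Accounting summary: state, exit code, and elapsed time
    sacct -j 1234567 --format=JobID,JobName,State,ExitCode,Elapsed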

...

  • This work was facilitated through the use of advanced computational, storage, and networking infrastructure provided by the Hyak supercomputer system at the University of Washington.

...