Page 1 of 7. Showing 68 results (0.035 seconds)

  1. Hyak mox Overview

High-level differences from ikt.hyak, the first-generation Hyak system (retired March 2020). Mox is an entirely separate cluster; they share nothing with one … any problems with Hyak as the first word in the subject. Please also let us know you're using mox, not ikt. Connecting SSH
    Hyak User Documentation · Aug 25, 2020
  2. How to access google drive from Mox

      Step-by-step guide to using "drive" (a Google Drive push/pull client written in "go" and preinstalled in /sw/contrib/go on Hyak/Mox). Note: needs a build … session on a build node on Mox: salloc -p build srun --pty bash 2. add these paths to your ~/.bashrc (or just export them; you might not even need all of them
    Hyak User Contributions · Oct 23, 2017
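The ~/.bashrc additions this entry describes can be sketched as follows. This is a minimal sketch assuming the install prefix is /sw/contrib/go as named in the snippet; the exact set of variables you need may differ on your system:

```shell
# Sketch of ~/.bashrc additions for the preinstalled "drive" client.
# /sw/contrib/go is the install prefix named in the entry above; adjust if yours differs.
export GOPATH=/sw/contrib/go
export PATH="$GOPATH/bin:$PATH"   # so the "drive" binary is found on your PATH
```

After re-sourcing ~/.bashrc (or exporting these in your build-node session), `drive` should resolve without a full path.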
  3. Mox Job Profiling

    A Grafana dashboard with graphs of job resource usage over time is available at … The dashboard can be reached from the campus network or via the Husky OnNet VPN
    Hyak User Documentation · May 15, 2019
  4. Mox_scheduler

    Mox uses a scheduler called Slurm. Below, xyz is your hyak group name and abc is your UW netid. To log on to mox.hyak: ssh … xyz To see your jobs: squeue -u abc The below mox-specific command shows the number of nodes etc. of all allocations: hyakalloc Interactive Single Node Usage
    Hyak User Documentation · Aug 25, 2020
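The commands this entry mentions, as a non-runnable sketch (these only work on the cluster; xyz and abc are the group-name and netid placeholders from the snippet, and the login hostname, elided above, should be taken from the Hyak docs):

```shell
# Log on to mox (replace <mox-hostname> with the login address from the Hyak docs):
ssh abc@<mox-hostname>
# See your jobs:
squeue -u abc
# Mox-specific: show the number of nodes etc. of all allocations:
hyakalloc
```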
  5. Mox_mpi

    Below, xyz is your group name and abc is your userid. Compiling an MPI program: see also Hyak Intel MPI. Load one of the below modules. Intel Mox: module load icc_18-impi_2018 Intel Ikt: module load icc_18-impi_2018 gcc Mox: module load gcc_4.8.5-impi_2017 gcc Ikt: module load gcc_4.4.7-impi_5.1.2
    Hyak User Documentation · Nov 04, 2019
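A non-runnable sketch of compiling and launching an MPI program on mox with the Intel module named above; the source file name, task count, and partition (xyz) are placeholders, not values from the entry:

```shell
# On a mox build node:
module load icc_18-impi_2018      # Intel compiler + Intel MPI (module from the entry above)
mpiicc hello_mpi.c -o hello_mpi   # mpiicc: Intel MPI's C compiler wrapper
# Launch under slurm (placeholders: partition xyz, 4 tasks):
srun -p xyz -n 4 ./hello_mpi
```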
  6. Mox_per_core_scheduling

    Until October 2019, mox queues had node-level scheduling. This means that srun and sbatch gave node access in increments of one node, e.g. your srun and sbatch … not mention the --nodes option then it was equivalent to using "--nodes=1". After October 2019, new mox queues (slurm partitions) and some older mox queues
    Hyak User Documentation · Nov 05, 2019
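The node-level vs. per-core difference described above can be sketched as an sbatch header; the partition name, task count, and program are placeholders:

```shell
#!/bin/bash
# Sketch of an sbatch script for a per-core mox queue (placeholder partition xyz).
#SBATCH --partition=xyz
#SBATCH --ntasks=4        # per-core queues allocate just these cores; a
#SBATCH --time=1:00:00    # node-level queue would instead give whole nodes
                          # (and omitting --nodes meant --nodes=1)
./myprogram               # placeholder
```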
  7. Mox_gnu_parallel

    node. Mox nodes bought before August 2018 have 28 cores per node. Mox nodes bought after August 2018 have 32 cores per node. Mox nodes in 2019 have 40 cores
    Hyak User Documentation · Nov 04, 2019
  8. Mox_Gaussian

    Access to Gaussian On mox, Gaussian can only be used by users who belong to the below groups: ligroup-gaussian ligroup-gdv Only UW Seattle people can use … on mox are: contrib/g09.a02 contrib/g09.e01 contrib/g09.e01_debug contrib/g09.e01_rebuild contrib/g16.a03 contrib/g16.b01 After you load one
    Hyak User Documentation · Aug 16, 2019
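Loading one of the Gaussian modules listed above can be sketched as follows (works only on mox, and only for members of the listed groups):

```shell
# Pick one of the modules named in the entry above, e.g. the g16.b01 build:
module load contrib/g16.b01
```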
  9. Mox_memory_cpu_usage

    Below, abc is your userid and xyz is your group. (1) Use the below command on mox to get the jobID for the job whose memory and CPU usage you want to monitor … be near 28 for the 28-core mox nodes, near 32 for the 32-core mox nodes, and near 40 for the 40-core mox nodes. (It may be that your program cannot use all
    Hyak User Documentation · May 02, 2019
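A non-runnable sketch of the monitoring flow this entry outlines (abc is the userid placeholder from the snippet; the node name is hypothetical, taken from your own squeue output):

```shell
# (1) Get the job ID and the node your job is running on:
squeue -u abc
# (2) ssh to that node and inspect usage:
ssh n2185          # hypothetical node name from the squeue output
top -u abc         # per-process CPU and memory for your processes
uptime             # load should be near 28/32/40 on 28/32/40-core nodes if fully used
```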
  10. Mox_hyak_file_transfer

    ikt is hyak classic and mox is hyak nextgen. Below, xyz is your group name and abc is your userid. (If you are using a non-default PATH environment variable, you can find hyakbbcp at /sw/local/bin/hyakbbcp.) From ikt to mox: ikt1$ hyakbbcp myfile
    Hyak User Documentation · Mar 22, 2017
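The transfer step quoted above, as a non-runnable sketch (myfile is the placeholder from the snippet; the destination argument is not shown there, so it is left as a placeholder too):

```shell
# Run from an ikt login node; use the full path if your PATH is non-default:
/sw/local/bin/hyakbbcp myfile <destination-on-mox>   # destination form not shown in the snippet
```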