Child pages
  • Hyak_node_local_disk

In the examples below, xyz is your group name and abc is your userid.

The node-local scratch disk space is at /scr. All data on /scr is deleted after your job ends.


At the beginning of your Slurm script you can use lines like the ones below to copy your input data files to the local /scr disk.

mkdir -p /scr/xyz/abc/mydir

cp /gscratch/xyz/abc/mydir/inputfile /scr/xyz/abc/mydir

At the end of your Slurm script you can use a line like the one below to copy the output data files back to /gscratch.

cp /scr/xyz/abc/mydir/outputfile /gscratch/xyz/abc/mydir
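
Putting these pieces together, the body of a staging job might look like the sketch below. The program name ./myprogram and its arguments are placeholders for illustration; only the mkdir and cp lines come from the examples above.

# Stage input data onto the node-local scratch disk.
mkdir -p /scr/xyz/abc/mydir
cp /gscratch/xyz/abc/mydir/inputfile /scr/xyz/abc/mydir

# Run against the local copy (placeholder program name).
cd /scr/xyz/abc/mydir
./myprogram inputfile outputfile

# Copy results back to /gscratch before the job ends and /scr is cleared.
cp /scr/xyz/abc/mydir/outputfile /gscratch/xyz/abc/mydir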


Older 28 core nodes have about 100GB of local disk space.

Newer 28 core nodes and 32 core nodes have about 200GB of local disk space.

The 40 core nodes have about 400GB of local disk.

The command below shows the hostname, local disk space (reported in megabytes), CPU count, and memory for each node in your group's partition.

sinfo -h -N -p xyz -O nodehost,disk,cpus,memory
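
From inside a running job you can also check how much space is free on the node-local disk with a standard df command, assuming /scr is mounted as described above:

df -h /scr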

If the above command shows that your group has 32 core or 40 core nodes, then you can specifically request those nodes by adding the appropriate line below to your Slurm script. (--cores-per-socket=N restricts the job to nodes with at least N cores per socket, which excludes the 28 core nodes.)

For 32 core nodes (200GB local disk) use:
#SBATCH --cores-per-socket=16

For 40 core nodes (400GB local disk) use:
#SBATCH --cores-per-socket=20
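
For example, a job that needs roughly 400GB of local scratch space could request a 40 core node as in the sketch below; the job name, account, partition, and time limit are placeholder values.

#!/bin/bash
#SBATCH --job-name=local-scratch-job
#SBATCH --account=xyz
#SBATCH --partition=xyz
#SBATCH --nodes=1
#SBATCH --cores-per-socket=20
#SBATCH --time=4:00:00

# Staging and run commands as in the /scr example above.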


