
Access to Gaussian

On mox, Gaussian can only be used by users who belong to one of the groups below:

ligroup-gaussian

ligroup-gdv

Only UW Seattle users can use Gaussian. To request access to Gaussian on mox.hyak, send an e-mail to Prof. Xiaosong Li of the Chemistry Department.

Gaussian modules

The Gaussian modules on mox are:

contrib/g09.a02
contrib/g09.e01
contrib/g09.e01_debug
contrib/g09.e01_rebuild
contrib/g16.a03
contrib/g16.b01

After you load one of the above modules, the environment variable below is set to ensure that Gaussian scratch files
are written to the local disk /scr.

declare -x GAUSS_SCRDIR="/scr"

Do NOT change the location of GAUSS_SCRDIR.
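
For example, to load one of these modules and confirm that GAUSS_SCRDIR points at the local disk (contrib/g16.b01 is used here only as an illustration), you can run:

module load contrib/g16.b01
echo $GAUSS_SCRDIR

The echo command should print /scr.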

Most Gaussian scratch file data should be written to /scr. However, if the scratch file data is too large to fit on the local disk /scr, then issue the command below to create a directory on gscratch to store the extra scratch data:

mkdir /gscratch/xyz/abc/gaussianscr

Local Disk on mox nodes

Currently, older 28 core compute nodes have about 100 GB of local disk space. Newer 28 core nodes and 32 core nodes have about 200 GB. The 40 core nodes have about 400 GB. If your Gaussian scratch file data does not fit on the local disk /scr, then use the options below to specify extra space on gscratch. Do not specify these options if the Gaussian scratch file data fits on the local disk /scr.

For the 28 core nodes, use the Gaussian option below:

%RWF=/scr/,95GB,/gscratch/xyz/abc/gaussianscr/,95GB

For the 32 core nodes, use the Gaussian option below:

%RWF=/scr/,195GB,/gscratch/xyz/abc/gaussianscr/,95GB

For the 40 core nodes, use the Gaussian option below:

%RWF=/scr/,395GB,/gscratch/xyz/abc/gaussianscr/,95GB
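
The %RWF line goes in the Link 0 section at the top of your Gaussian input file, before the route section. As a sketch, a job intended for a 28 core node might start like this (the checkpoint file name, processor count, and route line are only placeholders, and /gscratch/xyz/abc/gaussianscr is the example directory created above):

%RWF=/scr/,95GB,/gscratch/xyz/abc/gaussianscr/,95GB
%Chk=myjob.chk
%NProcShared=28
# B3LYP/6-31G(d) Opt

followed by the usual blank line, title line, charge and multiplicity, and molecule specification.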

See the links below for how to specify MaxDisk:

https://gaussian.com/maxdisk/

https://gaussian.com/defroute/  (click on the examples tab)
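
For example (a sketch only, following the MaxDisk documentation linked above), MaxDisk is given in the route section and should not exceed the scratch space reserved with %RWF:

# B3LYP/6-31G(d) Opt MaxDisk=95GB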

Local disk for the chem group

Most chem nodes have about 100 GB of local disk.

There are a few 32 core chem nodes which have about 200 GB of local disk. In order to get a 32 core node, use the line below in your Slurm script:
#SBATCH --cores-per-socket=16
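
Putting this together, a minimal Slurm header for a job that needs one of the 32 core chem nodes might look like the sketch below. The job name, account, partition, time, and memory values are placeholders; adjust them for your group:

#!/bin/bash
#SBATCH --job-name=g16_job
#SBATCH --account=chem
#SBATCH --partition=chem
#SBATCH --nodes=1
#SBATCH --cores-per-socket=16
#SBATCH --time=24:00:00
#SBATCH --mem=100G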

If your Gaussian scratch files need more local disk space than your group's nodes have, then contact help@uw.edu with the subject "hyak Gaussian".


Parallel jobs

On mox, please configure Gaussian to use ssh. Details are at the link below under the tab "Parallel Jobs":

http://gaussian.com/running/
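
For a run on a single node, shared-memory parallelism is requested with the %NProcShared Link 0 line, for example (a sketch assuming a 28 core node):

%NProcShared=28

Multi-node (Linda) runs additionally need the ssh configuration described under the "Parallel Jobs" tab at the link above.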


 
