Child pages
  • Hyak intel-mpich3.sh

Use the pages below for ikt.hyak (Hyak classic) and for mox.hyak (Hyak next gen):

Mox_scheduler

Mox_mpi

==========================================================

Ignore the article below; it is retained for historical interest only.

Important Instructions

You will need to change the parameters marked with asterisks (for example, */gscratch/GROUPNAME/USERNAME/JOB_DIR*) for your job. Depending on your application, you may also need to adjust other settings noted in the script comments. In most cases, you should not change any text that is not marked.
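For example, assuming a hypothetical group mygroup, user myuser, and job directory my_job (names used only for illustration), the marked directives in the script below might be edited like this:

 ## Hypothetical example values -- substitute your own group, username, and job directory
 #PBS -N my_job
 #PBS -l nodes=4:ppn=16,mem=22gb,feature=16core
 #PBS -l walltime=01:00:00
 #PBS -o /gscratch/mygroup/myuser/my_job
 #PBS -d /gscratch/mygroup/myuser/my_job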

Job Script

 #!/bin/bash
 ##
 ## !! _NEVER_ remove # signs from in front of PBS or from the line above !!
 ##
 ## RENAME FOR YOUR JOB
 #PBS -N *intel-mpich3*
 ## EDIT FOR YOUR JOB
 ## Request 16 CPUs (cores) on 2 nodes, 32 total cores
 #PBS -l nodes=*2*:ppn=*16*,mem=*22gb*,feature=*16*core
 ## WALLTIME DEFAULTS TO ONE HOUR - ALWAYS SPECIFY FOR LONGER JOBS
 ## If the job doesn't finish in 10 minutes, cancel it
 #PBS -l walltime=*00:10:00*
 ## EDIT FOR YOUR JOB
 ## Put the output from jobs into the below directory
 #PBS -o */gscratch/GROUPNAME/USERNAME/JOB_DIR*
 ## Put both the stderr and stdout into a single file
 #PBS -j oe
 ## EDIT FOR YOUR JOB
 ## Specify the working directory for this job
 #PBS -d */gscratch/GROUPNAME/USERNAME/JOB_DIR*
 ## Some applications, particularly FORTRAN applications, require
 ##  a larger than usual data stack size. Uncomment the line
 ##  below if your application is exiting unexpectedly.
 #ulimit -s unlimited
  
 ## Disable regcache
 export MX_RCACHE=0
 ## Load the appropriate environment module.
 module load *<latest module>* # icc_<version>-mpich_3.1.4
 ### Debugging information
 ### Include your job logs which contain output from the below commands
 ###  in any job-related help requests.
 # Total Number of processors (cores) to be used by the job
 HYAK_NPE=$(wc -l < $PBS_NODEFILE)
 # Number of nodes used
 HYAK_NNODES=$(uniq $PBS_NODEFILE | wc -l )
 echo "**** Job Debugging Information ****"
 echo "This job will run on $HYAK_NPE total CPUs on $HYAK_NNODES different nodes"
 echo ""
 echo "Node:CPUs Used"
 uniq -c $PBS_NODEFILE | awk '{print $2 ":" $1}'
 echo "SHARED LIBRARY CHECK"
 ldd *./test*
 echo "ENVIRONMENT VARIABLES"
 set
 echo "**********************************************"
 ### End Debugging information
  
 ### Specify the app to run here                           ###
 ###                                                       ###
 # EDIT FOR YOUR JOB
 #
 mpiexec.hydra -launcher rsh -rmk pbs -bind-to core *./test*
 ### include any post processing here                      ###
 ###                                                       ###
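Once the marked parameters have been edited, the script can be submitted to the scheduler with qsub and monitored with qstat. The sketch below is not part of the original script: it assumes a hypothetical script file name intel-mpich3.sh and that the *./test* executable is built from a hypothetical C source file test.c using the MPICH compiler wrapper mpicc.

 ## A minimal sketch, not part of the original script.
 ## intel-mpich3.sh and test.c are hypothetical names; use your own.
 module load <latest module>    # same module as in the job script
 mpicc -O2 -o test test.c       # build the MPI executable run by the script
 qsub intel-mpich3.sh           # submit the job script to the scheduler
 qstat -u $USER                 # check the status of your queued and running jobs

Because the script sets #PBS -j oe, stdout and stderr are merged into a single log file written to the directory given by #PBS -o.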