
Important Instructions

You will need to change the parameters that are bold and red for your job. Depending on your application, you may also need to change the parameters that are bold and blue. In most cases, you should not change any text that is not colored.

Job Script

#!/bin/bash
##
## !! NEVER remove the # signs in front of PBS directives or from the line above !!
##
## RENAME FOR YOUR JOB
#PBS -N intel-openmpi

## EDIT FOR YOUR JOB
## Request 16 CPUs (cores) on 2 nodes, 32 total cores
#PBS -l nodes=2:ppn=16,mem=22gb,feature=16core

## WALLTIME DEFAULTS TO ONE HOUR - ALWAYS SPECIFY FOR LONGER JOBS
## Here the job will be killed if it hasn't finished within 10 minutes
#PBS -l walltime=00:10:00

## EDIT FOR YOUR JOB
## Put the output from jobs into the below directory
#PBS -o /gscratch/GROUPNAME/USERNAME/JOB_DIR
## Put both the stderr and stdout into a single file
#PBS -j oe

## EDIT FOR YOUR JOB
## Specify the working directory for this job
#PBS -d /gscratch/GROUPNAME/USERNAME/JOB_DIR

## Some applications, particularly FORTRAN applications, require
## a larger-than-usual data stack size. Uncomment if your
## application is exiting unexpectedly.
#ulimit -s unlimited

## Disable regcache
export MX_RCACHE=0

## Load the appropriate environment module with ompi support
## correct modules are in the form icc_<version>-ompi_<version>
## for instance, icc_14.0.3-ompi_1.8.3
module load <latest module>

### Debugging information
### Include your job logs which contain output from the below commands
### in any job-related help requests.
# Total number of processors (cores) to be used by the job
HYAK_NPE=$(wc -l < $PBS_NODEFILE)
# Number of nodes used
HYAK_NNODES=$(uniq $PBS_NODEFILE | wc -l)
echo "**** Job Debugging Information ****"
echo "This job will run on $HYAK_NPE total CPUs on $HYAK_NNODES different nodes"
echo ""
echo "Node:CPUs Used"
uniq -c $PBS_NODEFILE | awk '{print $2 ":" $1}'
echo "SHARED LIBRARY CHECK"
ldd ./test
echo "ENVIRONMENT VARIABLES"
set
echo "**********************************************"
### End Debugging information

### Specify the app to run here ###
### ###
# EDIT FOR YOUR JOB
## Uncomment exactly one of the alternatives below;
## it must match the compiler used and the
## ompi module loaded above
# Open MPI 1.6.x
#mpirun --mca mtl mx --mca pml cm --bind-to-core ./test

# Open MPI 1.8.x
#mpirun --mca mtl mx --mca pml cm --bind-to core --map-by core ./test

### include any post processing here ###
### ###
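The node-counting commands in the debugging section can be tried outside of PBS by pointing them at a mock nodefile. This is a minimal sketch; the hostnames (n0001, n0002) are invented for illustration, and a real $PBS_NODEFILE is created by the scheduler with one line per allocated core.

```shell
# Create a mock nodefile: 2 cores on each of 2 nodes (hypothetical hostnames)
PBS_NODEFILE=$(mktemp)
printf 'n0001\nn0001\nn0002\nn0002\n' > "$PBS_NODEFILE"

# Same commands as in the job script's debugging section
HYAK_NPE=$(wc -l < "$PBS_NODEFILE")
HYAK_NNODES=$(uniq "$PBS_NODEFILE" | wc -l)
echo "This job will run on $HYAK_NPE total CPUs on $HYAK_NNODES different nodes"

# Node:CPUs mapping, as it would appear in the job log
uniq -c "$PBS_NODEFILE" | awk '{print $2 ":" $1}'

rm -f "$PBS_NODEFILE"
```

With the mock file above, this prints 4 total CPUs on 2 nodes, followed by `n0001:2` and `n0002:2`. Note that `uniq` only collapses adjacent duplicate lines, which is sufficient here because PBS lists each node's cores contiguously.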
