These instructions are for building and running CESM 1.0.6 on hyak.
Replace abc with your hyak userid and adjust directory names as appropriate for your setup.
STEPS TO BUILD CESM
1. Request an interactive session on a build node:
srun -p build --time=2:00:00 --mem=100G --pty /bin/bash
2. Load modules:
module load icc_14.0.3-mpich2_mx_1.3.1_10
module load hdf5_1.8.13-icc_14.0.3
module load netcdf_fortran+c_4.4-icc_14.0.3
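To confirm the modules loaded cleanly before going further, you can list what is currently loaded:
module list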
3. cd /gscratch/coenv/abc/CESM/CESM_builds/
4. Create an instance of the model (the case name here must match the build and run scripts used in the later steps):
../cesm1_0_6_icc14x/scripts/create_newcase -res f09_f09 -case CAM4_f09f09_CTL_ICC14 -mach generic_linux_intel -compset F_2000_CN -din_loc_root_csmdata /gscratch/coenv/abc/CESM/CESM_inputs -max_tasks_per_node 16 -scratchroot /gscratch/coenv/abc/CESM/CESM_builds
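create_newcase should place the case directory under the directory from step 3; the files edited in the next two steps live there. A quick check, assuming that layout and the case name above:
cd /gscratch/coenv/abc/CESM/CESM_builds/CAM4_f09f09_CTL_ICC14
ls env_mach_specific Macros.generic_linux_intel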
5. Changes to env_mach_specific:
Add:
source /sw/Modules/default/init/csh
module purge
module load icc_14.0.3-mpich2_mx_1.3.1_10
module load hdf5_1.8.13-icc_14.0.3
module load netcdf_fortran+c_4.4-icc_14.0.3
setenv NETCDF_PATH $NETCDF
setenv MPICH_PATH /sw/openmpi-1.8.3_icc-14.0.3
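The setenv NETCDF_PATH line assumes the netcdf module defines $NETCDF, and the MPICH_PATH line points at the path above; both can be checked from the interactive session before building:
echo $NETCDF
ls /sw/openmpi-1.8.3_icc-14.0.3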
6. Changes to Macros.generic_linux_intel:
Change:
MPI_LIB_NAME :=
Add:
NETCDF_DIR=$(NETCDF)/lib
SLIBS+= -L$(NETCDF_DIR) -lnetcdff -lnetcdf
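For reference, a sketch of the relevant Macros.generic_linux_intel lines after both edits (their placement within the file may differ):
MPI_LIB_NAME :=
NETCDF_DIR=$(NETCDF)/lib
SLIBS+= -L$(NETCDF_DIR) -lnetcdff -lnetcdf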
7. ./configure -case
8. ./CAM4_f09f09_CTL_ICC14.generic_linux_intel.build
9. logout (from interactive node)
10. Changes to CAM4_f09f09_CTL_ICC14.generic_linux_intel.run:
See below for example of slurm batch script.
Mox scheduler
Let N = (number_of_nodes) * (number_of_cores_per_node).
number_of_cores_per_node varies by node type but can exceed 40 on mox.hyak if you have access to the newer nodes.
Add these lines at the end of the slurm script, replacing -n 32 with N; a sketch of an #SBATCH preamble follows them.
#limit coredumpsize 1000000
limit stacksize unlimited
source /sw/Modules/login/modules.csh
module load icc_14.0.3-mpich2_mx_1.3.1_10
module load hdf5_1.8.13-icc_14.0.3
module load netcdf_fortran+c_4.4-icc_14.0.3
mpirun -n 32 ./ccsm.exe >&! ccsm.log.$LID
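The generated .run script for a generic machine will not necessarily contain SLURM directives, so an #SBATCH preamble normally also needs to be added near the top of the script. A hedged sketch, assuming 2 nodes with 16 cores each (so N = 2 * 16 = 32, matching mpirun -n 32 above) and hypothetical coenv account/partition names; adjust every value for your own allocation:
#SBATCH --job-name=CAM4_f09f09_CTL_ICC14
#SBATCH --account=coenv
#SBATCH --partition=coenv
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16
#SBATCH --time=24:00:00
#SBATCH --mem=100G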
STEPS TO RUN CESM:
1. cd to the directory containing CAM4_f09f09_CTL_ICC14.generic_linux_intel.run
and submit it with the command below:
sbatch CAM4_f09f09_CTL_ICC14.generic_linux_intel.run
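After submitting, you can check the job with squeue (replace abc with your userid) and, once the job starts, follow the model log; ccsm.log.$LID is typically written in the case's run directory under the scratchroot given to create_newcase:
squeue -u abc
tail -f ccsm.log.*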