You can use job arrays by putting a line like the one below in your Slurm sbatch script:
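For a 32-task array (matching the description that follows), the directive is:

```shell
#SBATCH --array=1-32
```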
The above line is equivalent to submitting 32 separate jobs. For each task in the job array, the environment variable SLURM_ARRAY_TASK_ID is set to that task's array index value.
You can limit the number of simultaneously running tasks from the job array by appending a "%" separator followed by a number. For example, the line below limits the number of simultaneously running tasks from this job array to 8.
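For the 32-task array above, the directive with an 8-task concurrency limit is:

```shell
#SBATCH --array=1-32%8
```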
The commands in the part of your Slurm script after all the #SBATCH directives should use the value of the environment variable SLURM_ARRAY_TASK_ID to decide what to do. Below are two examples.
If you are running a Python script after all the #SBATCH directives, the Python script should contain code like the following (note that os.environ values are strings, so the task ID must be converted to an integer before comparing it to a number):

import os

task_id = int(os.environ['SLURM_ARRAY_TASK_ID'])
if task_id == 1:
    # do something
elif task_id == 2:
    # do something else
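As a fuller sketch of this pattern, each task can use its ID to select one input from a list instead of branching explicitly. The file names here are hypothetical, and the default of "1" is only so the script can be tried outside of Slurm:

```python
import os

# Hypothetical input files, one per array task (indices 1-32).
input_files = ["sample_{}.txt".format(i) for i in range(1, 33)]

# Slurm sets SLURM_ARRAY_TASK_ID for each task; default to "1" for local testing.
task_id = int(os.environ.get("SLURM_ARRAY_TASK_ID", "1"))
my_input = input_files[task_id - 1]
print("task", task_id, "processes", my_input)
```

Because the array indices start at 1 and Python lists are 0-indexed, the code subtracts 1 when looking up the input file.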
If you are using GNU parallel with the --joblog option after all the #SBATCH directives, the GNU parallel command should look like the one below. Including SLURM_ARRAY_TASK_ID in the log file name gives each array task its own job log, so --resume can restart a task without redoing its completed commands:
cat mywork | parallel --joblog mylogfile_$SLURM_ARRAY_TASK_ID --resume -j 28
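Putting the pieces together, a complete sbatch script using GNU parallel might look like the sketch below. The node and task counts are assumptions for illustration, and "mywork" is an assumed file containing one shell command per line:

```shell
#!/bin/bash
#SBATCH --array=1-32
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=28

# "mywork" is an assumed command file, one command per line.
# Each array task writes its own job log so that --resume can
# restart that task where it left off; -j 28 matches the 28
# tasks requested per node above.
cat mywork | parallel --joblog mylogfile_$SLURM_ARRAY_TASK_ID --resume -j 28
```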