Commit 91f6c32

[BUGFIX] Fix white space in EOF command
1 parent ceada4e commit 91f6c32

File tree

1 file changed: +27 −27 lines changed

docs/jobs/arrays.md

Lines changed: 27 additions & 27 deletions
````diff
@@ -188,7 +188,7 @@ Consider submitting the following job.
     srun \
         stress-ng \
             --cpu ${SLURM_CPUS_PER_TASK} \
-            --timeout "${test_duration}"
+            --timeout "${test_duration}"
 ```
 
 The tasks in `stress_test.sh` do not have sufficient time to finish. After submission the `TimeLimit` can be raised to 15 min to allow tasks sufficient time to finish. Assume that `SLURM_ARRAY_JOB_ID=9625003`.
@@ -204,7 +204,7 @@ The tasks in `stress_test.sh` do not have sufficient time to finish. After submi
 - Update individual tasks:
   ```
   scontrol update jobid=9625003_4 TimeLimit=00:15:00
-  ```
+  ```
 
 ## Job array scripts
 
@@ -228,7 +228,7 @@ Consider a job array script designed to stress test a set of network file system
 
 srun \
     stress-ng \
-        --timeout "${test_duration}" \
+        --timeout "${test_duration}" \
         --iomix "${SLURM_CPUS_PER_TASK}" \
         --temp-path "${FILE_SYSTEM_PATH_PREFIX}_${SLURM_ARRAY_TASK_ID}" \
         --verify \
@@ -273,7 +273,7 @@ Array indices can be used to differentiate the input of a task. In the following
 
 declare max_parallel_tasks=16
 declare speed_step=0.01
-
+
 generate_commands() {
     local filename="${1}"
 
@@ -287,39 +287,39 @@ Array indices can be used to differentiate the input of a task. In the following
         done
     done
 }
-
+
 generate_submission_script() {
     local submission_script="${1}"
     local command_script="${2}"
 
     local n_commands="$(cat ${command_script} | wc --lines)"
     local max_task_id="$((${n_commands} - 1))"
-
+
     cat > job_array_script.sh <<EOF
-#!/bin/bash --login
-#SBATCH --job-name=parametric_analysis
-#SBATCH --array=0-${max_task_id}%${max_parallel_tasks}
-#SBATCH --partition=batch
-#SBATCH --qos=normal
-#SBATCH --nodes=1
-#SBATCH --ntasks-per-node=1
-#SBATCH --cpus-per-task=16
-#SBATCH --time=0-10:00:00
-#SBATCH --output=%x-%A_%a.out
-#SBATCH --error=%x-%A_%a.err
-
-module load lang/Python
-
-declare command="\$(sed "\${SLURM_ARRAY_TASK_ID}"'!d' ${command_script})"
-
-echo "Running command: \${command}"
-eval "srun python \${command}"
-EOF
+#!/bin/bash --login
+#SBATCH --job-name=parametric_analysis
+#SBATCH --array=0-${max_task_id}%${max_parallel_tasks}
+#SBATCH --partition=batch
+#SBATCH --qos=normal
+#SBATCH --nodes=1
+#SBATCH --ntasks-per-node=1
+#SBATCH --cpus-per-task=16
+#SBATCH --time=0-10:00:00
+#SBATCH --output=%x-%A_%a.out
+#SBATCH --error=%x-%A_%a.err
+
+module load lang/Python
+
+declare command="\$(sed "\${SLURM_ARRAY_TASK_ID}"'!d' ${command_script})"
+
+echo "Running command: \${command}"
+eval "srun python \${command}"
+EOF
 }
 
 generate_commands 'commands.sh'
 generate_submission_script 'job_array_script.sh' 'commands.sh'
-
+
 sbatch job_array_script.sh
 ```
 
@@ -329,7 +329,7 @@ Run the `launch_parametric_analysis.sh` script with the bash command.
 bash launch_parametric_analysis.sh
 ```
 
-!!! info "Avoiding script generation"
+??? info "Avoiding script generation"
     Script generation is a complex and error-prone process. In this example script generation is unavoidable, as the whole parametric analysis cannot run in a single job of the [`normal` QoS](/slurm/qos/#available-qoss), which has a default maximum wall time (`MaxWall`) of 2 days. The expected runtime of each simulation is about $0.25$ to $0.5$ of the maximum wall time (`--time`), which is set to 10 hours.
 
     If the whole parametric analysis can run within the 2-day limit, then consider running the analysis in a single allocation using [GNU parallel](/jobs/gnu-parallel/). You can then generate the command file and launch the simulations all from a single script in a single job allocation.
````
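The fix lands inside a `cat <<EOF` here-document, where escaping controls when each variable expands: an unescaped `${...}` is expanded while the generator script runs, whereas `\${...}` is written literally into the generated file and only expands when the job itself runs. A minimal sketch of that behaviour (the file name `generated.sh` and the value `7` are illustrative, not part of the commit):

```shell
#!/bin/bash
# Expanded now, while the generator runs.
max_task_id=7

# Inside the here-document, ${max_task_id} expands immediately,
# but \${SLURM_ARRAY_TASK_ID} is written out literally.
cat > generated.sh <<EOF
#SBATCH --array=0-${max_task_id}
echo "task \${SLURM_ARRAY_TASK_ID}"
EOF

cat generated.sh
```

Running the sketch shows `#SBATCH --array=0-7` followed by the literal, still-unexpanded `echo "task ${SLURM_ARRAY_TASK_ID}"` line. Note also that an unquoted `EOF` terminator with stray leading or trailing whitespace is not recognised, so the here-document never terminates; stray whitespace around a here-document is exactly the class of problem a "fix white space in EOF" commit addresses.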

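In the generated script, `sed "\${SLURM_ARRAY_TASK_ID}"'!d'` picks one command line per array task: the address `N` followed by `!d` deletes every line except line `N`, so only that line is printed. A minimal sketch of the idiom outside Slurm (the file `commands.txt` and its contents are illustrative; note that sed addresses are 1-based, so a 0-based array index needs a `+ 1` offset before being used as an address):

```shell
#!/bin/bash
# One command per line, as a command generator would produce.
printf '%s\n' \
    'model.py --speed 0.00' \
    'model.py --speed 0.01' \
    'model.py --speed 0.02' > commands.txt

# Stand-in for ${SLURM_ARRAY_TASK_ID}; sed lines start at 1, not 0.
task_id=2

# "2!d" deletes all lines except line 2, printing only that line.
command="$(sed "${task_id}"'!d' commands.txt)"
echo "Running command: ${command}"
# Prints: Running command: model.py --speed 0.01
```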