
Commit da63a7b

Merge pull request #39 from FireDynamics/release-v0.10.0
Release v0.10.0
2 parents f03b69f + 4e30ba2 commit da63a7b

26 files changed: +2170 −37 lines

README.md

Lines changed: 1 addition & 1 deletion
@@ -28,7 +28,7 @@ pip install -r requirements.txt
 ```
 5. Launch JupyterLab
 ```
-jupyterlab
+jupyter-lab
 ```
 6. Do some editing. The contents of the book are stored in the `content` folder.
 7. Build a local version of the book

book/_toc.yml

Lines changed: 8 additions & 6 deletions
@@ -32,11 +32,11 @@
   - file: content/modelling/02_fluids/02_cfd
   - file: content/modelling/02_fluids/03_turbulence

-  # - file: content/modelling/03_compartments/00_overview
-  #   sections:
-  #   - file: content/modelling/03_compartments/01_fundamentals
-  #   - file: content/modelling/03_compartments/02_design_fire
-  #   - file: content/modelling/03_compartments/03_example
+  - file: content/modelling/03_compartments/00_overview
+    sections:
+    - file: content/modelling/03_compartments/01_fundamentals
+    - file: content/modelling/03_compartments/02_design_fire
+    # - file: content/modelling/03_compartments/03_example

 ####################
 # TOOLS
@@ -48,12 +48,14 @@
     sections:
     - file: content/tools/01_fds_smv/01_fds_intro
     - file: content/tools/01_fds_smv/02_fds_tutorial
-    - file: content/tools/01_fds_smv/02_smv
+    # - file: content/tools/01_fds_smv/02_smv

   - file: content/tools/02_hpc/00_overview
     sections:
     - file: content/tools/02_hpc/01_linux
     - file: content/tools/02_hpc/02_hpc
+    - file: content/tools/02_hpc/03_parallel_fds
+    - file: content/tools/02_hpc/04_benchmarking

   - file: content/tools/03_analysis/00_overview
     sections:

book/content/modelling/03_compartments/01_fundamentals.ipynb

Lines changed: 697 additions & 0 deletions
Large diffs are not rendered by default.

book/content/modelling/03_compartments/01_fundamentals.md

Lines changed: 0 additions & 1 deletion
This file was deleted.
Two image files changed (39 KB and 40.9 KB); previews not rendered.

book/content/tools/01_fds_smv/01_fds_intro.ipynb

Lines changed: 30 additions & 17 deletions
@@ -69,11 +69,11 @@
 "$$\n",
 "\n",
 "$$ \n",
-"\\mf \\frac{\\partial \\rho \\textbf{u}}{\\partial t} + \\nabla \\cdot (\\rho \\textbf{uu}) = -\\nabla \\bar{p} + \\nabla \\cdot \\textbf{T} + (\\rho - \\rho_0)\\textbf{g}\n",
+"\\mf \\frac{\\partial \\rho \\textbf{u}}{\\partial t} + \\nabla \\cdot (\\rho \\textbf{uu}) = -\\nabla p + \\nabla \\cdot \\textbf{T} + (\\rho - \\rho_0)\\textbf{g}\n",
 "$$\n",
 "\n",
 "$$\n",
-"\\mf \\frac{\\partial \\rho h_s}{\\partial t} + \\nabla \\cdot (\\rho h_s \\textbf{u}) = \\frac{D\\bar{p}}{Dt} = \\dot{q}''' - \\nabla \\cdot \\dot{\\textbf{q}}''\n",
+"\\mf \\frac{\\partial \\rho h_s}{\\partial t} + \\nabla \\cdot (\\rho h_s \\textbf{u}) = \\frac{D\\bar{p}}{Dt} + \\dot{q}''' - \\nabla \\cdot \\dot{\\textbf{q}}''\n",
 "$$\n",
 "\n",
 "$$\n",
@@ -128,32 +128,45 @@
 "cell_type": "markdown",
 "id": "46317163",
 "metadata": {
-  "tags": [
-   "remove_cell"
-  ]
+  "tags": []
 },
 "source": [
  "The momentum equation may be rewritten as \n",
-  "$$\\frac{\\partial \\textbf{u}}{\\partial t} = -(\\textbf{F}+\\nabla H),$$ \n",
  "\n",
-  "$$\\textbf{F} = \\underbrace{\\textbf{u} \\cdot \\textbf{u} - \\nabla K}_{-\\textbf{u}x(\\nabla x \\textbf{u})} - \\bar{p}\\nabla(1/\\rho)-\\frac{1}{\\rho}[\\nabla \\cdot \\textbf{T}+(\\rho - \\rho_0)\\textbf{g}],$$\n",
-  "$$H \\equiv \\bar{p}/\\rho + K,$$\n",
+  "$$\n",
+  "\\mf \\frac{\\partial \\textbf{u}}{\\partial t} = -(\\textbf{F}+\\nabla H),\n",
+  "$$ \n",
+  "\n",
+  "$$\n",
+  "\\mf \\textbf{F} = \\underbrace{\\textbf{u} \\cdot \\textbf{u} - \\nabla K}_{-\\textbf{u}\\times(\\nabla \\times \\textbf{u})} - \\bar{p}\\nabla(1/\\rho)-\\frac{1}{\\rho}[\\nabla \\cdot \\textbf{T}+(\\rho - \\rho_0)\\textbf{g}],\n",
+  "$$\n",
+  "\n",
+  "$$\n",
+  "\\mf H \\equiv \\bar{p}/\\rho + K,\n",
+  "$$\n",
+  "\n",
+  "with the resolved kinetic energy per unit mass $\\mf K \\equiv \\frac{1}{2}\\left|\\textbf{u}\\right|^2$. The Bernoulli integral $H$ obeys the following Poisson equation\n",
  "\n",
-  "with the resolved kinetic energy per unit mass $K \\equiv \\frac{1}{2}\\left|\\textbf{u}\\right|^2$. the Bernoulli intefral $H$ obeys the following Poisson equation\n",
-  "$$\\nabla^2H = - \\left[ \\nabla \\cdot \\textbf{F} + \\frac{\\partial}{\\partial t}(\\nabla \\cdot \\textbf{u}) \\right].$$"
+  "$$\n",
+  "\\mf \\nabla^2 H = - \\left[ \\nabla \\cdot \\textbf{F} + \\frac{\\partial}{\\partial t}(\\nabla \\cdot \\textbf{u}) \\right]\\quad .\n",
+  "$$"
 ]
 },
 {
 "cell_type": "markdown",
 "id": "8c56f81e",
 "metadata": {
-  "tags": [
-   "remove_cell"
-  ]
+  "tags": []
 },
 "source": [
  "One way to compute the divergence is given by differentiating the equation of state, which leads to \n",
-  "$$\\begin{align} \\nabla \\cdot \\textbf{u} &= \\left(\\frac{1}{\\rho c_pT}-\\frac{1}{\\bar{p}}\\right) \\frac{D \\bar{p}}{Dt}\\\\ &+ \\frac{1}{\\rho c_pT}[\\dot{q}'''-\\nabla \\cdot \\dot{\\textbf{q}}'']\\\\ &+ \\frac{1}{\\rho}\\sum_{\\alpha}\\left(\\frac{\\overline{W}}{W_{\\alpha}} - \\frac{h_{s,\\alpha}}{c_pT}\\right)[\\dot{m}_{\\alpha}'''-\\nabla \\cdot \\textbf{J}_{\\alpha}]\\end{align}$$"
+  "\n",
+  "$$\n",
+  "\\mf \n",
+  "\\begin{align} \\nabla \\cdot \\textbf{u} &\\mf = \\left(\\frac{1}{\\rho c_pT}-\\frac{1}{\\bar{p}}\\right) \\frac{D \\bar{p}}{Dt}\\\\ \n",
+  "&\\mf + \\frac{1}{\\rho c_pT}[\\dot{q}'''-\\nabla \\cdot \\dot{\\textbf{q}}'']\\\\ \n",
+  "&\\mf + \\frac{1}{\\rho}\\sum_{\\alpha}\\left(\\frac{\\overline{W}}{W_{\\alpha}} - \\frac{h_{s,\\alpha}}{c_pT}\\right)[\\dot{m}_{\\alpha}'''-\\nabla \\cdot \\textbf{J}_{\\alpha}]\\end{align}\n",
+  "$$"
 ]
 },
 {
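Read together, these two cells describe the projection scheme: the divergence constraint supplies $(\nabla \cdot \textbf{u})^{n+1}$, the Poisson equation yields $H$, and the momentum equation advances the velocity. As a sketch, a single explicit-Euler step would read (this simplification is an assumption for illustration; FDS itself uses a two-stage predictor-corrector scheme):

$$
\nabla^2 H^n = -\left[ \nabla \cdot \textbf{F}^n + \frac{(\nabla \cdot \textbf{u})^{n+1} - (\nabla \cdot \textbf{u})^{n}}{\delta t} \right], \qquad
\textbf{u}^{n+1} = \textbf{u}^{n} - \delta t \left( \textbf{F}^n + \nabla H^n \right).
$$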
@@ -202,7 +215,7 @@
  "The FDS guides and documentation for version 6.7.5 can be found in [this release](https://github.com/firemodels/fds/releases/tag/FDS6.7.5).\n",
  "\n",
  "### User's Guide \n",
- "An about 300 pages thick manual to introduce users to FDS {cite}`FDS-UG-6.7.5`. It covers the user aspects of \n",
+ "A manual of about 400 pages introducing users to FDS {cite}`FDS-UG-6.7.5`. It covers the user aspects of \n",
  "* basics of FDS, getting started\n",
  "* structure of FDS input files\n",
  "* building geometric models\n",
@@ -226,7 +239,7 @@
  "* fire detection devices and HVAC\n",
  "\n",
  "### Verification and validation \n",
- "To demonstrate the applicability of the FDS model, there exist two documents (in total about 800 pages) about model verification {cite}`FDS-VE-6.7.5` and validation {cite}`FDS-VA-6.7.5`.\n",
+ "To demonstrate the applicability of the FDS model, there exist two documents (in total over 1000 pages) about model verification {cite}`FDS-VE-6.7.5` and validation {cite}`FDS-VA-6.7.5`.\n",
  "All tests are run every night to check the impact of source code changes.\n",
  "\n",
  "\n",
@@ -266,7 +279,7 @@
  "## Installation\n",
  "\n",
  "### Source Code and Binary Download\n",
- "The full source code (both FDS and Smokeview) is available at [GitHub](https://github.com/firemodels/fds-smv). This page also includes references to: \n",
+ "The full source code (both FDS and Smokeview) is available at [GitHub](https://github.com/firemodels/fds). This page also includes references to: \n",
  "* binaries for Linux / Windows / OSX\n",
  "* all manuals\n",
  "* mailing lists\n",
One further file changed (1.4 KB); preview not rendered.

book/content/tools/02_hpc/03_parallel_fds.md

Lines changed: 180 additions & 2 deletions
@@ -1,5 +1,183 @@
# Parallel Execution of FDS

## Accessing JURECA

### Accounts

You should have received an invitation email, which asks you to register in the account management system. Once registered, you will receive an individual username and be asked to upload your login keys.

### SSH

To reach the compute cluster JURECA, you need to log in via the [secure shell protocol (SSH)](https://en.wikipedia.org/wiki/Secure_Shell_Protocol). It is recommended to read the user documentation, which can be found here: [access JURECA using SSH](https://apps.fz-juelich.de/jsc/hps/jureca/access.html).

Most Linux and macOS systems have an SSH client installed. On Windows, you can use tools like [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html).

Depending on your SSH client, there are various ways to generate an SSH key pair (public and private); a sketch follows below. In any case, the key should be protected with a passphrase.

One of the safety measures on JURECA is that you need to specify the IP range from which you access the system, see [key restrictions](https://apps.fz-juelich.de/jsc/hps/jureca/access.html#key-upload-key-restriction). If you use a VPN, e.g. the one provided by the University of Wuppertal, your `from` statement could include `*.uni-wuppertal.de`.
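As a sketch (assuming the OpenSSH client; the key file name and the `from` pattern below are illustrative, not prescribed):

```
# generate an ed25519 key pair; choose a passphrase when prompted
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_jureca

# the public key uploaded to the account management system can carry a
# from= restriction, e.g. (illustrative pattern):
#   from="*.uni-wuppertal.de" ssh-ed25519 AAAA... user@laptop
```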

### Login

To log in to JURECA from Linux or macOS, you just need to execute

```none
> ssh username1@jureca.fz-juelich.de
```

Or you add a configuration to `~/.ssh/config` like

```
Host jureca
    User username1
    Hostname jureca.fz-juelich.de
```

which allows the shorter command

```
> ssh jureca
```

````{admonition} Task
Log in to JURECA and check your username, the server you have logged in to, and the path to your home directory. The result should look similar to

```
> whoami
username1
> hostname
jrlogin06.jureca
> echo $HOME
/p/home/jusers/username1/jureca
```
````

## FDS Module on JURECA

Modules offer a flexible environment to manage multiple versions of software. This system is also used on JURECA: [Module usage on JURECA](https://apps.fz-juelich.de/jsc/hps/jureca/software-modules.html).

As the FDS (and some other) modules are not globally installed, they need to be added to the user's environment. This can be done with

```
> module use -a ~arnold1/modules_fire/
```

Thus, adding this line to your batch scripts and startup script (`~/.bashrc`) will automatically add the related modules to the module environment.

````{admonition} Task

* Use the `module avail` command to list all available modules. Make sure you also see the FDS modules.
* Load an FDS module and invoke FDS with `fds`. Does the loaded version of FDS correspond to the one you expected? A possible session is sketched after this task.

````
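A possible session might look like the following (the exact module names and versions depend on what is installed at the time; only the `module use` line and the FDS module name are taken from this page):

```
> module use -a ~arnold1/modules_fire/
> module avail
> module load FDS/6.7.5-IntelComp2020.2_ParaStationMPI_5.4.7
> fds
```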

## Job Submission

A compute cluster is usually shared by many users, so executing a CPU-intensive program directly could disturb other users by slowing down their software or even the OS. The solution is a queueing system, which organises the execution of many programs and manages the distribution of resources among them. JURECA uses the software Slurm for queueing and for distributing work to compute nodes. More information is provided in [JURECA's batch system documentation](https://apps.fz-juelich.de/jsc/hps/jureca/batchsystem.html).

Instead of running a simulation interactively on the cluster, we submit a job script to the queueing system; it loads the modules needed for FDS and sets the number of processes, threads and other important quantities.


### Single Job

A Slurm job script is basically a shell script, which the batch system executes on the requested resources. The resources are defined in comment lines (`#SBATCH`), which are instructions for Slurm.

A simple example of a Slurm job script ({download}`fds-slurm-job-single.sh`) is given below. It executes FDS for a single FDS input file.

```{literalinclude} ./fds-slurm-job-single.sh
```

The individual lines have the following functions (a sketch of the assembled script follows this list):

* **Naming**

  ```#SBATCH --job-name=my_FDS_simulation```

  You can name your job in order to find it quicker in the job lists. The name has no other function.

* **Accounting**

  ```#SBATCH --account=ias-7```

  On JURECA you need to have a computing time project, and your account needs to be assigned to it. This budget is used to "buy" a normal or high priority in the queueing system. With `account` you specify which computing time budget will be debited for the job. Here `ias-7` indicates the project we will use for this lecture; it is the budget of the IAS-7 at the Forschungszentrum Jülich.

* **Partition**

  ```#SBATCH --partition=dc-cpu```

  JURECA's batch system is divided into multiple partitions, which represent different computing architectures. In our case we want to execute the simulation on common CPU cores, and therefore we use the partition `dc-cpu` – more information on [JURECA's partitions](https://apps.fz-juelich.de/jsc/hps/jureca/batchsystem.html#slurm-partitions).

* **MPI Tasks**

  ```#SBATCH --ntasks=128```
  ```#SBATCH --cpus-per-task=1```

  There are different ways to define the number of requested cores. It is possible to state how many MPI tasks (`ntasks`) are to be started and how many cores each of them (`cpus-per-task`) will get assigned. The product of the two gives the number of physical cores and thus the number of nodes to be allocated; in the current configuration of the `dc-cpu` partition, each node has 128 cores. An alternative is to specify the number of nodes, which then determines the number of MPI tasks to be started.

* **Terminal Output**

  ```#SBATCH --output=stdout.%j```
  ```#SBATCH --error=stderr.%j```

  Here the file names for the standard output and error logs are defined. `%j` will be replaced by the job id generated by Slurm.

* **Wall Clock Time**

  ```#SBATCH --time=00:30:00```

  This line specifies the maximum time the job can run on the requested resources. The maximal wall clock time is stated in the documentation for the individual partitions; the `dc-cpu` partition has a limit of 24 hours.

* **Setting OpenMP Environment**

  ```export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}```

  As FDS can utilise OpenMP, the corresponding environment variable (`OMP_NUM_THREADS`) needs to be set. The above command sets it automatically to the number given by `cpus-per-task` in the Slurm section.

* **Load FDS Modules**

  ```module use -a ~arnold1/modules_fire/```
  ```module load FDS/6.7.5-IntelComp2020.2_ParaStationMPI_5.4.7```

  Here, first the module environment containing the FDS module is included, then the specified FDS module is loaded.

* **Execute FDS**

  ```srun fds ./*.fds```

  An MPI-parallel application is started with `srun` on a Slurm system. If not explicitly stated, as in the line above, the number of MPI tasks is taken from the Slurm environment.
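Assembled from the directives discussed above, the downloadable `fds-slurm-job-single.sh` (not rendered on this page) is presumably similar to this sketch; the shebang line is an assumption:

```
#!/bin/bash
#SBATCH --job-name=my_FDS_simulation
#SBATCH --account=ias-7
#SBATCH --partition=dc-cpu
#SBATCH --ntasks=128
#SBATCH --cpus-per-task=1
#SBATCH --output=stdout.%j
#SBATCH --error=stderr.%j
#SBATCH --time=00:30:00

# set the number of OpenMP threads per MPI task
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

# make the FDS modules available and load one
module use -a ~arnold1/modules_fire/
module load FDS/6.7.5-IntelComp2020.2_ParaStationMPI_5.4.7

# run FDS on the input file in the current directory
srun fds ./*.fds
```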
```{note}
It is important to keep in mind that JURECA's usage concept is to assign compute nodes **exclusively** to a single job. Thus, the resources used are given by the number of nodes and the wall clock time. In the current setup the `dc-cpu` partition has nodes with 128 cores, so even if a job uses just a few cores, the account is charged for a full node of 128 cores.
```

A Slurm job can be submitted via

```
> sbatch fds-slurm-job-single.sh
```

The current status of a user's queue can be listed with

```
> squeue -u $USER
```

### Chain Jobs

As there is a wall clock limit of 24 hours on JURECA, longer simulations have to be restarted after this time. The following scripts automate this process for FDS on JURECA. It is important that the FDS input file has the `RESTART` parameter defined, typically initially set to `.FALSE.`.
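In the FDS input file, `RESTART` is set on the `MISC` line; a minimal example (the rest of the input file is omitted):

```
&MISC RESTART=.FALSE. /
```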

The main idea is to invoke multiple jobs ({download}`fds-slurm-chain-starter.sh`) with a dependency, i.e. a chain is created so that the jobs are executed consecutively.

```{literalinclude} ./fds-slurm-chain-starter.sh
```

The individual jobs ({download}`fds-slurm-chain-job.sh`) are similar to the simple setup above. However, they add the functionality to create a `STOP` file, which makes FDS stop before the requested wall clock time is reached. This way FDS writes out restart files, which can be used by the next chain element. The value of `RESTART` is automatically set to `.TRUE.` after the first execution of FDS.

```{literalinclude} ./fds-slurm-chain-job.sh
```
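The chain starter is not rendered on this page. A minimal sketch of the idea, assuming standard Slurm options (`sbatch --parsable` prints only the job id, `--dependency=afterany:` delays a job until its predecessor has finished) and an illustrative chain length of 5:

```
#!/bin/bash
# submit the first chain element and remember its job id
JOBID=$(sbatch --parsable fds-slurm-chain-job.sh)

# submit the remaining elements; each starts only after its predecessor ended
for i in $(seq 2 5); do
    JOBID=$(sbatch --parsable --dependency=afterany:${JOBID} fds-slurm-chain-job.sh)
done
```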
## Mesh Decomposition

Python script to automate the decomposition of a single `MESH` statement: {download}`decompose_fds_mesh.py`.
@@ -27,7 +205,7 @@ resulting number of meshes: 1
 &MESH ID='1' IJK=12,12,12 XB=0.0000,12.0000,0.0000,12.0000,0.0000,12.0000 /
 ```

-**Four meshes in x-directio**
+**Four meshes in x-direction**

 ```
 > python decompose_fds_mesh.py "&MESH IJK=12,12,12 XB=0,12,0,12,0,12 /" "4,1,1"
@@ -51,7 +229,7 @@ resulting number of meshes: 4
 &MESH ID='4' IJK=3,12,12 XB=9.0000,12.0000,0.0000,12.0000,0.0000,12.0000 /
 ```

-**Nine meshes, three in each directio**
+**Nine meshes, three in each direction**
 ```
 > python decompose_fds_mesh.py "&MESH IJK=12,12,12 XB=0,12,0,12,0,12 /" "3,3,3"
 raw input mesh: &MESH IJK=12,12,12 XB=0,12,0,12,0,12 /
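The script itself is not shown on this page, but the decomposition is plain arithmetic: divide `IJK` and `XB` evenly along each axis. A minimal Python sketch of the idea (not the actual `decompose_fds_mesh.py`):

```
# split one FDS mesh (ijk cells over extent xb) into nx*ny*nz submeshes
def decompose(ijk, xb, splits):
    (ni, nj, nk), (nx, ny, nz) = ijk, splits
    x0, x1, y0, y1, z0, z1 = xb
    dx, dy, dz = (x1 - x0) / nx, (y1 - y0) / ny, (z1 - z0) / nz
    meshes = []
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                meshes.append(((ni // nx, nj // ny, nk // nz),
                               (x0 + i * dx, x0 + (i + 1) * dx,
                                y0 + j * dy, y0 + (j + 1) * dy,
                                z0 + k * dz, z0 + (k + 1) * dz)))
    return meshes

# the "four meshes in x-direction" example from above
for m, (ijk, xb) in enumerate(decompose((12, 12, 12), (0, 12, 0, 12, 0, 12), (4, 1, 1)), 1):
    print("&MESH ID='%d' IJK=%d,%d,%d XB=%.4f,%.4f,%.4f,%.4f,%.4f,%.4f /" % ((m,) + ijk + xb))
```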
