---
title: "Calibration scenarios comparison"
-author: "FMJ Bulot"
-date: "`r format(Sys.time(), '%d %B, %Y, %H:%M')`"
+author: "FMJ Bulot ([email protected])"
+date: "Last generated: `r format(Sys.time(), '%d %B, %Y, %H:%M')`"
knit: (function(inputFile, encoding) {
-  out_dir <- '../../docs/';
+  out_dir <- '../docs/';
  rmarkdown::render(inputFile,
                    encoding=encoding,
                    output_file=file.path(dirname(inputFile), out_dir, 'calibration_scenarios_comparison.html'))})
@@ -21,25 +21,33 @@ editor_options:
---
This notebook explores the results from the scripts in the folder
-`calibration_scenarios_Xweeks_Xmonths`. Note that these scripts require a lot
-of computational time and a lost of space to store the results of the
-different calibration (about 174Gb).
-For each calibration scenario, the script has been divided into several files
-so they can be ran in parallel (for instance
-using background jobs, launched from the root directory) to speed up the
-calculations.
-
-These represents the different calibration scenarios presented in the paper: - 1
-week of pre-deployment calibration, 2 months of evaluation, 1 week of
-post-deployment calibration - 2 weeks of pre-deployment calibration, 2 months of
-evaluation, 2 weeks of post-deployment calibration - 2 weeks of pre-deployment
-calibration, 4 months of evaluation, 2 weeks of post-deployment calibration - 2
-weeks of pre-deployment calibration, 6 months of evaluation, 2 weeks of
-post-deployment calibration - 1 week of pre-deployment calibration, 2 months of
-evaluation - 2 weeks of pre-deployment calibration, 2 months of evaluation - 4
-weeks of pre-deployment calibration, 2 months of evaluation
+`calibration_scenarios_Xweeks_Xmonths`. Note that these scripts require a lot of
+computational time and a lot of space to store the results of the different
+calibrations (about 174 GB). For each calibration scenario, the script has been
+divided into several files so they can be run in parallel (for instance using
+background jobs, launched from the root directory) to speed up the calculations.
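As a rough illustration of that workflow, the sketch below launches every per-scenario script as a background R process so they compute in parallel. The use of `callr::r_bg()` and the file pattern are assumptions for illustration, not the repository's actual runner.

```{r, eval=FALSE}
# Minimal sketch, assuming the per-scenario scripts all live in the
# scenario folder: run each one in its own background R process, then
# block until every job has finished.
library(callr)

scripts <- list.files("calibration_scenarios_Xweeks_Xmonths",
                      pattern = "\\.[Rr]$", full.names = TRUE)

jobs <- lapply(scripts, function(path) {
  r_bg(function(p) source(p), args = list(p = normalizePath(path)))
})

# Wait for all background jobs to complete
invisible(lapply(jobs, function(job) job$wait()))
```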
-```{r}
+These represent the different calibration scenarios presented in the paper:
+
+- 1 week of pre-deployment calibration, 2 months of evaluation, 1 week of
+  post-deployment calibration
+
+- 2 weeks of pre-deployment calibration, 2 months of evaluation, 2 weeks of
+  post-deployment calibration
+
+- 2 weeks of pre-deployment calibration, 4 months of evaluation, 2 weeks of
+  post-deployment calibration
+
+- 2 weeks of pre-deployment calibration, 6 months of evaluation, 2 weeks of
+  post-deployment calibration
+
+- 1 week of pre-deployment calibration, 2 months of evaluation
+
+- 2 weeks of pre-deployment calibration, 2 months of evaluation
+
+- 4 weeks of pre-deployment calibration, 2 months of evaluation
+
+```{r, message=FALSE}
source("utilities/utilities.R")
source("utilities/nested_models.r")
@@ -323,12 +331,11 @@ pg_l
## On RLM_part.+RH only
-In this section we only focus on the four methods that performed both during the
+In this section we focus only on the four methods that performed best during the
robust method selection (presented in [Calibration 2 weeks 40
days](calibration_2weeks_40days.html)), with a special focus on RLM_part.+RH for
clarity in the graph.
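A hypothetical sketch of that narrowing step, assuming a tidy results data frame named `results` with a `method` column (both names are stand-ins, not objects defined earlier in this notebook):

```{r, eval=FALSE}
# Keep only the rows for the RLM_part.+RH method so the graph stays
# readable; `results` and its `method` column are hypothetical placeholders.
library(dplyr)

rlm_rh_only <- results %>%
  filter(method == "RLM_part.+RH")
```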
-
```{r}
```
-
# Results in tables
```{r}