Working with adjust_probability_calibration(), I get slightly different calibration results across tuning parameter configurations (note that roc_auc varies across thresholds, even though the calibration should not depend on the threshold):
# A tibble: 9 × 7
  threshold .metric     .estimator   mean     n std_err .config
      <dbl> <chr>       <chr>       <dbl> <int>   <dbl> <chr>
1       0   roc_auc     binary     0.712     10 0.0216  pre0_mod0_post1
2       0   sensitivity binary     1         10 0       pre0_mod0_post1
3       0   specificity binary     0         10 0       pre0_mod0_post1
4       0.5 roc_auc     binary     0.710     10 0.0212  pre0_mod0_post2
5       0.5 sensitivity binary     0.195     10 0.0213  pre0_mod0_post2
6       0.5 specificity binary     0.969     10 0.00612 pre0_mod0_post2
7       1   roc_auc     binary     0.710     10 0.0229  pre0_mod0_post3
8       1   sensitivity binary     0.0248    10 0.0184  pre0_mod0_post3
9       1   specificity binary     0.996     10 0.00297 pre0_mod0_post3
This isn't a big deal, but it could confuse users.
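For context, here is a minimal sketch of the kind of setup that produces output like the above; the data, model, calibration method, and metric set are assumptions, not the original code:

```r
library(tidymodels)
library(tailor)

# Post-processor: a calibration method that draws random numbers
# (isotonic bootstrap is assumed here) plus a tunable threshold.
post <- tailor() |>
  adjust_probability_calibration(method = "isotonic_boot") |>
  adjust_probability_threshold(threshold = tune())

wflow <- workflow(Class ~ ., logistic_reg(), post)

set.seed(1)
res <- tune_grid(
  wflow,
  resamples = vfold_cv(modeldata::two_class_dat, v = 10),
  grid = tibble(threshold = c(0, 0.5, 1)),
  metrics = metric_set(roc_auc, sensitivity, specificity)
)

collect_metrics(res)
```

Because nothing fixes the random number stream when the calibration is trained, the roc_auc values for the three .config rows drift slightly even though they fit the same calibration.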
I suggest adding an argument seed = sample.int(10^4, 1) to adjustments that use random numbers. The seed would be evaluated when the tailor is created, and using withr we can fix the random number stream when the adjustment is trained.
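A rough sketch of the mechanism; new_adjustment() and fit_adjustment() below are hypothetical stand-ins rather than tailor's internals, and only withr::with_seed() is the real helper:

```r
library(withr)

# Hypothetical constructor: `seed` is evaluated once, when the adjustment
# is created, and stored alongside its other arguments.
new_adjustment <- function(method = "isotonic_boot",
                           seed = sample.int(10^4, 1)) {
  list(method = method, seed = seed)
}

# Hypothetical training step: fixing the RNG stream makes repeated fits of
# the same adjustment reproducible without touching the user's global seed.
fit_adjustment <- function(adjustment, probs) {
  with_seed(adjustment$seed, {
    # stand-in for the real calibration fit, which draws random numbers
    probs + runif(length(probs), -0.01, 0.01)
  })
}

adj <- new_adjustment()
p <- seq(0.1, 0.9, by = 0.1)
identical(fit_adjustment(adj, p), fit_adjustment(adj, p))
#> [1] TRUE
```

Since with_seed() restores the RNG state on exit, fixing the stream this way would not disturb any seed the user has set in their own session.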