From 38099b49e1781901991549cb16c315b0128a3856 Mon Sep 17 00:00:00 2001
From: Nicholas Clark

Notice how we have four different time series in these simulated data.
@@ -447,25 +447,29 @@ Depending on the observation families you plan to use when building
models, there may be some restrictions that need to be satisfied within
the outcome variable. For example, a Beta regression can only handle
@@ -514,33 +518,31 @@
A call to ... But the same call to ... Next we call ... Now the call to ... Just like with the ...
Or we can look more closely at the distribution for the first time series:
If you have split your data into training and testing folds (i.e. for
forecast evaluation), you can include the test data in your plots:
Create the ...
And we can plot the conditional effects of the splines (on the link
scale) to see that they are estimated to be highly nonlinear. And now the length scale (\(\rho\))
parameters:
We can again plot the nonlinear effects:
The estimates for the temporal trends are fairly similar for the two
models, but below we will see if they produce similar forecasts.
We can plot the forecasts for some series from each model using the ...
Clearly the two models do not produce equivalent forecasts. We will
come back to scoring these forecasts in a moment.
The forecasts will be nearly identical to those calculated previously:
The returned list contains a ...
We can also compare forecasts against out of sample observations
using the Expected Log Predictive Density (ELPD; also known as the
log score). The ELPD is a strictly proper scoring rule that can be
@@ -812,31 +802,31 @@ Finally, when we have multiple time series it may also make sense to
use a multivariate proper scoring rule.
The returned object still provides information on interval coverage
@@ -868,31 +858,31 @@ Formatting data for use in mvgam
Nicholas J Clark
-2024-04-16
+2024-09-04
Required long data format
simdat <- sim_mvgam(n_series = 4, T = 24, prop_missing = 0.2)
head(simdat$data_train, 16)
#> y season year series time
-#> 1 NA 1 1 series_1 1
-#> 2 0 1 1 series_2 1
-#> 3 0 1 1 series_3 1
+#> 1 1 1 1 series_1 1
+#> 2 2 1 1 series_2 1
+#> 3 1 1 1 series_3 1
#> 4 0 1 1 series_4 1
#> 5 0 2 1 series_1 2
-#> 6 1 2 1 series_2 2
+#> 6 5 2 1 series_2 2
#> 7 1 2 1 series_3 2
-#> 8 1 2 1 series_4 2
+#> 8 0 2 1 series_4 2
#> 9 0 3 1 series_1 3
-#> 10 NA 3 1 series_2 3
+#> 10 0 3 1 series_2 3
#> 11 0 3 1 series_3 3
-#> 12 NA 3 1 series_4 3
+#> 12 0 3 1 series_4 3
#> 13 1 4 1 series_1 4
-#> 14 0 4 1 series_2 4
+#> 14 NA 4 1 series_2 4
#> 15 0 4 1 series_3 4
-#> 16 2 4 1 series_4 4
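For reference, a hedged sketch (all values invented) of how a data frame in this long format could be assembled by hand; sim_mvgam() produces the same structure automatically:

```r
# A minimal long-format data frame: one row per series per timepoint,
# with 'series' stored as a factor and a single outcome column 'y'.
# Values are made up purely for illustration.
long_dat <- data.frame(
  y      = c(1, 2, 1, 0, 0, 5, 1, 0),
  season = rep(c(1, 2), each = 4),
  year   = 1,
  series = factor(rep(paste0('series_', 1:4), times = 2)),
  time   = rep(c(1, 2), each = 4)
)
head(long_dat, 8)
```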
series as a factor variable
A single outcome variable
#> Call:
#> glm(formula = y ~ series + time, family = poisson(), data = simdat$data_train)
#>
-#> Coefficients:
-#> Estimate Std. Error z value Pr(>|z|)
-#> (Intercept) -0.05275 0.38870 -0.136 0.8920
-#> seriesseries_2 -0.80716 0.45417 -1.777 0.0755 .
-#> seriesseries_3 -1.21614 0.51290 -2.371 0.0177 *
-#> seriesseries_4 0.55084 0.31854 1.729 0.0838 .
-#> time 0.01725 0.02701 0.639 0.5229
-#> ---
-#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
-#>
-#> (Dispersion parameter for poisson family taken to be 1)
-#>
-#> Null deviance: 120.029 on 56 degrees of freedom
-#> Residual deviance: 96.641 on 52 degrees of freedom
-#> (15 observations deleted due to missingness)
-#> AIC: 166.83
-#>
-#> Number of Fisher Scoring iterations: 6
summary(gam(y ~ series + s(time, by = series),
+#> Deviance Residuals:
+#> Min 1Q Median 3Q Max
+#> -2.1579 -1.1802 -0.5262 0.7172 2.3509
+#>
+#> Coefficients:
+#> Estimate Std. Error z value Pr(>|z|)
+#> (Intercept) -0.83568 0.41529 -2.012 0.04419 *
+#> seriesseries_2 1.13007 0.40629 2.781 0.00541 **
+#> seriesseries_3 1.29822 0.39666 3.273 0.00106 **
+#> seriesseries_4 0.32518 0.46472 0.700 0.48410
+#> time 0.02126 0.02161 0.984 0.32530
+#> ---
+#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
+#>
+#> (Dispersion parameter for poisson family taken to be 1)
+#>
+#> Null deviance: 111.421 on 60 degrees of freedom
+#> Residual deviance: 91.775 on 56 degrees of freedom
+#> (11 observations deleted due to missingness)
+#> AIC: 189.07
+#>
+#> Number of Fisher Scoring iterations: 5
summary(mgcv::gam(y ~ series + s(time, by = series),
data = simdat$data_train,
family = poisson()))
#>
@@ -476,23 +480,23 @@
A single outcome variable
#> y ~ series + s(time, by = series)
#>
#> Parametric coefficients:
-#> Estimate Std. Error z value Pr(>|z|)
-#> (Intercept) -4.293 5.500 -0.781 0.435
-#> seriesseries_2 3.001 5.533 0.542 0.588
-#> seriesseries_3 3.193 5.518 0.579 0.563
-#> seriesseries_4 4.795 5.505 0.871 0.384
-#>
-#> Approximate significance of smooth terms:
-#> edf Ref.df Chi.sq p-value
-#> s(time):seriesseries_1 7.737 8.181 6.541 0.5585
-#> s(time):seriesseries_2 3.444 4.213 4.739 0.3415
-#> s(time):seriesseries_3 1.000 1.000 0.006 0.9365
-#> s(time):seriesseries_4 3.958 4.832 11.636 0.0363 *
-#> ---
-#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
+#> Estimate Std. Error z value Pr(>|z|)
+#> (Intercept) -0.63089 0.35691 -1.768 0.07712 .
+#> seriesseries_2 -0.84795 1.40241 -0.605 0.54542
+#> seriesseries_3 1.25102 0.40263 3.107 0.00189 **
+#> seriesseries_4 -0.08734 0.55278 -0.158 0.87446
+#> ---
+#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
+#>
+#> Approximate significance of smooth terms:
+#> edf Ref.df Chi.sq p-value
+#> s(time):seriesseries_1 1.173 1.326 0.311 0.819
+#> s(time):seriesseries_2 8.519 8.900 8.394 0.493
+#> s(time):seriesseries_3 1.637 2.039 3.001 0.222
+#> s(time):seriesseries_4 2.614 3.300 5.397 0.171
#>
-#> R-sq.(adj) = 0.605 Deviance explained = 66.2%
-#> UBRE = 0.4193  Scale est. = 1         n = 57
A single outcome variable
time = 1:10)
gauss_dat
#> outcome series time
-#> 1 -1.51807964 series1 1
-#> 2 -0.12895041 series1 2
-#> 3 0.91902592 series1 3
-#> 4 -0.78329254 series1 4
-#> 5 0.28469724 series1 5
-#> 6 0.07481887 series1 6
-#> 7 0.03770728 series1 7
-#> 8 -0.37485636 series1 8
-#> 9 0.23694172 series1 9
-#> 10 -0.53988302 series1 10
+#> 1 -0.58030616 series1 1
+#> 2 0.44128220 series1 2
+#> 3 0.27653780 series1 3
+#> 4 -0.22917850 series1 4
+#> 5 -0.00336678 series1 5
+#> 6 0.36685658 series1 6
+#> 7 0.82568801 series1 7
+#> 8 -0.67968342 series1 8
+#> 9 0.22944588 series1 9
+#> 10 -1.74780687 series1 10
gam using the mgcv package leads
to a model that actually fits (though it does give an unhelpful warning
message):
gam(outcome ~ time,
+
mgcv::gam(outcome ~ time,
family = betar(),
data = gauss_dat)
-#> Warning in family$saturated.ll(y, prior.weights, theta): saturated likelihood
-#> may be inaccurate
-#>
-#> Family: Beta regression(0.44)
-#> Link function: logit
-#>
-#> Formula:
-#> outcome ~ time
-#> Total model degrees of freedom 2
-#>
-#> REML score: -127.2706
mvgam gives us something more useful:
mvgam(outcome ~ time,
@@ -627,15 +629,15 @@
Checking data with
series = factor('series_1'),
outcome = rnorm(8))
bad_times
-#> time series outcome
-#> 1 1 series_1 1.4681068
-#> 2 3 series_1 0.1796627
-#> 3 5 series_1 -0.4204020
-#> 4 7 series_1 -1.0729359
-#> 5 9 series_1 -0.1738239
-#> 6 11 series_1 -0.5463268
-#> 7 13 series_1 0.8275198
-#> 8   15 series_1  2.2085085
get_mvgam_priors
get_mvgam_priors by simply specifying an intercept-only model, which is enough to trigger all the checks:
get_mvgam_priors(outcome ~ 1,
@@ -652,24 +654,23 @@
Checking data with
series = factor(unique(bad_times$series),
levels = levels(bad_times$series)))) %>%
dplyr::arrange(time) -> good_times
-#> Joining with `by = join_by(time, series)`
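The upstream part of this pipeline is not shown in the hunk above. A hedged sketch of one way to fill in the skipped timepoints, assuming the bad_times object created earlier:

```r
# Build a complete grid of consecutive timepoints for the single series,
# then join the irregular observations back in; skipped times become NA
library(dplyr)
data.frame(time = seq(min(bad_times$time), max(bad_times$time)),
           series = factor('series_1',
                           levels = levels(bad_times$series))) %>%
  left_join(bad_times, by = c('time', 'series')) %>%
  arrange(time) -> good_times
```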
-good_times
-#> time series outcome
-#> 1 1 series_1 1.4681068
-#> 2 2 series_1 NA
-#> 3 3 series_1 0.1796627
-#> 4 4 series_1 NA
-#> 5 5 series_1 -0.4204020
-#> 6 6 series_1 NA
-#> 7 7 series_1 -1.0729359
-#> 8 8 series_1 NA
-#> 9 9 series_1 -0.1738239
-#> 10 10 series_1 NA
-#> 11 11 series_1 -0.5463268
-#> 12 12 series_1 NA
-#> 13 13 series_1 0.8275198
-#> 14 14 series_1 NA
-#> 15   15 series_1  2.2085085
get_mvgam_priors
get_mvgam_priors, using our filled in data, should work:
get_mvgam_priors(outcome ~ 1,
@@ -678,9 +679,9 @@
Checking data with
#> param_name param_length param_info
#> 1 (Intercept) 1 (Intercept)
#> 2 vector<lower=0>[n_series] sigma_obs; 1 observation error sd
-#> prior example_change
-#> 1 (Intercept) ~ student_t(3, 0, 2.5); (Intercept) ~ normal(0, 1);
-#> 2 sigma_obs ~ student_t(3, 0, 2.5); sigma_obs ~ normal(-0.22, 0.33);
+#> prior example_change
+#> 1 (Intercept) ~ student_t(3, -0.2, 2.5); (Intercept) ~ normal(0, 1);
+#> 2 sigma_obs ~ student_t(3, 0, 2.5); sigma_obs ~ normal(0.68, 0.48);
#> new_lowerbound new_upperbound
#> 1 NA NA
#> 2             NA             NA
get_mvgam_priors
Checking data with
#> param_name param_length param_info
#> 1 (Intercept) 1 (Intercept)
#> 2 vector<lower=0>[n_series] sigma_obs; 1 observation error sd
-#> prior example_change
-#> 1 (Intercept) ~ student_t(3, -1, 2.5); (Intercept) ~ normal(0, 1);
-#> 2 sigma_obs ~ student_t(3, 0, 2.5); sigma_obs ~ normal(0.98, 0.91);
+#> prior example_change
+#> 1 (Intercept) ~ student_t(3, 0.2, 2.5); (Intercept) ~ normal(0, 1);
+#> 2 sigma_obs ~ student_t(3, 0, 2.5); sigma_obs ~ normal(-0.45, 0.27);
#> new_lowerbound new_upperbound
#> 1 NA NA
#> 2 NA NA
@@ -746,22 +747,32 @@ get_mvgam_priors
Covariates with no
levels = 'series1'),
time = 1:10)
miss_dat
-#> outcome cov series time
-#> 1 0.77436859 NA series1 1
-#> 2 0.33222199 -0.2653819 series1 2
-#> 3 0.50385503 0.6658354 series1 3
-#> 4 -0.99577591 0.3541730 series1 4
-#> 5 -1.09812817 -2.3125954 series1 5
-#> 6 -0.49687774 -1.0778578 series1 6
-#> 7 -1.26666072 -0.1973507 series1 7
-#> 8 -0.11638041 -3.0585179 series1 8
-#> 9 0.08890432 1.7964928 series1 9
-#> 10 -0.64375459 0.7894733 series1 10
+#> outcome cov series time
+#> 1 -0.80111595 NA series1 1
+#> 2 -0.06592837 -0.10013821 series1 2
+#> 3 0.76397054 -0.21375436 series1 3
+#> 4 -1.46481331 -0.95513990 series1 4
+#> 5 -1.55414709 -0.21727050 series1 5
+#> 6 -0.77822622 -0.70180317 series1 6
+#> 7 -0.03800835 -0.07235289 series1 7
+#> 8 0.97922249 -1.17732465 series1 8
+#> 9 -1.30428803 1.03775159 series1 9
+#> 10 1.52371605 0.10077859 series1 10
NAs
get_mvgam_priors(outcome ~ cov,
data = miss_dat,
family = gaussian())
-#> Error: Missing values found in data predictors:
-#> Error in na.fail.default(structure(list(outcome = c(0.774368589907313, : missing values in object
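Because the checks error on missing covariate values, one hedged workaround (illustrative only; a simple mean imputation) is to fill or drop the offending rows before repeating the call:

```r
# Illustrative only: impute the single missing covariate value with the
# column mean so the predictor checks can pass, then retry the call
miss_dat$cov[is.na(miss_dat$cov)] <- mean(miss_dat$cov, na.rm = TRUE)
get_mvgam_priors(outcome ~ cov,
                 data = miss_dat,
                 family = gaussian())
```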
mgcv
package, mvgam
can
also accept data as a list
object. This is useful if you
want to set up linear
@@ -778,8 +789,30 @@ Covariates with no
NAs
get_mvgam_priors(outcome ~ cov,
data = miss_dat,
family = gaussian())
-#> Error: Missing values found in data predictors:
-#> Error in na.fail.default(structure(list(outcome = c(-0.708736388395862, : missing values in object
Plotting with plot_mvgam_series
-
+
Example with NEON tick data
@@ -896,9 +929,7 @@ Example with NEON tick data
# match up, in case you need the siteID for anything else later on
dplyr::left_join(all_neon_tick_data %>%
dplyr::select(siteID, plotID) %>%
- dplyr::distinct()) -> model_dat
-#> Joining with `by = join_by(Year, epiWeek, plotID)`
-#> Joining with `by = join_by(plotID)`
Create the series variable needed for mvgam modelling:
model_dat %>%
@@ -965,12 +996,12 @@
Example with NEON tick data
#> $ S6 : num [1:8, 1:8] 1.037 -0.416 0.419 0.117 0.188 ...
#> $ S7 : num [1:8, 1:8] 1.037 -0.416 0.419 0.117 0.188 ...
#> $ S8 : num [1:8, 1:8] 1.037 -0.416 0.419 0.117 0.188 ...
-#> $ p_coefs : Named num 0.806
+#> $ p_coefs : Named num 0
#> ..- attr(*, "names")= chr "(Intercept)"
-#> $ p_taus : num 301
+#> $ p_taus : num 0.88
#> $ ytimes : int [1:416, 1:8] 1 9 17 25 33 41 49 57 65 73 ...
#> $ n_series : int 8
-#> $ sp : Named num [1:9] 4.68 59.57 1.11 2.73 6.5 ...
+#> $ sp : Named num [1:9] 0.368 0.368 0.368 0.368 0.368 ...
#> ..- attr(*, "names")= chr [1:9] "s(epiWeek):seriesBLAN_005" "s(epiWeek):seriesBLAN_012" "s(epiWeek):seriesSCBI_002" "s(epiWeek):seriesSCBI_013" ...
#> $ y_observed : num [1:416, 1:8] 0 0 0 0 0 0 0 0 0 0 ...
#> $ total_obs : int 3328
@@ -1019,7 +1050,7 @@ Example with NEON tick data
#> vector[1] mu_raw;
#>
#> // latent trend AR1 terms
-#> vector<lower=-1.5, upper=1.5>[n_series] ar1;
+#> vector<lower=-1, upper=1>[n_series] ar1;
#>
#> // latent trend variance parameters
#> vector<lower=0>[n_series] sigma;
diff --git a/doc/forecast_evaluation.R b/doc/forecast_evaluation.R
index 8412f550..16985002 100644
--- a/doc/forecast_evaluation.R
+++ b/doc/forecast_evaluation.R
@@ -1,10 +1,13 @@
-## ----echo = FALSE-------------------------------------------------------------
-NOT_CRAN <- identical(tolower(Sys.getenv("NOT_CRAN")), "true")
+params <-
+list(EVAL = TRUE)
+
+## ---- echo = FALSE------------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>",
- purl = NOT_CRAN,
- eval = NOT_CRAN
+ message = FALSE,
+ warning = FALSE,
+ eval = if (isTRUE(exists("params"))) params$EVAL else FALSE
)
@@ -35,12 +38,12 @@ simdat <- sim_mvgam(T = 100,
str(simdat)
-## ----fig.alt = "Plotting time series features for GAM models in mvgam"--------
+## ---- fig.alt = "Plotting time series features for GAM models in mvgam"-------
plot_mvgam_series(data = simdat$data_train,
series = 'all')
-## ----fig.alt = "Plotting time series features for GAM models in mvgam"--------
+## ---- fig.alt = "Plotting time series features for GAM models in mvgam"-------
plot_mvgam_series(data = simdat$data_train,
newdata = simdat$data_test,
series = 1)
@@ -67,7 +70,7 @@ mod1 <- mvgam(y ~ s(season, bs = 'cc', k = 8) +
summary(mod1, include_betas = FALSE)
-## ----fig.alt = "Plotting GAM smooth functions using mvgam"--------------------
+## ---- fig.alt = "Plotting GAM smooth functions using mvgam"-------------------
conditional_effects(mod1, type = 'link')
@@ -96,15 +99,15 @@ mod2 <- mvgam(y ~ s(season, bs = 'cc', k = 8) +
summary(mod2, include_betas = FALSE)
-## ----fig.alt = "Summarising latent Gaussian Process parameters in mvgam"------
+## ---- fig.alt = "Summarising latent Gaussian Process parameters in mvgam"-----
mcmc_plot(mod2, variable = c('alpha_gp'), regex = TRUE, type = 'areas')
-## ----fig.alt = "Summarising latent Gaussian Process parameters in mvgam"------
+## ---- fig.alt = "Summarising latent Gaussian Process parameters in mvgam"-----
mcmc_plot(mod2, variable = c('rho_gp'), regex = TRUE, type = 'areas')
-## ----fig.alt = "Plotting latent Gaussian Process effects in mvgam and marginaleffects"----
+## ---- fig.alt = "Plotting latent Gaussian Process effects in mvgam and marginaleffects"----
conditional_effects(mod2, type = 'link')
diff --git a/doc/forecast_evaluation.Rmd b/doc/forecast_evaluation.Rmd
index 8979593d..90bc3105 100644
--- a/doc/forecast_evaluation.Rmd
+++ b/doc/forecast_evaluation.Rmd
@@ -9,14 +9,16 @@ vignette: >
%\VignetteIndexEntry{Forecasting and forecast evaluation in mvgam}
%\VignetteEngine{knitr::rmarkdown}
\usepackage[utf8]{inputenc}
+params:
+ EVAL: !r identical(tolower(Sys.getenv("NOT_CRAN")), "true")
---
```{r, echo = FALSE}
-NOT_CRAN <- identical(tolower(Sys.getenv("NOT_CRAN")), "true")
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>",
- purl = NOT_CRAN,
- eval = NOT_CRAN
+ message = FALSE,
+ warning = FALSE,
+ eval = if (isTRUE(exists("params"))) params$EVAL else FALSE
)
```
diff --git a/doc/forecast_evaluation.html b/doc/forecast_evaluation.html
index ae761ae5..2e518447 100644
--- a/doc/forecast_evaluation.html
+++ b/doc/forecast_evaluation.html
@@ -12,7 +12,7 @@
-
+
Forecasting and forecast evaluation in
mvgam
Nicholas J Clark
-2024-07-01
+2024-09-04
Modelling dynamics with splines
#> GAM formula:
#> y ~ s(season, bs = "cc", k = 8) + s(time, by = series, bs = "cr",
#> k = 20)
-#> <environment: 0x000001b67206d110>
+#> <environment: 0x000002c8194b2058>
#>
#> Family:
#> poisson
@@ -468,15 +468,15 @@ Modelling dynamics with splines
#>
#>
#> GAM coefficient (beta) estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> (Intercept) -0.41 -0.21 -0.052 1 855
+#> 2.5% 50% 97.5% Rhat n_eff
+#> (Intercept) -0.4 -0.21 -0.045 1 1035
#>
#> Approximate significance of GAM smooths:
-#> edf Ref.df Chi.sq p-value
-#> s(season) 3.82 6 19.6 0.0037 **
-#> s(time):seriesseries_1 7.25 19 13.2 0.7969
-#> s(time):seriesseries_2 9.81 19 173.3 0.0019 **
-#> s(time):seriesseries_3 6.05 19 19.4 0.7931
+#> edf Ref.df Chi.sq p-value
+#> s(season) 3.28 6 20.4 0.0215 *
+#> s(time):seriesseries_1 6.95 19 14.0 0.8153
+#> s(time):seriesseries_2 10.86 19 156.5 0.0044 **
+#> s(time):seriesseries_3 6.79 19 18.1 0.5529
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
@@ -487,14 +487,14 @@ Modelling dynamics with splines
#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
#> E-FMI indicated no pathological behavior
#>
-#> Samples were drawn using NUTS(diag_e) at Mon Jul 01 7:26:51 AM 2024.
+#> Samples were drawn using NUTS(diag_e) at Wed Sep 04 11:27:30 AM 2024.
#> For each parameter, n_eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor on split MCMC chains
#> (at convergence, Rhat = 1)
Modelling dynamics with GPs
@@ -517,7 +517,7 @@ Modelling dynamics with GPs
#> GAM formula:
#> y ~ s(season, bs = "cc", k = 8) + gp(time, by = series, c = 5/4,
#> k = 20, scale = FALSE)
-#> <environment: 0x000001b67206d110>
+#> <environment: 0x000002c8194b2058>
#>
#> Family:
#> poisson
@@ -542,32 +542,32 @@ Modelling dynamics with GPs
#>
#> GAM coefficient (beta) estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> (Intercept) -1.1 -0.51 0.34 1 694
+#> (Intercept) -1.1 -0.51 0.25 1 731
#>
#> GAM gp term marginal deviation (alpha) and length scale (rho) estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> alpha_gp(time):seriesseries_1 0.21 0.78 2.2 1 819
-#> alpha_gp(time):seriesseries_2 0.73 1.30 2.9 1 761
-#> alpha_gp(time):seriesseries_3 0.46 1.10 2.8 1 1262
-#> rho_gp(time):seriesseries_1 1.30 5.40 22.0 1 682
-#> rho_gp(time):seriesseries_2 2.10 10.00 17.0 1 450
-#> rho_gp(time):seriesseries_3 1.50 8.80 24.0 1 769
+#> alpha_gp(time):seriesseries_1 0.19 0.75 2 1.00 932
+#> alpha_gp(time):seriesseries_2 0.76 1.40 3 1.00 845
+#> alpha_gp(time):seriesseries_3 0.48 1.10 3 1.00 968
+#> rho_gp(time):seriesseries_1 1.10 4.90 25 1.01 678
+#> rho_gp(time):seriesseries_2 2.20 10.00 17 1.00 645
+#> rho_gp(time):seriesseries_3 1.50 9.20 24 1.00 908
#>
#> Approximate significance of GAM smooths:
-#> edf Ref.df Chi.sq p-value
-#> s(season) 3.36 6 21.1 0.0093 **
+#> edf Ref.df Chi.sq p-value
+#> s(season) 3.4 6 21.1 0.011 *
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Stan MCMC diagnostics:
#> n_eff / iter looks reasonable for all parameters
#> Rhat looks reasonable for all parameters
-#> 1 of 2000 iterations ended with a divergence (0.05%)
+#> 7 of 2000 iterations ended with a divergence (0.35%)
#> *Try running with larger adapt_delta to remove the divergences
#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
#> E-FMI indicated no pathological behavior
#>
-#> Samples were drawn using NUTS(diag_e) at Mon Jul 01 7:27:28 AM 2024.
+#> Samples were drawn using NUTS(diag_e) at Wed Sep 04 11:28:27 AM 2024.
#> For each parameter, n_eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor on split MCMC chains
#> (at convergence, Rhat = 1)
Modelling dynamics with GPs
the marginal deviation (\(\alpha\)) parameters:
Forecasting with the forecast() function
str(fc_mod1)
#> List of 16
#> $ call :Class 'formula' language y ~ s(season, bs = "cc", k = 8) + s(time, by = series, bs = "cr", k = 20)
-#> .. ..- attr(*, ".Environment")=<environment: 0x000001b67206d110>
+#> .. ..- attr(*, ".Environment")=<environment: 0x000002c8194b2058>
#> $ trend_call : NULL
#> $ family : chr "poisson"
#> $ family_pars : NULL
@@ -635,42 +635,34 @@
Forecasting with the
#> ..$ series_3: int [1:25] 1 0 0 1 0 0 1 0 1 0 ...
#> $ test_times : int [1:25] 76 77 78 79 80 81 82 83 84 85 ...
#> $ hindcasts :List of 3
-#> ..$ series_1: num [1:2000, 1:75] 1 1 0 0 0 0 0 0 0 0 ...
+#> ..$ series_1: num [1:2000, 1:75] 0 0 0 1 0 1 0 0 0 0 ...
#> .. ..- attr(*, "dimnames")=List of 2
#> .. .. ..$ : NULL
#> .. .. ..$ : chr [1:75] "ypred[1,1]" "ypred[2,1]" "ypred[3,1]" "ypred[4,1]" ...
-#> ..$ series_2: num [1:2000, 1:75] 0 0 0 0 0 0 0 0 0 0 ...
+#> ..$ series_2: num [1:2000, 1:75] 0 0 0 0 0 1 0 0 0 0 ...
#> .. ..- attr(*, "dimnames")=List of 2
#> .. .. ..$ : NULL
#> .. .. ..$ : chr [1:75] "ypred[1,2]" "ypred[2,2]" "ypred[3,2]" "ypred[4,2]" ...
-#> ..$ series_3: num [1:2000, 1:75] 1 4 0 4 4 1 1 6 3 1 ...
+#> ..$ series_3: num [1:2000, 1:75] 3 2 2 0 1 3 2 3 1 3 ...
#> .. ..- attr(*, "dimnames")=List of 2
#> .. .. ..$ : NULL
#> .. .. ..$ : chr [1:75] "ypred[1,3]" "ypred[2,3]" "ypred[3,3]" "ypred[4,3]" ...
#> $ forecasts :List of 3
-#> ..$ series_1: num [1:2000, 1:25] 0 1 0 0 0 1 1 0 0 3 ...
-#> ..$ series_2: num [1:2000, 1:25] 0 2 0 1 0 2 1 0 1 0 ...
-#> ..$ series_3: num [1:2000, 1:25] 1 8 4 2 2 1 2 0 2 0 ...
+#> ..$ series_1: num [1:2000, 1:25] 0 1 0 0 0 1 1 1 1 1 ...
+#> ..$ series_2: num [1:2000, 1:25] 0 0 1 1 0 0 0 1 0 0 ...
+#> ..$ series_3: num [1:2000, 1:25] 0 0 1 1 0 3 1 0 3 2 ...
#> - attr(*, "class")= chr "mvgam_forecast"
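Since the forecasts slot stores raw posterior draws (rows) by forecast horizon (columns), a hedged sketch of summarising them directly with base R:

```r
# Posterior median and 90% interval for each out-of-sample timepoint
# of series_1, computed straight from the stored forecast draws
fc_draws <- fc_mod1$forecasts$series_1
t(apply(fc_draws, 2, quantile, probs = c(0.05, 0.5, 0.95)))
```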
forecast() function
S3 plot method for objects of this class:
Forecasting with newdata in mvgam()
Scoring forecast distributions
@@ -716,54 +706,54 @@ Scoring forecast distributions
str(crps_mod1)
#> List of 4
#> $ series_1 :'data.frame': 25 obs. of 5 variables:
-#> ..$ score : num [1:25] 0.186 0.129 1.372 NA 0.037 ...
+#> ..$ score : num [1:25] 0.1797 0.1341 1.3675 NA 0.0386 ...
#> ..$ in_interval : num [1:25] 1 1 1 NA 1 1 1 1 1 1 ...
#> ..$ interval_width: num [1:25] 0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9 ...
#> ..$ eval_horizon : int [1:25] 1 2 3 4 5 6 7 8 9 10 ...
#> ..$ score_type : chr [1:25] "crps" "crps" "crps" "crps" ...
#> $ series_2 :'data.frame': 25 obs. of 5 variables:
-#> ..$ score : num [1:25] 0.354 0.334 0.947 0.492 0.542 ...
+#> ..$ score : num [1:25] 0.375 0.283 1.003 0.516 0.649 ...
#> ..$ in_interval : num [1:25] 1 1 1 1 1 1 1 1 1 NA ...
#> ..$ interval_width: num [1:25] 0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9 ...
#> ..$ eval_horizon : int [1:25] 1 2 3 4 5 6 7 8 9 10 ...
#> ..$ score_type : chr [1:25] "crps" "crps" "crps" "crps" ...
#> $ series_3 :'data.frame': 25 obs. of 5 variables:
-#> ..$ score : num [1:25] 0.31 0.616 0.4 0.349 0.215 ...
+#> ..$ score : num [1:25] 0.318 0.604 0.4 0.353 0.212 ...
#> ..$ in_interval : num [1:25] 1 1 1 1 1 1 1 1 1 1 ...
#> ..$ interval_width: num [1:25] 0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9 ...
#> ..$ eval_horizon : int [1:25] 1 2 3 4 5 6 7 8 9 10 ...
#> ..$ score_type : chr [1:25] "crps" "crps" "crps" "crps" ...
#> $ all_series:'data.frame': 25 obs. of 3 variables:
-#> ..$ score : num [1:25] 0.85 1.079 2.719 NA 0.794 ...
+#> ..$ score : num [1:25] 0.873 1.021 2.77 NA 0.9 ...
#> ..$ eval_horizon: int [1:25] 1 2 3 4 5 6 7 8 9 10 ...
#> ..$ score_type : chr [1:25] "sum_crps" "sum_crps" "sum_crps" "sum_crps" ...
crps_mod1$series_1
#> score in_interval interval_width eval_horizon score_type
-#> 1 0.18582425 1 0.9 1 crps
-#> 2 0.12933350 1 0.9 2 crps
-#> 3 1.37181050 1 0.9 3 crps
+#> 1 0.17965150 1 0.9 1 crps
+#> 2 0.13411725 1 0.9 2 crps
+#> 3 1.36749550 1 0.9 3 crps
#> 4 NA NA 0.9 4 crps
-#> 5 0.03698600 1 0.9 5 crps
-#> 6 1.53997900 1 0.9 6 crps
-#> 7 1.50467675 1 0.9 7 crps
-#> 8 0.63460725 1 0.9 8 crps
-#> 9 0.61682725 1 0.9 9 crps
-#> 10 0.62428875 1 0.9 10 crps
-#> 11 1.33824700 1 0.9 11 crps
-#> 12 2.06378300 1 0.9 12 crps
-#> 13 0.59247200 1 0.9 13 crps
-#> 14 0.13560025 1 0.9 14 crps
-#> 15 0.66512975 1 0.9 15 crps
-#> 16 0.08238525 1 0.9 16 crps
-#> 17 0.08152900 1 0.9 17 crps
-#> 18 0.09446425 1 0.9 18 crps
-#> 19 0.12084700 1 0.9 19 crps
+#> 5 0.03860225 1 0.9 5 crps
+#> 6 1.55334350 1 0.9 6 crps
+#> 7 1.49198325 1 0.9 7 crps
+#> 8 0.64088650 1 0.9 8 crps
+#> 9 0.61613650 1 0.9 9 crps
+#> 10 0.60528025 1 0.9 10 crps
+#> 11 1.30624075 1 0.9 11 crps
+#> 12 2.06809025 1 0.9 12 crps
+#> 13 0.61887550 1 0.9 13 crps
+#> 14 0.13920225 1 0.9 14 crps
+#> 15 0.67819725 1 0.9 15 crps
+#> 16 0.07817500 1 0.9 16 crps
+#> 17 0.07567500 1 0.9 17 crps
+#> 18 0.09510025 1 0.9 18 crps
+#> 19 0.12604375 1 0.9 19 crps
#> 20 NA NA 0.9 20 crps
-#> 21 0.21286925 1 0.9 21 crps
-#> 22 0.85799700 1 0.9 22 crps
+#> 21 0.20347825 1 0.9 21 crps
+#> 22 0.82202975 1 0.9 22 crps
#> 23 NA NA 0.9 23 crps
-#> 24 1.14954750 1 0.9 24 crps
-#> 25 0.85131425 1 0.9 25 crps
The returned list contains a data.frame for each series
in the data that shows the CRPS score for each evaluation in the testing
data, along with some other useful information about the fit of the
@@ -777,31 +767,31 @@ Scoring forecast distributions
crps_mod1 <- score(fc_mod1, score = 'crps', interval_width = 0.6)
crps_mod1$series_1
#> score in_interval interval_width eval_horizon score_type
-#> 1 0.18582425 1 0.6 1 crps
-#> 2 0.12933350 1 0.6 2 crps
-#> 3 1.37181050 0 0.6 3 crps
+#> 1 0.17965150 1 0.6 1 crps
+#> 2 0.13411725 1 0.6 2 crps
+#> 3 1.36749550 0 0.6 3 crps
#> 4 NA NA 0.6 4 crps
-#> 5 0.03698600 1 0.6 5 crps
-#> 6 1.53997900 0 0.6 6 crps
-#> 7 1.50467675 0 0.6 7 crps
-#> 8 0.63460725 1 0.6 8 crps
-#> 9 0.61682725 1 0.6 9 crps
-#> 10 0.62428875 1 0.6 10 crps
-#> 11 1.33824700 0 0.6 11 crps
-#> 12 2.06378300 0 0.6 12 crps
-#> 13 0.59247200 1 0.6 13 crps
-#> 14 0.13560025 1 0.6 14 crps
-#> 15 0.66512975 1 0.6 15 crps
-#> 16 0.08238525 1 0.6 16 crps
-#> 17 0.08152900 1 0.6 17 crps
-#> 18 0.09446425 1 0.6 18 crps
-#> 19 0.12084700 1 0.6 19 crps
+#> 5 0.03860225 1 0.6 5 crps
+#> 6 1.55334350 0 0.6 6 crps
+#> 7 1.49198325 0 0.6 7 crps
+#> 8 0.64088650 1 0.6 8 crps
+#> 9 0.61613650 1 0.6 9 crps
+#> 10 0.60528025 1 0.6 10 crps
+#> 11 1.30624075 0 0.6 11 crps
+#> 12 2.06809025 0 0.6 12 crps
+#> 13 0.61887550 1 0.6 13 crps
+#> 14 0.13920225 1 0.6 14 crps
+#> 15 0.67819725 1 0.6 15 crps
+#> 16 0.07817500 1 0.6 16 crps
+#> 17 0.07567500 1 0.6 17 crps
+#> 18 0.09510025 1 0.6 18 crps
+#> 19 0.12604375 1 0.6 19 crps
#> 20 NA NA 0.6 20 crps
-#> 21 0.21286925 1 0.6 21 crps
-#> 22 0.85799700 1 0.6 22 crps
+#> 21 0.20347825 1 0.6 21 crps
+#> 22 0.82202975 1 0.6 22 crps
#> 23 NA NA 0.6 23 crps
-#> 24 1.14954750 1 0.6 24 crps
-#> 25 0.85131425 1 0.6 25 crps
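As a hedged sketch, the per-horizon scores and the in_interval indicator returned by score() can be summarised with base R:

```r
# Mean CRPS across the evaluation horizon (NAs correspond to missing
# test observations) and empirical coverage of the 60% interval
mean(crps_mod1$series_1$score, na.rm = TRUE)
mean(crps_mod1$series_1$in_interval, na.rm = TRUE)
```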
Scoring forecast distributions
link_mod1 <- forecast(mod1, newdata = simdat$data_test, type = 'link')
score(link_mod1, score = 'elpd')$series_1
#> score eval_horizon score_type
-#> 1 -0.5343784 1 elpd
-#> 2 -0.4326190 2 elpd
-#> 3 -2.9699450 3 elpd
+#> 1 -0.5285206 1 elpd
+#> 2 -0.4286994 2 elpd
+#> 3 -2.9660940 3 elpd
#> 4 NA 4 elpd
-#> 5 -0.1998425 5 elpd
-#> 6 -3.3976729 6 elpd
-#> 7 -3.2989297 7 elpd
-#> 8 -2.0490633 8 elpd
-#> 9 -2.0690163 9 elpd
-#> 10 -2.0822051 10 elpd
-#> 11 -3.1101639 11 elpd
-#> 12 -3.7240924 12 elpd
-#> 13 -2.1578701 13 elpd
-#> 14 -0.2899481 14 elpd
-#> 15 -2.3811862 15 elpd
-#> 16 -0.2085375 16 elpd
-#> 17 -0.1960501 17 elpd
-#> 18 -0.2036978 18 elpd
-#> 19 -0.2154374 19 elpd
+#> 5 -0.1988847 5 elpd
+#> 6 -3.3821055 6 elpd
+#> 7 -3.2797236 7 elpd
+#> 8 -2.0571076 8 elpd
+#> 9 -2.0794559 9 elpd
+#> 10 -2.0882202 10 elpd
+#> 11 -3.0870256 11 elpd
+#> 12 -3.7065927 12 elpd
+#> 13 -2.1601960 13 elpd
+#> 14 -0.2931143 14 elpd
+#> 15 -2.3694878 15 elpd
+#> 16 -0.2110566 16 elpd
+#> 17 -0.1986209 17 elpd
+#> 18 -0.2069720 18 elpd
+#> 19 -0.2193413 19 elpd
#> 20 NA 20 elpd
-#> 21 -0.2341597 21 elpd
-#> 22 -2.6552948 22 elpd
+#> 21 -0.2379762 21 elpd
+#> 22 -2.6261457 22 elpd
#> 23 NA 23 elpd
-#> 24 -2.6652717 24 elpd
-#> 25 -0.2759126 25 elpd
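A similar hedged summary works for the ELPD scores computed above:

```r
# Average log score for the first series, ignoring timepoints with
# missing out-of-sample observations
elpd_mod1 <- score(link_mod1, score = 'elpd')
mean(elpd_mod1$series_1$score, na.rm = TRUE)
```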
mvgam
offers two
such options: the Energy score and the Variogram score. The first
@@ -860,7 +850,7 @@ Scoring forecast distributions
#> ..$ interval_width: num [1:25] 0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9 ...
#> ..$ eval_horizon : int [1:25] 1 2 3 4 5 6 7 8 9 10 ...
#> $ all_series:'data.frame': 25 obs. of 3 variables:
-#> ..$ score : num [1:25] 0.771 1.133 1.26 NA 0.443 ...
+#> ..$ score : num [1:25] 0.755 1.12 1.245 NA 0.447 ...
#> ..$ eval_horizon: int [1:25] 1 2 3 4 5 6 7 8 9 10 ...
#> ..$ score_type : chr [1:25] "energy" "energy" "energy" "energy" ...
Scoring forecast distributions
now (which is provided in the all_series
slot):
energy_mod2$all_series
#> score eval_horizon score_type
-#> 1 0.7705198 1 energy
-#> 2 1.1330328 2 energy
-#> 3 1.2600785 3 energy
+#> 1 0.7546579 1 energy
+#> 2 1.1200630 2 energy
+#> 3 1.2447843 3 energy
#> 4 NA 4 energy
-#> 5 0.4427578 5 energy
-#> 6 1.8848308 6 energy
-#> 7 1.4186997 7 energy
-#> 8 0.7280518 8 energy
-#> 9 1.0467755 9 energy
+#> 5 0.4465348 5 energy
+#> 6 1.8231460 6 energy
+#> 7 1.4418019 7 energy
+#> 8 0.7172890 8 energy
+#> 9 1.0762943 9 energy
#> 10 NA 10 energy
-#> 11 1.4172423 11 energy
-#> 12 3.2326925 12 energy
-#> 13 1.5987732 13 energy
-#> 14 1.1798872 14 energy
-#> 15 1.0311968 15 energy
-#> 16 1.8261356 16 energy
+#> 11 1.4112423 11 energy
+#> 12 3.2385416 12 energy
+#> 13 1.5836460 13 energy
+#> 14 1.1953349 14 energy
+#> 15 1.0412578 15 energy
+#> 16 1.8348615 16 energy
#> 17 NA 17 energy
-#> 18 0.7170961 18 energy
-#> 19 0.8927311 19 energy
+#> 18 0.7142977 18 energy
+#> 19 0.9059773 19 energy
#> 20 NA 20 energy
-#> 21 1.0544501 21 energy
-#> 22 1.3280321 22 energy
+#> 21 1.1043397 21 energy
+#> 22 1.3292391 22 energy
#> 23 NA 23 energy
-#> 24 2.1843621 24 energy
-#> 25 1.2352041 25 energy
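The Variogram score mentioned above can be requested in the same way. This is a sketch only: it assumes the GP model's forecast object is called fc_mod2 and that the score name mirrors the energy call.

```r
# Multivariate Variogram score, returned in the all_series slot just
# like the Energy score (assumed usage)
vario_mod2 <- score(fc_mod2, score = 'variogram')
str(vario_mod2$all_series)
```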
You can use your score(s) of choice to compare different models. For example, we can compute and plot the difference in CRPS scores for each series in data. Here, a negative value means the Gaussian Process model
@@ -914,7 +904,7 @@
diff_scores <- crps_mod2$series_2$score -
@@ -930,7 +920,7 @@ Scoring forecast distributions
title(main = paste0('GP better in ', gp_better, ' of 25 evaluations',
'\nMean difference = ',
round(mean(diff_scores, na.rm = TRUE), 2)))
diff_scores <- crps_mod2$series_3$score -
crps_mod1$series_3$score
@@ -945,7 +935,7 @@ Scoring forecast distributions
title(main = paste0('GP better in ', gp_better, ' of 25 evaluations',
'\nMean difference = ',
round(mean(diff_scores, na.rm = TRUE), 2)))
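The gp_better object used in the plot titles is computed in code that this hunk does not show; a hedged sketch of how it could be derived from the score differences:

```r
# Count how many evaluation points favoured the GP model (a negative
# difference means the GP forecast scored better)
gp_better <- sum(diff_scores < 0, na.rm = TRUE)
```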
The GP model consistently gives better forecasts, and the difference between scores grows quickly as the forecast horizon increases. This is not unexpected given the way that splines linearly extrapolate outside diff --git a/doc/mvgam_overview.R b/doc/mvgam_overview.R index c8b40d2d..86859c9f 100644 --- a/doc/mvgam_overview.R +++ b/doc/mvgam_overview.R @@ -1,16 +1,20 @@ -## ----echo = FALSE------------------------------------------------------------- -NOT_CRAN <- identical(tolower(Sys.getenv("NOT_CRAN")), "true") +params <- +list(EVAL = TRUE) + +## ---- echo = FALSE------------------------------------------------------------ knitr::opts_chunk$set( collapse = TRUE, comment = "#>", - purl = NOT_CRAN, - eval = NOT_CRAN + message = FALSE, + warning = FALSE, + eval = if (isTRUE(exists("params"))) params$EVAL else FALSE ) + ## ----setup, include=FALSE----------------------------------------------------- knitr::opts_chunk$set( echo = TRUE, - dpi = 150, + dpi = 100, fig.asp = 0.8, fig.width = 6, out.width = "60%", @@ -19,19 +23,24 @@ library(mvgam) library(ggplot2) theme_set(theme_bw(base_size = 12, base_family = 'serif')) + ## ----Access time series data-------------------------------------------------- data("portal_data") + ## ----Inspect data format and structure---------------------------------------- head(portal_data) + ## ----------------------------------------------------------------------------- dplyr::glimpse(portal_data) + ## ----------------------------------------------------------------------------- data <- sim_mvgam(n_series = 4, T = 24) head(data$data_train, 12) + ## ----Wrangle data for modelling----------------------------------------------- portal_data %>% @@ -54,95 +63,106 @@ portal_data %>% # Select the variables of interest to keep in the model_data dplyr::select(series, year, time, count, mintemp, ndvi) -> model_data + ## ----------------------------------------------------------------------------- head(model_data) + ## ----------------------------------------------------------------------------- dplyr::glimpse(model_data) + ## ----Summarise variables------------------------------------------------------ summary(model_data) -## ----------------------------------------------------------------------------- -image(is.na(t(model_data %>% - dplyr::arrange(dplyr::desc(time)))), axes = F, - col = c('grey80', 'darkred')) -axis(3, at = seq(0,1, len = NCOL(model_data)), labels = colnames(model_data)) ## ----------------------------------------------------------------------------- plot_mvgam_series(data = model_data, series = 1, y = 'count') + ## ----------------------------------------------------------------------------- model_data %>% # Create a 'year_fac' factor version of 'year' dplyr::mutate(year_fac = factor(year)) -> model_data + ## ----------------------------------------------------------------------------- dplyr::glimpse(model_data) levels(model_data$year_fac) + ## ----model1, include=FALSE, results='hide'------------------------------------ model1 <- mvgam(count ~ s(year_fac, bs = 're') - 1, family = poisson(), data = model_data, parallel = FALSE) + ## ----eval=FALSE--------------------------------------------------------------- -# model1 <- mvgam(count ~ s(year_fac, bs = 're') - 1, -# family = poisson(), -# data = model_data) +## model1 <- mvgam(count ~ s(year_fac, bs = 're') - 1, +## family = poisson(), +## data = model_data) + ## ----------------------------------------------------------------------------- get_mvgam_priors(count ~ s(year_fac, bs = 're') - 1, family = 
poisson(), data = model_data) + ## ----------------------------------------------------------------------------- summary(model1) + ## ----Extract coefficient posteriors------------------------------------------- beta_post <- as.data.frame(model1, variable = 'betas') dplyr::glimpse(beta_post) + ## ----------------------------------------------------------------------------- code(model1) + ## ----Plot random effect estimates--------------------------------------------- plot(model1, type = 're') + ## ----------------------------------------------------------------------------- mcmc_plot(object = model1, variable = 'betas', type = 'areas') + ## ----------------------------------------------------------------------------- pp_check(object = model1) -pp_check(model1, type = "rootogram") + ## ----Plot posterior hindcasts------------------------------------------------- plot(model1, type = 'forecast') + ## ----Extract posterior hindcast----------------------------------------------- hc <- hindcast(model1) str(hc) + ## ----Extract hindcasts on the linear predictor scale-------------------------- hc <- hindcast(model1, type = 'link') range(hc$hindcasts$PP) -## ----Plot hindcasts on the linear predictor scale----------------------------- -plot(hc) ## ----Plot posterior residuals------------------------------------------------- plot(model1, type = 'residuals') + ## ----------------------------------------------------------------------------- model_data %>% dplyr::filter(time <= 160) -> data_train model_data %>% dplyr::filter(time > 160) -> data_test + ## ----include=FALSE, message=FALSE, warning=FALSE------------------------------ model1b <- mvgam(count ~ s(year_fac, bs = 're') - 1, family = poisson(), @@ -150,25 +170,23 @@ model1b <- mvgam(count ~ s(year_fac, bs = 're') - 1, newdata = data_test, parallel = FALSE) -## ----eval=FALSE--------------------------------------------------------------- -# model1b <- mvgam(count ~ s(year_fac, bs = 're') - 1, -# family = poisson(), -# data = data_train, -# newdata = data_test) -## ----------------------------------------------------------------------------- -plot(model1b, type = 're') +## ----eval=FALSE--------------------------------------------------------------- +## model1b <- mvgam(count ~ s(year_fac, bs = 're') - 1, +## family = poisson(), +## data = data_train, +## newdata = data_test) -## ----------------------------------------------------------------------------- -plot(model1b, type = 'forecast') ## ----Plotting predictions against test data----------------------------------- plot(model1b, type = 'forecast', newdata = data_test) + ## ----Extract posterior forecasts---------------------------------------------- fc <- forecast(model1b) str(fc) + ## ----model2, include=FALSE, message=FALSE, warning=FALSE---------------------- model2 <- mvgam(count ~ s(year_fac, bs = 're') + ndvi - 1, @@ -177,26 +195,28 @@ model2 <- mvgam(count ~ s(year_fac, bs = 're') + newdata = data_test, parallel = FALSE) + ## ----eval=FALSE--------------------------------------------------------------- -# model2 <- mvgam(count ~ s(year_fac, bs = 're') + -# ndvi - 1, -# family = poisson(), -# data = data_train, -# newdata = data_test) +## model2 <- mvgam(count ~ s(year_fac, bs = 're') + +## ndvi - 1, +## family = poisson(), +## data = data_train, +## newdata = data_test) -## ----class.output="scroll-300"------------------------------------------------ + +## ---- class.output="scroll-300"----------------------------------------------- summary(model2) + ## ----Posterior quantiles of model 
coefficients-------------------------------- coef(model2) -## ----Plot NDVI effect--------------------------------------------------------- -plot(model2, type = 'pterms') ## ----------------------------------------------------------------------------- beta_post <- as.data.frame(model2, variable = 'betas') dplyr::glimpse(beta_post) + ## ----Histogram of NDVI effects------------------------------------------------ hist(beta_post$ndvi, xlim = c(-1 * max(abs(beta_post$ndvi)), @@ -210,16 +230,10 @@ hist(beta_post$ndvi, lwd = 2) abline(v = 0, lwd = 2.5) -## ----warning=FALSE------------------------------------------------------------ -plot_predictions(model2, - condition = "ndvi", - # include the observed count values - # as points, and show rugs for the observed - # ndvi and count values on the axes - points = 0.5, rug = TRUE) ## ----warning=FALSE------------------------------------------------------------ -plot(conditional_effects(model2), ask = FALSE) +conditional_effects(model2) + ## ----model3, include=FALSE, message=FALSE, warning=FALSE---------------------- model3 <- mvgam(count ~ s(time, bs = 'bs', k = 15) + @@ -229,38 +243,31 @@ model3 <- mvgam(count ~ s(time, bs = 'bs', k = 15) + newdata = data_test, parallel = FALSE) -## ----eval=FALSE--------------------------------------------------------------- -# model3 <- mvgam(count ~ s(time, bs = 'bs', k = 15) + -# ndvi, -# family = poisson(), -# data = data_train, -# newdata = data_test) -## ----------------------------------------------------------------------------- -summary(model3) +## ----eval=FALSE--------------------------------------------------------------- +## model3 <- mvgam(count ~ s(time, bs = 'bs', k = 15) + +## ndvi, +## family = poisson(), +## data = data_train, +## newdata = data_test) -## ----------------------------------------------------------------------------- -plot(model3, type = 'smooths') ## ----------------------------------------------------------------------------- -plot(model3, type = 'smooths', realisations = TRUE, - n_realisations = 30) +summary(model3) -## ----Plot smooth term derivatives, warning = FALSE, fig.asp = 1--------------- -plot(model3, type = 'smooths', derivatives = TRUE) ## ----warning=FALSE------------------------------------------------------------ -plot(conditional_effects(model3), ask = FALSE) +conditional_effects(model3, type = 'link') -## ----warning=FALSE------------------------------------------------------------ -plot(conditional_effects(model3, type = 'link'), ask = FALSE) -## ----class.output="scroll-300"------------------------------------------------ +## ---- class.output="scroll-300"----------------------------------------------- code(model3) + ## ----------------------------------------------------------------------------- plot(model3, type = 'forecast', newdata = data_test) + ## ----Plot extrapolated temporal functions using newdata----------------------- plot_mvgam_smooth(model3, smooth = 's(time)', # feed newdata to the plot function to generate @@ -270,6 +277,7 @@ plot_mvgam_smooth(model3, smooth = 's(time)', ndvi = 0)) abline(v = max(data_train$time), lty = 'dashed', lwd = 2) + ## ----model4, include=FALSE---------------------------------------------------- model4 <- mvgam(count ~ s(ndvi, k = 6), family = poisson(), @@ -278,30 +286,31 @@ model4 <- mvgam(count ~ s(ndvi, k = 6), trend_model = 'AR1', parallel = FALSE) + ## ----eval=FALSE--------------------------------------------------------------- -# model4 <- mvgam(count ~ s(ndvi, k = 6), -# family = poisson(), -# data = 
data_train, -# newdata = data_test, -# trend_model = 'AR1') +## model4 <- mvgam(count ~ s(ndvi, k = 6), +## family = poisson(), +## data = data_train, +## newdata = data_test, +## trend_model = 'AR1') + ## ----Summarise the mvgam autocorrelated error model, class.output="scroll-300"---- summary(model4) -## ----warning=FALSE, message=FALSE--------------------------------------------- -plot_predictions(model4, - condition = "ndvi", - points = 0.5, rug = TRUE) ## ----------------------------------------------------------------------------- plot(model4, type = 'forecast', newdata = data_test) + ## ----------------------------------------------------------------------------- plot(model4, type = 'trend', newdata = data_test) + ## ----------------------------------------------------------------------------- loo_compare(model3, model4) + ## ----------------------------------------------------------------------------- fc_mod3 <- forecast(model3) fc_mod4 <- forecast(model4) diff --git a/doc/mvgam_overview.Rmd b/doc/mvgam_overview.Rmd index 508c63d3..3b5108db 100644 --- a/doc/mvgam_overview.Rmd +++ b/doc/mvgam_overview.Rmd @@ -9,21 +9,23 @@ vignette: > %\VignetteIndexEntry{Overview of the mvgam package} %\VignetteEngine{knitr::rmarkdown} \usepackage[utf8]{inputenc} +params: + EVAL: !r identical(tolower(Sys.getenv("NOT_CRAN")), "true") --- ```{r, echo = FALSE} -NOT_CRAN <- identical(tolower(Sys.getenv("NOT_CRAN")), "true") knitr::opts_chunk$set( collapse = TRUE, comment = "#>", - purl = NOT_CRAN, - eval = NOT_CRAN + message = FALSE, + warning = FALSE, + eval = if (isTRUE(exists("params"))) params$EVAL else FALSE ) ``` ```{r setup, include=FALSE} knitr::opts_chunk$set( echo = TRUE, - dpi = 150, + dpi = 100, fig.asp = 0.8, fig.width = 6, out.width = "60%", @@ -146,7 +148,7 @@ z_{i,t} & \sim \text{Normal}(ar1_i^{distance} * z_{i,t-1}, \sigma_i) \end{align* Where $distance$ is a vector of non-negative measurements of the time differences between successive observations. See the **Examples** section in `?CAR` for an illustration of how to set these models up. ## Regression formulae -`mvgam` supports an observation model regression formula, built off the `mvgcv` package, as well as an optional process model regression formula. The formulae supplied to \code{\link{mvgam}} are exactly like those supplied to `glm()` except that smooth terms, `s()`, +`mvgam` supports an observation model regression formula, built off the `mgcv` package, as well as an optional process model regression formula. The formulae supplied to \code{\link{mvgam}} are exactly like those supplied to `glm()` except that smooth terms, `s()`, `te()`, `ti()` and `t2()`, time-varying effects using `dynamic()`, monotonically increasing (using `s(x, bs = 'moi')`) or decreasing splines (using `s(x, bs = 'mod')`; see `?smooth.construct.moi.smooth.spec` for details), as well as Gaussian Process functions using `gp()`, can be added to the right hand side (and `.` is not supported in `mvgam` formulae). See `?mvgam_formulae` for more guidance. For setting up State-Space models, the optional process model formula can be used (see [the State-Space model vignette](https://nicholasjclark.github.io/mvgam/articles/trend_formulas.html) and [the shared latent states vignette](https://nicholasjclark.github.io/mvgam/articles/trend_formulas.html) for guidance on using trend formulae). @@ -223,15 +225,7 @@ You can also summarize multiple variables, which is helpful to search for data r summary(model_data) ``` -We have some `NA`s in our response variable `count`. 
Let's visualize the data as a heatmap to get a sense of where these are distributed (`NA`s are shown as red bars in the below plot) -```{r} -image(is.na(t(model_data %>% - dplyr::arrange(dplyr::desc(time)))), axes = F, - col = c('grey80', 'darkred')) -axis(3, at = seq(0,1, len = NCOL(model_data)), labels = colnames(model_data)) -``` - -These observations will generally be thrown out by most modelling packages in \R. But as you will see when we work through the tutorials, `mvgam` keeps these in the data so that predictions can be automatically returned for the full dataset. The time series and some of its descriptive features can be plotted using `plot_mvgam_series()`: +We have some `NA`s in our response variable `count`. These observations will generally be thrown out by most modelling packages in \R. But as you will see when we work through the tutorials, `mvgam` keeps these in the data so that predictions can be automatically returned for the full dataset. The time series and some of its descriptive features can be plotted using `plot_mvgam_series()`: ```{r} plot_mvgam_series(data = model_data, series = 1, y = 'count') ``` @@ -314,7 +308,6 @@ mcmc_plot(object = model1, We can also use the wide range of posterior checking functions available in `bayesplot` (see `?mvgam::ppc_check.mvgam` for details): ```{r} pp_check(object = model1) -pp_check(model1, type = "rootogram") ``` There is clearly some variation in these yearly intercept estimates. But how do these translate into time-varying predictions? To understand this, we can plot posterior hindcasts from this model for the training period using `plot.mvgam` with `type = 'forecast'` @@ -334,13 +327,6 @@ hc <- hindcast(model1, type = 'link') range(hc$hindcasts$PP) ``` -Objects of class `mvgam_forecast` have an associated plot function as well: -```{r Plot hindcasts on the linear predictor scale} -plot(hc) -``` - -This plot can look a bit confusing as it seems like there is linear interpolation from the end of one year to the start of the next. But this is just due to the way the lines are automatically connected in base \R plots - In any regression analysis, a key question is whether the residuals show any patterns that can be indicative of un-modelled sources of variation. For GLMs, we can use a modified residual called the [Dunn-Smyth, or randomized quantile, residual](https://www.jstor.org/stable/1390802){target="_blank"}. 
Inspect Dunn-Smyth residuals from the model using `plot.mvgam` with `type = 'residuals'` ```{r Plot posterior residuals} plot(model1, type = 'residuals') @@ -365,22 +351,12 @@ model1b <- mvgam(count ~ s(year_fac, bs = 're') - 1, ```{r eval=FALSE} model1b <- mvgam(count ~ s(year_fac, bs = 're') - 1, - family = poisson(), - data = data_train, - newdata = data_test) -``` - -Repeating the plots above gives insight into how the model's hierarchical prior formulation provides all the structure needed to sample values for un-modelled years -```{r} -plot(model1b, type = 're') -``` - - -```{r} -plot(model1b, type = 'forecast') + family = poisson(), + data = data_train, + newdata = data_test) ``` -We can also view the test data in the forecast plot to see that the predictions do not capture the temporal variation in the test set +We can view the test data in the forecast plot to see that the predictions do not capture the temporal variation in the test set ```{r Plotting predictions against test data} plot(model1b, type = 'forecast', newdata = data_test) ``` @@ -428,12 +404,7 @@ Rather than printing the summary each time, we can also quickly look at the post coef(model2) ``` -Look at the estimated effect of `ndvi` using `plot.mvgam` with `type = 'pterms'` -```{r Plot NDVI effect} -plot(model2, type = 'pterms') -``` - -This plot indicates a positive linear effect of `ndvi` on `log(counts)`. But it may be easier to visualise using a histogram, especially for parametric (linear) effects. This can be done by first extracting the posterior coefficients as we did in the first example: +Look at the estimated effect of `ndvi` using using a histogram. This can be done by first extracting the posterior coefficients: ```{r} beta_post <- as.data.frame(model2, variable = 'betas') dplyr::glimpse(beta_post) @@ -455,19 +426,9 @@ abline(v = 0, lwd = 2.5) ``` ### `marginaleffects` support -Given our model used a nonlinear link function (log link in this example), it can still be difficult to fully understand what relationship our model is estimating between a predictor and the response. Fortunately, the `marginaleffects` package makes this relatively straightforward. Objects of class `mvgam` can be used with `marginaleffects` to inspect contrasts, scenario-based predictions, conditional and marginal effects, all on the outcome scale. Here we will use the `plot_predictions` function from `marginaleffects` to inspect the conditional effect of `ndvi` (use `?plot_predictions` for guidance on how to modify these plots): +Given our model used a nonlinear link function (log link in this example), it can still be difficult to fully understand what relationship our model is estimating between a predictor and the response. Fortunately, the `marginaleffects` package makes this relatively straightforward. Objects of class `mvgam` can be used with `marginaleffects` to inspect contrasts, scenario-based predictions, conditional and marginal effects, all on the outcome scale. Like `brms`, `mvgam` has the simple `conditional_effects` function to make quick and informative plots for main effects, which rely on `marginaleffects` support. 
This will likely be your go-to function for quickly understanding patterns from fitted `mvgam` models ```{r warning=FALSE} -plot_predictions(model2, - condition = "ndvi", - # include the observed count values - # as points, and show rugs for the observed - # ndvi and count values on the axes - points = 0.5, rug = TRUE) -``` - -Now it is easier to get a sense of the nonlinear but positive relationship estimated between `ndvi` and `count`. Like `brms`, `mvgam` has the simple `conditional_effects` function to make quick and informative plots for main effects. This will likely be your go-to function for quickly understanding patterns from fitted `mvgam` models -```{r warning=FALSE} -plot(conditional_effects(model2), ask = FALSE) +conditional_effects(model2) ``` ## Adding predictors as smooths @@ -504,33 +465,9 @@ Where the smooth function $f_{time}$ is built by summing across a set of weighte summary(model3) ``` -The summary above now contains posterior estimates for the smoothing parameters as well as the basis coefficients for the nonlinear effect of `time`. We can visualize the conditional `time` effect using the `plot` function with `type = 'smooths'`: -```{r} -plot(model3, type = 'smooths') -``` - -By default this plots shows posterior empirical quantiles, but it can also be helpful to view some realizations of the underlying function (here, each line is a different potential curve drawn from the posterior of all possible curves): -```{r} -plot(model3, type = 'smooths', realisations = TRUE, - n_realisations = 30) -``` - -### Derivatives of smooths -A useful question when modelling using GAMs is to identify where the function is changing most rapidly. To address this, we can plot estimated 1st derivatives of the spline: -```{r Plot smooth term derivatives, warning = FALSE, fig.asp = 1} -plot(model3, type = 'smooths', derivatives = TRUE) -``` - -Here, values above `>0` indicate the function was increasing at that time point, while values `<0` indicate the function was declining. The most rapid declines appear to have been happening around timepoints 50 and again toward the end of the training period, for example. - -Use `conditional_effects` again for useful plots on the outcome scale: +The summary above now contains posterior estimates for the smoothing parameters as well as the basis coefficients for the nonlinear effect of `time`. We can visualize `conditional_effects` as before: ```{r warning=FALSE} -plot(conditional_effects(model3), ask = FALSE) -``` - -Or on the link scale: -```{r warning=FALSE} -plot(conditional_effects(model3, type = 'link'), ask = FALSE) +conditional_effects(model3, type = 'link') ``` Inspect the underlying `Stan` code to gain some idea of how the spline is being penalized: @@ -591,13 +528,6 @@ Here the term $z_t$ captures autocorrelated latent residuals, which are modelled summary(model4) ``` -View conditional smooths for the `ndvi` effect: -```{r warning=FALSE, message=FALSE} -plot_predictions(model4, - condition = "ndvi", - points = 0.5, rug = TRUE) -``` - View posterior hindcasts / forecasts and compare against the out of sample test data ```{r} plot(model4, type = 'forecast', newdata = data_test) diff --git a/doc/mvgam_overview.html b/doc/mvgam_overview.html index 6ebc839b..d905d14a 100644 --- a/doc/mvgam_overview.html +++ b/doc/mvgam_overview.html @@ -12,7 +12,7 @@ - +
marginaleffects
support
mvgam
mvgam
supports an observation model regression formula,
-built off the mvgcv
package, as well as an optional process
+built off the mgcv
package, as well as an optional process
model regression formula. The formulae supplied to mvgam() are exactly like
those supplied to glm()
except that smooth terms,
s()
, te()
, ti()
and
@@ -769,17 +767,17 @@
Notice how we have four different time series in these simulated data, but we do not spread the outcome values into different columns. Rather, there is only a single column for the outcome variable, labelled
@@ -863,22 +861,14 @@
We have some NA
s in our response variable
-count
. Let’s visualize the data as a heatmap to get a sense
-of where these are distributed (NA
s are shown as red bars
-in the below plot)
image(is.na(t(model_data %>%
- dplyr::arrange(dplyr::desc(time)))), axes = F,
- col = c('grey80', 'darkred'))
-axis(3, at = seq(0,1, len = NCOL(model_data)), labels = colnames(model_data))
These observations will generally be thrown out by most modelling
-packages in R. But as you will see when we work through the tutorials,
-mvgam
keeps these in the data so that predictions can be
-automatically returned for the full dataset. The time series and some of
-its descriptive features can be plotted using
+count
. These observations will generally be thrown out by
+most modelling packages in R. But as you will see when we work through
+the tutorials, mvgam
keeps these in the data so that
+predictions can be automatically returned for the full dataset. The time
+series and some of its descriptive features can be plotted using
plot_mvgam_series()
:
?smooth.construct.re.smooth.spec
for details about the
re
basis construction that is used by both
mvgam
and mgcv
-model_data %>%
-
- # Create a 'year_fac' factor version of 'year'
- dplyr::mutate(year_fac = factor(year)) -> model_data
model_data %>%
+
+ # Create a 'year_fac' factor version of 'year'
+ dplyr::mutate(year_fac = factor(year)) -> model_data
Preview the dataset to ensure year is now a factor with a unique factor level for each year in the data
-dplyr::glimpse(model_data)
-#> Rows: 199
-#> Columns: 7
-#> $ series <fct> PP, PP, PP, PP, PP, PP, PP, PP, PP, PP, PP, PP, PP, PP, PP, P…
-#> $ year <int> 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2…
-#> $ time <dbl> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18…
-#> $ count <int> 0, 1, 2, NA, 10, NA, NA, 16, 18, 12, NA, 3, 2, NA, NA, 13, NA…
-#> $ mintemp <dbl> -9.710, -5.924, -0.220, 1.931, 6.568, 11.590, 14.370, 16.520,…
-#> $ ndvi <dbl> 1.4658889, 1.5585069, 1.3378172, 1.6589129, 1.8536561, 1.7613…
-#> $ year_fac <fct> 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2…
-levels(model_data$year_fac)
-#> [1] "2004" "2005" "2006" "2007" "2008" "2009" "2010" "2011" "2012" "2013"
-#> [11] "2014" "2015" "2016" "2017" "2018" "2019" "2020"
dplyr::glimpse(model_data)
+#> Rows: 199
+#> Columns: 7
+#> $ series <fct> PP, PP, PP, PP, PP, PP, PP, PP, PP, PP, PP, PP, PP, PP, PP, P…
+#> $ year <int> 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2…
+#> $ time <dbl> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18…
+#> $ count <int> 0, 1, 2, NA, 10, NA, NA, 16, 18, 12, NA, 3, 2, NA, NA, 13, NA…
+#> $ mintemp <dbl> -9.710, -5.924, -0.220, 1.931, 6.568, 11.590, 14.370, 16.520,…
+#> $ ndvi <dbl> 1.4658889, 1.5585069, 1.3378172, 1.6589129, 1.8536561, 1.7613…
+#> $ year_fac <fct> 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2…
+levels(model_data$year_fac)
+#> [1] "2004" "2005" "2006" "2007" "2008" "2009" "2010" "2011" "2012" "2013"
+#> [11] "2014" "2015" "2016" "2017" "2018" "2019" "2020"
We are now ready for our first mvgam
model. The syntax
will be familiar to users who have previously built models with
mgcv
. But for a refresher, see ?formula.gam
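The fitting call itself is hidden by the HTML diff here; for reference, the vignette's R code (shown in the doc/mvgam_overview.R diff above) fits the model as:

```r
# Random intercepts for year (penalised towards a common mean), Poisson
# observations; parallel = FALSE just keeps the example single-threaded
model1 <- mvgam(count ~ s(year_fac, bs = 're') - 1,
                family = poisson(),
                data = model_data,
                parallel = FALSE)
```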
@@ -943,9 +933,9 @@
Stan
user’s guide for more information
about the software and the enormous variety of models that can be
tackled with HMC.
-
+
The model can be described mathematically for each timepoint \(t\) as follows: \[\begin{align*}
\boldsymbol{count}_t & \sim \text{Poisson}(\lambda_t) \\
log(\lambda_t) & = \beta_{year[year_t]} \\
@@ -959,90 +949,91 @@ GLMs with temporal random effects
similar functionality to the options available in brms
. For
example, the default priors on \((\mu_{year})\) and \((\sigma_{year})\) can be viewed using the
following code:
get_mvgam_priors(count ~ s(year_fac, bs = 're') - 1,
- family = poisson(),
- data = model_data)
-#> param_name param_length param_info
-#> 1 vector[1] mu_raw; 1 s(year_fac) pop mean
-#> 2 vector<lower=0>[1] sigma_raw; 1 s(year_fac) pop sd
-#> prior example_change
-#> 1 mu_raw ~ std_normal(); mu_raw ~ normal(0.17, 0.76);
-#> 2 sigma_raw ~ student_t(3, 0, 2.5); sigma_raw ~ exponential(0.7);
-#> new_lowerbound new_upperbound
-#> 1 NA NA
-#> 2 NA NA
get_mvgam_priors(count ~ s(year_fac, bs = 're') - 1,
+ family = poisson(),
+ data = model_data)
+#> param_name param_length param_info
+#> 1 vector[1] mu_raw; 1 s(year_fac) pop mean
+#> 2 vector<lower=0>[1] sigma_raw; 1 s(year_fac) pop sd
+#> prior example_change
+#> 1 mu_raw ~ std_normal(); mu_raw ~ normal(-0.7, 0.81);
+#> 2 sigma_raw ~ student_t(3, 0, 2.5); sigma_raw ~ exponential(0.85);
+#> new_lowerbound new_upperbound
+#> 1 NA NA
+#> 2 NA NA
See examples in ?get_mvgam_priors
to find out different
ways that priors can be altered. Once the model has finished, the first
step is to inspect the summary
to ensure no major
diagnostic warnings have been produced and to quickly summarise
posterior distributions for key parameters
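The call that produced model1 is not shown in this hunk, but a sketch consistent with the formula, family and data reported in the summary below (leaving the sampler settings at their defaults) would be:
model1 <- mvgam(count ~ s(year_fac, bs = 're') - 1,
                family = poisson(),
                data = model_data)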
summary(model1)
-#> GAM formula:
-#> count ~ s(year_fac, bs = "re") - 1
-#>
-#> Family:
-#> poisson
-#>
-#> Link function:
-#> log
-#>
-#> Trend model:
-#> None
-#>
-#> N series:
-#> 1
-#>
-#> N timepoints:
-#> 199
-#>
-#> Status:
-#> Fitted using Stan
-#> 4 chains, each with iter = 1000; warmup = 500; thin = 1
-#> Total post-warmup draws = 2000
-#>
-#>
-#> GAM coefficient (beta) estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> s(year_fac).1 1.80 2.1 2.3 1.00 2663
-#> s(year_fac).2 2.50 2.7 2.8 1.00 2468
-#> s(year_fac).3 3.00 3.1 3.2 1.00 3105
-#> s(year_fac).4 3.10 3.3 3.4 1.00 2822
-#> s(year_fac).5 1.90 2.1 2.3 1.00 3348
-#> s(year_fac).6 1.50 1.8 2.0 1.00 2859
-#> s(year_fac).7 1.80 2.0 2.3 1.00 2995
-#> s(year_fac).8 2.80 3.0 3.1 1.00 3126
-#> s(year_fac).9 3.10 3.3 3.4 1.00 2816
-#> s(year_fac).10 2.60 2.8 2.9 1.00 2289
-#> s(year_fac).11 3.00 3.1 3.2 1.00 2725
-#> s(year_fac).12 3.10 3.2 3.3 1.00 2581
-#> s(year_fac).13 2.00 2.2 2.5 1.00 2885
-#> s(year_fac).14 2.50 2.6 2.8 1.00 2749
-#> s(year_fac).15 1.90 2.2 2.4 1.00 2943
-#> s(year_fac).16 1.90 2.1 2.3 1.00 2991
-#> s(year_fac).17 -0.33 1.1 1.9 1.01 356
-#>
-#> GAM group-level estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> mean(s(year_fac)) 2.00 2.40 2.7 1.01 193
-#> sd(s(year_fac)) 0.44 0.67 1.1 1.02 172
-#>
-#> Approximate significance of GAM smooths:
-#> edf Ref.df Chi.sq p-value
-#> s(year_fac) 13.8 17 23477 <2e-16 ***
-#> ---
-#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
-#>
-#> Stan MCMC diagnostics:
-#> n_eff / iter looks reasonable for all parameters
-#> Rhat looks reasonable for all parameters
-#> 0 of 2000 iterations ended with a divergence (0%)
-#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
-#> E-FMI indicated no pathological behavior
-#>
-#> Samples were drawn using NUTS(diag_e) at Tue Apr 16 12:59:57 PM 2024.
-#> For each parameter, n_eff is a crude measure of effective sample size,
-#> and Rhat is the potential scale reduction factor on split MCMC chains
-#> (at convergence, Rhat = 1)
summary(model1)
+#> GAM formula:
+#> count ~ s(year_fac, bs = "re") - 1
+#> <environment: 0x0000024869b48078>
+#>
+#> Family:
+#> poisson
+#>
+#> Link function:
+#> log
+#>
+#> Trend model:
+#> None
+#>
+#> N series:
+#> 1
+#>
+#> N timepoints:
+#> 199
+#>
+#> Status:
+#> Fitted using Stan
+#> 4 chains, each with iter = 1000; warmup = 500; thin = 1
+#> Total post-warmup draws = 2000
+#>
+#>
+#> GAM coefficient (beta) estimates:
+#> 2.5% 50% 97.5% Rhat n_eff
+#> s(year_fac).1 1.80 2.0 2.3 1.00 2660
+#> s(year_fac).2 2.50 2.7 2.9 1.00 2901
+#> s(year_fac).3 3.00 3.1 3.2 1.00 3803
+#> s(year_fac).4 3.10 3.3 3.4 1.00 2872
+#> s(year_fac).5 1.90 2.1 2.3 1.00 3127
+#> s(year_fac).6 1.50 1.8 2.0 1.00 2977
+#> s(year_fac).7 1.80 2.0 2.3 1.00 2586
+#> s(year_fac).8 2.80 3.0 3.1 1.00 3608
+#> s(year_fac).9 3.10 3.3 3.4 1.00 2846
+#> s(year_fac).10 2.60 2.8 2.9 1.00 2911
+#> s(year_fac).11 3.00 3.1 3.2 1.00 3026
+#> s(year_fac).12 3.10 3.2 3.3 1.00 2736
+#> s(year_fac).13 2.00 2.2 2.5 1.00 2887
+#> s(year_fac).14 2.50 2.6 2.8 1.00 3026
+#> s(year_fac).15 1.90 2.2 2.4 1.00 2268
+#> s(year_fac).16 1.90 2.1 2.3 1.00 2649
+#> s(year_fac).17 -0.26 1.1 1.9 1.01 384
+#>
+#> GAM group-level estimates:
+#> 2.5% 50% 97.5% Rhat n_eff
+#> mean(s(year_fac)) 2.00 2.40 2.8 1.01 209
+#> sd(s(year_fac)) 0.46 0.68 1.1 1.02 193
+#>
+#> Approximate significance of GAM smooths:
+#> edf Ref.df Chi.sq p-value
+#> s(year_fac) 13.7 17 1637 <2e-16 ***
+#> ---
+#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
+#>
+#> Stan MCMC diagnostics:
+#> n_eff / iter looks reasonable for all parameters
+#> Rhat looks reasonable for all parameters
+#> 0 of 2000 iterations ended with a divergence (0%)
+#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
+#> E-FMI indicated no pathological behavior
+#>
+#> Samples were drawn using NUTS(diag_e) at Wed Sep 04 11:30:51 AM 2024.
+#> For each parameter, n_eff is a crude measure of effective sample size,
+#> and Rhat is the potential scale reduction factor on split MCMC chains
+#> (at convergence, Rhat = 1)
The diagnostic messages at the bottom of the summary show that the HMC sampler did not encounter any problems or difficult posterior spaces. This is a good sign.
@@ -1051,85 +1042,85 @@
Posterior distributions for model parameters can be extracted into a data.frame using:
-beta_post <- as.data.frame(model1, variable = 'betas')
-dplyr::glimpse(beta_post)
-#> Rows: 2,000
-#> Columns: 17
-#> $ `s(year_fac).1` <dbl> 2.17023, 2.08413, 1.99815, 2.17572, 2.11308, 2.03050,…
-#> $ `s(year_fac).2` <dbl> 2.70488, 2.69887, 2.65551, 2.79651, 2.76044, 2.75108,…
-#> $ `s(year_fac).3` <dbl> 3.08617, 3.13429, 3.04575, 3.14824, 3.10917, 3.09809,…
-#> $ `s(year_fac).4` <dbl> 3.29529, 3.21044, 3.22018, 3.26644, 3.29880, 3.25638,…
-#> $ `s(year_fac).5` <dbl> 2.11053, 2.14516, 2.13959, 2.05244, 2.26847, 2.20820,…
-#> $ `s(year_fac).6` <dbl> 1.80418, 1.83343, 1.75987, 1.76972, 1.64782, 1.70765,…
-#> $ `s(year_fac).7` <dbl> 1.99033, 1.95772, 1.98093, 2.01777, 2.04849, 1.97815,…
-#> $ `s(year_fac).8` <dbl> 3.01204, 2.91291, 3.14762, 2.83082, 2.90250, 3.04050,…
-#> $ `s(year_fac).9` <dbl> 3.22248, 3.20205, 3.30373, 3.23181, 3.24927, 3.25232,…
-#> $ `s(year_fac).10` <dbl> 2.71922, 2.62225, 2.82574, 2.65027, 2.69077, 2.75249,…
-#> $ `s(year_fac).11` <dbl> 3.10525, 3.03951, 3.12914, 3.03849, 3.01198, 3.14391,…
-#> $ `s(year_fac).12` <dbl> 3.20887, 3.23337, 3.24350, 3.16821, 3.23516, 3.18216,…
-#> $ `s(year_fac).13` <dbl> 2.18530, 2.15358, 2.39908, 2.21862, 2.14648, 2.17067,…
-#> $ `s(year_fac).14` <dbl> 2.66153, 2.67202, 2.64594, 2.57457, 2.38109, 2.44175,…
-#> $ `s(year_fac).15` <dbl> 2.24898, 2.24912, 2.03587, 2.33842, 2.27868, 2.24643,…
-#> $ `s(year_fac).16` <dbl> 2.20947, 2.21717, 2.03610, 2.17374, 2.16442, 2.14900,…
-#> $ `s(year_fac).17` <dbl> 0.1428430, 0.8005170, -0.0136294, 0.6880930, 0.192034…
beta_post <- as.data.frame(model1, variable = 'betas')
+dplyr::glimpse(beta_post)
+#> Rows: 2,000
+#> Columns: 17
+#> $ `s(year_fac).1` <dbl> 2.21482, 1.93877, 2.29682, 1.74808, 1.95131, 2.24533,…
+#> $ `s(year_fac).2` <dbl> 2.76723, 2.60145, 2.79886, 2.65931, 2.71225, 2.68233,…
+#> $ `s(year_fac).3` <dbl> 2.94903, 3.09928, 3.09161, 3.10005, 3.18758, 3.16841,…
+#> $ `s(year_fac).4` <dbl> 3.26861, 3.25354, 3.33401, 3.27266, 3.32254, 3.28928,…
+#> $ `s(year_fac).5` <dbl> 2.24324, 2.07771, 2.09387, 2.20329, 2.09680, 2.22082,…
+#> $ `s(year_fac).6` <dbl> 1.84890, 1.72516, 1.75302, 1.59202, 1.89224, 1.89458,…
+#> $ `s(year_fac).7` <dbl> 2.06401, 2.20919, 1.88047, 2.09898, 2.12196, 2.01012,…
+#> $ `s(year_fac).8` <dbl> 3.02799, 2.87193, 3.06112, 3.03637, 2.92891, 2.91725,…
+#> $ `s(year_fac).9` <dbl> 3.21091, 3.28829, 3.20874, 3.15167, 3.27000, 3.29526,…
+#> $ `s(year_fac).10` <dbl> 2.81849, 2.68918, 2.81146, 2.74898, 2.65924, 2.81729,…
+#> $ `s(year_fac).11` <dbl> 3.06540, 3.01926, 3.12711, 3.12696, 3.06283, 3.10466,…
+#> $ `s(year_fac).12` <dbl> 3.16679, 3.14866, 3.20039, 3.17305, 3.27057, 3.16223,…
+#> $ `s(year_fac).13` <dbl> 2.12371, 2.38737, 2.07735, 2.22997, 2.08580, 2.14124,…
+#> $ `s(year_fac).14` <dbl> 2.69768, 2.57665, 2.66538, 2.51016, 2.64938, 2.42338,…
+#> $ `s(year_fac).15` <dbl> 2.21619, 2.14404, 2.24750, 2.14194, 2.10472, 2.24652,…
+#> $ `s(year_fac).16` <dbl> 2.15356, 2.06512, 2.02614, 2.14004, 2.21221, 2.07094,…
+#> $ `s(year_fac).17` <dbl> -0.0533815, 0.4696020, -0.2424530, 1.2282600, 1.16988…
With any model fitted in mvgam
, the underlying
Stan
code can be viewed using the code
function:
code(model1)
-#> // Stan model code generated by package mvgam
-#> data {
-#> int<lower=0> total_obs; // total number of observations
-#> int<lower=0> n; // number of timepoints per series
-#> int<lower=0> n_series; // number of series
-#> int<lower=0> num_basis; // total number of basis coefficients
-#> matrix[total_obs, num_basis] X; // mgcv GAM design matrix
-#> array[n, n_series] int<lower=0> ytimes; // time-ordered matrix (which col in X belongs to each [time, series] observation?)
-#> int<lower=0> n_nonmissing; // number of nonmissing observations
-#> array[n_nonmissing] int<lower=0> flat_ys; // flattened nonmissing observations
-#> matrix[n_nonmissing, num_basis] flat_xs; // X values for nonmissing observations
-#> array[n_nonmissing] int<lower=0> obs_ind; // indices of nonmissing observations
-#> }
-#> parameters {
-#> // raw basis coefficients
-#> vector[num_basis] b_raw;
-#>
-#> // random effect variances
-#> vector<lower=0>[1] sigma_raw;
-#>
-#> // random effect means
-#> vector[1] mu_raw;
-#> }
-#> transformed parameters {
-#> // basis coefficients
-#> vector[num_basis] b;
-#> b[1 : 17] = mu_raw[1] + b_raw[1 : 17] * sigma_raw[1];
-#> }
-#> model {
-#> // prior for random effect population variances
-#> sigma_raw ~ student_t(3, 0, 2.5);
-#>
-#> // prior for random effect population means
-#> mu_raw ~ std_normal();
-#>
-#> // prior (non-centred) for s(year_fac)...
-#> b_raw[1 : 17] ~ std_normal();
-#> {
-#> // likelihood functions
-#> flat_ys ~ poisson_log_glm(flat_xs, 0.0, b);
-#> }
-#> }
-#> generated quantities {
-#> vector[total_obs] eta;
-#> matrix[n, n_series] mus;
-#> array[n, n_series] int ypred;
-#>
-#> // posterior predictions
-#> eta = X * b;
-#> for (s in 1 : n_series) {
-#> mus[1 : n, s] = eta[ytimes[1 : n, s]];
-#> ypred[1 : n, s] = poisson_log_rng(mus[1 : n, s]);
-#> }
-#> }
code(model1)
+#> // Stan model code generated by package mvgam
+#> data {
+#> int<lower=0> total_obs; // total number of observations
+#> int<lower=0> n; // number of timepoints per series
+#> int<lower=0> n_series; // number of series
+#> int<lower=0> num_basis; // total number of basis coefficients
+#> matrix[total_obs, num_basis] X; // mgcv GAM design matrix
+#> array[n, n_series] int<lower=0> ytimes; // time-ordered matrix (which col in X belongs to each [time, series] observation?)
+#> int<lower=0> n_nonmissing; // number of nonmissing observations
+#> array[n_nonmissing] int<lower=0> flat_ys; // flattened nonmissing observations
+#> matrix[n_nonmissing, num_basis] flat_xs; // X values for nonmissing observations
+#> array[n_nonmissing] int<lower=0> obs_ind; // indices of nonmissing observations
+#> }
+#> parameters {
+#> // raw basis coefficients
+#> vector[num_basis] b_raw;
+#>
+#> // random effect variances
+#> vector<lower=0>[1] sigma_raw;
+#>
+#> // random effect means
+#> vector[1] mu_raw;
+#> }
+#> transformed parameters {
+#> // basis coefficients
+#> vector[num_basis] b;
+#> b[1 : 17] = mu_raw[1] + b_raw[1 : 17] * sigma_raw[1];
+#> }
+#> model {
+#> // prior for random effect population variances
+#> sigma_raw ~ student_t(3, 0, 2.5);
+#>
+#> // prior for random effect population means
+#> mu_raw ~ std_normal();
+#>
+#> // prior (non-centred) for s(year_fac)...
+#> b_raw[1 : 17] ~ std_normal();
+#> {
+#> // likelihood functions
+#> flat_ys ~ poisson_log_glm(flat_xs, 0.0, b);
+#> }
+#> }
+#> generated quantities {
+#> vector[total_obs] eta;
+#> matrix[n, n_series] mus;
+#> array[n, n_series] int ypred;
+#>
+#> // posterior predictions
+#> eta = X * b;
+#> for (s in 1 : n_series) {
+#> mus[1 : n, s] = eta[ytimes[1 : n, s]];
+#> ypred[1 : n, s] = poisson_log_rng(mus[1 : n, s]);
+#> }
+#> }
Now for interrogating the model. We can get some sense of the
@@ -1139,8 +1130,8 @@
variation in the yearly intercepts by plotting them with plot.mvgam using type = 're'. See ?plot.mvgam for more details about the types of plots that can be produced from fitted mvgam objects
-
-
+
+
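A minimal sketch of the call behind these random effect plots (see ?plot.mvgam for the available type options):
plot(model1, type = 're')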
bayesplot support
Fitted mvgam objects can also be used with the bayesplot package to visualize posterior distributions and diagnostics (see ?mvgam::mcmc_plot.mvgam for details):
-
-
+
+
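For example, posterior densities or traceplots for the beta coefficients could be drawn with something like the following sketch (the type values follow the bayesplot naming scheme; check ?mvgam::mcmc_plot.mvgam for the exact arguments):
mcmc_plot(model1, variable = 'betas', type = 'dens')
mcmc_plot(model1, variable = 'betas', type = 'trace')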
We can also use the wide range of posterior checking functions
available in bayesplot
(see
?mvgam::ppc_check.mvgam
for details):
pp_check(object = model1)
-#> Using 10 posterior draws for ppc type 'dens_overlay' by default.
-#> Warning in pp_check.mvgam(object = model1): NA responses are not shown in
-#> 'pp_check'.
pp_check(model1, type = "rootogram")
-#> Using all posterior draws for ppc type 'rootogram' by default.
-#> Warning in pp_check.mvgam(model1, type = "rootogram"): NA responses are not
-#> shown in 'pp_check'.
There is clearly some variation in these yearly intercept estimates.
But how do these translate into time-varying predictions? To understand
this, we can plot posterior hindcasts from this model for the training
period using plot.mvgam
with
type = 'forecast'
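A sketch of that call:
plot(model1, type = 'forecast')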
If you wish to extract these hindcasts for other downstream analyses,
the hindcast
function can be used. This will return a list
object of class mvgam_forecast
. In the
hindcasts
slot, a matrix of posterior retrodictions will be
returned for each series in the data (only one series in our
example):
hc <- hindcast(model1)
-str(hc)
-#> List of 15
-#> $ call :Class 'formula' language count ~ s(year_fac, bs = "re") - 1
-#> .. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
-#> $ trend_call : NULL
-#> $ family : chr "poisson"
-#> $ trend_model : chr "None"
-#> $ drift : logi FALSE
-#> $ use_lv : logi FALSE
-#> $ fit_engine : chr "stan"
-#> $ type : chr "response"
-#> $ series_names : chr "PP"
-#> $ train_observations:List of 1
-#> ..$ PP: int [1:199] 0 1 2 NA 10 NA NA 16 18 12 ...
-#> $ train_times : num [1:199] 1 2 3 4 5 6 7 8 9 10 ...
-#> $ test_observations : NULL
-#> $ test_times : NULL
-#> $ hindcasts :List of 1
-#> ..$ PP: num [1:2000, 1:199] 9 6 10 6 12 8 10 5 8 6 ...
-#> .. ..- attr(*, "dimnames")=List of 2
-#> .. .. ..$ : NULL
-#> .. .. ..$ : chr [1:199] "ypred[1,1]" "ypred[2,1]" "ypred[3,1]" "ypred[4,1]" ...
-#> $ forecasts : NULL
-#> - attr(*, "class")= chr "mvgam_forecast"
hc <- hindcast(model1)
+str(hc)
+#> List of 15
+#> $ call :Class 'formula' language count ~ s(year_fac, bs = "re") - 1
+#> .. ..- attr(*, ".Environment")=<environment: 0x0000024869b48078>
+#> $ trend_call : NULL
+#> $ family : chr "poisson"
+#> $ trend_model : chr "None"
+#> $ drift : logi FALSE
+#> $ use_lv : logi FALSE
+#> $ fit_engine : chr "stan"
+#> $ type : chr "response"
+#> $ series_names : chr "PP"
+#> $ train_observations:List of 1
+#> ..$ PP: int [1:199] 0 1 2 NA 10 NA NA 16 18 12 ...
+#> $ train_times : num [1:199] 1 2 3 4 5 6 7 8 9 10 ...
+#> $ test_observations : NULL
+#> $ test_times : NULL
+#> $ hindcasts :List of 1
+#> ..$ PP: num [1:2000, 1:199] 11 3 14 9 10 9 15 9 11 10 ...
+#> .. ..- attr(*, "dimnames")=List of 2
+#> .. .. ..$ : NULL
+#> .. .. ..$ : chr [1:199] "ypred[1,1]" "ypred[2,1]" "ypred[3,1]" "ypred[4,1]" ...
+#> $ forecasts : NULL
+#> - attr(*, "class")= chr "mvgam_forecast"
You can also extract these hindcasts on the linear predictor scale, which in this case is the log scale (our Poisson GLM used a log link function). Sometimes this can be useful for asking more targeted questions about drivers of variation:
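One way to do this, assuming hindcast accepts the same type argument used by other mvgam prediction functions, is:
hc_link <- hindcast(model1, type = 'link')
str(hc_link$hindcasts)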
-Objects of class mvgam_forecast have an associated plot function as well:
-This plot can look a bit confusing as it seems like there is linear interpolation from the end of one year to the start of the next. But this is just due to the way the lines are automatically connected in base plots
+In any regression analysis, a key question is whether the residuals
show any patterns that can be indicative of un-modelled sources of
variation. For GLMs, we can use a modified residual called the Dunn-Smyth,
or randomized quantile, residual. Inspect Dunn-Smyth residuals from
the model using plot.mvgam
with
type = 'residuals'
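For example:
plot(model1, type = 'residuals')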
To evaluate forecasts we can split the data into training and testing folds and supply the test fold to mvgam as newdata. For splitting, we will make use of the filter function from dplyr
-model_data %>%
- dplyr::filter(time <= 160) -> data_train
-model_data %>%
- dplyr::filter(time > 160) -> data_test
model1b <- mvgam(count ~ s(year_fac, bs = 're') - 1,
- family = poisson(),
- data = data_train,
- newdata = data_test)
-Repeating the plots above gives insight into how the model’s hierarchical prior formulation provides all the structure needed to sample values for un-modelled years
-#> Out of sample DRPS:
-#> [1] 182.6177
-We can also view the test data in the forecast plot to see that the
+
model_data %>%
+ dplyr::filter(time <= 160) -> data_train
+model_data %>%
+ dplyr::filter(time > 160) -> data_test
model1b <- mvgam(count ~ s(year_fac, bs = 're') - 1,
+ family = poisson(),
+ data = data_train,
+ newdata = data_test)
We can view the test data in the forecast plot to see that the predictions do not capture the temporal variation in the test set
-#> Out of sample DRPS:
-#> [1] 182.6177
+
+
As with the hindcast
function, we can use the
forecast
function to automatically extract the posterior
distributions for these predictions. This also returns an object of
class mvgam_forecast
, but now it will contain both the
hindcasts and forecasts for each series in the data:
fc <- forecast(model1b)
-str(fc)
-#> List of 16
-#> $ call :Class 'formula' language count ~ s(year_fac, bs = "re") - 1
-#> .. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
-#> $ trend_call : NULL
-#> $ family : chr "poisson"
-#> $ family_pars : NULL
-#> $ trend_model : chr "None"
-#> $ drift : logi FALSE
-#> $ use_lv : logi FALSE
-#> $ fit_engine : chr "stan"
-#> $ type : chr "response"
-#> $ series_names : Factor w/ 1 level "PP": 1
-#> $ train_observations:List of 1
-#> ..$ PP: int [1:160] 0 1 2 NA 10 NA NA 16 18 12 ...
-#> $ train_times : num [1:160] 1 2 3 4 5 6 7 8 9 10 ...
-#> $ test_observations :List of 1
-#> ..$ PP: int [1:39] NA 0 0 10 3 14 18 NA 28 46 ...
-#> $ test_times : num [1:39] 161 162 163 164 165 166 167 168 169 170 ...
-#> $ hindcasts :List of 1
-#> ..$ PP: num [1:2000, 1:160] 10 7 8 11 6 11 9 11 5 2 ...
-#> .. ..- attr(*, "dimnames")=List of 2
-#> .. .. ..$ : NULL
-#> .. .. ..$ : chr [1:160] "ypred[1,1]" "ypred[2,1]" "ypred[3,1]" "ypred[4,1]" ...
-#> $ forecasts :List of 1
-#> ..$ PP: num [1:2000, 1:39] 5 12 10 7 7 8 11 14 8 12 ...
-#> .. ..- attr(*, "dimnames")=List of 2
-#> .. .. ..$ : NULL
-#> .. .. ..$ : chr [1:39] "ypred[161,1]" "ypred[162,1]" "ypred[163,1]" "ypred[164,1]" ...
-#> - attr(*, "class")= chr "mvgam_forecast"
fc <- forecast(model1b)
+str(fc)
+#> List of 16
+#> $ call :Class 'formula' language count ~ s(year_fac, bs = "re") - 1
+#> .. ..- attr(*, ".Environment")=<environment: 0x0000024869b48078>
+#> $ trend_call : NULL
+#> $ family : chr "poisson"
+#> $ family_pars : NULL
+#> $ trend_model : chr "None"
+#> $ drift : logi FALSE
+#> $ use_lv : logi FALSE
+#> $ fit_engine : chr "stan"
+#> $ type : chr "response"
+#> $ series_names : Factor w/ 1 level "PP": 1
+#> $ train_observations:List of 1
+#> ..$ PP: int [1:160] 0 1 2 NA 10 NA NA 16 18 12 ...
+#> $ train_times : num [1:160] 1 2 3 4 5 6 7 8 9 10 ...
+#> $ test_observations :List of 1
+#> ..$ PP: int [1:39] NA 0 0 10 3 14 18 NA 28 46 ...
+#> $ test_times : num [1:39] 161 162 163 164 165 166 167 168 169 170 ...
+#> $ hindcasts :List of 1
+#> ..$ PP: num [1:2000, 1:160] 12 7 15 10 5 13 13 8 3 11 ...
+#> .. ..- attr(*, "dimnames")=List of 2
+#> .. .. ..$ : NULL
+#> .. .. ..$ : chr [1:160] "ypred[1,1]" "ypred[2,1]" "ypred[3,1]" "ypred[4,1]" ...
+#> $ forecasts :List of 1
+#> ..$ PP: num [1:2000, 1:39] 7 6 8 8 10 14 13 5 10 7 ...
+#> .. ..- attr(*, "dimnames")=List of 2
+#> .. .. ..$ : NULL
+#> .. .. ..$ : chr [1:39] "ypred[161,1]" "ypred[162,1]" "ypred[163,1]" "ypred[164,1]" ...
+#> - attr(*, "class")= chr "mvgam_forecast"
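Because objects of class mvgam_forecast have an associated plot function, the forecast distribution can also be viewed directly with a quick call such as:
plot(fc)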
Next, update the model to include ndvi as a linear predictor:
-model2 <- mvgam(count ~ s(year_fac, bs = 're') +
- ndvi - 1,
- family = poisson(),
- data = data_train,
- newdata = data_test)
model2 <- mvgam(count ~ s(year_fac, bs = 're') +
+ ndvi - 1,
+ family = poisson(),
+ data = data_train,
+ newdata = data_test)
The model can be described mathematically as follows: \[\begin{align*}
\boldsymbol{count}_t & \sim \text{Poisson}(\lambda_t) \\
log(\lambda_t) & = \beta_{year[year_t]} + \beta_{ndvi} *
@@ -1319,145 +1283,140 @@ Where the \(\beta_{year}\) effects
are the same as before but we now have another predictor \((\beta_{ndvi})\) that applies to the ndvi value at each timepoint \(t\). Inspect the summary of this model
@@ -1467,25 +1426,13 @@ Now it is easier to get a sense of the nonlinear but positive
Adding predictors as “fixed” effects
summary(model2)
-#> GAM formula:
-#> count ~ ndvi + s(year_fac, bs = "re") - 1
-#>
-#> Family:
-#> poisson
-#>
-#> Link function:
-#> log
-#>
-#> Trend model:
-#> None
-#>
-#> N series:
-#> 1
-#>
-#> N timepoints:
-#> 160
-#>
-#> Status:
-#> Fitted using Stan
-#> 4 chains, each with iter = 1000; warmup = 500; thin = 1
-#> Total post-warmup draws = 2000
-#>
-#>
-#> GAM coefficient (beta) estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> ndvi 0.32 0.39 0.46 1 1696
-#> s(year_fac).1 1.10 1.40 1.70 1 2512
-#> s(year_fac).2 1.80 2.00 2.20 1 2210
-#> s(year_fac).3 2.20 2.40 2.60 1 2109
-#> s(year_fac).4 2.30 2.50 2.70 1 1780
-#> s(year_fac).5 1.20 1.40 1.60 1 2257
-#> s(year_fac).6 1.00 1.30 1.50 1 2827
-#> s(year_fac).7 1.10 1.40 1.70 1 2492
-#> s(year_fac).8 2.10 2.30 2.50 1 2188
-#> s(year_fac).9 2.70 2.90 3.00 1 2014
-#> s(year_fac).10 2.00 2.20 2.40 1 2090
-#> s(year_fac).11 2.30 2.40 2.60 1 1675
-#> s(year_fac).12 2.50 2.70 2.80 1 2108
-#> s(year_fac).13 1.40 1.60 1.80 1 2161
-#> s(year_fac).14 0.46 2.00 3.20 1 1849
-#> s(year_fac).15 0.53 2.00 3.30 1 1731
-#> s(year_fac).16 0.53 2.00 3.30 1 1859
-#> s(year_fac).17 0.59 1.90 3.20 1 1761
-#>
-#> GAM group-level estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> mean(s(year_fac)) 1.6 2.00 2.3 1.01 397
-#> sd(s(year_fac)) 0.4 0.59 1.0 1.01 395
-#>
-#> Approximate significance of GAM smooths:
-#> edf Ref.df Chi.sq p-value
-#> s(year_fac) 11.2 17 3096 <2e-16 ***
-#> ---
-#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
-#>
-#> Stan MCMC diagnostics:
-#> n_eff / iter looks reasonable for all parameters
-#> Rhat looks reasonable for all parameters
-#> 0 of 2000 iterations ended with a divergence (0%)
-#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
-#> E-FMI indicated no pathological behavior
-#>
-#> Samples were drawn using NUTS(diag_e) at Tue Apr 16 1:00:50 PM 2024.
-#> For each parameter, n_eff is a crude measure of effective sample size,
-#> and Rhat is the potential scale reduction factor on split MCMC chains
-#> (at convergence, Rhat = 1)
summary(model2)
+#> GAM formula:
+#> count ~ ndvi + s(year_fac, bs = "re") - 1
+#> <environment: 0x0000024869b48078>
+#>
+#> Family:
+#> poisson
+#>
+#> Link function:
+#> log
+#>
+#> Trend model:
+#> None
+#>
+#> N series:
+#> 1
+#>
+#> N timepoints:
+#> 199
+#>
+#> Status:
+#> Fitted using Stan
+#> 4 chains, each with iter = 1000; warmup = 500; thin = 1
+#> Total post-warmup draws = 2000
+#>
+#>
+#> GAM coefficient (beta) estimates:
+#> 2.5% 50% 97.5% Rhat n_eff
+#> ndvi 0.32 0.39 0.46 1 1835
+#> s(year_fac).1 1.10 1.40 1.70 1 2267
+#> s(year_fac).2 1.80 2.00 2.20 1 2518
+#> s(year_fac).3 2.20 2.40 2.60 1 2140
+#> s(year_fac).4 2.30 2.50 2.70 1 1978
+#> s(year_fac).5 1.20 1.40 1.60 1 2341
+#> s(year_fac).6 1.00 1.30 1.50 1 2318
+#> s(year_fac).7 1.20 1.40 1.70 1 2447
+#> s(year_fac).8 2.10 2.30 2.50 1 2317
+#> s(year_fac).9 2.70 2.90 3.00 1 1916
+#> s(year_fac).10 2.00 2.20 2.40 1 2791
+#> s(year_fac).11 2.30 2.40 2.60 1 2214
+#> s(year_fac).12 2.50 2.70 2.80 1 2010
+#> s(year_fac).13 1.40 1.60 1.80 1 2976
+#> s(year_fac).14 0.68 2.00 3.30 1 1581
+#> s(year_fac).15 0.69 2.00 3.30 1 1874
+#> s(year_fac).16 0.56 2.00 3.40 1 1442
+#> s(year_fac).17 0.60 2.00 3.30 1 1671
+#>
+#> GAM group-level estimates:
+#> 2.5% 50% 97.5% Rhat n_eff
+#> mean(s(year_fac)) 1.6 2.0 2.3 1.01 417
+#> sd(s(year_fac)) 0.4 0.6 1.0 1.01 417
+#>
+#> Approximate significance of GAM smooths:
+#> edf Ref.df Chi.sq p-value
+#> s(year_fac) 10.9 17 265 <2e-16 ***
+#> ---
+#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
+#>
+#> Stan MCMC diagnostics:
+#> n_eff / iter looks reasonable for all parameters
+#> Rhat looks reasonable for all parameters
+#> 0 of 2000 iterations ended with a divergence (0%)
+#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
+#> E-FMI indicated no pathological behavior
+#>
+#> Samples were drawn using NUTS(diag_e) at Wed Sep 04 11:32:01 AM 2024.
+#> For each parameter, n_eff is a crude measure of effective sample size,
+#> and Rhat is the potential scale reduction factor on split MCMC chains
+#> (at convergence, Rhat = 1)
Rather than printing the summary each time, we can also quickly look at the posterior empirical quantiles for the fixed effect of ndvi (and other linear predictor coefficients) using coef:
coef(model2)
-#> 2.5% 50% 97.5% Rhat n_eff
-#> ndvi 0.3198694 0.3899835 0.4571083 1 1696
-#> s(year_fac).1 1.1176373 1.4085900 1.6603838 1 2512
-#> s(year_fac).2 1.8008470 2.0005000 2.2003670 1 2210
-#> s(year_fac).3 2.1842727 2.3822950 2.5699363 1 2109
-#> s(year_fac).4 2.3267037 2.5022700 2.6847912 1 1780
-#> s(year_fac).5 1.1945853 1.4215950 1.6492038 1 2257
-#> s(year_fac).6 1.0332160 1.2743050 1.5091052 1 2827
-#> s(year_fac).7 1.1467567 1.4119100 1.6751850 1 2492
-#> s(year_fac).8 2.0710285 2.2713050 2.4596285 1 2188
-#> s(year_fac).9 2.7198967 2.8557300 2.9874662 1 2014
-#> s(year_fac).10 1.9798730 2.1799600 2.3932595 1 2090
-#> s(year_fac).11 2.2734940 2.4374700 2.6130482 1 1675
-#> s(year_fac).12 2.5421157 2.6935350 2.8431822 1 2108
-#> s(year_fac).13 1.3786087 1.6177850 1.8495872 1 2161
-#> s(year_fac).14 0.4621041 1.9744700 3.2480377 1 1849
-#> s(year_fac).15 0.5293684 2.0014200 3.2766722 1 1731
-#> s(year_fac).16 0.5285142 1.9786450 3.2859085 1 1859
-#> s(year_fac).17 0.5909969 1.9462850 3.2306940 1 1761
-Look at the estimated effect of ndvi using plot.mvgam with type = 'pterms'
-This plot indicates a positive linear effect of ndvi on log(counts). But it may be easier to visualise using a histogram, especially for parametric (linear) effects. This can be done by first extracting the posterior coefficients as we did in the first example:
-beta_post <- as.data.frame(model2, variable = 'betas')
-dplyr::glimpse(beta_post)
-#> Rows: 2,000
-#> Columns: 18
-#> $ ndvi <dbl> 0.330568, 0.398734, 0.357498, 0.484288, 0.380087, 0.3…
-#> $ `s(year_fac).1` <dbl> 1.55868, 1.27949, 1.24414, 1.02997, 1.64712, 1.07519,…
-#> $ `s(year_fac).2` <dbl> 1.98967, 2.00846, 2.07493, 1.84431, 2.01590, 2.16466,…
-#> $ `s(year_fac).3` <dbl> 2.41434, 2.16020, 2.67324, 2.33332, 2.32415, 2.45516,…
-#> $ `s(year_fac).4` <dbl> 2.62215, 2.53992, 2.50659, 2.23671, 2.56663, 2.40054,…
-#> $ `s(year_fac).5` <dbl> 1.37221, 1.44795, 1.53019, 1.27623, 1.50771, 1.49515,…
-#> $ `s(year_fac).6` <dbl> 1.323980, 1.220200, 1.165610, 1.271620, 1.193820, 1.3…
-#> $ `s(year_fac).7` <dbl> 1.52005, 1.30735, 1.42566, 1.13335, 1.61581, 1.31853,…
-#> $ `s(year_fac).8` <dbl> 2.40223, 2.20021, 2.44366, 2.17192, 2.20837, 2.33066,…
-#> $ `s(year_fac).9` <dbl> 2.91580, 2.90942, 2.87679, 2.64941, 2.85401, 2.78744,…
-#> $ `s(year_fac).10` <dbl> 2.46559, 2.01466, 2.08319, 2.01400, 2.22965, 2.26523,…
-#> $ `s(year_fac).11` <dbl> 2.52118, 2.45406, 2.46667, 2.20664, 2.42495, 2.46256,…
-#> $ `s(year_fac).12` <dbl> 2.72360, 2.63546, 2.86718, 2.59035, 2.76576, 2.56130,…
-#> $ `s(year_fac).13` <dbl> 1.67388, 1.50790, 1.52463, 1.39004, 1.72927, 1.61023,…
-#> $ `s(year_fac).14` <dbl> 2.583650, 2.034240, 1.819820, 1.579280, 2.426880, 1.8…
-#> $ `s(year_fac).15` <dbl> 2.57365, 2.28723, 1.67404, 1.46796, 2.49512, 2.71230,…
-#> $ `s(year_fac).16` <dbl> 1.801660, 2.185540, 1.756500, 2.098760, 2.270640, 1.8…
-#> $ `s(year_fac).17` <dbl> 0.886081, 3.409300, -0.371795, 2.494990, 1.822150, 2.…
coef(model2)
+#> 2.5% 50% 97.5% Rhat n_eff
+#> ndvi 0.3219980 0.3897445 0.4574249 1 1835
+#> s(year_fac).1 1.1251775 1.3947750 1.6786482 1 2267
+#> s(year_fac).2 1.8029418 2.0028550 2.1983995 1 2518
+#> s(year_fac).3 2.1828568 2.3833850 2.5667087 1 2140
+#> s(year_fac).4 2.3216057 2.5041200 2.6876810 1 1978
+#> s(year_fac).5 1.1947695 1.4253400 1.6404350 1 2341
+#> s(year_fac).6 1.0302375 1.2719450 1.5071408 1 2318
+#> s(year_fac).7 1.1527560 1.4237300 1.6631115 1 2447
+#> s(year_fac).8 2.1024232 2.2693150 2.4510250 1 2317
+#> s(year_fac).9 2.7252610 2.8546400 2.9843943 1 1916
+#> s(year_fac).10 1.9848580 2.1821250 2.3831660 1 2791
+#> s(year_fac).11 2.2655225 2.4353150 2.6026625 1 2214
+#> s(year_fac).12 2.5445065 2.6914450 2.8342325 1 2010
+#> s(year_fac).13 1.3758873 1.6138700 1.8464015 1 2976
+#> s(year_fac).14 0.6763885 2.0087850 3.2876045 1 1581
+#> s(year_fac).15 0.6927259 1.9904050 3.3384072 1 1874
+#> s(year_fac).16 0.5608287 1.9907700 3.3807122 1 1442
+#> s(year_fac).17 0.5989812 2.0276550 3.3404602 1 1671
+Look at the estimated effect of ndvi using a histogram. This can be done by first extracting the posterior coefficients:
+beta_post <- as.data.frame(model2, variable = 'betas')
+dplyr::glimpse(beta_post)
+#> Rows: 2,000
+#> Columns: 18
+#> $ ndvi <dbl> 0.356033, 0.419265, 0.401340, 0.408547, 0.365358, 0.4…
+#> $ `s(year_fac).1` <dbl> 1.35402, 1.49105, 1.26309, 1.22981, 1.55560, 1.46103,…
+#> $ `s(year_fac).2` <dbl> 2.08371, 2.03387, 1.81765, 1.82714, 2.20838, 2.12063,…
+#> $ `s(year_fac).3` <dbl> 2.43563, 2.31586, 2.36554, 2.40869, 2.38362, 2.24695,…
+#> $ `s(year_fac).4` <dbl> 2.66322, 2.44635, 2.48168, 2.50480, 2.57476, 2.42963,…
+#> $ `s(year_fac).5` <dbl> 1.40050, 1.33274, 1.38421, 1.36805, 1.40633, 1.34501,…
+#> $ `s(year_fac).6` <dbl> 1.431430, 1.341260, 1.332540, 1.300360, 1.243430, 1.2…
+#> $ `s(year_fac).7` <dbl> 1.46134, 1.25454, 1.42857, 1.41534, 1.40782, 1.40564,…
+#> $ `s(year_fac).8` <dbl> 2.38557, 2.27800, 2.25584, 2.26500, 2.31796, 2.20621,…
+#> $ `s(year_fac).9` <dbl> 2.80751, 2.83031, 2.73530, 2.78705, 2.84959, 2.77223,…
+#> $ `s(year_fac).10` <dbl> 2.09385, 2.08026, 2.22424, 2.23312, 2.18318, 2.09513,…
+#> $ `s(year_fac).11` <dbl> 2.41232, 2.36077, 2.43681, 2.47770, 2.51130, 2.44691,…
+#> $ `s(year_fac).12` <dbl> 2.74478, 2.67146, 2.68431, 2.72280, 2.80983, 2.60829,…
+#> $ `s(year_fac).13` <dbl> 1.63171, 1.55937, 1.71905, 1.70544, 1.52311, 1.48743,…
+#> $ `s(year_fac).14` <dbl> 2.628480, 1.679080, 1.817260, 1.805210, 1.732590, 2.2…
+#> $ `s(year_fac).15` <dbl> 2.5601600, 1.3652300, 1.7714600, 1.7648100, 2.4456900…
+#> $ `s(year_fac).16` <dbl> 1.345990, 2.422970, 1.949050, 1.983840, 2.160030, 1.8…
+#> $ `s(year_fac).17` <dbl> 2.206790, 1.674940, 1.699170, 1.657840, 2.982300, 1.1…
The posterior distribution for the effect of ndvi is stored in the ndvi column. A quick histogram confirms our inference that log(counts) respond positively to increases in ndvi:
hist(beta_post$ndvi,
- xlim = c(-1 * max(abs(beta_post$ndvi)),
- max(abs(beta_post$ndvi))),
- col = 'darkred',
- border = 'white',
- xlab = expression(beta[NDVI]),
- ylab = '',
- yaxt = 'n',
- main = '',
- lwd = 2)
-abline(v = 0, lwd = 2.5)
hist(beta_post$ndvi,
+ xlim = c(-1 * max(abs(beta_post$ndvi)),
+ max(abs(beta_post$ndvi))),
+ col = 'darkred',
+ border = 'white',
+ xlab = expression(beta[NDVI]),
+ ylab = '',
+ yaxt = 'n',
+ main = '',
+ lwd = 2)
+abline(v = 0, lwd = 2.5)
marginaleffects support
Given our model used a nonlinear link function (log link in this case), the marginaleffects support in mvgam makes this relatively straightforward. Objects of class mvgam can be used with marginaleffects to inspect contrasts, scenario-based predictions, conditional and marginal effects, all on the
-outcome scale. Here we will use the plot_predictions function from marginaleffects to inspect the conditional effect of ndvi (use ?plot_predictions for guidance on how to modify these plots):
-plot_predictions(model2,
-                 condition = "ndvi",
-                 # include the observed count values
-                 # as points, and show rugs for the observed
-                 # ndvi and count values on the axes
-                 points = 0.5, rug = TRUE)
-Now it is easier to get a sense of the nonlinear but positive relationship estimated between ndvi and count.
-Like brms, mvgam has the simple conditional_effects function to make quick and informative plots for main effects. This will likely be your go-to function for quickly understanding patterns from fitted mvgam models
+outcome scale. Like brms, mvgam has the simple conditional_effects function to make quick and informative plots for main effects, which rely on marginaleffects support. This will likely be your go-to function for quickly understanding patterns from fitted mvgam models
+
+
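A minimal sketch of that workflow:
conditional_effects(model2)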
(note there is no -1 in the formula now):
-model3 <- mvgam(count ~ s(time, bs = 'bs', k = 15) +
- ndvi,
- family = poisson(),
- data = data_train,
- newdata = data_test)
model3 <- mvgam(count ~ s(time, bs = 'bs', k = 15) +
+ ndvi,
+ family = poisson(),
+ data = data_train,
+ newdata = data_test)
The model can be described mathematically as follows: \[\begin{align*}
\boldsymbol{count}_t & \sim \text{Poisson}(\lambda_t) \\
log(\lambda_t) & = f(\boldsymbol{time})_t + \beta_{ndvi} *
@@ -1543,178 +1490,149 @@ Adding predictors as smooths
wiggly spline. Note that sometimes there are multiple smoothing
penalties that contribute to the covariance matrix, but I am only
showing one here for simplicity. View the summary as before
summary(model3)
-#> GAM formula:
-#> count ~ s(time, bs = "bs", k = 15) + ndvi
-#>
-#> Family:
-#> poisson
-#>
-#> Link function:
-#> log
-#>
-#> Trend model:
-#> None
-#>
-#> N series:
-#> 1
-#>
-#> N timepoints:
-#> 160
-#>
-#> Status:
-#> Fitted using Stan
-#> 4 chains, each with iter = 1000; warmup = 500; thin = 1
-#> Total post-warmup draws = 2000
-#>
-#>
-#> GAM coefficient (beta) estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> (Intercept) 2.00 2.10 2.200 1.00 903
-#> ndvi 0.26 0.33 0.390 1.00 942
-#> s(time).1 -2.10 -1.10 0.029 1.01 484
-#> s(time).2 0.45 1.30 2.400 1.01 411
-#> s(time).3 -0.43 0.45 1.500 1.02 347
-#> s(time).4 1.60 2.50 3.600 1.02 342
-#> s(time).5 -1.10 -0.22 0.880 1.02 375
-#> s(time).6 -0.53 0.36 1.600 1.01 352
-#> s(time).7 -1.50 -0.51 0.560 1.01 406
-#> s(time).8 0.63 1.50 2.600 1.02 340
-#> s(time).9 1.20 2.10 3.200 1.02 346
-#> s(time).10 -0.34 0.54 1.600 1.01 364
-#> s(time).11 0.92 1.80 2.900 1.02 332
-#> s(time).12 0.67 1.50 2.500 1.01 398
-#> s(time).13 -1.20 -0.32 0.700 1.01 420
-#> s(time).14 -7.90 -4.20 -1.200 1.01 414
-#>
-#> Approximate significance of GAM smooths:
-#> edf Ref.df Chi.sq p-value
-#> s(time) 9.41 14 790 <2e-16 ***
-#> ---
-#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
-#>
-#> Stan MCMC diagnostics:
-#> n_eff / iter looks reasonable for all parameters
-#> Rhat looks reasonable for all parameters
-#> 0 of 2000 iterations ended with a divergence (0%)
-#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
-#> E-FMI indicated no pathological behavior
-#>
-#> Samples were drawn using NUTS(diag_e) at Tue Apr 16 1:01:29 PM 2024.
-#> For each parameter, n_eff is a crude measure of effective sample size,
-#> and Rhat is the potential scale reduction factor on split MCMC chains
-#> (at convergence, Rhat = 1)
summary(model3)
+#> GAM formula:
+#> count ~ s(time, bs = "bs", k = 15) + ndvi
+#> <environment: 0x0000024869b48078>
+#>
+#> Family:
+#> poisson
+#>
+#> Link function:
+#> log
+#>
+#> Trend model:
+#> None
+#>
+#> N series:
+#> 1
+#>
+#> N timepoints:
+#> 199
+#>
+#> Status:
+#> Fitted using Stan
+#> 4 chains, each with iter = 1000; warmup = 500; thin = 1
+#> Total post-warmup draws = 2000
+#>
+#>
+#> GAM coefficient (beta) estimates:
+#> 2.5% 50% 97.5% Rhat n_eff
+#> (Intercept) 2.00 2.10 2.2000 1.00 815
+#> ndvi 0.26 0.33 0.4000 1.01 856
+#> s(time).1 -2.10 -1.10 0.0073 1.01 513
+#> s(time).2 0.48 1.30 2.3000 1.01 433
+#> s(time).3 -0.49 0.43 1.5000 1.01 389
+#> s(time).4 1.60 2.40 3.5000 1.01 375
+#> s(time).5 -1.10 -0.23 0.8300 1.01 399
+#> s(time).6 -0.54 0.37 1.5000 1.01 415
+#> s(time).7 -1.50 -0.54 0.5000 1.01 423
+#> s(time).8 0.62 1.50 2.5000 1.01 378
+#> s(time).9 1.20 2.00 3.1000 1.01 379
+#> s(time).10 -0.31 0.52 1.6000 1.01 380
+#> s(time).11 0.80 1.70 2.8000 1.01 377
+#> s(time).12 0.71 1.50 2.4000 1.01 399
+#> s(time).13 -1.20 -0.35 0.6400 1.01 497
+#> s(time).14 -7.50 -4.10 -1.2000 1.01 490
+#>
+#> Approximate significance of GAM smooths:
+#> edf Ref.df Chi.sq p-value
+#> s(time) 9.83 14 64.4 <2e-16 ***
+#> ---
+#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
+#>
+#> Stan MCMC diagnostics:
+#> n_eff / iter looks reasonable for all parameters
+#> Rhat looks reasonable for all parameters
+#> 0 of 2000 iterations ended with a divergence (0%)
+#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
+#> E-FMI indicated no pathological behavior
+#>
+#> Samples were drawn using NUTS(diag_e) at Wed Sep 04 11:32:49 AM 2024.
+#> For each parameter, n_eff is a crude measure of effective sample size,
+#> and Rhat is the potential scale reduction factor on split MCMC chains
+#> (at convergence, Rhat = 1)
The summary above now contains posterior estimates for the smoothing parameters as well as the basis coefficients for the nonlinear effect of time.
-We can visualize the conditional time effect using the plot function with type = 'smooths':
-By default this plot shows posterior empirical quantiles, but it can also be helpful to view some realizations of the underlying function (here, each line is a different potential curve drawn from the posterior of all possible curves):
-A useful question when modelling using GAMs is to identify where the function is changing most rapidly. To address this, we can plot estimated 1st derivatives of the spline:
-Here, values above >0 indicate the function was increasing at that time point, while values <0 indicate the function was declining. The most rapid declines appear to have been happening around timepoints 50 and again toward the end of the training period, for example.
-Use conditional_effects again for useful plots on the outcome scale:
-Or on the link scale:
+We can visualize conditional_effects as before:
+
+
Inspect the underlying Stan
code to gain some idea of
how the spline is being penalized:
code(model3)
-#> // Stan model code generated by package mvgam
-#> data {
-#> int<lower=0> total_obs; // total number of observations
-#> int<lower=0> n; // number of timepoints per series
-#> int<lower=0> n_sp; // number of smoothing parameters
-#> int<lower=0> n_series; // number of series
-#> int<lower=0> num_basis; // total number of basis coefficients
-#> vector[num_basis] zero; // prior locations for basis coefficients
-#> matrix[total_obs, num_basis] X; // mgcv GAM design matrix
-#> array[n, n_series] int<lower=0> ytimes; // time-ordered matrix (which col in X belongs to each [time, series] observation?)
-#> matrix[14, 28] S1; // mgcv smooth penalty matrix S1
-#> int<lower=0> n_nonmissing; // number of nonmissing observations
-#> array[n_nonmissing] int<lower=0> flat_ys; // flattened nonmissing observations
-#> matrix[n_nonmissing, num_basis] flat_xs; // X values for nonmissing observations
-#> array[n_nonmissing] int<lower=0> obs_ind; // indices of nonmissing observations
-#> }
-#> parameters {
-#> // raw basis coefficients
-#> vector[num_basis] b_raw;
-#>
-#> // smoothing parameters
-#> vector<lower=0>[n_sp] lambda;
-#> }
-#> transformed parameters {
-#> // basis coefficients
-#> vector[num_basis] b;
-#> b[1 : num_basis] = b_raw[1 : num_basis];
-#> }
-#> model {
-#> // prior for (Intercept)...
-#> b_raw[1] ~ student_t(3, 2.6, 2.5);
-#>
-#> // prior for ndvi...
-#> b_raw[2] ~ student_t(3, 0, 2);
-#>
-#> // prior for s(time)...
-#> b_raw[3 : 16] ~ multi_normal_prec(zero[3 : 16],
-#> S1[1 : 14, 1 : 14] * lambda[1]
-#> + S1[1 : 14, 15 : 28] * lambda[2]);
-#>
-#> // priors for smoothing parameters
-#> lambda ~ normal(5, 30);
-#> {
-#> // likelihood functions
-#> flat_ys ~ poisson_log_glm(flat_xs, 0.0, b);
-#> }
-#> }
-#> generated quantities {
-#> vector[total_obs] eta;
-#> matrix[n, n_series] mus;
-#> vector[n_sp] rho;
-#> array[n, n_series] int ypred;
-#> rho = log(lambda);
-#>
-#> // posterior predictions
-#> eta = X * b;
-#> for (s in 1 : n_series) {
-#> mus[1 : n, s] = eta[ytimes[1 : n, s]];
-#> ypred[1 : n, s] = poisson_log_rng(mus[1 : n, s]);
-#> }
-#> }
code(model3)
+#> // Stan model code generated by package mvgam
+#> data {
+#> int<lower=0> total_obs; // total number of observations
+#> int<lower=0> n; // number of timepoints per series
+#> int<lower=0> n_sp; // number of smoothing parameters
+#> int<lower=0> n_series; // number of series
+#> int<lower=0> num_basis; // total number of basis coefficients
+#> vector[num_basis] zero; // prior locations for basis coefficients
+#> matrix[total_obs, num_basis] X; // mgcv GAM design matrix
+#> array[n, n_series] int<lower=0> ytimes; // time-ordered matrix (which col in X belongs to each [time, series] observation?)
+#> matrix[14, 28] S1; // mgcv smooth penalty matrix S1
+#> int<lower=0> n_nonmissing; // number of nonmissing observations
+#> array[n_nonmissing] int<lower=0> flat_ys; // flattened nonmissing observations
+#> matrix[n_nonmissing, num_basis] flat_xs; // X values for nonmissing observations
+#> array[n_nonmissing] int<lower=0> obs_ind; // indices of nonmissing observations
+#> }
+#> parameters {
+#> // raw basis coefficients
+#> vector[num_basis] b_raw;
+#>
+#> // smoothing parameters
+#> vector<lower=0>[n_sp] lambda;
+#> }
+#> transformed parameters {
+#> // basis coefficients
+#> vector[num_basis] b;
+#> b[1 : num_basis] = b_raw[1 : num_basis];
+#> }
+#> model {
+#> // prior for (Intercept)...
+#> b_raw[1] ~ student_t(3, 2.6, 2.5);
+#>
+#> // prior for ndvi...
+#> b_raw[2] ~ student_t(3, 0, 2);
+#>
+#> // prior for s(time)...
+#> b_raw[3 : 16] ~ multi_normal_prec(zero[3 : 16],
+#> S1[1 : 14, 1 : 14] * lambda[1]
+#> + S1[1 : 14, 15 : 28] * lambda[2]);
+#>
+#> // priors for smoothing parameters
+#> lambda ~ normal(5, 30);
+#> {
+#> // likelihood functions
+#> flat_ys ~ poisson_log_glm(flat_xs, 0.0, b);
+#> }
+#> }
+#> generated quantities {
+#> vector[total_obs] eta;
+#> matrix[n, n_series] mus;
+#> vector[n_sp] rho;
+#> array[n, n_series] int ypred;
+#> rho = log(lambda);
+#>
+#> // posterior predictions
+#> eta = X * b;
+#> for (s in 1 : n_series) {
+#> mus[1 : n, s] = eta[ytimes[1 : n, s]];
+#> ypred[1 : n, s] = poisson_log_rng(mus[1 : n, s]);
+#> }
+#> }
The line below // prior for s(time)...
shows how the
spline basis coefficients are drawn from a zero-centred multivariate
normal distribution. The precision matrix \(S\) is penalized by two different smoothing
parameters (the \(\lambda\)’s) to
enforce smoothness and reduce overfitting
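Written as an equation, this prior is roughly (with \(S_1\) and \(S_2\) denoting the two penalty sub-matrices stored side by side in the S1 data block): \[\boldsymbol{\beta}_{s(time)} \sim \text{MVNormal}\left(\boldsymbol{0},\ (\lambda_1 S_1 + \lambda_2 S_2)^{-1}\right)\]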
mvgam
Forecasts from the above model are not ideal:
-#> Out of sample DRPS:
-#> [1] 288.3844
+
+
Why is this happening? The forecasts are driven almost entirely by variation in the temporal spline, which is extrapolating linearly forever beyond the edge of the training data. Any slight
@@ -1723,14 +1641,14 @@ mvgam
plot_mvgam_smooth(model3, smooth = 's(time)',
- # feed newdata to the plot function to generate
- # predictions of the temporal smooth to the end of the
- # testing period
- newdata = data.frame(time = 1:max(data_test$time),
- ndvi = 0))
-abline(v = max(data_train$time), lty = 'dashed', lwd = 2)
plot_mvgam_smooth(model3, smooth = 's(time)',
+ # feed newdata to the plot function to generate
+ # predictions of the temporal smooth to the end of the
+ # testing period
+ newdata = data.frame(time = 1:max(data_test$time),
+ ndvi = 0))
+abline(v = max(data_train$time), lty = 'dashed', lwd = 2)
This model is not doing well. Clearly we need to somehow account for
the strong temporal autocorrelation when modelling these data without
using a smooth function of time
. Now onto another prominent
@@ -1745,11 +1663,11 @@
mvgam
mvgam can include combinations of smooths and dynamic components:
-model4 <- mvgam(count ~ s(ndvi, k = 6),
- family = poisson(),
- data = data_train,
- newdata = data_test,
- trend_model = 'AR1')
model4 <- mvgam(count ~ s(ndvi, k = 6),
+ family = poisson(),
+ data = data_train,
+ newdata = data_test,
+ trend_model = 'AR1')
The model can be described mathematically as follows: \[\begin{align*}
\boldsymbol{count}_t & \sim \text{Poisson}(\lambda_t) \\
log(\lambda_t) & = f(\boldsymbol{ndvi})_t + z_t \\
@@ -1767,84 +1685,76 @@ Latent dynamics in mvgam
more realistic estimates of the residual autocorrelation parameters. Summarise the model to see how it now returns posterior summaries for the latent AR1 process:
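For reference, the latent residual process \(z_t\) summarised below follows a first-order autoregressive prior of roughly this form (a sketch consistent with the ar1 and sigma parameters in the summary): \[z_t \sim \text{Normal}(ar1 \times z_{t-1},\ \sigma)\]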
summary(model4)
-#> GAM formula:
-#> count ~ s(ndvi, k = 6)
-#>
-#> Family:
-#> poisson
-#>
-#> Link function:
-#> log
-#>
-#> Trend model:
-#> AR1
-#>
-#> N series:
-#> 1
-#>
-#> N timepoints:
-#> 160
-#>
-#> Status:
-#> Fitted using Stan
-#> 4 chains, each with iter = 1000; warmup = 500; thin = 1
-#> Total post-warmup draws = 2000
-#>
-#>
-#> GAM coefficient (beta) estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> (Intercept) 1.200 1.9000 2.500 1.03 63
-#> s(ndvi).1 -0.066 0.0100 0.160 1.01 318
-#> s(ndvi).2 -0.110 0.0190 0.340 1.00 286
-#> s(ndvi).3 -0.048 -0.0019 0.051 1.00 560
-#> s(ndvi).4 -0.210 0.1200 1.500 1.01 198
-#> s(ndvi).5 -0.079 0.1500 0.360 1.01 350
-#>
-#> Approximate significance of GAM smooths:
-#> edf Ref.df Chi.sq p-value
-#> s(ndvi) 2.26 5 87.8 0.1
-#>
-#> Latent trend parameter AR estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> ar1[1] 0.70 0.81 0.92 1.01 234
-#> sigma[1] 0.68 0.80 0.96 1.00 488
-#>
-#> Stan MCMC diagnostics:
-#> n_eff / iter looks reasonable for all parameters
-#> Rhat looks reasonable for all parameters
-#> 0 of 2000 iterations ended with a divergence (0%)
-#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
-#> E-FMI indicated no pathological behavior
-#>
-#> Samples were drawn using NUTS(diag_e) at Tue Apr 16 1:02:26 PM 2024.
-#> For each parameter, n_eff is a crude measure of effective sample size,
-#> and Rhat is the potential scale reduction factor on split MCMC chains
-#> (at convergence, Rhat = 1)
View conditional smooths for the ndvi
effect:
summary(model4)
+#> GAM formula:
+#> count ~ s(ndvi, k = 6)
+#> <environment: 0x0000024869b48078>
+#>
+#> Family:
+#> poisson
+#>
+#> Link function:
+#> log
+#>
+#> Trend model:
+#> AR1
+#>
+#> N series:
+#> 1
+#>
+#> N timepoints:
+#> 199
+#>
+#> Status:
+#> Fitted using Stan
+#> 4 chains, each with iter = 1000; warmup = 500; thin = 1
+#> Total post-warmup draws = 2000
+#>
+#>
+#> GAM coefficient (beta) estimates:
+#> 2.5% 50% 97.5% Rhat n_eff
+#> (Intercept) 1.100 2.0000 2.600 1.13 19
+#> s(ndvi).1 -0.180 -0.0110 0.073 1.01 335
+#> s(ndvi).2 -0.130 0.0200 0.300 1.01 381
+#> s(ndvi).3 -0.052 -0.0018 0.040 1.00 893
+#> s(ndvi).4 -0.210 0.1300 1.300 1.02 253
+#> s(ndvi).5 -0.080 0.1500 0.350 1.00 407
+#>
+#> Approximate significance of GAM smooths:
+#> edf Ref.df Chi.sq p-value
+#> s(ndvi) 2.47 5 5.7 0.34
+#>
+#> Latent trend parameter AR estimates:
+#> 2.5% 50% 97.5% Rhat n_eff
+#> ar1[1] 0.70 0.81 0.92 1 416
+#> sigma[1] 0.68 0.79 0.95 1 378
+#>
+#> Stan MCMC diagnostics:
+#> n_eff / iter looks reasonable for all parameters
+#> Rhats above 1.05 found for 94 parameters
+#> *Diagnose further to investigate why the chains have not mixed
+#> 0 of 2000 iterations ended with a divergence (0%)
+#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
+#> E-FMI indicated no pathological behavior
+#>
+#> Samples were drawn using NUTS(diag_e) at Wed Sep 04 11:34:00 AM 2024.
+#> For each parameter, n_eff is a crude measure of effective sample size,
+#> and Rhat is the potential scale reduction factor on split MCMC chains
+#> (at convergence, Rhat = 1)
View posterior hindcasts / forecasts and compare against the out of sample test data
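A sketch of the plotting call (model4 was supplied the test set via newdata, so forecasts are generated automatically):
plot(model4, type = 'forecast')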
-#> Out of sample DRPS:
-#> [1] 150.5241
+
+
The trend is evolving as an AR1 process, which we can also view:
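For example:
plot(model4, type = 'trend')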
In-sample model performance can be interrogated using leave-one-out
cross-validation utilities from the loo
package (a higher
value is preferred for this metric):
loo_compare(model3, model4)
-#> Warning: Some Pareto k diagnostic values are too high. See help('pareto-k-diagnostic') for details.
-
-#> Warning: Some Pareto k diagnostic values are too high. See help('pareto-k-diagnostic') for details.
-#> elpd_diff se_diff
-#> model4 0.0 0.0
-#> model3 -558.9 66.4
The higher estimated log predictive density (ELPD) value for the dynamic model suggests it provides a better fit to the in-sample data.
@@ -1853,12 +1763,12 @@ mvgam
Forecast comparisons can be made with the forecast and score functions. Here we will compare models based on their Discrete Ranked Probability Scores (a lower value is preferred for this metric)
-fc_mod3 <- forecast(model3)
-fc_mod4 <- forecast(model4)
-score_mod3 <- score(fc_mod3, score = 'drps')
-score_mod4 <- score(fc_mod4, score = 'drps')
-sum(score_mod4$PP$score, na.rm = TRUE) - sum(score_mod3$PP$score, na.rm = TRUE)
-#> [1] -137.8603
fc_mod3 <- forecast(model3)
+fc_mod4 <- forecast(model4)
+score_mod3 <- score(fc_mod3, score = 'drps')
+score_mod4 <- score(fc_mod4, score = 'drps')
+sum(score_mod4$PP$score, na.rm = TRUE) - sum(score_mod3$PP$score, na.rm = TRUE)
+#> [1] -132.6078
A strongly negative value here suggests the score for the dynamic model (model 4) is much smaller than the score for the model with a smooth function of time (model 3)
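The per-timepoint scores stored in each returned object can also be compared directly; a quick base R sketch using the score column referenced above:
plot(score_mod3$PP$score, type = 'b', xlab = 'Forecast horizon', ylab = 'DRPS')
lines(score_mod4$PP$score, type = 'b', col = 'darkred')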
diff --git a/doc/nmixtures.R b/doc/nmixtures.R index a4e7a766..b1000e6e 100644 --- a/doc/nmixtures.R +++ b/doc/nmixtures.R @@ -1,16 +1,20 @@ -## ----echo = FALSE------------------------------------------------------------- -NOT_CRAN <- identical(tolower(Sys.getenv("NOT_CRAN")), "true") +params <- +list(EVAL = TRUE) + +## ---- echo = FALSE------------------------------------------------------------ knitr::opts_chunk$set( collapse = TRUE, comment = "#>", - purl = NOT_CRAN, - eval = NOT_CRAN + message = FALSE, + warning = FALSE, + eval = if (isTRUE(exists("params"))) params$EVAL else FALSE ) + ## ----setup, include=FALSE----------------------------------------------------- knitr::opts_chunk$set( echo = TRUE, - dpi = 150, + dpi = 100, fig.asp = 0.8, fig.width = 6, out.width = "60%", @@ -33,6 +37,7 @@ options(ggplot2.discrete.colour = c("#A25050", 'darkred', "#010048")) + ## ----------------------------------------------------------------------------- set.seed(999) # Simulate observations for species 1, which shows a declining trend and 0.7 detection probability @@ -92,16 +97,19 @@ testdat = testdat %>% cap = 50) %>% dplyr::select(-replicate)) + ## ----------------------------------------------------------------------------- testdat$species <- factor(testdat$species, levels = unique(testdat$species)) testdat$series <- factor(testdat$series, levels = unique(testdat$series)) + ## ----------------------------------------------------------------------------- dplyr::glimpse(testdat) head(testdat, 12) + ## ----------------------------------------------------------------------------- testdat %>% # each unique combination of site*species is a separate process @@ -110,6 +118,7 @@ testdat %>% dplyr::distinct() -> trend_map trend_map + ## ----include = FALSE, results='hide'------------------------------------------ mod <- mvgam( # the observation formula sets up linear predictors for @@ -132,48 +141,55 @@ mod <- mvgam( prior(normal(1, 1.5), class = Intercept_trend)), samples = 1000) + ## ----eval = FALSE------------------------------------------------------------- -# mod <- mvgam( -# # the observation formula sets up linear predictors for -# # detection probability on the logit scale -# formula = obs ~ species - 1, -# -# # the trend_formula sets up the linear predictors for -# # the latent abundance processes on the log scale -# trend_formula = ~ s(time, by = trend, k = 4) + species, -# -# # the trend_map takes care of the mapping -# trend_map = trend_map, -# -# # nmix() family and data -# family = nmix(), -# data = testdat, -# -# # priors can be set in the usual way -# priors = c(prior(std_normal(), class = b), -# prior(normal(1, 1.5), class = Intercept_trend)), -# samples = 1000) +## mod <- mvgam( +## # the observation formula sets up linear predictors for +## # detection probability on the logit scale +## formula = obs ~ species - 1, +## +## # the trend_formula sets up the linear predictors for +## # the latent abundance processes on the log scale +## trend_formula = ~ s(time, by = trend, k = 4) + species, +## +## # the trend_map takes care of the mapping +## trend_map = trend_map, +## +## # nmix() family and data +## family = nmix(), +## data = testdat, +## +## # priors can be set in the usual way +## priors = c(prior(std_normal(), class = b), +## prior(normal(1, 1.5), class = Intercept_trend)), +## samples = 1000) + ## ----------------------------------------------------------------------------- code(mod) + ## ----------------------------------------------------------------------------- summary(mod) 
+ ## ----------------------------------------------------------------------------- loo(mod) + ## ----------------------------------------------------------------------------- plot(mod, type = 'smooths', trend_effects = TRUE) + ## ----------------------------------------------------------------------------- -plot_predictions(mod, condition = 'species', +marginaleffects::plot_predictions(mod, condition = 'species', type = 'detection') + ylab('Pr(detection)') + ylim(c(0, 1)) + theme_classic() + theme(legend.position = 'none') + ## ----------------------------------------------------------------------------- hc <- hindcast(mod, type = 'latent_N') @@ -227,10 +243,12 @@ plot_latentN = function(hindcasts, data, species = 'sp_1'){ title = species) } + ## ----------------------------------------------------------------------------- plot_latentN(hc, testdat, species = 'sp_1') plot_latentN(hc, testdat, species = 'sp_2') + ## ----------------------------------------------------------------------------- # Date link load(url('https://github.com/doserjef/spAbundance/raw/main/data/dataNMixSim.rda')) @@ -251,6 +269,7 @@ det.cov[is.na(det.cov)] <- rnorm(length(which(is.na(det.cov)))) det.cov2 <- dataNMixSim$det.covs$det.cov.2 det.cov2[is.na(det.cov2)] <- rnorm(length(which(is.na(det.cov2)))) + ## ----------------------------------------------------------------------------- mod_data <- do.call(rbind, lapply(1:NROW(data.one.sp$y), function(x){ @@ -269,11 +288,13 @@ mod_data <- do.call(rbind, time = 1, cap = max(data.one.sp$y, na.rm = TRUE) + 20) + ## ----------------------------------------------------------------------------- NROW(mod_data) dplyr::glimpse(mod_data) head(mod_data) + ## ----------------------------------------------------------------------------- mod_data %>% # each unique combination of site*species is a separate process @@ -285,6 +306,7 @@ trend_map %>% dplyr::arrange(trend) %>% head(12) + ## ----include = FALSE, results='hide'------------------------------------------ mod <- mvgam( # effects of covariates on detection probability; @@ -317,43 +339,47 @@ mod <- mvgam( residuals = FALSE, samples = 1000) + ## ----eval=FALSE--------------------------------------------------------------- -# mod <- mvgam( -# # effects of covariates on detection probability; -# # here we use penalized splines for both continuous covariates -# formula = y ~ s(det_cov, k = 4) + s(det_cov2, k = 4), -# -# # effects of the covariates on latent abundance; -# # here we use a penalized spline for the continuous covariate and -# # hierarchical intercepts for the factor covariate -# trend_formula = ~ s(abund_cov, k = 4) + -# s(abund_fac, bs = 're'), -# -# # link multiple observations to each site -# trend_map = trend_map, -# -# # nmix() family and supplied data -# family = nmix(), -# data = mod_data, -# -# # standard normal priors on key regression parameters -# priors = c(prior(std_normal(), class = 'b'), -# prior(std_normal(), class = 'Intercept'), -# prior(std_normal(), class = 'Intercept_trend'), -# prior(std_normal(), class = 'sigma_raw_trend')), -# -# # use Stan's variational inference for quicker results -# algorithm = 'meanfield', -# -# # no need to compute "series-level" residuals -# residuals = FALSE, -# samples = 1000) +## mod <- mvgam( +## # effects of covariates on detection probability; +## # here we use penalized splines for both continuous covariates +## formula = y ~ s(det_cov, k = 4) + s(det_cov2, k = 4), +## +## # effects of the covariates on latent abundance; +## # here we use a penalized spline for the 
continuous covariate and +## # hierarchical intercepts for the factor covariate +## trend_formula = ~ s(abund_cov, k = 4) + +## s(abund_fac, bs = 're'), +## +## # link multiple observations to each site +## trend_map = trend_map, +## +## # nmix() family and supplied data +## family = nmix(), +## data = mod_data, +## +## # standard normal priors on key regression parameters +## priors = c(prior(std_normal(), class = 'b'), +## prior(std_normal(), class = 'Intercept'), +## prior(std_normal(), class = 'Intercept_trend'), +## prior(std_normal(), class = 'sigma_raw_trend')), +## +## # use Stan's variational inference for quicker results +## algorithm = 'meanfield', +## +## # no need to compute "series-level" residuals +## residuals = FALSE, +## samples = 1000) + ## ----------------------------------------------------------------------------- summary(mod, include_betas = FALSE) + ## ----------------------------------------------------------------------------- -avg_predictions(mod, type = 'detection') +marginaleffects::avg_predictions(mod, type = 'detection') + ## ----------------------------------------------------------------------------- abund_plots <- plot(conditional_effects(mod, @@ -362,14 +388,17 @@ abund_plots <- plot(conditional_effects(mod, 'abund_fac')), plot = FALSE) + ## ----------------------------------------------------------------------------- abund_plots[[1]] + ylab('Expected latent abundance') + ## ----------------------------------------------------------------------------- abund_plots[[2]] + ylab('Expected latent abundance') + ## ----------------------------------------------------------------------------- det_plots <- plot(conditional_effects(mod, type = 'detection', @@ -377,17 +406,19 @@ det_plots <- plot(conditional_effects(mod, 'det_cov2')), plot = FALSE) + ## ----------------------------------------------------------------------------- det_plots[[1]] + ylab('Pr(detection)') det_plots[[2]] + ylab('Pr(detection)') + ## ----------------------------------------------------------------------------- fivenum_round = function(x)round(fivenum(x, na.rm = TRUE), 2) -plot_predictions(mod, - newdata = datagrid(det_cov = unique, +marginaleffects::plot_predictions(mod, + newdata = marginaleffects::datagrid(det_cov = unique, det_cov2 = fivenum_round), by = c('det_cov', 'det_cov2'), type = 'detection') + diff --git a/doc/nmixtures.Rmd b/doc/nmixtures.Rmd index 62d4a08e..62f291d1 100644 --- a/doc/nmixtures.Rmd +++ b/doc/nmixtures.Rmd @@ -9,21 +9,23 @@ vignette: > %\VignetteIndexEntry{N-mixtures in mvgam} %\VignetteEngine{knitr::rmarkdown} \usepackage[utf8]{inputenc} +params: + EVAL: !r identical(tolower(Sys.getenv("NOT_CRAN")), "true") --- ```{r, echo = FALSE} -NOT_CRAN <- identical(tolower(Sys.getenv("NOT_CRAN")), "true") knitr::opts_chunk$set( collapse = TRUE, comment = "#>", - purl = NOT_CRAN, - eval = NOT_CRAN + message = FALSE, + warning = FALSE, + eval = if (isTRUE(exists("params"))) params$EVAL else FALSE ) ``` ```{r setup, include=FALSE} knitr::opts_chunk$set( echo = TRUE, - dpi = 150, + dpi = 100, fig.asp = 0.8, fig.width = 6, out.width = "60%", @@ -229,7 +231,7 @@ plot(mod, type = 'smooths', trend_effects = TRUE) `marginaleffects` support allows for more useful prediction-based interrogations on different scales (though note that at the time of writing this Vignette, you must have the development version of `marginaleffects` installed for `nmix()` models to be supported; use `remotes::install_github('vincentarelbundock/marginaleffects')` to install). 
Objects that use family `nmix()` have a few additional prediction scales that can be used (i.e. `link`, `response`, `detection` or `latent_N`). For example, here are the estimated detection probabilities per species, which show that the model has done a nice job of estimating these parameters: ```{r} -plot_predictions(mod, condition = 'species', +marginaleffects::plot_predictions(mod, condition = 'species', type = 'detection') + ylab('Pr(detection)') + ylim(c(0, 1)) + @@ -302,7 +304,7 @@ We can see that estimates for both species have correctly captured the true temp ## Example 2: a larger survey with possible nonlinear effects -Now for another example with a larger dataset. We will use data from [Jeff Doser's simulation example from the wonderful `spAbundance` package](https://www.jeffdoser.com/files/spabundance-web/articles/nmixturemodels){target="_blank"}. The simulated data include one continuous site-level covariate, one factor site-level covariate and two continuous sample-level covariates. This example will allow us to examine how we can include possibly nonlinear effects in the latent process and detection probability models. +Now for another example with a larger dataset. We will use data from [Jeff Doser's simulation example from the wonderful `spAbundance` package](https://doserlab.com/files/spabundance-web/articles/nmixturemodels){target="_blank"}. The simulated data include one continuous site-level covariate, one factor site-level covariate and two continuous sample-level covariates. This example will allow us to examine how we can include possibly nonlinear effects in the latent process and detection probability models. Download the data and grab observations / covariate measurements for one species ```{r} @@ -440,7 +442,7 @@ summary(mod, include_betas = FALSE) Again we can make use of `marginaleffects` support for interrogating the model through targeted predictions. First, we can inspect the estimated average detection probability ```{r} -avg_predictions(mod, type = 'detection') +marginaleffects::avg_predictions(mod, type = 'detection') ``` Next investigate estimated effects of covariates on latent abundance using the `conditional_effects()` function and specifying `type = 'link'`; this will return plots on the expectation scale @@ -485,8 +487,8 @@ More targeted predictions are also easy with `marginaleffects` support. For exam ```{r} fivenum_round = function(x)round(fivenum(x, na.rm = TRUE), 2) -plot_predictions(mod, - newdata = datagrid(det_cov = unique, +marginaleffects::plot_predictions(mod, + newdata = marginaleffects::datagrid(det_cov = unique, det_cov2 = fivenum_round), by = c('det_cov', 'det_cov2'), type = 'detection') + @@ -501,7 +503,7 @@ The following papers and resources offer useful material about N-mixture models Guélat, Jérôme, and Kéry, Marc. “[Effects of Spatial Autocorrelation and Imperfect Detection on Species Distribution Models.](https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/2041-210X.12983)” *Methods in Ecology and Evolution* 9 (2018): 1614–25. -Kéry, Marc, and Royle Andrew J. "[Applied hierarchical modeling in ecology: Analysis of distribution, abundance and species richness in R and BUGS: Volume 2: Dynamic and advanced models](https://www.sciencedirect.com/book/9780128237687/applied-hierarchical-modeling-in-ecology-analysis-of-distribution-abundance-and-species-richness-in-r-and-bugs)". London, UK: Academic Press (2020). +Kéry, Marc, and Royle Andrew J. 
"[Applied hierarchical modeling in ecology: Analysis of distribution, abundance and species richness in R and BUGS: Volume 2: Dynamic and advanced models](https://shop.elsevier.com/books/applied-hierarchical-modeling-in-ecology-analysis-of-distribution-abundance-and-species-richness-in-r-and-bugs/kery/978-0-12-809585-0)". London, UK: Academic Press (2020). Royle, Andrew J. "[N‐mixture models for estimating population size from spatially replicated counts.](https://onlinelibrary.wiley.com/doi/full/10.1111/j.0006-341X.2004.00142.x)" *Biometrics* 60.1 (2004): 108-115. diff --git a/doc/nmixtures.html b/doc/nmixtures.html index f92147dd..b4acfda4 100644 --- a/doc/nmixtures.html +++ b/doc/nmixtures.html @@ -12,7 +12,7 @@ - +nmix()
family
summary(mod)
#> GAM observation formula:
#> obs ~ species - 1
-#>
-#> GAM process formula:
-#> ~s(time, by = trend, k = 4) + species
-#>
-#> Family:
-#> nmix
-#>
-#> Link function:
-#> log
-#>
-#> Trend model:
-#> None
-#>
-#> N process models:
-#> 2
-#>
-#> N series:
-#> 10
-#>
-#> N timepoints:
-#> 6
-#>
-#> Status:
-#> Fitted using Stan
-#> 4 chains, each with iter = 1500; warmup = 500; thin = 1
-#> Total post-warmup draws = 4000
-#>
-#>
-#> GAM observation model coefficient (beta) estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> speciessp_1 -0.28 0.7200 1.40 1 1361
-#> speciessp_2 -1.20 -0.0075 0.89 1 1675
-#>
-#> GAM process model coefficient (beta) estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> (Intercept)_trend 2.700 3.0000 3.400 1.00 1148
-#> speciessp_2_trend -1.200 -0.6100 0.190 1.00 1487
-#> s(time):trendtrend1.1_trend -0.081 0.0130 0.200 1.00 800
-#> s(time):trendtrend1.2_trend -0.230 0.0077 0.310 1.00 1409
-#> s(time):trendtrend1.3_trend -0.460 -0.2500 -0.038 1.00 1699
-#> s(time):trendtrend2.1_trend -0.220 -0.0130 0.095 1.00 995
-#> s(time):trendtrend2.2_trend -0.190 0.0320 0.500 1.01 1071
-#> s(time):trendtrend2.3_trend 0.064 0.3300 0.640 1.00 2268
-#>
-#> Approximate significance of GAM process smooths:
-#> edf Ref.df F p-value
-#> s(time):seriestrend1 1.25 3 0.19 0.83
-#> s(time):seriestrend2 1.07 3 0.39 0.92
-#>
-#> Stan MCMC diagnostics:
-#> n_eff / iter looks reasonable for all parameters
-#> Rhat looks reasonable for all parameters
-#> 0 of 4000 iterations ended with a divergence (0%)
-#> 0 of 4000 iterations saturated the maximum tree depth of 12 (0%)
-#> E-FMI indicated no pathological behavior
-#>
-#> Samples were drawn using NUTS(diag_e) at Tue Apr 16 1:04:54 PM 2024.
-#> For each parameter, n_eff is a crude measure of effective sample size,
-#> and Rhat is the potential scale reduction factor on split MCMC chains
-#> (at convergence, Rhat = 1)
loo()
functionality works just as it does for all
mvgam
models to aid in model comparison / selection (though
note that Pareto K values often give warnings for mixture models so
these may not be too helpful)
loo(mod)
-#> Warning: Some Pareto k diagnostic values are too high. See help('pareto-k-diagnostic') for details.
-#>
-#> Computed from 4000 by 60 log-likelihood matrix
-#>
-#> Estimate SE
-#> elpd_loo -230.4 13.8
-#> p_loo 83.3 12.7
-#> looic 460.9 27.5
-#> ------
-#> Monte Carlo SE of elpd_loo is NA.
-#>
-#> Pareto k diagnostic values:
-#> Count Pct. Min. n_eff
-#> (-Inf, 0.5] (good) 25 41.7% 1141
-#> (0.5, 0.7] (ok) 5 8.3% 390
-#> (0.7, 1] (bad) 7 11.7% 13
-#> (1, Inf) (very bad) 23 38.3% 2
-#> See help('pareto-k-diagnostic') for details.
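If several candidate models have been fitted, their in-sample ELPD estimates can be contrasted directly. A minimal sketch (not part of this vignette; `mod_alt` stands in for a hypothetical alternative `mvgam` model):

```r
# Contrast two fitted mvgam objects by in-sample ELPD; 'mod_alt' is an
# illustrative stand-in for an alternative model fitted to the same data
loo_est <- loo(mod)
loo_compare(mod, mod_alt)  # models are ordered by ELPD, best model first
```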
Plot the estimated smooths of time from each species’ latent abundance process (on the log scale)
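The call that produces this plot appears in the extracted R code earlier in this patch; repeated here for convenience:

```r
# Estimated smooths of time from the latent abundance (process) model
plot(mod, type = 'smooths', trend_effects = TRUE)
```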
marginaleffects
support allows for more useful
prediction-based interrogations on different scales (though note that at
the time of writing this Vignette, you must have the development version
@@ -810,13 +811,13 @@
nmix()
family
plot_predictions(mod, condition = 'species',
+marginaleffects::plot_predictions(mod, condition = 'species',
type = 'detection') +
ylab('Pr(detection)') +
ylim(c(0, 1)) +
theme_classic() +
theme(legend.position = 'none')
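The same interface works for the other `nmix()` prediction scales listed above. A sketch for the latent abundance scale (the axis label is illustrative):

```r
# Posterior predictions of latent abundance per species; mirrors the
# detection-scale call above but uses the 'latent_N' prediction type
marginaleffects::plot_predictions(mod, condition = 'species',
                                  type = 'latent_N') +
  ggplot2::ylab('Latent abundance (N)') +
  ggplot2::theme_classic()
```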
A common goal in N-mixture modelling is to estimate the true latent
abundance. The model has automatically generated predictions for the
unknown latent abundance that are conditional on the observations. We
@@ -877,9 +878,9 @@
Modelling with the nmix()
family
black line shows the true latent abundance, and the ribbons show
credible intervals of our estimates:
We can see that estimates for both species have correctly captured
the true temporal variation and magnitudes in abundance
nmix()
family
Now for another example with a larger dataset. We will use data from
-Jeff Doser’s simulation example from the wonderful
+Jeff Doser’s simulation example from the wonderful
spAbundance
package. The simulated data include one
continuous site-level covariate, one factor site-level covariate and two
continuous sample-level covariates. This example will allow us to
@@ -944,8 +945,8 @@
summary(mod, include_betas = FALSE)
#> GAM observation formula:
#> y ~ s(det_cov, k = 3) + s(det_cov2, k = 3)
-#>
-#> GAM process formula:
-#> ~s(abund_cov, k = 3) + s(abund_fac, bs = "re")
-#>
-#> Family:
-#> nmix
-#>
-#> Link function:
-#> log
-#>
-#> Trend model:
-#> None
-#>
-#> N process models:
-#> 225
-#>
-#> N series:
-#> 675
-#>
-#> N timepoints:
-#> 1
-#>
-#> Status:
-#> Fitted using Stan
-#> 1 chains, each with iter = 1000; warmup = ; thin = 1
-#> Total post-warmup draws = 1000
-#>
-#>
-#> GAM observation model coefficient (beta) estimates:
-#> 2.5% 50% 97.5% Rhat n.eff
-#> (Intercept) 0.052 0.4 0.71 NaN NaN
-#>
-#> Approximate significance of GAM observation smooths:
-#> edf Ref.df Chi.sq p-value
-#> s(det_cov) 1.22 2 52.3 0.0011 **
-#> s(det_cov2) 1.07 2 307.1 <2e-16 ***
-#> ---
-#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
-#>
-#> GAM process model coefficient (beta) estimates:
-#> 2.5% 50% 97.5% Rhat n.eff
-#> (Intercept)_trend -0.25 -0.081 0.079 NaN NaN
-#>
-#> GAM process model group-level estimates:
-#> 2.5% 50% 97.5% Rhat n.eff
-#> mean(s(abund_fac))_trend -0.18 0.0038 0.19 NaN NaN
-#> sd(s(abund_fac))_trend 0.26 0.3900 0.56 NaN NaN
-#>
-#> Approximate significance of GAM process smooths:
-#> edf Ref.df F p-value
-#> s(abund_cov) 1.19 2 2.38 0.299
-#> s(abund_fac) 8.82 10 2.79 0.025 *
-#> ---
-#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
-#>
-#> Posterior approximation used: no diagnostics to compute
Again we can make use of marginaleffects
support for
interrogating the model through targeted predictions. First, we can
inspect the estimated average detection probability
avg_predictions(mod, type = 'detection')
+marginaleffects::avg_predictions(mod, type = 'detection')
#>
#> Estimate 2.5 % 97.5 %
-#> 0.579 0.51 0.644
+#> 0.576 0.512 0.634
#>
#> Columns: estimate, conf.low, conf.high
#> Type: detection
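The averaging can also be done within groups of a predictor. A sketch (grouping by the site-level factor covariate is illustrative, not a call from the vignette):

```r
# Average detection probability within each level of the site-level
# factor covariate 'abund_fac'
marginaleffects::avg_predictions(mod, type = 'detection', by = 'abund_fac')
```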
@@ -1116,12 +1119,12 @@ Example 2: a larger survey with possible nonlinear effects
abundance
The effect of the factor covariate on expected latent abundance,
estimated as a hierarchical random effect
Now we can investigate estimated effects of covariates on detection
probability using type = 'detection'
det_plots <- plot(conditional_effects(mod,
@@ -1135,24 +1138,24 @@ Example 2: a larger survey with possible nonlinear effects
the probability scale is more intuitive and useful
More targeted predictions are also easy with
marginaleffects
support. For example, we can ask: How does
detection probability change as we change both detection
covariates?
fivenum_round = function(x)round(fivenum(x, na.rm = TRUE), 2)
-plot_predictions(mod,
- newdata = datagrid(det_cov = unique,
+marginaleffects::plot_predictions(mod,
+ newdata = marginaleffects::datagrid(det_cov = unique,
det_cov2 = fivenum_round),
by = c('det_cov', 'det_cov2'),
type = 'detection') +
theme_classic() +
ylab('Pr(detection)')
The model has found support for some important covariate effects, but
of course we’d want to interrogate how well the model predicts and think
about possible spatial effects to capture unmodelled variation in latent
@@ -1167,7 +1170,7 @@
Further reading
of Spatial Autocorrelation and Imperfect Detection on Species
Distribution Models.” Methods in Ecology and Evolution 9
(2018): 1614–25.
-Kéry, Marc, and Royle Andrew J. “Applied
+Kéry, Marc, and Royle Andrew J. “Applied
hierarchical modeling in ecology: Analysis of distribution, abundance
and species richness in R and BUGS: Volume 2: Dynamic and advanced
models”. London, UK: Academic Press (2020).
diff --git a/doc/shared_states.R b/doc/shared_states.R
index 3619bf06..c15a17a7 100644
--- a/doc/shared_states.R
+++ b/doc/shared_states.R
@@ -1,10 +1,13 @@
-## ----echo = FALSE-------------------------------------------------------------
-NOT_CRAN <- identical(tolower(Sys.getenv("NOT_CRAN")), "true")
+params <-
+list(EVAL = TRUE)
+
+## ---- echo = FALSE------------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>",
- purl = NOT_CRAN,
- eval = NOT_CRAN
+ message = FALSE,
+ warning = FALSE,
+ eval = if (isTRUE(exists("params"))) params$EVAL else FALSE
)
@@ -180,7 +183,7 @@ mod <- mvgam(formula =
trend_formula =
# formula for the latent signal, which can depend
# nonlinearly on productivity
- ~ s(productivity, k = 8),
+ ~ s(productivity, k = 8) - 1,
trend_model =
# in addition to productivity effects, the signal is
@@ -218,7 +221,7 @@ mod <- mvgam(formula =
## trend_formula =
## # formula for the latent signal, which can depend
## # nonlinearly on productivity
-## ~ s(productivity, k = 8),
+## ~ s(productivity, k = 8) - 1,
##
## trend_model =
## # in addition to productivity effects, the signal is
diff --git a/doc/shared_states.Rmd b/doc/shared_states.Rmd
index bc79aa71..17b1e625 100644
--- a/doc/shared_states.Rmd
+++ b/doc/shared_states.Rmd
@@ -9,14 +9,16 @@ vignette: >
%\VignetteIndexEntry{Shared latent states in mvgam}
%\VignetteEngine{knitr::rmarkdown}
\usepackage[utf8]{inputenc}
+params:
+ EVAL: !r identical(tolower(Sys.getenv("NOT_CRAN")), "true")
---
```{r, echo = FALSE}
-NOT_CRAN <- identical(tolower(Sys.getenv("NOT_CRAN")), "true")
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>",
- purl = NOT_CRAN,
- eval = NOT_CRAN
+ message = FALSE,
+ warning = FALSE,
+ eval = if (isTRUE(exists("params"))) params$EVAL else FALSE
)
```
@@ -215,7 +217,7 @@ mod <- mvgam(formula =
trend_formula =
# formula for the latent signal, which can depend
# nonlinearly on productivity
- ~ s(productivity, k = 8),
+ ~ s(productivity, k = 8) - 1,
trend_model =
# in addition to productivity effects, the signal is
@@ -253,7 +255,7 @@ mod <- mvgam(formula =
trend_formula =
# formula for the latent signal, which can depend
# nonlinearly on productivity
- ~ s(productivity, k = 8),
+ ~ s(productivity, k = 8) - 1,
trend_model =
# in addition to productivity effects, the signal is
diff --git a/doc/shared_states.html b/doc/shared_states.html
index db2dac3b..6786b351 100644
--- a/doc/shared_states.html
+++ b/doc/shared_states.html
@@ -12,7 +12,7 @@
-
+
Shared latent states in mvgam
@@ -340,7 +340,7 @@
Shared latent states in mvgam
Nicholas J Clark
-2024-07-01
+2024-09-04
@@ -544,36 +544,39 @@ Checking trend_map
with
#>
#> // dynamic process models
#>
-#> // prior for s(season)_trend...
-#> b_raw_trend[1 : 4] ~ multi_normal_prec(zero_trend[1 : 4],
-#> S_trend1[1 : 4, 1 : 4]
-#> * lambda_trend[1]);
-#> lambda_trend ~ normal(5, 30);
-#> {
-#> // likelihood functions
-#> vector[n_nonmissing] flat_trends;
-#> flat_trends = to_vector(trend)[obs_ind];
-#> flat_ys ~ poisson_log_glm(append_col(flat_xs, flat_trends), 0.0,
-#> append_row(b, 1.0));
-#> }
-#> }
-#> generated quantities {
-#> vector[total_obs] eta;
-#> matrix[n, n_series] mus;
-#> vector[n_sp_trend] rho_trend;
-#> vector[n_lv] penalty;
-#> array[n, n_series] int ypred;
-#> penalty = 1.0 / (sigma .* sigma);
-#> rho_trend = log(lambda_trend);
-#>
-#> matrix[n_series, n_lv] lv_coefs = Z;
-#> // posterior predictions
-#> eta = X * b;
-#> for (s in 1 : n_series) {
-#> mus[1 : n, s] = eta[ytimes[1 : n, s]] + trend[1 : n, s];
-#> ypred[1 : n, s] = poisson_log_rng(mus[1 : n, s]);
-#> }
-#> }
+#> // prior for (Intercept)_trend...
+#> b_raw_trend[1] ~ student_t(3, 0, 2);
+#>
+#> // prior for s(season)_trend...
+#> b_raw_trend[2 : 5] ~ multi_normal_prec(zero_trend[2 : 5],
+#> S_trend1[1 : 4, 1 : 4]
+#> * lambda_trend[1]);
+#> lambda_trend ~ normal(5, 30);
+#> {
+#> // likelihood functions
+#> vector[n_nonmissing] flat_trends;
+#> flat_trends = to_vector(trend)[obs_ind];
+#> flat_ys ~ poisson_log_glm(append_col(flat_xs, flat_trends), 0.0,
+#> append_row(b, 1.0));
+#> }
+#> }
+#> generated quantities {
+#> vector[total_obs] eta;
+#> matrix[n, n_series] mus;
+#> vector[n_sp_trend] rho_trend;
+#> vector[n_lv] penalty;
+#> array[n, n_series] int ypred;
+#> penalty = 1.0 / (sigma .* sigma);
+#> rho_trend = log(lambda_trend);
+#>
+#> matrix[n_series, n_lv] lv_coefs = Z;
+#> // posterior predictions
+#> eta = X * b;
+#> for (s in 1 : n_series) {
+#> mus[1 : n, s] = eta[ytimes[1 : n, s]] + trend[1 : n, s];
+#> ypred[1 : n, s] = poisson_log_rng(mus[1 : n, s]);
+#> }
+#> }
Notice the line that states “lv_coefs = Z;”. This uses the supplied
\(Z\) matrix to construct the loading
coefficients. The supplied matrix now looks exactly like what you’d use
@@ -604,11 +607,11 @@
Fitting and inspecting the model
summary(full_mod)
#> GAM observation formula:
#> y ~ series - 1
-#> <environment: 0x000001f52b9e3130>
+#> <environment: 0x00000245d0e24ff8>
#>
#> GAM process formula:
#> ~s(season, bs = "cc", k = 6)
-#> <environment: 0x000001f52b9e3130>
+#> <environment: 0x00000245d0e24ff8>
#>
#> Family:
#> poisson
@@ -636,50 +639,51 @@ Fitting and inspecting the model
#>
#> GAM observation model coefficient (beta) estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> seriesseries_1 -0.14 0.084 0.29 1.00 1720
-#> seriesseries_2 0.91 1.100 1.20 1.00 1374
-#> seriesseries_3 1.90 2.100 2.30 1.01 447
+#> seriesseries_1 -2.80 -0.67 1.5 1 931
+#> seriesseries_2 -1.80 0.31 2.5 1 924
+#> seriesseries_3 -0.84 1.30 3.4 1 920
#>
#> Process model AR parameter estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> ar1[1] -0.72 -0.430 -0.037 1.01 560
-#> ar1[2] -0.30 -0.017 0.270 1.01 286
+#> ar1[1] -0.73 -0.430 -0.056 1.00 666
+#> ar1[2] -0.30 -0.019 0.250 1.01 499
#>
#> Process error parameter estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> sigma[1] 0.34 0.49 0.65 1 819
-#> sigma[2] 0.59 0.73 0.90 1 573
+#> sigma[1] 0.33 0.49 0.67 1 854
+#> sigma[2] 0.59 0.73 0.91 1 755
#>
#> GAM process model coefficient (beta) estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> s(season).1_trend -0.20 -0.003 0.21 1.01 857
-#> s(season).2_trend -0.27 -0.046 0.17 1.00 918
-#> s(season).3_trend -0.15 0.068 0.28 1.00 850
-#> s(season).4_trend -0.14 0.064 0.27 1.00 972
-#>
-#> Approximate significance of GAM process smooths:
-#> edf Ref.df Chi.sq p-value
-#> s(season) 2.33 4 0.38 0.93
-#>
-#> Stan MCMC diagnostics:
-#> n_eff / iter looks reasonable for all parameters
-#> Rhat looks reasonable for all parameters
-#> 0 of 2000 iterations ended with a divergence (0%)
-#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
-#> E-FMI indicated no pathological behavior
-#>
-#> Samples were drawn using NUTS(diag_e) at Mon Jul 01 7:31:17 AM 2024.
-#> For each parameter, n_eff is a crude measure of effective sample size,
-#> and Rhat is the potential scale reduction factor on split MCMC chains
-#> (at convergence, Rhat = 1)
+#> 2.5% 50% 97.5% Rhat n_eff
+#> (Intercept)_trend -1.40 0.7800 2.90 1 921
+#> s(season).1_trend -0.21 -0.0072 0.21 1 1822
+#> s(season).2_trend -0.30 -0.0480 0.18 1 1414
+#> s(season).3_trend -0.16 0.0680 0.30 1 1664
+#> s(season).4_trend -0.14 0.0660 0.29 1 1505
+#>
+#> Approximate significance of GAM process smooths:
+#> edf Ref.df Chi.sq p-value
+#> s(season) 1.48 4 0.67 0.93
+#>
+#> Stan MCMC diagnostics:
+#> n_eff / iter looks reasonable for all parameters
+#> Rhat looks reasonable for all parameters
+#> 0 of 2000 iterations ended with a divergence (0%)
+#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
+#> E-FMI indicated no pathological behavior
+#>
+#> Samples were drawn using NUTS(diag_e) at Wed Sep 04 11:49:01 AM 2024.
+#> For each parameter, n_eff is a crude measure of effective sample size,
+#> and Rhat is the potential scale reduction factor on split MCMC chains
+#> (at convergence, Rhat = 1)
Both series 1 and 2 share the exact same latent process estimates, while the estimates for series 3 are different:
However, forecasts for series 1 and 2 will differ because they have different intercepts in the observation model
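One quick way to see this is to compare the per-series predictions directly. A sketch, assuming the fitted object is `full_mod` as elsewhere in this vignette:

```r
# Hindcast distributions for the two series that share a latent state;
# any differences come from the series-specific observation intercepts
hc <- hindcast(full_mod)
plot(hc, series = 1)
plot(hc, series = 2)
```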
@@ -782,7 +786,7 @@conditional_effects
:
conditional_effects
is simply a wrapper to the more
flexible plot_predictions
function from the
marginaleffects
package. We can get more useful plots of
these effects using this function for further customisation:
require(marginaleffects)
-#> Loading required package: marginaleffects
-plot_predictions(mod,
- condition = c('temperature', 'series', 'series'),
- points = 0.5) +
- theme(legend.position = 'none')
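For reference, a namespaced version of the customised call shown in the removed lines above would look like this (a sketch; it assumes `ggplot2` is attached, as elsewhere in these vignettes):

```r
# Posterior predictions over temperature, faceted by series, with observed
# points overlaid (points = 0.5 sets the transparency of the raw data points)
marginaleffects::plot_predictions(mod,
                                  condition = c('temperature', 'series', 'series'),
                                  points = 0.5) +
  theme(legend.position = 'none')
```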
We have successfully estimated effects, some of them nonlinear, that impact the hidden process AND the observations. All in a single joint model. But there can always be challenges with these models,
@@ -924,7 +927,7 @@
dynamic()
function
summary(mod, include_betas = FALSE)
#> GAM formula:
#> out ~ s(time, by = temp, bs = "gp", m = c(-2, 8, 2), k = 40)
-#> <environment: 0x0000014d37f0f110>
+#> <environment: 0x0000026c107b3fd8>
#>
#> Family:
#> gaussian
@@ -486,15 +486,15 @@ The dynamic()
function
#>
#> Observation error parameter estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> sigma_obs[1] 0.23 0.25 0.28 1 2026
+#> sigma_obs[1] 0.23 0.25 0.28 1 1709
#>
#> GAM coefficient (beta) estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> (Intercept) 4 4 4.1 1 2640
+#> (Intercept) 4 4 4.1 1 1960
#>
#> Approximate significance of GAM smooths:
-#> edf Ref.df Chi.sq p-value
-#> s(time):temp 15.4 40 173 <2e-16 ***
+#> edf Ref.df Chi.sq p-value
+#> s(time):temp 17 40 164 <2e-16 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
@@ -505,7 +505,7 @@ The dynamic()
function
#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
#> E-FMI indicated no pathological behavior
#>
-#> Samples were drawn using NUTS(diag_e) at Mon Jul 01 7:35:21 AM 2024.
+#> Samples were drawn using NUTS(diag_e) at Wed Sep 04 11:52:26 AM 2024.
#> For each parameter, n_eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor on split MCMC chains
#> (at convergence, Rhat = 1)
dynamic()
function
We can also use plot_predictions from the marginaleffects package to
visualise the time-varying coefficient, showing what the effect is
estimated to be at different values of \(temperature\):
require(marginaleffects)
-#> Loading required package: marginaleffects
-range_round = function(x){
- round(range(x, na.rm = TRUE), 2)
-}
-plot_predictions(mod,
- newdata = datagrid(time = unique,
- temp = range_round),
- by = c('time', 'temp', 'temp'),
- type = 'link')
This results in sensible forecasts of the observations as well
plot(fc)
The syntax is very similar if we wish to estimate the parameters of
the underlying Gaussian Process, this time using a Hilbert space
approximation. We simply omit the rho
argument in
@@ -559,7 +556,7 @@
dynamic()
function
summary(mod, include_betas = FALSE)
#> GAM formula:
#> out ~ gp(time, by = temp, c = 5/4, k = 40, scale = TRUE)
-#> <environment: 0x0000014d37f0f110>
+#> <environment: 0x0000026c107b3fd8>
#>
#> Family:
#> gaussian
@@ -584,16 +581,16 @@ The dynamic()
function
#>
#> Observation error parameter estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> sigma_obs[1] 0.24 0.26 0.3 1 2183
+#> sigma_obs[1] 0.24 0.26 0.3 1 2285
#>
#> GAM coefficient (beta) estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> (Intercept) 4 4 4.1 1 2733
+#> (Intercept) 4 4 4.1 1 3056
#>
#> GAM gp term marginal deviation (alpha) and length scale (rho) estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> alpha_gp(time):temp 0.620 0.890 1.400 1.01 539
-#> rho_gp(time):temp 0.026 0.053 0.069 1.00 628
+#> alpha_gp(time):temp 0.630 0.880 1.400 1.00 619
+#> rho_gp(time):temp 0.026 0.053 0.069 1.01 487
#>
#> Stan MCMC diagnostics:
#> n_eff / iter looks reasonable for all parameters
@@ -602,7 +599,7 @@ The dynamic()
function
#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
#> E-FMI indicated no pathological behavior
#>
-#> Samples were drawn using NUTS(diag_e) at Mon Jul 01 7:36:09 AM 2024.
+#> Samples were drawn using NUTS(diag_e) at Wed Sep 04 11:53:17 AM 2024.
#> For each parameter, n_eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor on split MCMC chains
#> (at convergence, Rhat = 1)
dynamic()
function
summary(mod0)
#> GAM formula:
#> survival ~ 1
-#> <environment: 0x0000014d37f0f110>
+#> <environment: 0x0000026c107b3fd8>
#>
#> Family:
#> beta
@@ -715,32 +712,33 @@ A State-Space Beta regression
#>
#> Observation precision parameter estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> phi[1] 95 280 630 1.02 271
+#> phi[1] 98 270 650 1.01 272
#>
#> GAM coefficient (beta) estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> (Intercept) -4.7 -4.4 -4 1 625
+#> (Intercept) -4.7 -4.4 -4.1 1 570
#>
#> Latent trend parameter AR estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> ar1[1] -0.230 0.67 0.98 1.01 415
-#> sigma[1] 0.073 0.47 0.72 1.02 213
+#> 2.5% 50% 97.5% Rhat n_eff
+#> ar1[1] 0.037 0.69 0.98 1.00 570
+#> sigma[1] 0.120 0.45 0.73 1.01 225
#>
#> Stan MCMC diagnostics:
#> n_eff / iter looks reasonable for all parameters
#> Rhat looks reasonable for all parameters
-#> 0 of 2000 iterations ended with a divergence (0%)
-#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
-#> E-FMI indicated no pathological behavior
-#>
-#> Samples were drawn using NUTS(diag_e) at Mon Jul 01 7:36:57 AM 2024.
-#> For each parameter, n_eff is a crude measure of effective sample size,
-#> and Rhat is the potential scale reduction factor on split MCMC chains
-#> (at convergence, Rhat = 1)
A plot of the underlying dynamic component shows how it has easily handled the temporal evolution of the time series:
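A sketch of the call behind that plot, using the `mod0` object named in this section:

```r
# Posterior estimates of the latent AR1 trend from the intercept-only
# State-Space Beta regression
plot(mod0, type = 'trend')
```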
summary(mod1, include_betas = FALSE)
#> GAM observation formula:
#> survival ~ 1
-#> <environment: 0x0000014d37f0f110>
+#> <environment: 0x0000026c107b3fd8>
#>
#> GAM process formula:
#> ~dynamic(CUI.apr, k = 25, scale = FALSE)
-#> <environment: 0x0000014d37f0f110>
+#> <environment: 0x0000026c107b3fd8>
#>
#> Family:
#> beta
@@ -796,43 +794,47 @@ Including time-varying upwelling effects
#>
#> Observation precision parameter estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> phi[1] 160 350 690 1 557
+#> phi[1] 180 360 670 1.01 504
#>
#> GAM observation model coefficient (beta) estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> (Intercept) -4.7 -4 -2.6 1 331
+#> 2.5% 50% 97.5% Rhat n_eff
+#> (Intercept) -6.1 -2.4 1.8 1 1605
#>
#> Process model AR parameter estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> ar1[1] 0.46 0.89 0.99 1.01 364
+#> ar1[1] 0.48 0.89 1 1.01 681
#>
#> Process error parameter estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> sigma[1] 0.18 0.35 0.58 1 596
+#> sigma[1] 0.18 0.35 0.57 1.02 488
#>
-#> GAM process model gp term marginal deviation (alpha) and length scale (rho) estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> alpha_gp_time_byCUI_apr_trend 0.02 0.3 1.2 1 760
-#> rho_gp_time_byCUI_apr_trend 1.30 5.5 28.0 1 674
-#>
-#> Stan MCMC diagnostics:
-#> n_eff / iter looks reasonable for all parameters
-#> Rhat looks reasonable for all parameters
-#> 79 of 2000 iterations ended with a divergence (3.95%)
-#> *Try running with larger adapt_delta to remove the divergences
-#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
-#> E-FMI indicated no pathological behavior
-#>
-#> Samples were drawn using NUTS(diag_e) at Mon Jul 01 7:37:32 AM 2024.
-#> For each parameter, n_eff is a crude measure of effective sample size,
-#> and Rhat is the potential scale reduction factor on split MCMC chains
-#> (at convergence, Rhat = 1)
The estimates for the underlying dynamic process, and for the hindcasts, haven’t changed much:
But the process error parameter \(\sigma\) is slightly smaller for this model than for the first model:
# Extract estimates of the process error 'sigma' for each model
@@ -847,13 +849,13 @@ Including time-varying upwelling effects
ggplot(sigmas, aes(y = `sigma[1]`, fill = model)) +
geom_density(alpha = 0.3, colour = NA) +
coord_flip()
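A self-contained sketch of the comparison being drawn here, assuming `as.data.frame()` extracts posterior draws of a named parameter from each fitted model (object names as in this section):

```r
library(ggplot2)

# Posterior draws of the process error sd from each model, labelled by model
mod0_sigma <- as.data.frame(mod0, variable = 'sigma[1]')
mod0_sigma$model <- 'Mod0'
mod1_sigma <- as.data.frame(mod1, variable = 'sigma[1]')
mod1_sigma$model <- 'Mod1'
sigmas <- rbind(mod0_sigma, mod1_sigma)

# Overlaid densities make the difference in process error easy to see
ggplot(sigmas, aes(y = `sigma[1]`, fill = model)) +
  geom_density(alpha = 0.3, colour = NA) +
  coord_flip()
```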
Why does the process error not need to be as flexible in the second
model? Because the estimates of this dynamic process are now informed
partly by the time-varying effect of upwelling, which we can visualise
on the link scale using plot()
:
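A sketch of that call (the upwelling effect lives in the process model, so `trend_effects = TRUE` is needed):

```r
# Visualise the time-varying upwelling effect on the link scale
plot(mod1, type = 'smooths', trend_effects = TRUE)
```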
loo
package:
loo_compare(mod0, mod1)
-#> Warning: Some Pareto k diagnostic values are too high. See help('pareto-k-diagnostic') for details.
-
-#> Warning: Some Pareto k diagnostic values are too high. See help('pareto-k-diagnostic') for details.
-#> elpd_diff se_diff
-#> mod1 0.0 0.0
-#> mod0 -6.5 2.7
The second model has the larger Expected Log Predictive Density (ELPD), meaning that it is slightly favoured over the simpler model that did not include the time-varying upwelling effect. However, the two
@@ -889,9 +888,9 @@
The model with the time-varying upwelling effect tends to provide better 1-step ahead forecasts, with a higher total forecast ELPD
+#> [1] 41.14095
We can also plot the ELPDs for each model as a contrast. Here, values less than zero suggest the time-varying predictor model (Mod1) gives better 1-step ahead forecasts:
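A sketch of that contrast, assuming per-timepoint ELPD values were computed with `score()` on each model's forecast object and stored as `score_mod0` and `score_mod1` (hypothetical names; the `$series_1$score` structure is also an assumption):

```r
# Negative values favour Mod1, the model with the time-varying upwelling effect
plot(score_mod0$series_1$score - score_mod1$series_1$score,
     pch = 16, ylab = 'ELPD(Mod0) - ELPD(Mod1)', xlab = 'Forecast horizon')
abline(h = 0, lty = 'dashed')
```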
@@ -903,7 +902,7 @@A useful exercise to further expand this model would be to think about what kinds of predictors might impact measurement error, which could easily be implemented into the observation formula in diff --git a/doc/trend_formulas.R b/doc/trend_formulas.R index 98066ef4..6d625b4a 100644 --- a/doc/trend_formulas.R +++ b/doc/trend_formulas.R @@ -1,10 +1,13 @@ +params <- +list(EVAL = TRUE) + ## ---- echo = FALSE------------------------------------------------------------ -NOT_CRAN <- identical(tolower(Sys.getenv("NOT_CRAN")), "true") knitr::opts_chunk$set( collapse = TRUE, comment = "#>", - purl = NOT_CRAN, - eval = NOT_CRAN + message = FALSE, + warning = FALSE, + eval = if (isTRUE(exists("params"))) params$EVAL else FALSE ) diff --git a/doc/trend_formulas.Rmd b/doc/trend_formulas.Rmd index 8d636d2e..836a5d60 100644 --- a/doc/trend_formulas.Rmd +++ b/doc/trend_formulas.Rmd @@ -9,14 +9,16 @@ vignette: > %\VignetteIndexEntry{State-Space models in mvgam} %\VignetteEngine{knitr::rmarkdown} \usepackage[utf8]{inputenc} +params: + EVAL: !r identical(tolower(Sys.getenv("NOT_CRAN")), "true") --- ```{r, echo = FALSE} -NOT_CRAN <- identical(tolower(Sys.getenv("NOT_CRAN")), "true") knitr::opts_chunk$set( collapse = TRUE, comment = "#>", - purl = NOT_CRAN, - eval = NOT_CRAN + message = FALSE, + warning = FALSE, + eval = if (isTRUE(exists("params"))) params$EVAL else FALSE ) ``` @@ -459,7 +461,7 @@ Auger‐Méthé, Marie, et al. ["A guide to state–space modeling of ecological Heaps, Sarah E. "[Enforcing stationarity through the prior in vector autoregressions.](https://www.tandfonline.com/doi/full/10.1080/10618600.2022.2079648)" *Journal of Computational and Graphical Statistics* 32.1 (2023): 74-83. -Hannaford, Naomi E., et al. "[A sparse Bayesian hierarchical vector autoregressive model for microbial dynamics in a wastewater treatment plant.](https://www.sciencedirect.com/science/article/pii/S0167947322002390)" *Computational Statistics & Data Analysis* 179 (2023): 107659. +Hannaford, Naomi E., et al. "[A sparse Bayesian hierarchical vector autoregressive model for microbial dynamics in a wastewater treatment plant.](https://doi.org/10.1016/j.csda.2022.107659)" *Computational Statistics & Data Analysis* 179 (2023): 107659. Holmes, Elizabeth E., Eric J. Ward, and Wills Kellie. "[MARSS: multivariate autoregressive state-space models for analyzing time-series data.](https://journal.r-project.org/archive/2012/RJ-2012-002/index.html)" *R Journal*. 4.1 (2012): 11. diff --git a/doc/trend_formulas.html b/doc/trend_formulas.html index 15dcaae4..f3c77a5c 100644 --- a/doc/trend_formulas.html +++ b/doc/trend_formulas.html @@ -368,7 +368,7 @@
plankton_data %>%
dplyr::filter(series == 'Diatoms') %>%
@@ -538,36 +533,30 @@ Capturing seasonality
The “global” tensor product smooth function can be quickly
visualized:
On this plot, red indicates below-average linear predictors and white
indicates above-average. We can then plot the deviation smooths for a
few algal groups to see how they vary from the “global” pattern:
These multidimensional smooths have done a good job of capturing the
seasonal variation in our observations:
This basic model gives us confidence that we can capture the seasonal
variation in the observations. But the model has not captured the
remaining temporal dynamics, which is obvious when we inspect Dunn-Smyth
residuals for a few series:
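A sketch of that residual check, assuming the seasonal-only model object is called `notrend_mod` (the name is illustrative):

```r
# Dunn-Smyth residual diagnostics for two representative series; strong
# autocorrelation here points to unmodelled temporal dynamics
plot(notrend_mod, type = 'residuals', series = 1)
plot(notrend_mod, type = 'residuals', series = 3)
```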
summary(var_mod, include_betas = FALSE)
#> GAM observation formula:
#> y ~ 1
-#> <environment: 0x0000020ef28a3068>
+#> <environment: 0x0000024fa27bd008>
#>
#> GAM process formula:
#> ~te(temp, month, k = c(4, 4)) + te(temp, month, k = c(4, 4),
#> by = trend) - 1
-#> <environment: 0x0000020ef28a3068>
+#> <environment: 0x0000024fa27bd008>
#>
#> Family:
#> gaussian
@@ -752,101 +741,98 @@ Inspecting SS models
#>
#> Observation error parameter estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> sigma_obs[1] 0.20 0.25 0.34 1.00 481
-#> sigma_obs[2] 0.25 0.40 0.54 1.05 113
-#> sigma_obs[3] 0.41 0.64 0.81 1.08 84
-#> sigma_obs[4] 0.24 0.37 0.50 1.02 270
-#> sigma_obs[5] 0.31 0.43 0.54 1.01 268
+#> sigma_obs[1] 0.20 0.26 0.35 1.00 420
+#> sigma_obs[2] 0.25 0.40 0.54 1.01 162
+#> sigma_obs[3] 0.43 0.63 0.79 1.02 133
+#> sigma_obs[4] 0.25 0.37 0.50 1.02 275
+#> sigma_obs[5] 0.31 0.43 0.54 1.01 278
#>
#> GAM observation model coefficient (beta) estimates:
#> 2.5% 50% 97.5% Rhat n_eff
#> (Intercept) 0 0 0 NaN NaN
#>
#> Process model VAR parameter estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> A[1,1] -0.072 0.47000 0.830 1.05 151
-#> A[1,2] -0.370 -0.04000 0.200 1.01 448
-#> A[1,3] -0.540 -0.04500 0.360 1.02 232
-#> A[1,4] -0.290 0.03900 0.460 1.02 439
-#> A[1,5] -0.072 0.15000 0.560 1.03 183
-#> A[2,1] -0.150 0.01800 0.210 1.00 677
-#> A[2,2] 0.620 0.79000 0.920 1.01 412
-#> A[2,3] -0.400 -0.14000 0.042 1.01 295
-#> A[2,4] -0.045 0.12000 0.350 1.01 344
-#> A[2,5] -0.057 0.05800 0.200 1.01 641
-#> A[3,1] -0.480 0.00015 0.390 1.03 71
-#> A[3,2] -0.500 -0.22000 0.023 1.02 193
-#> A[3,3] 0.066 0.41000 0.710 1.01 259
-#> A[3,4] -0.020 0.24000 0.630 1.02 227
-#> A[3,5] -0.051 0.14000 0.410 1.02 195
-#> A[4,1] -0.220 0.05100 0.290 1.03 141
-#> A[4,2] -0.100 0.05300 0.260 1.01 402
-#> A[4,3] -0.420 -0.12000 0.120 1.02 363
-#> A[4,4] 0.480 0.74000 0.940 1.01 322
-#> A[4,5] -0.200 -0.03100 0.120 1.01 693
-#> A[5,1] -0.360 0.06400 0.430 1.03 99
-#> A[5,2] -0.430 -0.13000 0.074 1.01 230
-#> A[5,3] -0.610 -0.20000 0.110 1.02 232
-#> A[5,4] -0.061 0.19000 0.620 1.01 250
-#> A[5,5] 0.530 0.75000 0.970 1.02 273
+#> 2.5% 50% 97.5% Rhat n_eff
+#> A[1,1] 0.012 0.5000 0.830 1.01 138
+#> A[1,2] -0.390 -0.0340 0.200 1.02 356
+#> A[1,3] -0.500 -0.0340 0.360 1.02 262
+#> A[1,4] -0.280 0.0240 0.400 1.02 409
+#> A[1,5] -0.090 0.1400 0.550 1.01 280
+#> A[2,1] -0.140 0.0120 0.180 1.00 773
+#> A[2,2] 0.620 0.7900 0.920 1.01 331
+#> A[2,3] -0.410 -0.1200 0.048 1.00 394
+#> A[2,4] -0.047 0.1100 0.340 1.01 321
+#> A[2,5] -0.050 0.0590 0.190 1.00 681
+#> A[3,1] -0.280 0.0098 0.300 1.03 202
+#> A[3,2] -0.530 -0.1900 0.036 1.02 154
+#> A[3,3] 0.071 0.4300 0.740 1.02 224
+#> A[3,4] -0.033 0.2100 0.650 1.02 179
+#> A[3,5] -0.059 0.1200 0.400 1.03 220
+#> A[4,1] -0.140 0.0450 0.270 1.00 415
+#> A[4,2] -0.110 0.0500 0.260 1.01 362
+#> A[4,3] -0.450 -0.1100 0.130 1.02 229
+#> A[4,4] 0.500 0.7400 0.940 1.01 307
+#> A[4,5] -0.200 -0.0300 0.130 1.00 747
+#> A[5,1] -0.200 0.0620 0.420 1.01 338
+#> A[5,2] -0.420 -0.1100 0.086 1.03 154
+#> A[5,3] -0.650 -0.1700 0.130 1.02 180
+#> A[5,4] -0.067 0.1800 0.620 1.03 171
+#> A[5,5] 0.540 0.7400 0.940 1.01 300
#>
#> Process error parameter estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> Sigma[1,1] 0.037 0.28 0.65 1.11 64
+#> Sigma[1,1] 0.067 0.29 0.64 1.02 83
#> Sigma[1,2] 0.000 0.00 0.00 NaN NaN
#> Sigma[1,3] 0.000 0.00 0.00 NaN NaN
#> Sigma[1,4] 0.000 0.00 0.00 NaN NaN
#> Sigma[1,5] 0.000 0.00 0.00 NaN NaN
#> Sigma[2,1] 0.000 0.00 0.00 NaN NaN
-#> Sigma[2,2] 0.067 0.11 0.19 1.00 505
+#> Sigma[2,2] 0.065 0.11 0.18 1.01 367
#> Sigma[2,3] 0.000 0.00 0.00 NaN NaN
#> Sigma[2,4] 0.000 0.00 0.00 NaN NaN
#> Sigma[2,5] 0.000 0.00 0.00 NaN NaN
#> Sigma[3,1] 0.000 0.00 0.00 NaN NaN
#> Sigma[3,2] 0.000 0.00 0.00 NaN NaN
-#> Sigma[3,3] 0.062 0.15 0.29 1.05 93
+#> Sigma[3,3] 0.055 0.16 0.31 1.01 155
#> Sigma[3,4] 0.000 0.00 0.00 NaN NaN
#> Sigma[3,5] 0.000 0.00 0.00 NaN NaN
#> Sigma[4,1] 0.000 0.00 0.00 NaN NaN
#> Sigma[4,2] 0.000 0.00 0.00 NaN NaN
#> Sigma[4,3] 0.000 0.00 0.00 NaN NaN
-#> Sigma[4,4] 0.052 0.13 0.26 1.01 201
+#> Sigma[4,4] 0.053 0.13 0.26 1.01 200
#> Sigma[4,5] 0.000 0.00 0.00 NaN NaN
#> Sigma[5,1] 0.000 0.00 0.00 NaN NaN
#> Sigma[5,2] 0.000 0.00 0.00 NaN NaN
#> Sigma[5,3] 0.000 0.00 0.00 NaN NaN
#> Sigma[5,4] 0.000 0.00 0.00 NaN NaN
-#> Sigma[5,5] 0.110 0.21 0.35 1.03 210
+#> Sigma[5,5] 0.100 0.21 0.35 1.01 261
#>
#> Approximate significance of GAM process smooths:
-#> edf Ref.df Chi.sq p-value
-#> te(temp,month) 2.67 15 38.52 0.405
-#> te(temp,month):seriestrend1 1.77 15 1.73 1.000
-#> te(temp,month):seriestrend2 2.18 15 4.07 0.995
-#> te(temp,month):seriestrend3 4.07 15 51.07 0.059 .
-#> te(temp,month):seriestrend4 3.72 15 6.98 0.825
-#> te(temp,month):seriestrend5 1.85 15 5.15 0.998
-#> ---
-#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
-#>
-#> Stan MCMC diagnostics:
-#> n_eff / iter looks reasonable for all parameters
-#> Rhats above 1.05 found for 11 parameters
-#> *Diagnose further to investigate why the chains have not mixed
-#> 0 of 2000 iterations ended with a divergence (0%)
-#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
-#> E-FMI indicated no pathological behavior
-#>
-#> Samples were drawn using NUTS(diag_e) at Wed Sep 04 9:30:41 AM 2024.
-#> For each parameter, n_eff is a crude measure of effective sample size,
-#> and Rhat is the potential scale reduction factor on split MCMC chains
-#> (at convergence, Rhat = 1)
The convergence of this model isn’t fabulous (more on this in a
moment). But we can again plot the smooth functions, which this time
operate on the process model. We can see the same plot using
trend_effects = TRUE
in the plotting functions:
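A sketch of that call for the VAR model object used below:

```r
# Process-model smooth functions; trend_effects = TRUE plots the latent
# process smooths rather than the observation-model terms
plot(var_mod, type = 'smooths', trend_effects = TRUE)
```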
The VAR matrix is of particular interest here, as it captures lagged
dependencies and cross-dependencies in the latent process model.
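One way to pull these coefficients out is with the `bayesplot`-backed `mcmc_plot()` helper; a sketch (the regex simply collects every `A[i,j]` parameter):

```r
# Posterior distributions of all VAR coefficients A[i,j]
mcmc_plot(var_mod, variable = 'A\\[', regex = TRUE, type = 'hist')
```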
Unfortunately bayesplot
doesn’t know this is a matrix of
@@ -862,7 +848,7 @@
There is a lot happening in this matrix. Each cell captures the lagged effect of the process in the column on the process in the row in the next timestep. So for example, the effect in cell [1,3] shows how an @@ -883,12 +869,12 @@
The observation error estimates \((\sigma_{obs})\) represent how much the model thinks we might miss the true count when we take our imperfect measurements:
This symmetric matrix tells us there is support for correlated process errors, as several of the off-diagonal entries are strongly non-zero. But it is easier to interpret these estimates if we convert @@ -943,11 +929,11 @@
Plot the expected responses of the remaining series to a positive shock for series 3 (Greens)
- +This series of plots makes it clear that some of the other series would be expected to show both instantaneous responses to a shock for the Greens (due to their correlated process errors) as well as delayed @@ -1000,7 +986,7 @@
And we can also compute the energy score for out of sample forecasts to get a sense of which model provides forecasts that are better calibrated:
@@ -1014,7 +1000,7 @@The models tend to provide similar forecasts, though the correlated error model does slightly better overall. We would probably need to use a more extensive rolling forecast evaluation exercise if we felt like we @@ -1038,10 +1024,10 @@
Hannaford, Naomi E., et al. “A -sparse Bayesian hierarchical vector autoregressive model for microbial -dynamics in a wastewater treatment plant.” Computational -Statistics & Data Analysis 179 (2023): 107659.
+Hannaford, Naomi E., et al. “A sparse Bayesian +hierarchical vector autoregressive model for microbial dynamics in a +wastewater treatment plant.” Computational Statistics & Data +Analysis 179 (2023): 107659.
Holmes, Elizabeth E., Eric J. Ward, and Wills Kellie. “MARSS: multivariate autoregressive state-space models for analyzing time-series data.” R Journal. 4.1 (2012): 11.
diff --git a/docs/news/index.html b/docs/news/index.html index 16253ebf..4ad2866a 100644 --- a/docs/news/index.html +++ b/docs/news/index.html @@ -57,7 +57,14 @@stability.mvgam
method to compute stability metrics from models fit with Vector Autoregressive dynamics (#21)CRAN release: 2024-09-04
trend_formula
is supplied. This breaks the assumption that the process has to be zero-centred, adding more modelling flexibility but also potentially inducing nonidentifiabilities with respect to any observation model intercepts. Thoughtful priors are a must for these modelssimdat <- sim_mvgam(n_series = 4, T = 24, prop_missing = 0.2)
head(simdat$data_train, 16)
#> y season year series time
-#> 1 NA 1 1 series_1 1
-#> 2 0 1 1 series_2 1
-#> 3 0 1 1 series_3 1
+#> 1 1 1 1 series_1 1
+#> 2 2 1 1 series_2 1
+#> 3 1 1 1 series_3 1
#> 4 0 1 1 series_4 1
#> 5 0 2 1 series_1 2
-#> 6 1 2 1 series_2 2
+#> 6 5 2 1 series_2 2
#> 7 1 2 1 series_3 2
-#> 8 1 2 1 series_4 2
+#> 8 0 2 1 series_4 2
#> 9 0 3 1 series_1 3
-#> 10 NA 3 1 series_2 3
+#> 10 0 3 1 series_2 3
#> 11 0 3 1 series_3 3
-#> 12 NA 3 1 series_4 3
+#> 12 0 3 1 series_4 3
#> 13 1 4 1 series_1 4
-#> 14 0 4 1 series_2 4
+#> 14 NA 4 1 series_2 4
#> 15 0 4 1 series_3 4
-#> 16 2 4 1 series_4 4
series
as a factor
variableNotice how we have four different time series in these simulated @@ -447,25 +447,29 @@
summary(gam(y ~ series + s(time, by = series),
+#> Deviance Residuals:
+#> Min 1Q Median 3Q Max
+#> -2.1579 -1.1802 -0.5262 0.7172 2.3509
+#>
+#> Coefficients:
+#> Estimate Std. Error z value Pr(>|z|)
+#> (Intercept) -0.83568 0.41529 -2.012 0.04419 *
+#> seriesseries_2 1.13007 0.40629 2.781 0.00541 **
+#> seriesseries_3 1.29822 0.39666 3.273 0.00106 **
+#> seriesseries_4 0.32518 0.46472 0.700 0.48410
+#> time 0.02126 0.02161 0.984 0.32530
+#> ---
+#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
+#>
+#> (Dispersion parameter for poisson family taken to be 1)
+#>
+#> Null deviance: 111.421 on 60 degrees of freedom
+#> Residual deviance: 91.775 on 56 degrees of freedom
+#> (11 observations deleted due to missingness)
+#> AIC: 189.07
+#>
+#> Number of Fisher Scoring iterations: 5
summary(mgcv::gam(y ~ series + s(time, by = series),
data = simdat$data_train,
family = poisson()))
#>
@@ -476,23 +480,23 @@ A single outcome variable
#> y ~ series + s(time, by = series)
#>
#> Parametric coefficients:
-#> Estimate Std. Error z value Pr(>|z|)
-#> (Intercept) -4.293 5.500 -0.781 0.435
-#> seriesseries_2 3.001 5.533 0.542 0.588
-#> seriesseries_3 3.193 5.518 0.579 0.563
-#> seriesseries_4 4.795 5.505 0.871 0.384
-#>
-#> Approximate significance of smooth terms:
-#> edf Ref.df Chi.sq p-value
-#> s(time):seriesseries_1 7.737 8.181 6.541 0.5585
-#> s(time):seriesseries_2 3.444 4.213 4.739 0.3415
-#> s(time):seriesseries_3 1.000 1.000 0.006 0.9365
-#> s(time):seriesseries_4 3.958 4.832 11.636 0.0363 *
-#> ---
-#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
+#> Estimate Std. Error z value Pr(>|z|)
+#> (Intercept) -0.63089 0.35691 -1.768 0.07712 .
+#> seriesseries_2 -0.84795 1.40241 -0.605 0.54542
+#> seriesseries_3 1.25102 0.40263 3.107 0.00189 **
+#> seriesseries_4 -0.08734 0.55278 -0.158 0.87446
+#> ---
+#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
+#>
+#> Approximate significance of smooth terms:
+#> edf Ref.df Chi.sq p-value
+#> s(time):seriesseries_1 1.173 1.326 0.311 0.819
+#> s(time):seriesseries_2 8.519 8.900 8.394 0.493
+#> s(time):seriesseries_3 1.637 2.039 3.001 0.222
+#> s(time):seriesseries_4 2.614 3.300 5.397 0.171
#>
-#> R-sq.(adj) = 0.605 Deviance explained = 66.2%
-#> UBRE = 0.4193 Scale est. = 1 n = 57
Depending on the observation families you plan to use when building models, there may be some restrictions that need to be satisfied within the outcome variable. For example, a Beta regression can only handle @@ -514,33 +518,31 @@
A call to gam
using the mgcv
package leads
to a model that actually fits (though it does give an unhelpful warning
message):
gam(outcome ~ time,
+mgcv::gam(outcome ~ time,
family = betar(),
data = gauss_dat)
-#> Warning in family$saturated.ll(y, prior.weights, theta): saturated likelihood
-#> may be inaccurate
-#>
-#> Family: Beta regression(0.44)
-#> Link function: logit
-#>
-#> Formula:
-#> outcome ~ time
-#> Total model degrees of freedom 2
-#>
-#> REML score: -127.2706
+#>
+#> Family: Beta regression(0.437)
+#> Link function: logit
+#>
+#> Formula:
+#> outcome ~ time
+#> Total model degrees of freedom 2
+#>
+#> REML score: -123.7115
But the same call to mvgam
gives us something more
useful:
mvgam(outcome ~ time,
@@ -627,15 +629,15 @@ Checking data with get_mvgam_priors
series = factor('series_1'),
outcome = rnorm(8))
bad_times
-#> time series outcome
-#> 1 1 series_1 1.4681068
-#> 2 3 series_1 0.1796627
-#> 3 5 series_1 -0.4204020
-#> 4 7 series_1 -1.0729359
-#> 5 9 series_1 -0.1738239
-#> 6 11 series_1 -0.5463268
-#> 7 13 series_1 0.8275198
-#> 8 15 series_1 2.2085085
Next we call get_mvgam_priors
by simply specifying an
intercept-only model, which is enough to trigger all the checks:
get_mvgam_priors(outcome ~ 1,
@@ -652,24 +654,23 @@ Checking data with get_mvgam_priors
series = factor(unique(bad_times$series),
levels = levels(bad_times$series)))) %>%
dplyr::arrange(time) -> good_times
-#> Joining with `by = join_by(time, series)`
-good_times
-#> time series outcome
-#> 1 1 series_1 1.4681068
-#> 2 2 series_1 NA
-#> 3 3 series_1 0.1796627
-#> 4 4 series_1 NA
-#> 5 5 series_1 -0.4204020
-#> 6 6 series_1 NA
-#> 7 7 series_1 -1.0729359
-#> 8 8 series_1 NA
-#> 9 9 series_1 -0.1738239
-#> 10 10 series_1 NA
-#> 11 11 series_1 -0.5463268
-#> 12 12 series_1 NA
-#> 13 13 series_1 0.8275198
-#> 14 14 series_1 NA
-#> 15 15 series_1 2.2085085
Now the call to get_mvgam_priors
, using our filled in
data, should work:
get_mvgam_priors(outcome ~ 1,
@@ -678,9 +679,9 @@ Checking data with get_mvgam_priors
#> param_name param_length param_info
#> 1 (Intercept) 1 (Intercept)
#> 2 vector<lower=0>[n_series] sigma_obs; 1 observation error sd
-#> prior example_change
-#> 1 (Intercept) ~ student_t(3, 0, 2.5); (Intercept) ~ normal(0, 1);
-#> 2 sigma_obs ~ student_t(3, 0, 2.5); sigma_obs ~ normal(-0.22, 0.33);
+#> prior example_change
+#> 1 (Intercept) ~ student_t(3, -0.2, 2.5); (Intercept) ~ normal(0, 1);
+#> 2 sigma_obs ~ student_t(3, 0, 2.5); sigma_obs ~ normal(0.68, 0.48);
#> new_lowerbound new_upperbound
#> 1 NA NA
#> 2 NA NA
get_mvgam_priors
NA
sget_mvgam_priors(outcome ~ cov,
data = miss_dat,
family = gaussian())
-#> Error: Missing values found in data predictors:
-#> Error in na.fail.default(structure(list(outcome = c(0.774368589907313, : missing values in object
Just like with the Or we can look more closely at the distribution for the first time
series: If you have split your data into training and testing folds (i.e. for
forecast evaluation), you can include the test data in your plots: Create the And we can plot the conditional effects of the splines (on the link
scale) to see that they are estimated to be highly nonlinearmgcv
package, mvgam
can
also accept data as a list
object. This is useful if you
want to set up linear
@@ -778,8 +789,30 @@ Covariates with no
NA
sget_mvgam_priors(outcome ~ cov,
data = miss_dat,
family = gaussian())
-#> Error: Missing values found in data predictors:
-#> Error in na.fail.default(structure(list(outcome = c(-0.708736388395862, : missing values in object
Plotting with
-
+
plot_mvgam_series
Example with NEON tick data
@@ -896,9 +929,7 @@ Example with NEON tick data
# match up, in case you need the siteID for anything else later on
dplyr::left_join(all_neon_tick_data %>%
dplyr::select(siteID, plotID) %>%
- dplyr::distinct()) -> model_dat
-#> Joining with `by = join_by(Year, epiWeek, plotID)`
-#> Joining with `by = join_by(plotID)`series
variable needed for mvgam
modelling:model_dat %>%
@@ -965,12 +996,12 @@
Example with NEON tick data
#> $ S6 : num [1:8, 1:8] 1.037 -0.416 0.419 0.117 0.188 ...
#> $ S7 : num [1:8, 1:8] 1.037 -0.416 0.419 0.117 0.188 ...
#> $ S8 : num [1:8, 1:8] 1.037 -0.416 0.419 0.117 0.188 ...
-#> $ p_coefs : Named num 0.806
+#> $ p_coefs : Named num 0
#> ..- attr(*, "names")= chr "(Intercept)"
-#> $ p_taus : num 301
+#> $ p_taus : num 0.88
#> $ ytimes : int [1:416, 1:8] 1 9 17 25 33 41 49 57 65 73 ...
#> $ n_series : int 8
-#> $ sp : Named num [1:9] 4.68 59.57 1.11 2.73 6.5 ...
+#> $ sp : Named num [1:9] 0.368 0.368 0.368 0.368 0.368 ...
#> ..- attr(*, "names")= chr [1:9] "s(epiWeek):seriesBLAN_005" "s(epiWeek):seriesBLAN_012" "s(epiWeek):seriesSCBI_002" "s(epiWeek):seriesSCBI_013" ...
#> $ y_observed : num [1:416, 1:8] 0 0 0 0 0 0 0 0 0 0 ...
#> $ total_obs : int 3328
@@ -1019,7 +1050,7 @@ Example with NEON tick data
#> vector[1] mu_raw;
#>
#> // latent trend AR1 terms
-#> vector<lower=-1.5, upper=1.5>[n_series] ar1;
+#> vector<lower=-1, upper=1>[n_series] ar1;
#>
#> // latent trend variance parameters
#> vector<lower=0>[n_series] sigma;
diff --git a/inst/doc/forecast_evaluation.R b/inst/doc/forecast_evaluation.R
index 8412f550..16985002 100644
--- a/inst/doc/forecast_evaluation.R
+++ b/inst/doc/forecast_evaluation.R
@@ -1,10 +1,13 @@
-## ----echo = FALSE-------------------------------------------------------------
-NOT_CRAN <- identical(tolower(Sys.getenv("NOT_CRAN")), "true")
+params <-
+list(EVAL = TRUE)
+
+## ---- echo = FALSE------------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>",
- purl = NOT_CRAN,
- eval = NOT_CRAN
+ message = FALSE,
+ warning = FALSE,
+ eval = if (isTRUE(exists("params"))) params$EVAL else FALSE
)
@@ -35,12 +38,12 @@ simdat <- sim_mvgam(T = 100,
str(simdat)
-## ----fig.alt = "Plotting time series features for GAM models in mvgam"--------
+## ---- fig.alt = "Plotting time series features for GAM models in mvgam"-------
plot_mvgam_series(data = simdat$data_train,
series = 'all')
-## ----fig.alt = "Plotting time series features for GAM models in mvgam"--------
+## ---- fig.alt = "Plotting time series features for GAM models in mvgam"-------
plot_mvgam_series(data = simdat$data_train,
newdata = simdat$data_test,
series = 1)
@@ -67,7 +70,7 @@ mod1 <- mvgam(y ~ s(season, bs = 'cc', k = 8) +
summary(mod1, include_betas = FALSE)
-## ----fig.alt = "Plotting GAM smooth functions using mvgam"--------------------
+## ---- fig.alt = "Plotting GAM smooth functions using mvgam"-------------------
conditional_effects(mod1, type = 'link')
@@ -96,15 +99,15 @@ mod2 <- mvgam(y ~ s(season, bs = 'cc', k = 8) +
summary(mod2, include_betas = FALSE)
-## ----fig.alt = "Summarising latent Gaussian Process parameters in mvgam"------
+## ---- fig.alt = "Summarising latent Gaussian Process parameters in mvgam"-----
mcmc_plot(mod2, variable = c('alpha_gp'), regex = TRUE, type = 'areas')
-## ----fig.alt = "Summarising latent Gaussian Process parameters in mvgam"------
+## ---- fig.alt = "Summarising latent Gaussian Process parameters in mvgam"-----
mcmc_plot(mod2, variable = c('rho_gp'), regex = TRUE, type = 'areas')
-## ----fig.alt = "Plotting latent Gaussian Process effects in mvgam and marginaleffects"----
+## ---- fig.alt = "Plotting latent Gaussian Process effects in mvgam and marginaleffects"----
conditional_effects(mod2, type = 'link')
diff --git a/inst/doc/forecast_evaluation.Rmd b/inst/doc/forecast_evaluation.Rmd
index 8979593d..90bc3105 100644
--- a/inst/doc/forecast_evaluation.Rmd
+++ b/inst/doc/forecast_evaluation.Rmd
@@ -9,14 +9,16 @@ vignette: >
%\VignetteIndexEntry{Forecasting and forecast evaluation in mvgam}
%\VignetteEngine{knitr::rmarkdown}
\usepackage[utf8]{inputenc}
+params:
+ EVAL: !r identical(tolower(Sys.getenv("NOT_CRAN")), "true")
---
```{r, echo = FALSE}
-NOT_CRAN <- identical(tolower(Sys.getenv("NOT_CRAN")), "true")
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>",
- purl = NOT_CRAN,
- eval = NOT_CRAN
+ message = FALSE,
+ warning = FALSE,
+ eval = if (isTRUE(exists("params"))) params$EVAL else FALSE
)
```
diff --git a/inst/doc/forecast_evaluation.html b/inst/doc/forecast_evaluation.html
index ae761ae5..2e518447 100644
--- a/inst/doc/forecast_evaluation.html
+++ b/inst/doc/forecast_evaluation.html
@@ -12,7 +12,7 @@
-
+
Forecasting and forecast evaluation in
mvgam
Nicholas J Clark
-2024-07-01
+2024-09-04
Modelling dynamics with splines
#> GAM formula:
#> y ~ s(season, bs = "cc", k = 8) + s(time, by = series, bs = "cr",
#> k = 20)
-#> <environment: 0x000001b67206d110>
+#> <environment: 0x000002c8194b2058>
#>
#> Family:
#> poisson
@@ -468,15 +468,15 @@ Modelling dynamics with splines
#>
#>
#> GAM coefficient (beta) estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> (Intercept) -0.41 -0.21 -0.052 1 855
+#> 2.5% 50% 97.5% Rhat n_eff
+#> (Intercept) -0.4 -0.21 -0.045 1 1035
#>
#> Approximate significance of GAM smooths:
-#> edf Ref.df Chi.sq p-value
-#> s(season) 3.82 6 19.6 0.0037 **
-#> s(time):seriesseries_1 7.25 19 13.2 0.7969
-#> s(time):seriesseries_2 9.81 19 173.3 0.0019 **
-#> s(time):seriesseries_3 6.05 19 19.4 0.7931
+#> edf Ref.df Chi.sq p-value
+#> s(season) 3.28 6 20.4 0.0215 *
+#> s(time):seriesseries_1 6.95 19 14.0 0.8153
+#> s(time):seriesseries_2 10.86 19 156.5 0.0044 **
+#> s(time):seriesseries_3 6.79 19 18.1 0.5529
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
@@ -487,14 +487,14 @@ Modelling dynamics with splines
#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
#> E-FMI indicated no pathological behavior
#>
-#> Samples were drawn using NUTS(diag_e) at Mon Jul 01 7:26:51 AM 2024.
+#> Samples were drawn using NUTS(diag_e) at Wed Sep 04 11:27:30 AM 2024.
#> For each parameter, n_eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor on split MCMC chains
#> (at convergence, Rhat = 1)
Modelling dynamics with GPs
@@ -517,7 +517,7 @@ Modelling dynamics with GPs
#> GAM formula:
#> y ~ s(season, bs = "cc", k = 8) + gp(time, by = series, c = 5/4,
#> k = 20, scale = FALSE)
-#> <environment: 0x000001b67206d110>
+#> <environment: 0x000002c8194b2058>
#>
#> Family:
#> poisson
@@ -542,32 +542,32 @@ Modelling dynamics with GPs
#>
#> GAM coefficient (beta) estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> (Intercept) -1.1 -0.51 0.34 1 694
+#> (Intercept) -1.1 -0.51 0.25 1 731
#>
#> GAM gp term marginal deviation (alpha) and length scale (rho) estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> alpha_gp(time):seriesseries_1 0.21 0.78 2.2 1 819
-#> alpha_gp(time):seriesseries_2 0.73 1.30 2.9 1 761
-#> alpha_gp(time):seriesseries_3 0.46 1.10 2.8 1 1262
-#> rho_gp(time):seriesseries_1 1.30 5.40 22.0 1 682
-#> rho_gp(time):seriesseries_2 2.10 10.00 17.0 1 450
-#> rho_gp(time):seriesseries_3 1.50 8.80 24.0 1 769
+#> alpha_gp(time):seriesseries_1 0.19 0.75 2 1.00 932
+#> alpha_gp(time):seriesseries_2 0.76 1.40 3 1.00 845
+#> alpha_gp(time):seriesseries_3 0.48 1.10 3 1.00 968
+#> rho_gp(time):seriesseries_1 1.10 4.90 25 1.01 678
+#> rho_gp(time):seriesseries_2 2.20 10.00 17 1.00 645
+#> rho_gp(time):seriesseries_3 1.50 9.20 24 1.00 908
#>
#> Approximate significance of GAM smooths:
-#> edf Ref.df Chi.sq p-value
-#> s(season) 3.36 6 21.1 0.0093 **
+#> edf Ref.df Chi.sq p-value
+#> s(season) 3.4 6 21.1 0.011 *
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Stan MCMC diagnostics:
#> n_eff / iter looks reasonable for all parameters
#> Rhat looks reasonable for all parameters
-#> 1 of 2000 iterations ended with a divergence (0.05%)
+#> 7 of 2000 iterations ended with a divergence (0.35%)
#> *Try running with larger adapt_delta to remove the divergences
#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
#> E-FMI indicated no pathological behavior
#>
-#> Samples were drawn using NUTS(diag_e) at Mon Jul 01 7:27:28 AM 2024.
+#> Samples were drawn using NUTS(diag_e) at Wed Sep 04 11:28:27 AM 2024.
#> For each parameter, n_eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor on split MCMC chains
#> (at convergence, Rhat = 1)
Modelling dynamics with GPs
the marginal deviation (\(\alpha\))
parameters:
And now the length scale (\(\rho\)) parameters:
We can again plot the nonlinear effects:
The estimates for the temporal trends are fairly similar for the two models, but below we will see if they produce similar forecasts
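Before plotting, the forecast distributions can be extracted with calls along these lines (a sketch; it assumes the test observations are held in simdat$data_test, as elsewhere in this vignette):
# extract forecast distributions for each model (sketch)
fc_mod1 <- forecast(mod1, newdata = simdat$data_test)
fc_mod2 <- forecast(mod2, newdata = simdat$data_test)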
@@ -614,7 +614,7 @@ forecast() function
str(fc_mod1)
#> List of 16
#> $ call :Class 'formula' language y ~ s(season, bs = "cc", k = 8) + s(time, by = series, bs = "cr", k = 20)
-#> .. ..- attr(*, ".Environment")=<environment: 0x000001b67206d110>
+#> .. ..- attr(*, ".Environment")=<environment: 0x000002c8194b2058>
#> $ trend_call : NULL
#> $ family : chr "poisson"
#> $ family_pars : NULL
@@ -635,42 +635,34 @@ Forecasting with the forecast()
function
#> ..$ series_3: int [1:25] 1 0 0 1 0 0 1 0 1 0 ...
#> $ test_times : int [1:25] 76 77 78 79 80 81 82 83 84 85 ...
#> $ hindcasts :List of 3
-#> ..$ series_1: num [1:2000, 1:75] 1 1 0 0 0 0 0 0 0 0 ...
+#> ..$ series_1: num [1:2000, 1:75] 0 0 0 1 0 1 0 0 0 0 ...
#> .. ..- attr(*, "dimnames")=List of 2
#> .. .. ..$ : NULL
#> .. .. ..$ : chr [1:75] "ypred[1,1]" "ypred[2,1]" "ypred[3,1]" "ypred[4,1]" ...
-#> ..$ series_2: num [1:2000, 1:75] 0 0 0 0 0 0 0 0 0 0 ...
+#> ..$ series_2: num [1:2000, 1:75] 0 0 0 0 0 1 0 0 0 0 ...
#> .. ..- attr(*, "dimnames")=List of 2
#> .. .. ..$ : NULL
#> .. .. ..$ : chr [1:75] "ypred[1,2]" "ypred[2,2]" "ypred[3,2]" "ypred[4,2]" ...
-#> ..$ series_3: num [1:2000, 1:75] 1 4 0 4 4 1 1 6 3 1 ...
+#> ..$ series_3: num [1:2000, 1:75] 3 2 2 0 1 3 2 3 1 3 ...
#> .. ..- attr(*, "dimnames")=List of 2
#> .. .. ..$ : NULL
#> .. .. ..$ : chr [1:75] "ypred[1,3]" "ypred[2,3]" "ypred[3,3]" "ypred[4,3]" ...
#> $ forecasts :List of 3
-#> ..$ series_1: num [1:2000, 1:25] 0 1 0 0 0 1 1 0 0 3 ...
-#> ..$ series_2: num [1:2000, 1:25] 0 2 0 1 0 2 1 0 1 0 ...
-#> ..$ series_3: num [1:2000, 1:25] 1 8 4 2 2 1 2 0 2 0 ...
+#> ..$ series_1: num [1:2000, 1:25] 0 1 0 0 0 1 1 1 1 1 ...
+#> ..$ series_2: num [1:2000, 1:25] 0 0 1 1 0 0 0 1 0 0 ...
+#> ..$ series_3: num [1:2000, 1:25] 0 0 1 1 0 3 1 0 3 2 ...
#> - attr(*, "class")= chr "mvgam_forecast"
We can plot the forecasts for some series from each model using the
S3 plot
method for objects of this class:
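A sketch of those plotting calls, assuming the forecast objects are named fc_mod1 and fc_mod2 and that the series argument selects which series to display:
# plot forecasts for the first series from each model (sketch)
plot(fc_mod1, series = 1)
plot(fc_mod2, series = 1)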
Clearly the two models do not produce equivalent forecasts. We will come back to scoring these forecasts in a moment.
@@ -698,10 +690,8 @@ newdata
in mvgam()
The forecasts will be nearly identical to those calculated previously:
The returned list contains a data.frame
for each series
in the data that shows the CRPS score for each evaluation in the testing
data, along with some other useful information about the fit of the
@@ -777,31 +767,31 @@
crps_mod1 <- score(fc_mod1, score = 'crps', interval_width = 0.6)
crps_mod1$series_1
#> score in_interval interval_width eval_horizon score_type
-#> 1 0.18582425 1 0.6 1 crps
-#> 2 0.12933350 1 0.6 2 crps
-#> 3 1.37181050 0 0.6 3 crps
+#> 1 0.17965150 1 0.6 1 crps
+#> 2 0.13411725 1 0.6 2 crps
+#> 3 1.36749550 0 0.6 3 crps
#> 4 NA NA 0.6 4 crps
-#> 5 0.03698600 1 0.6 5 crps
-#> 6 1.53997900 0 0.6 6 crps
-#> 7 1.50467675 0 0.6 7 crps
-#> 8 0.63460725 1 0.6 8 crps
-#> 9 0.61682725 1 0.6 9 crps
-#> 10 0.62428875 1 0.6 10 crps
-#> 11 1.33824700 0 0.6 11 crps
-#> 12 2.06378300 0 0.6 12 crps
-#> 13 0.59247200 1 0.6 13 crps
-#> 14 0.13560025 1 0.6 14 crps
-#> 15 0.66512975 1 0.6 15 crps
-#> 16 0.08238525 1 0.6 16 crps
-#> 17 0.08152900 1 0.6 17 crps
-#> 18 0.09446425 1 0.6 18 crps
-#> 19 0.12084700 1 0.6 19 crps
+#> 5 0.03860225 1 0.6 5 crps
+#> 6 1.55334350 0 0.6 6 crps
+#> 7 1.49198325 0 0.6 7 crps
+#> 8 0.64088650 1 0.6 8 crps
+#> 9 0.61613650 1 0.6 9 crps
+#> 10 0.60528025 1 0.6 10 crps
+#> 11 1.30624075 0 0.6 11 crps
+#> 12 2.06809025 0 0.6 12 crps
+#> 13 0.61887550 1 0.6 13 crps
+#> 14 0.13920225 1 0.6 14 crps
+#> 15 0.67819725 1 0.6 15 crps
+#> 16 0.07817500 1 0.6 16 crps
+#> 17 0.07567500 1 0.6 17 crps
+#> 18 0.09510025 1 0.6 18 crps
+#> 19 0.12604375 1 0.6 19 crps
#> 20 NA NA 0.6 20 crps
-#> 21 0.21286925 1 0.6 21 crps
-#> 22 0.85799700 1 0.6 22 crps
+#> 21 0.20347825 1 0.6 21 crps
+#> 22 0.82202975 1 0.6 22 crps
#> 23 NA NA 0.6 23 crps
-#> 24 1.14954750 1 0.6 24 crps
-#> 25 0.85131425 1 0.6 25 crps
We can also compare forecasts against out of sample observations using the Expected Log Predictive Density (ELPD; also known as the log score). The ELPD is a strictly proper scoring rule that can be
@@ -812,31 +802,31 @@
link_mod1 <- forecast(mod1, newdata = simdat$data_test, type = 'link')
score(link_mod1, score = 'elpd')$series_1
#> score eval_horizon score_type
-#> 1 -0.5343784 1 elpd
-#> 2 -0.4326190 2 elpd
-#> 3 -2.9699450 3 elpd
+#> 1 -0.5285206 1 elpd
+#> 2 -0.4286994 2 elpd
+#> 3 -2.9660940 3 elpd
#> 4 NA 4 elpd
-#> 5 -0.1998425 5 elpd
-#> 6 -3.3976729 6 elpd
-#> 7 -3.2989297 7 elpd
-#> 8 -2.0490633 8 elpd
-#> 9 -2.0690163 9 elpd
-#> 10 -2.0822051 10 elpd
-#> 11 -3.1101639 11 elpd
-#> 12 -3.7240924 12 elpd
-#> 13 -2.1578701 13 elpd
-#> 14 -0.2899481 14 elpd
-#> 15 -2.3811862 15 elpd
-#> 16 -0.2085375 16 elpd
-#> 17 -0.1960501 17 elpd
-#> 18 -0.2036978 18 elpd
-#> 19 -0.2154374 19 elpd
+#> 5 -0.1988847 5 elpd
+#> 6 -3.3821055 6 elpd
+#> 7 -3.2797236 7 elpd
+#> 8 -2.0571076 8 elpd
+#> 9 -2.0794559 9 elpd
+#> 10 -2.0882202 10 elpd
+#> 11 -3.0870256 11 elpd
+#> 12 -3.7065927 12 elpd
+#> 13 -2.1601960 13 elpd
+#> 14 -0.2931143 14 elpd
+#> 15 -2.3694878 15 elpd
+#> 16 -0.2110566 16 elpd
+#> 17 -0.1986209 17 elpd
+#> 18 -0.2069720 18 elpd
+#> 19 -0.2193413 19 elpd
#> 20 NA 20 elpd
-#> 21 -0.2341597 21 elpd
-#> 22 -2.6552948 22 elpd
+#> 21 -0.2379762 21 elpd
+#> 22 -2.6261457 22 elpd
#> 23 NA 23 elpd
-#> 24 -2.6652717 24 elpd
-#> 25 -0.2759126 25 elpd
Finally, when we have multiple time series it may also make sense to
use a multivariate proper scoring rule. mvgam
offers two
such options: the Energy score and the Variogram score. The first
@@ -860,7 +850,7 @@
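The multivariate scores are requested through the same score() function; a minimal sketch, assuming fc_mod2 holds the GP model's forecast distributions:
# energy score computed jointly across all series (sketch)
energy_mod2 <- score(fc_mod2, score = 'energy')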
The returned object still provides information on interval coverage
@@ -868,31 +858,31 @@
all_series
slot):
energy_mod2$all_series
#> score eval_horizon score_type
-#> 1 0.7705198 1 energy
-#> 2 1.1330328 2 energy
-#> 3 1.2600785 3 energy
+#> 1 0.7546579 1 energy
+#> 2 1.1200630 2 energy
+#> 3 1.2447843 3 energy
#> 4 NA 4 energy
-#> 5 0.4427578 5 energy
-#> 6 1.8848308 6 energy
-#> 7 1.4186997 7 energy
-#> 8 0.7280518 8 energy
-#> 9 1.0467755 9 energy
+#> 5 0.4465348 5 energy
+#> 6 1.8231460 6 energy
+#> 7 1.4418019 7 energy
+#> 8 0.7172890 8 energy
+#> 9 1.0762943 9 energy
#> 10 NA 10 energy
-#> 11 1.4172423 11 energy
-#> 12 3.2326925 12 energy
-#> 13 1.5987732 13 energy
-#> 14 1.1798872 14 energy
-#> 15 1.0311968 15 energy
-#> 16 1.8261356 16 energy
+#> 11 1.4112423 11 energy
+#> 12 3.2385416 12 energy
+#> 13 1.5836460 13 energy
+#> 14 1.1953349 14 energy
+#> 15 1.0412578 15 energy
+#> 16 1.8348615 16 energy
#> 17 NA 17 energy
-#> 18 0.7170961 18 energy
-#> 19 0.8927311 19 energy
+#> 18 0.7142977 18 energy
+#> 19 0.9059773 19 energy
#> 20 NA 20 energy
-#> 21 1.0544501 21 energy
-#> 22 1.3280321 22 energy
+#> 21 1.1043397 21 energy
+#> 22 1.3292391 22 energy
#> 23 NA 23 energy
-#> 24 2.1843621 24 energy
-#> 25 1.2352041 25 energy
You can use your score(s) of choice to compare different models. For example, we can compute and plot the difference in CRPS scores for each series in the data. Here, a negative value means the Gaussian Process model
@@ -914,7 +904,7 @@
diff_scores <- crps_mod2$series_2$score -
@@ -930,7 +920,7 @@ Scoring forecast distributions
title(main = paste0('GP better in ', gp_better, ' of 25 evaluations',
'\nMean difference = ',
round(mean(diff_scores, na.rm = TRUE), 2)))
diff_scores <- crps_mod2$series_3$score -
crps_mod1$series_3$score
@@ -945,7 +935,7 @@ Scoring forecast distributions
title(main = paste0('GP better in ', gp_better, ' of 25 evaluations',
'\nMean difference = ',
round(mean(diff_scores, na.rm = TRUE), 2)))
The GP model consistently gives better forecasts, and the difference between scores grows quickly as the forecast horizon increases. This is not unexpected given the way that splines linearly extrapolate outside diff --git a/inst/doc/mvgam_overview.R b/inst/doc/mvgam_overview.R index c8b40d2d..86859c9f 100644 --- a/inst/doc/mvgam_overview.R +++ b/inst/doc/mvgam_overview.R @@ -1,16 +1,20 @@ -## ----echo = FALSE------------------------------------------------------------- -NOT_CRAN <- identical(tolower(Sys.getenv("NOT_CRAN")), "true") +params <- +list(EVAL = TRUE) + +## ---- echo = FALSE------------------------------------------------------------ knitr::opts_chunk$set( collapse = TRUE, comment = "#>", - purl = NOT_CRAN, - eval = NOT_CRAN + message = FALSE, + warning = FALSE, + eval = if (isTRUE(exists("params"))) params$EVAL else FALSE ) + ## ----setup, include=FALSE----------------------------------------------------- knitr::opts_chunk$set( echo = TRUE, - dpi = 150, + dpi = 100, fig.asp = 0.8, fig.width = 6, out.width = "60%", @@ -19,19 +23,24 @@ library(mvgam) library(ggplot2) theme_set(theme_bw(base_size = 12, base_family = 'serif')) + ## ----Access time series data-------------------------------------------------- data("portal_data") + ## ----Inspect data format and structure---------------------------------------- head(portal_data) + ## ----------------------------------------------------------------------------- dplyr::glimpse(portal_data) + ## ----------------------------------------------------------------------------- data <- sim_mvgam(n_series = 4, T = 24) head(data$data_train, 12) + ## ----Wrangle data for modelling----------------------------------------------- portal_data %>% @@ -54,95 +63,106 @@ portal_data %>% # Select the variables of interest to keep in the model_data dplyr::select(series, year, time, count, mintemp, ndvi) -> model_data + ## ----------------------------------------------------------------------------- head(model_data) + ## ----------------------------------------------------------------------------- dplyr::glimpse(model_data) + ## ----Summarise variables------------------------------------------------------ summary(model_data) -## ----------------------------------------------------------------------------- -image(is.na(t(model_data %>% - dplyr::arrange(dplyr::desc(time)))), axes = F, - col = c('grey80', 'darkred')) -axis(3, at = seq(0,1, len = NCOL(model_data)), labels = colnames(model_data)) ## ----------------------------------------------------------------------------- plot_mvgam_series(data = model_data, series = 1, y = 'count') + ## ----------------------------------------------------------------------------- model_data %>% # Create a 'year_fac' factor version of 'year' dplyr::mutate(year_fac = factor(year)) -> model_data + ## ----------------------------------------------------------------------------- dplyr::glimpse(model_data) levels(model_data$year_fac) + ## ----model1, include=FALSE, results='hide'------------------------------------ model1 <- mvgam(count ~ s(year_fac, bs = 're') - 1, family = poisson(), data = model_data, parallel = FALSE) + ## ----eval=FALSE--------------------------------------------------------------- -# model1 <- mvgam(count ~ s(year_fac, bs = 're') - 1, -# family = poisson(), -# data = model_data) +## model1 <- mvgam(count ~ s(year_fac, bs = 're') - 1, +## family = poisson(), +## data = model_data) + ## ----------------------------------------------------------------------------- get_mvgam_priors(count ~ s(year_fac, bs = 
're') - 1, family = poisson(), data = model_data) + ## ----------------------------------------------------------------------------- summary(model1) + ## ----Extract coefficient posteriors------------------------------------------- beta_post <- as.data.frame(model1, variable = 'betas') dplyr::glimpse(beta_post) + ## ----------------------------------------------------------------------------- code(model1) + ## ----Plot random effect estimates--------------------------------------------- plot(model1, type = 're') + ## ----------------------------------------------------------------------------- mcmc_plot(object = model1, variable = 'betas', type = 'areas') + ## ----------------------------------------------------------------------------- pp_check(object = model1) -pp_check(model1, type = "rootogram") + ## ----Plot posterior hindcasts------------------------------------------------- plot(model1, type = 'forecast') + ## ----Extract posterior hindcast----------------------------------------------- hc <- hindcast(model1) str(hc) + ## ----Extract hindcasts on the linear predictor scale-------------------------- hc <- hindcast(model1, type = 'link') range(hc$hindcasts$PP) -## ----Plot hindcasts on the linear predictor scale----------------------------- -plot(hc) ## ----Plot posterior residuals------------------------------------------------- plot(model1, type = 'residuals') + ## ----------------------------------------------------------------------------- model_data %>% dplyr::filter(time <= 160) -> data_train model_data %>% dplyr::filter(time > 160) -> data_test + ## ----include=FALSE, message=FALSE, warning=FALSE------------------------------ model1b <- mvgam(count ~ s(year_fac, bs = 're') - 1, family = poisson(), @@ -150,25 +170,23 @@ model1b <- mvgam(count ~ s(year_fac, bs = 're') - 1, newdata = data_test, parallel = FALSE) -## ----eval=FALSE--------------------------------------------------------------- -# model1b <- mvgam(count ~ s(year_fac, bs = 're') - 1, -# family = poisson(), -# data = data_train, -# newdata = data_test) -## ----------------------------------------------------------------------------- -plot(model1b, type = 're') +## ----eval=FALSE--------------------------------------------------------------- +## model1b <- mvgam(count ~ s(year_fac, bs = 're') - 1, +## family = poisson(), +## data = data_train, +## newdata = data_test) -## ----------------------------------------------------------------------------- -plot(model1b, type = 'forecast') ## ----Plotting predictions against test data----------------------------------- plot(model1b, type = 'forecast', newdata = data_test) + ## ----Extract posterior forecasts---------------------------------------------- fc <- forecast(model1b) str(fc) + ## ----model2, include=FALSE, message=FALSE, warning=FALSE---------------------- model2 <- mvgam(count ~ s(year_fac, bs = 're') + ndvi - 1, @@ -177,26 +195,28 @@ model2 <- mvgam(count ~ s(year_fac, bs = 're') + newdata = data_test, parallel = FALSE) + ## ----eval=FALSE--------------------------------------------------------------- -# model2 <- mvgam(count ~ s(year_fac, bs = 're') + -# ndvi - 1, -# family = poisson(), -# data = data_train, -# newdata = data_test) +## model2 <- mvgam(count ~ s(year_fac, bs = 're') + +## ndvi - 1, +## family = poisson(), +## data = data_train, +## newdata = data_test) -## ----class.output="scroll-300"------------------------------------------------ + +## ---- class.output="scroll-300"----------------------------------------------- summary(model2) + ## 
----Posterior quantiles of model coefficients-------------------------------- coef(model2) -## ----Plot NDVI effect--------------------------------------------------------- -plot(model2, type = 'pterms') ## ----------------------------------------------------------------------------- beta_post <- as.data.frame(model2, variable = 'betas') dplyr::glimpse(beta_post) + ## ----Histogram of NDVI effects------------------------------------------------ hist(beta_post$ndvi, xlim = c(-1 * max(abs(beta_post$ndvi)), @@ -210,16 +230,10 @@ hist(beta_post$ndvi, lwd = 2) abline(v = 0, lwd = 2.5) -## ----warning=FALSE------------------------------------------------------------ -plot_predictions(model2, - condition = "ndvi", - # include the observed count values - # as points, and show rugs for the observed - # ndvi and count values on the axes - points = 0.5, rug = TRUE) ## ----warning=FALSE------------------------------------------------------------ -plot(conditional_effects(model2), ask = FALSE) +conditional_effects(model2) + ## ----model3, include=FALSE, message=FALSE, warning=FALSE---------------------- model3 <- mvgam(count ~ s(time, bs = 'bs', k = 15) + @@ -229,38 +243,31 @@ model3 <- mvgam(count ~ s(time, bs = 'bs', k = 15) + newdata = data_test, parallel = FALSE) -## ----eval=FALSE--------------------------------------------------------------- -# model3 <- mvgam(count ~ s(time, bs = 'bs', k = 15) + -# ndvi, -# family = poisson(), -# data = data_train, -# newdata = data_test) -## ----------------------------------------------------------------------------- -summary(model3) +## ----eval=FALSE--------------------------------------------------------------- +## model3 <- mvgam(count ~ s(time, bs = 'bs', k = 15) + +## ndvi, +## family = poisson(), +## data = data_train, +## newdata = data_test) -## ----------------------------------------------------------------------------- -plot(model3, type = 'smooths') ## ----------------------------------------------------------------------------- -plot(model3, type = 'smooths', realisations = TRUE, - n_realisations = 30) +summary(model3) -## ----Plot smooth term derivatives, warning = FALSE, fig.asp = 1--------------- -plot(model3, type = 'smooths', derivatives = TRUE) ## ----warning=FALSE------------------------------------------------------------ -plot(conditional_effects(model3), ask = FALSE) +conditional_effects(model3, type = 'link') -## ----warning=FALSE------------------------------------------------------------ -plot(conditional_effects(model3, type = 'link'), ask = FALSE) -## ----class.output="scroll-300"------------------------------------------------ +## ---- class.output="scroll-300"----------------------------------------------- code(model3) + ## ----------------------------------------------------------------------------- plot(model3, type = 'forecast', newdata = data_test) + ## ----Plot extrapolated temporal functions using newdata----------------------- plot_mvgam_smooth(model3, smooth = 's(time)', # feed newdata to the plot function to generate @@ -270,6 +277,7 @@ plot_mvgam_smooth(model3, smooth = 's(time)', ndvi = 0)) abline(v = max(data_train$time), lty = 'dashed', lwd = 2) + ## ----model4, include=FALSE---------------------------------------------------- model4 <- mvgam(count ~ s(ndvi, k = 6), family = poisson(), @@ -278,30 +286,31 @@ model4 <- mvgam(count ~ s(ndvi, k = 6), trend_model = 'AR1', parallel = FALSE) + ## ----eval=FALSE--------------------------------------------------------------- -# model4 <- mvgam(count ~ s(ndvi, k = 6), -# 
family = poisson(), -# data = data_train, -# newdata = data_test, -# trend_model = 'AR1') +## model4 <- mvgam(count ~ s(ndvi, k = 6), +## family = poisson(), +## data = data_train, +## newdata = data_test, +## trend_model = 'AR1') + ## ----Summarise the mvgam autocorrelated error model, class.output="scroll-300"---- summary(model4) -## ----warning=FALSE, message=FALSE--------------------------------------------- -plot_predictions(model4, - condition = "ndvi", - points = 0.5, rug = TRUE) ## ----------------------------------------------------------------------------- plot(model4, type = 'forecast', newdata = data_test) + ## ----------------------------------------------------------------------------- plot(model4, type = 'trend', newdata = data_test) + ## ----------------------------------------------------------------------------- loo_compare(model3, model4) + ## ----------------------------------------------------------------------------- fc_mod3 <- forecast(model3) fc_mod4 <- forecast(model4) diff --git a/inst/doc/mvgam_overview.Rmd b/inst/doc/mvgam_overview.Rmd index 508c63d3..3b5108db 100644 --- a/inst/doc/mvgam_overview.Rmd +++ b/inst/doc/mvgam_overview.Rmd @@ -9,21 +9,23 @@ vignette: > %\VignetteIndexEntry{Overview of the mvgam package} %\VignetteEngine{knitr::rmarkdown} \usepackage[utf8]{inputenc} +params: + EVAL: !r identical(tolower(Sys.getenv("NOT_CRAN")), "true") --- ```{r, echo = FALSE} -NOT_CRAN <- identical(tolower(Sys.getenv("NOT_CRAN")), "true") knitr::opts_chunk$set( collapse = TRUE, comment = "#>", - purl = NOT_CRAN, - eval = NOT_CRAN + message = FALSE, + warning = FALSE, + eval = if (isTRUE(exists("params"))) params$EVAL else FALSE ) ``` ```{r setup, include=FALSE} knitr::opts_chunk$set( echo = TRUE, - dpi = 150, + dpi = 100, fig.asp = 0.8, fig.width = 6, out.width = "60%", @@ -146,7 +148,7 @@ z_{i,t} & \sim \text{Normal}(ar1_i^{distance} * z_{i,t-1}, \sigma_i) \end{align* Where $distance$ is a vector of non-negative measurements of the time differences between successive observations. See the **Examples** section in `?CAR` for an illustration of how to set these models up. ## Regression formulae -`mvgam` supports an observation model regression formula, built off the `mvgcv` package, as well as an optional process model regression formula. The formulae supplied to \code{\link{mvgam}} are exactly like those supplied to `glm()` except that smooth terms, `s()`, +`mvgam` supports an observation model regression formula, built off the `mgcv` package, as well as an optional process model regression formula. The formulae supplied to \code{\link{mvgam}} are exactly like those supplied to `glm()` except that smooth terms, `s()`, `te()`, `ti()` and `t2()`, time-varying effects using `dynamic()`, monotonically increasing (using `s(x, bs = 'moi')`) or decreasing splines (using `s(x, bs = 'mod')`; see `?smooth.construct.moi.smooth.spec` for details), as well as Gaussian Process functions using `gp()`, can be added to the right hand side (and `.` is not supported in `mvgam` formulae). See `?mvgam_formulae` for more guidance. For setting up State-Space models, the optional process model formula can be used (see [the State-Space model vignette](https://nicholasjclark.github.io/mvgam/articles/trend_formulas.html) and [the shared latent states vignette](https://nicholasjclark.github.io/mvgam/articles/trend_formulas.html) for guidance on using trend formulae). 
@@ -223,15 +225,7 @@ You can also summarize multiple variables, which is helpful to search for data r summary(model_data) ``` -We have some `NA`s in our response variable `count`. Let's visualize the data as a heatmap to get a sense of where these are distributed (`NA`s are shown as red bars in the below plot) -```{r} -image(is.na(t(model_data %>% - dplyr::arrange(dplyr::desc(time)))), axes = F, - col = c('grey80', 'darkred')) -axis(3, at = seq(0,1, len = NCOL(model_data)), labels = colnames(model_data)) -``` - -These observations will generally be thrown out by most modelling packages in \R. But as you will see when we work through the tutorials, `mvgam` keeps these in the data so that predictions can be automatically returned for the full dataset. The time series and some of its descriptive features can be plotted using `plot_mvgam_series()`: +We have some `NA`s in our response variable `count`. These observations will generally be thrown out by most modelling packages in \R. But as you will see when we work through the tutorials, `mvgam` keeps these in the data so that predictions can be automatically returned for the full dataset. The time series and some of its descriptive features can be plotted using `plot_mvgam_series()`: ```{r} plot_mvgam_series(data = model_data, series = 1, y = 'count') ``` @@ -314,7 +308,6 @@ mcmc_plot(object = model1, We can also use the wide range of posterior checking functions available in `bayesplot` (see `?mvgam::ppc_check.mvgam` for details): ```{r} pp_check(object = model1) -pp_check(model1, type = "rootogram") ``` There is clearly some variation in these yearly intercept estimates. But how do these translate into time-varying predictions? To understand this, we can plot posterior hindcasts from this model for the training period using `plot.mvgam` with `type = 'forecast'` @@ -334,13 +327,6 @@ hc <- hindcast(model1, type = 'link') range(hc$hindcasts$PP) ``` -Objects of class `mvgam_forecast` have an associated plot function as well: -```{r Plot hindcasts on the linear predictor scale} -plot(hc) -``` - -This plot can look a bit confusing as it seems like there is linear interpolation from the end of one year to the start of the next. But this is just due to the way the lines are automatically connected in base \R plots - In any regression analysis, a key question is whether the residuals show any patterns that can be indicative of un-modelled sources of variation. For GLMs, we can use a modified residual called the [Dunn-Smyth, or randomized quantile, residual](https://www.jstor.org/stable/1390802){target="_blank"}. 
Inspect Dunn-Smyth residuals from the model using `plot.mvgam` with `type = 'residuals'` ```{r Plot posterior residuals} plot(model1, type = 'residuals') @@ -365,22 +351,12 @@ model1b <- mvgam(count ~ s(year_fac, bs = 're') - 1, ```{r eval=FALSE} model1b <- mvgam(count ~ s(year_fac, bs = 're') - 1, - family = poisson(), - data = data_train, - newdata = data_test) -``` - -Repeating the plots above gives insight into how the model's hierarchical prior formulation provides all the structure needed to sample values for un-modelled years -```{r} -plot(model1b, type = 're') -``` - - -```{r} -plot(model1b, type = 'forecast') + family = poisson(), + data = data_train, + newdata = data_test) ``` -We can also view the test data in the forecast plot to see that the predictions do not capture the temporal variation in the test set +We can view the test data in the forecast plot to see that the predictions do not capture the temporal variation in the test set ```{r Plotting predictions against test data} plot(model1b, type = 'forecast', newdata = data_test) ``` @@ -428,12 +404,7 @@ Rather than printing the summary each time, we can also quickly look at the post coef(model2) ``` -Look at the estimated effect of `ndvi` using `plot.mvgam` with `type = 'pterms'` -```{r Plot NDVI effect} -plot(model2, type = 'pterms') -``` - -This plot indicates a positive linear effect of `ndvi` on `log(counts)`. But it may be easier to visualise using a histogram, especially for parametric (linear) effects. This can be done by first extracting the posterior coefficients as we did in the first example: +Look at the estimated effect of `ndvi` using using a histogram. This can be done by first extracting the posterior coefficients: ```{r} beta_post <- as.data.frame(model2, variable = 'betas') dplyr::glimpse(beta_post) @@ -455,19 +426,9 @@ abline(v = 0, lwd = 2.5) ``` ### `marginaleffects` support -Given our model used a nonlinear link function (log link in this example), it can still be difficult to fully understand what relationship our model is estimating between a predictor and the response. Fortunately, the `marginaleffects` package makes this relatively straightforward. Objects of class `mvgam` can be used with `marginaleffects` to inspect contrasts, scenario-based predictions, conditional and marginal effects, all on the outcome scale. Here we will use the `plot_predictions` function from `marginaleffects` to inspect the conditional effect of `ndvi` (use `?plot_predictions` for guidance on how to modify these plots): +Given our model used a nonlinear link function (log link in this example), it can still be difficult to fully understand what relationship our model is estimating between a predictor and the response. Fortunately, the `marginaleffects` package makes this relatively straightforward. Objects of class `mvgam` can be used with `marginaleffects` to inspect contrasts, scenario-based predictions, conditional and marginal effects, all on the outcome scale. Like `brms`, `mvgam` has the simple `conditional_effects` function to make quick and informative plots for main effects, which rely on `marginaleffects` support. 
This will likely be your go-to function for quickly understanding patterns from fitted `mvgam` models ```{r warning=FALSE} -plot_predictions(model2, - condition = "ndvi", - # include the observed count values - # as points, and show rugs for the observed - # ndvi and count values on the axes - points = 0.5, rug = TRUE) -``` - -Now it is easier to get a sense of the nonlinear but positive relationship estimated between `ndvi` and `count`. Like `brms`, `mvgam` has the simple `conditional_effects` function to make quick and informative plots for main effects. This will likely be your go-to function for quickly understanding patterns from fitted `mvgam` models -```{r warning=FALSE} -plot(conditional_effects(model2), ask = FALSE) +conditional_effects(model2) ``` ## Adding predictors as smooths @@ -504,33 +465,9 @@ Where the smooth function $f_{time}$ is built by summing across a set of weighte summary(model3) ``` -The summary above now contains posterior estimates for the smoothing parameters as well as the basis coefficients for the nonlinear effect of `time`. We can visualize the conditional `time` effect using the `plot` function with `type = 'smooths'`: -```{r} -plot(model3, type = 'smooths') -``` - -By default this plots shows posterior empirical quantiles, but it can also be helpful to view some realizations of the underlying function (here, each line is a different potential curve drawn from the posterior of all possible curves): -```{r} -plot(model3, type = 'smooths', realisations = TRUE, - n_realisations = 30) -``` - -### Derivatives of smooths -A useful question when modelling using GAMs is to identify where the function is changing most rapidly. To address this, we can plot estimated 1st derivatives of the spline: -```{r Plot smooth term derivatives, warning = FALSE, fig.asp = 1} -plot(model3, type = 'smooths', derivatives = TRUE) -``` - -Here, values above `>0` indicate the function was increasing at that time point, while values `<0` indicate the function was declining. The most rapid declines appear to have been happening around timepoints 50 and again toward the end of the training period, for example. - -Use `conditional_effects` again for useful plots on the outcome scale: +The summary above now contains posterior estimates for the smoothing parameters as well as the basis coefficients for the nonlinear effect of `time`. We can visualize `conditional_effects` as before: ```{r warning=FALSE} -plot(conditional_effects(model3), ask = FALSE) -``` - -Or on the link scale: -```{r warning=FALSE} -plot(conditional_effects(model3, type = 'link'), ask = FALSE) +conditional_effects(model3, type = 'link') ``` Inspect the underlying `Stan` code to gain some idea of how the spline is being penalized: @@ -591,13 +528,6 @@ Here the term $z_t$ captures autocorrelated latent residuals, which are modelled summary(model4) ``` -View conditional smooths for the `ndvi` effect: -```{r warning=FALSE, message=FALSE} -plot_predictions(model4, - condition = "ndvi", - points = 0.5, rug = TRUE) -``` - View posterior hindcasts / forecasts and compare against the out of sample test data ```{r} plot(model4, type = 'forecast', newdata = data_test) diff --git a/inst/doc/mvgam_overview.html b/inst/doc/mvgam_overview.html index 6ebc839b..d905d14a 100644 --- a/inst/doc/mvgam_overview.html +++ b/inst/doc/mvgam_overview.html @@ -12,7 +12,7 @@ - +
marginaleffects
supportmvgam
mvgam
supports an observation model regression formula,
-built off the mvgcv
package, as well as an optional process
+built off the mgcv
package, as well as an optional process
model regression formula. The formulae supplied to mvgam() are exactly like
those supplied to glm()
except that smooth terms,
s()
, te()
, ti()
and
@@ -769,17 +767,17 @@
Notice how we have four different time series in these simulated data, but we do not spread the outcome values into different columns. Rather, there is only a single column for the outcome variable, labelled
@@ -863,22 +861,14 @@
We have some NA
s in our response variable
-count
. Let’s visualize the data as a heatmap to get a sense
-of where these are distributed (NA
s are shown as red bars
-in the below plot)
image(is.na(t(model_data %>%
- dplyr::arrange(dplyr::desc(time)))), axes = F,
- col = c('grey80', 'darkred'))
-axis(3, at = seq(0,1, len = NCOL(model_data)), labels = colnames(model_data))
These observations will generally be thrown out by most modelling
-packages in R. But as you will see when we work through the tutorials,
-mvgam
keeps these in the data so that predictions can be
-automatically returned for the full dataset. The time series and some of
-its descriptive features can be plotted using
+count
. These observations will generally be thrown out by
+most modelling packages in R. But as you will see when we work through
+the tutorials, mvgam
keeps these in the data so that
+predictions can be automatically returned for the full dataset. The time
+series and some of its descriptive features can be plotted using
plot_mvgam_series()
:
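A sketch of that call, mirroring the chunk in the accompanying R script:
# plot the count series and some of its descriptive features (sketch)
plot_mvgam_series(data = model_data, series = 1, y = 'count')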
?smooth.construct.re.smooth.spec
for details about the
re
basis construction that is used by both
mvgam
and mgcv
-model_data %>%
-
- # Create a 'year_fac' factor version of 'year'
- dplyr::mutate(year_fac = factor(year)) -> model_data
model_data %>%
+
+ # Create a 'year_fac' factor version of 'year'
+ dplyr::mutate(year_fac = factor(year)) -> model_data
Preview the dataset to ensure year is now a factor with a unique factor level for each year in the data
-dplyr::glimpse(model_data)
-#> Rows: 199
-#> Columns: 7
-#> $ series <fct> PP, PP, PP, PP, PP, PP, PP, PP, PP, PP, PP, PP, PP, PP, PP, P…
-#> $ year <int> 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2…
-#> $ time <dbl> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18…
-#> $ count <int> 0, 1, 2, NA, 10, NA, NA, 16, 18, 12, NA, 3, 2, NA, NA, 13, NA…
-#> $ mintemp <dbl> -9.710, -5.924, -0.220, 1.931, 6.568, 11.590, 14.370, 16.520,…
-#> $ ndvi <dbl> 1.4658889, 1.5585069, 1.3378172, 1.6589129, 1.8536561, 1.7613…
-#> $ year_fac <fct> 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2…
-levels(model_data$year_fac)
-#> [1] "2004" "2005" "2006" "2007" "2008" "2009" "2010" "2011" "2012" "2013"
-#> [11] "2014" "2015" "2016" "2017" "2018" "2019" "2020"
dplyr::glimpse(model_data)
+#> Rows: 199
+#> Columns: 7
+#> $ series <fct> PP, PP, PP, PP, PP, PP, PP, PP, PP, PP, PP, PP, PP, PP, PP, P…
+#> $ year <int> 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2…
+#> $ time <dbl> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18…
+#> $ count <int> 0, 1, 2, NA, 10, NA, NA, 16, 18, 12, NA, 3, 2, NA, NA, 13, NA…
+#> $ mintemp <dbl> -9.710, -5.924, -0.220, 1.931, 6.568, 11.590, 14.370, 16.520,…
+#> $ ndvi <dbl> 1.4658889, 1.5585069, 1.3378172, 1.6589129, 1.8536561, 1.7613…
+#> $ year_fac <fct> 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2004, 2…
+levels(model_data$year_fac)
+#> [1] "2004" "2005" "2006" "2007" "2008" "2009" "2010" "2011" "2012" "2013"
+#> [11] "2014" "2015" "2016" "2017" "2018" "2019" "2020"
We are now ready for our first mvgam
model. The syntax
will be familiar to users who have previously built models with
mgcv
. But for a refresher, see ?formula.gam
@@ -943,9 +933,9 @@
Stan
user’s guide for more information
about the software and the enormous variety of models that can be
tackled with HMC.
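For reference, the random intercept model discussed below is fitted with a call along these lines (a sketch mirroring the accompanying R script):
# Poisson observation model with yearly random intercepts (sketch)
model1 <- mvgam(count ~ s(year_fac, bs = 're') - 1,
                family = poisson(),
                data = model_data)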
-
+
The model can be described mathematically for each timepoint \(t\) as follows: \[\begin{align*}
\boldsymbol{count}_t & \sim \text{Poisson}(\lambda_t) \\
log(\lambda_t) & = \beta_{year[year_t]} \\
@@ -959,90 +949,91 @@ GLMs with temporal random effects
similar functionality to the options available in brms
. For
example, the default priors on \((\mu_{year})\) and \((\sigma_{year})\) can be viewed using the
following code:
get_mvgam_priors(count ~ s(year_fac, bs = 're') - 1,
- family = poisson(),
- data = model_data)
-#> param_name param_length param_info
-#> 1 vector[1] mu_raw; 1 s(year_fac) pop mean
-#> 2 vector<lower=0>[1] sigma_raw; 1 s(year_fac) pop sd
-#> prior example_change
-#> 1 mu_raw ~ std_normal(); mu_raw ~ normal(0.17, 0.76);
-#> 2 sigma_raw ~ student_t(3, 0, 2.5); sigma_raw ~ exponential(0.7);
-#> new_lowerbound new_upperbound
-#> 1 NA NA
-#> 2 NA NA
get_mvgam_priors(count ~ s(year_fac, bs = 're') - 1,
+ family = poisson(),
+ data = model_data)
+#> param_name param_length param_info
+#> 1 vector[1] mu_raw; 1 s(year_fac) pop mean
+#> 2 vector<lower=0>[1] sigma_raw; 1 s(year_fac) pop sd
+#> prior example_change
+#> 1 mu_raw ~ std_normal(); mu_raw ~ normal(-0.7, 0.81);
+#> 2 sigma_raw ~ student_t(3, 0, 2.5); sigma_raw ~ exponential(0.85);
+#> new_lowerbound new_upperbound
+#> 1 NA NA
+#> 2 NA NA
See examples in ?get_mvgam_priors
to find out different
ways that priors can be altered. Once the model has finished, the first
step is to inspect the summary
to ensure no major
diagnostic warnings have been produced and to quickly summarise
posterior distributions for key parameters
summary(model1)
-#> GAM formula:
-#> count ~ s(year_fac, bs = "re") - 1
-#>
-#> Family:
-#> poisson
-#>
-#> Link function:
-#> log
-#>
-#> Trend model:
-#> None
-#>
-#> N series:
-#> 1
-#>
-#> N timepoints:
-#> 199
-#>
-#> Status:
-#> Fitted using Stan
-#> 4 chains, each with iter = 1000; warmup = 500; thin = 1
-#> Total post-warmup draws = 2000
-#>
-#>
-#> GAM coefficient (beta) estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> s(year_fac).1 1.80 2.1 2.3 1.00 2663
-#> s(year_fac).2 2.50 2.7 2.8 1.00 2468
-#> s(year_fac).3 3.00 3.1 3.2 1.00 3105
-#> s(year_fac).4 3.10 3.3 3.4 1.00 2822
-#> s(year_fac).5 1.90 2.1 2.3 1.00 3348
-#> s(year_fac).6 1.50 1.8 2.0 1.00 2859
-#> s(year_fac).7 1.80 2.0 2.3 1.00 2995
-#> s(year_fac).8 2.80 3.0 3.1 1.00 3126
-#> s(year_fac).9 3.10 3.3 3.4 1.00 2816
-#> s(year_fac).10 2.60 2.8 2.9 1.00 2289
-#> s(year_fac).11 3.00 3.1 3.2 1.00 2725
-#> s(year_fac).12 3.10 3.2 3.3 1.00 2581
-#> s(year_fac).13 2.00 2.2 2.5 1.00 2885
-#> s(year_fac).14 2.50 2.6 2.8 1.00 2749
-#> s(year_fac).15 1.90 2.2 2.4 1.00 2943
-#> s(year_fac).16 1.90 2.1 2.3 1.00 2991
-#> s(year_fac).17 -0.33 1.1 1.9 1.01 356
-#>
-#> GAM group-level estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> mean(s(year_fac)) 2.00 2.40 2.7 1.01 193
-#> sd(s(year_fac)) 0.44 0.67 1.1 1.02 172
-#>
-#> Approximate significance of GAM smooths:
-#> edf Ref.df Chi.sq p-value
-#> s(year_fac) 13.8 17 23477 <2e-16 ***
-#> ---
-#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
-#>
-#> Stan MCMC diagnostics:
-#> n_eff / iter looks reasonable for all parameters
-#> Rhat looks reasonable for all parameters
-#> 0 of 2000 iterations ended with a divergence (0%)
-#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
-#> E-FMI indicated no pathological behavior
-#>
-#> Samples were drawn using NUTS(diag_e) at Tue Apr 16 12:59:57 PM 2024.
-#> For each parameter, n_eff is a crude measure of effective sample size,
-#> and Rhat is the potential scale reduction factor on split MCMC chains
-#> (at convergence, Rhat = 1)
summary(model1)
+#> GAM formula:
+#> count ~ s(year_fac, bs = "re") - 1
+#> <environment: 0x0000024869b48078>
+#>
+#> Family:
+#> poisson
+#>
+#> Link function:
+#> log
+#>
+#> Trend model:
+#> None
+#>
+#> N series:
+#> 1
+#>
+#> N timepoints:
+#> 199
+#>
+#> Status:
+#> Fitted using Stan
+#> 4 chains, each with iter = 1000; warmup = 500; thin = 1
+#> Total post-warmup draws = 2000
+#>
+#>
+#> GAM coefficient (beta) estimates:
+#> 2.5% 50% 97.5% Rhat n_eff
+#> s(year_fac).1 1.80 2.0 2.3 1.00 2660
+#> s(year_fac).2 2.50 2.7 2.9 1.00 2901
+#> s(year_fac).3 3.00 3.1 3.2 1.00 3803
+#> s(year_fac).4 3.10 3.3 3.4 1.00 2872
+#> s(year_fac).5 1.90 2.1 2.3 1.00 3127
+#> s(year_fac).6 1.50 1.8 2.0 1.00 2977
+#> s(year_fac).7 1.80 2.0 2.3 1.00 2586
+#> s(year_fac).8 2.80 3.0 3.1 1.00 3608
+#> s(year_fac).9 3.10 3.3 3.4 1.00 2846
+#> s(year_fac).10 2.60 2.8 2.9 1.00 2911
+#> s(year_fac).11 3.00 3.1 3.2 1.00 3026
+#> s(year_fac).12 3.10 3.2 3.3 1.00 2736
+#> s(year_fac).13 2.00 2.2 2.5 1.00 2887
+#> s(year_fac).14 2.50 2.6 2.8 1.00 3026
+#> s(year_fac).15 1.90 2.2 2.4 1.00 2268
+#> s(year_fac).16 1.90 2.1 2.3 1.00 2649
+#> s(year_fac).17 -0.26 1.1 1.9 1.01 384
+#>
+#> GAM group-level estimates:
+#> 2.5% 50% 97.5% Rhat n_eff
+#> mean(s(year_fac)) 2.00 2.40 2.8 1.01 209
+#> sd(s(year_fac)) 0.46 0.68 1.1 1.02 193
+#>
+#> Approximate significance of GAM smooths:
+#> edf Ref.df Chi.sq p-value
+#> s(year_fac) 13.7 17 1637 <2e-16 ***
+#> ---
+#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
+#>
+#> Stan MCMC diagnostics:
+#> n_eff / iter looks reasonable for all parameters
+#> Rhat looks reasonable for all parameters
+#> 0 of 2000 iterations ended with a divergence (0%)
+#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
+#> E-FMI indicated no pathological behavior
+#>
+#> Samples were drawn using NUTS(diag_e) at Wed Sep 04 11:30:51 AM 2024.
+#> For each parameter, n_eff is a crude measure of effective sample size,
+#> and Rhat is the potential scale reduction factor on split MCMC chains
+#> (at convergence, Rhat = 1)
The diagnostic messages at the bottom of the summary show that the HMC sampler did not encounter any problems or difficult posterior spaces. This is a good sign. Posterior distributions for model
@@ -1051,85 +1042,85 @@
data.frame
using:
-beta_post <- as.data.frame(model1, variable = 'betas')
-dplyr::glimpse(beta_post)
-#> Rows: 2,000
-#> Columns: 17
-#> $ `s(year_fac).1` <dbl> 2.17023, 2.08413, 1.99815, 2.17572, 2.11308, 2.03050,…
-#> $ `s(year_fac).2` <dbl> 2.70488, 2.69887, 2.65551, 2.79651, 2.76044, 2.75108,…
-#> $ `s(year_fac).3` <dbl> 3.08617, 3.13429, 3.04575, 3.14824, 3.10917, 3.09809,…
-#> $ `s(year_fac).4` <dbl> 3.29529, 3.21044, 3.22018, 3.26644, 3.29880, 3.25638,…
-#> $ `s(year_fac).5` <dbl> 2.11053, 2.14516, 2.13959, 2.05244, 2.26847, 2.20820,…
-#> $ `s(year_fac).6` <dbl> 1.80418, 1.83343, 1.75987, 1.76972, 1.64782, 1.70765,…
-#> $ `s(year_fac).7` <dbl> 1.99033, 1.95772, 1.98093, 2.01777, 2.04849, 1.97815,…
-#> $ `s(year_fac).8` <dbl> 3.01204, 2.91291, 3.14762, 2.83082, 2.90250, 3.04050,…
-#> $ `s(year_fac).9` <dbl> 3.22248, 3.20205, 3.30373, 3.23181, 3.24927, 3.25232,…
-#> $ `s(year_fac).10` <dbl> 2.71922, 2.62225, 2.82574, 2.65027, 2.69077, 2.75249,…
-#> $ `s(year_fac).11` <dbl> 3.10525, 3.03951, 3.12914, 3.03849, 3.01198, 3.14391,…
-#> $ `s(year_fac).12` <dbl> 3.20887, 3.23337, 3.24350, 3.16821, 3.23516, 3.18216,…
-#> $ `s(year_fac).13` <dbl> 2.18530, 2.15358, 2.39908, 2.21862, 2.14648, 2.17067,…
-#> $ `s(year_fac).14` <dbl> 2.66153, 2.67202, 2.64594, 2.57457, 2.38109, 2.44175,…
-#> $ `s(year_fac).15` <dbl> 2.24898, 2.24912, 2.03587, 2.33842, 2.27868, 2.24643,…
-#> $ `s(year_fac).16` <dbl> 2.20947, 2.21717, 2.03610, 2.17374, 2.16442, 2.14900,…
-#> $ `s(year_fac).17` <dbl> 0.1428430, 0.8005170, -0.0136294, 0.6880930, 0.192034…
beta_post <- as.data.frame(model1, variable = 'betas')
+dplyr::glimpse(beta_post)
+#> Rows: 2,000
+#> Columns: 17
+#> $ `s(year_fac).1` <dbl> 2.21482, 1.93877, 2.29682, 1.74808, 1.95131, 2.24533,…
+#> $ `s(year_fac).2` <dbl> 2.76723, 2.60145, 2.79886, 2.65931, 2.71225, 2.68233,…
+#> $ `s(year_fac).3` <dbl> 2.94903, 3.09928, 3.09161, 3.10005, 3.18758, 3.16841,…
+#> $ `s(year_fac).4` <dbl> 3.26861, 3.25354, 3.33401, 3.27266, 3.32254, 3.28928,…
+#> $ `s(year_fac).5` <dbl> 2.24324, 2.07771, 2.09387, 2.20329, 2.09680, 2.22082,…
+#> $ `s(year_fac).6` <dbl> 1.84890, 1.72516, 1.75302, 1.59202, 1.89224, 1.89458,…
+#> $ `s(year_fac).7` <dbl> 2.06401, 2.20919, 1.88047, 2.09898, 2.12196, 2.01012,…
+#> $ `s(year_fac).8` <dbl> 3.02799, 2.87193, 3.06112, 3.03637, 2.92891, 2.91725,…
+#> $ `s(year_fac).9` <dbl> 3.21091, 3.28829, 3.20874, 3.15167, 3.27000, 3.29526,…
+#> $ `s(year_fac).10` <dbl> 2.81849, 2.68918, 2.81146, 2.74898, 2.65924, 2.81729,…
+#> $ `s(year_fac).11` <dbl> 3.06540, 3.01926, 3.12711, 3.12696, 3.06283, 3.10466,…
+#> $ `s(year_fac).12` <dbl> 3.16679, 3.14866, 3.20039, 3.17305, 3.27057, 3.16223,…
+#> $ `s(year_fac).13` <dbl> 2.12371, 2.38737, 2.07735, 2.22997, 2.08580, 2.14124,…
+#> $ `s(year_fac).14` <dbl> 2.69768, 2.57665, 2.66538, 2.51016, 2.64938, 2.42338,…
+#> $ `s(year_fac).15` <dbl> 2.21619, 2.14404, 2.24750, 2.14194, 2.10472, 2.24652,…
+#> $ `s(year_fac).16` <dbl> 2.15356, 2.06512, 2.02614, 2.14004, 2.21221, 2.07094,…
+#> $ `s(year_fac).17` <dbl> -0.0533815, 0.4696020, -0.2424530, 1.2282600, 1.16988…
With any model fitted in mvgam
, the underlying
Stan
code can be viewed using the code
function:
code(model1)
-#> // Stan model code generated by package mvgam
-#> data {
-#> int<lower=0> total_obs; // total number of observations
-#> int<lower=0> n; // number of timepoints per series
-#> int<lower=0> n_series; // number of series
-#> int<lower=0> num_basis; // total number of basis coefficients
-#> matrix[total_obs, num_basis] X; // mgcv GAM design matrix
-#> array[n, n_series] int<lower=0> ytimes; // time-ordered matrix (which col in X belongs to each [time, series] observation?)
-#> int<lower=0> n_nonmissing; // number of nonmissing observations
-#> array[n_nonmissing] int<lower=0> flat_ys; // flattened nonmissing observations
-#> matrix[n_nonmissing, num_basis] flat_xs; // X values for nonmissing observations
-#> array[n_nonmissing] int<lower=0> obs_ind; // indices of nonmissing observations
-#> }
-#> parameters {
-#> // raw basis coefficients
-#> vector[num_basis] b_raw;
-#>
-#> // random effect variances
-#> vector<lower=0>[1] sigma_raw;
-#>
-#> // random effect means
-#> vector[1] mu_raw;
-#> }
-#> transformed parameters {
-#> // basis coefficients
-#> vector[num_basis] b;
-#> b[1 : 17] = mu_raw[1] + b_raw[1 : 17] * sigma_raw[1];
-#> }
-#> model {
-#> // prior for random effect population variances
-#> sigma_raw ~ student_t(3, 0, 2.5);
-#>
-#> // prior for random effect population means
-#> mu_raw ~ std_normal();
-#>
-#> // prior (non-centred) for s(year_fac)...
-#> b_raw[1 : 17] ~ std_normal();
-#> {
-#> // likelihood functions
-#> flat_ys ~ poisson_log_glm(flat_xs, 0.0, b);
-#> }
-#> }
-#> generated quantities {
-#> vector[total_obs] eta;
-#> matrix[n, n_series] mus;
-#> array[n, n_series] int ypred;
-#>
-#> // posterior predictions
-#> eta = X * b;
-#> for (s in 1 : n_series) {
-#> mus[1 : n, s] = eta[ytimes[1 : n, s]];
-#> ypred[1 : n, s] = poisson_log_rng(mus[1 : n, s]);
-#> }
-#> }
code(model1)
+#> // Stan model code generated by package mvgam
+#> data {
+#> int<lower=0> total_obs; // total number of observations
+#> int<lower=0> n; // number of timepoints per series
+#> int<lower=0> n_series; // number of series
+#> int<lower=0> num_basis; // total number of basis coefficients
+#> matrix[total_obs, num_basis] X; // mgcv GAM design matrix
+#> array[n, n_series] int<lower=0> ytimes; // time-ordered matrix (which col in X belongs to each [time, series] observation?)
+#> int<lower=0> n_nonmissing; // number of nonmissing observations
+#> array[n_nonmissing] int<lower=0> flat_ys; // flattened nonmissing observations
+#> matrix[n_nonmissing, num_basis] flat_xs; // X values for nonmissing observations
+#> array[n_nonmissing] int<lower=0> obs_ind; // indices of nonmissing observations
+#> }
+#> parameters {
+#> // raw basis coefficients
+#> vector[num_basis] b_raw;
+#>
+#> // random effect variances
+#> vector<lower=0>[1] sigma_raw;
+#>
+#> // random effect means
+#> vector[1] mu_raw;
+#> }
+#> transformed parameters {
+#> // basis coefficients
+#> vector[num_basis] b;
+#> b[1 : 17] = mu_raw[1] + b_raw[1 : 17] * sigma_raw[1];
+#> }
+#> model {
+#> // prior for random effect population variances
+#> sigma_raw ~ student_t(3, 0, 2.5);
+#>
+#> // prior for random effect population means
+#> mu_raw ~ std_normal();
+#>
+#> // prior (non-centred) for s(year_fac)...
+#> b_raw[1 : 17] ~ std_normal();
+#> {
+#> // likelihood functions
+#> flat_ys ~ poisson_log_glm(flat_xs, 0.0, b);
+#> }
+#> }
+#> generated quantities {
+#> vector[total_obs] eta;
+#> matrix[n, n_series] mus;
+#> array[n, n_series] int ypred;
+#>
+#> // posterior predictions
+#> eta = X * b;
+#> for (s in 1 : n_series) {
+#> mus[1 : n, s] = eta[ytimes[1 : n, s]];
+#> ypred[1 : n, s] = poisson_log_rng(mus[1 : n, s]);
+#> }
+#> }
Now for interrogating the model. We can get some sense of the
@@ -1139,8 +1130,8 @@
type = 're'
. See ?plot.mvgam
for more details
about the types of plots that can be produced from fitted
mvgam
objects
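A sketch of that plotting call:
# posterior estimates for the yearly random intercepts (sketch)
plot(model1, type = 're')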
-
-
+
+
bayesplot
supportbayesplot
supportbayesplot
package to visualize posterior
distributions and diagnostics (see ?mvgam::mcmc_plot.mvgam
for details):
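The corresponding call might look like this (a sketch):
# areas plot of the beta coefficients via bayesplot (sketch)
mcmc_plot(object = model1, variable = 'betas', type = 'areas')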
-
-
+
+
We can also use the wide range of posterior checking functions
available in bayesplot
(see
?mvgam::ppc_check.mvgam
for details):
pp_check(object = model1)
-#> Using 10 posterior draws for ppc type 'dens_overlay' by default.
-#> Warning in pp_check.mvgam(object = model1): NA responses are not shown in
-#> 'pp_check'.
pp_check(model1, type = "rootogram")
-#> Using all posterior draws for ppc type 'rootogram' by default.
-#> Warning in pp_check.mvgam(model1, type = "rootogram"): NA responses are not
-#> shown in 'pp_check'.
There is clearly some variation in these yearly intercept estimates.
But how do these translate into time-varying predictions? To understand
this, we can plot posterior hindcasts from this model for the training
period using plot.mvgam
with
type = 'forecast'
If you wish to extract these hindcasts for other downstream analyses,
the hindcast
function can be used. This will return a list
object of class mvgam_forecast
. In the
hindcasts
slot, a matrix of posterior retrodictions will be
returned for each series in the data (only one series in our
example):
hc <- hindcast(model1)
-str(hc)
-#> List of 15
-#> $ call :Class 'formula' language count ~ s(year_fac, bs = "re") - 1
-#> .. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
-#> $ trend_call : NULL
-#> $ family : chr "poisson"
-#> $ trend_model : chr "None"
-#> $ drift : logi FALSE
-#> $ use_lv : logi FALSE
-#> $ fit_engine : chr "stan"
-#> $ type : chr "response"
-#> $ series_names : chr "PP"
-#> $ train_observations:List of 1
-#> ..$ PP: int [1:199] 0 1 2 NA 10 NA NA 16 18 12 ...
-#> $ train_times : num [1:199] 1 2 3 4 5 6 7 8 9 10 ...
-#> $ test_observations : NULL
-#> $ test_times : NULL
-#> $ hindcasts :List of 1
-#> ..$ PP: num [1:2000, 1:199] 9 6 10 6 12 8 10 5 8 6 ...
-#> .. ..- attr(*, "dimnames")=List of 2
-#> .. .. ..$ : NULL
-#> .. .. ..$ : chr [1:199] "ypred[1,1]" "ypred[2,1]" "ypred[3,1]" "ypred[4,1]" ...
-#> $ forecasts : NULL
-#> - attr(*, "class")= chr "mvgam_forecast"
hc <- hindcast(model1)
+str(hc)
+#> List of 15
+#> $ call :Class 'formula' language count ~ s(year_fac, bs = "re") - 1
+#> .. ..- attr(*, ".Environment")=<environment: 0x0000024869b48078>
+#> $ trend_call : NULL
+#> $ family : chr "poisson"
+#> $ trend_model : chr "None"
+#> $ drift : logi FALSE
+#> $ use_lv : logi FALSE
+#> $ fit_engine : chr "stan"
+#> $ type : chr "response"
+#> $ series_names : chr "PP"
+#> $ train_observations:List of 1
+#> ..$ PP: int [1:199] 0 1 2 NA 10 NA NA 16 18 12 ...
+#> $ train_times : num [1:199] 1 2 3 4 5 6 7 8 9 10 ...
+#> $ test_observations : NULL
+#> $ test_times : NULL
+#> $ hindcasts :List of 1
+#> ..$ PP: num [1:2000, 1:199] 11 3 14 9 10 9 15 9 11 10 ...
+#> .. ..- attr(*, "dimnames")=List of 2
+#> .. .. ..$ : NULL
+#> .. .. ..$ : chr [1:199] "ypred[1,1]" "ypred[2,1]" "ypred[3,1]" "ypred[4,1]" ...
+#> $ forecasts : NULL
+#> - attr(*, "class")= chr "mvgam_forecast"
You can also extract these hindcasts on the linear predictor scale, which in this case is the log scale (our Poisson GLM used a log link function). Sometimes this can be useful for asking more targeted questions about drivers of variation:
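A sketch of that extraction, mirroring the accompanying R script:
# hindcasts on the link (log) scale (sketch)
hc <- hindcast(model1, type = 'link')
range(hc$hindcasts$PP)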
- -Objects of class mvgam_forecast
have an associated plot
-function as well:
This plot can look a bit confusing as it seems like there is linear -interpolation from the end of one year to the start of the next. But -this is just due to the way the lines are automatically connected in -base plots
+In any regression analysis, a key question is whether the residuals
show any patterns that can be indicative of un-modelled sources of
variation. For GLMs, we can use a modified residual called the Dunn-Smyth,
or randomized quantile, residual. Inspect Dunn-Smyth residuals from
the model using plot.mvgam
with
type = 'residuals'
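A sketch of that residual plot call:
# Dunn-Smyth residual diagnostics (sketch)
plot(model1, type = 'residuals')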
newdata
. For splitting, we will make use of the
filter
function from dplyr
-model_data %>%
- dplyr::filter(time <= 160) -> data_train
-model_data %>%
- dplyr::filter(time > 160) -> data_test
model1b <- mvgam(count ~ s(year_fac, bs = 're') - 1,
- family = poisson(),
- data = data_train,
- newdata = data_test)
Repeating the plots above gives insight into how the model’s -hierarchical prior formulation provides all the structure needed to -sample values for un-modelled years
- - - - -#> Out of sample DRPS:
-#> [1] 182.6177
-We can also view the test data in the forecast plot to see that the +
model_data %>%
+ dplyr::filter(time <= 160) -> data_train
+model_data %>%
+ dplyr::filter(time > 160) -> data_test
model1b <- mvgam(count ~ s(year_fac, bs = 're') - 1,
+ family = poisson(),
+ data = data_train,
+ newdata = data_test)
We can view the test data in the forecast plot to see that the predictions do not capture the temporal variation in the test set
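That comparison plot can be drawn with a call along these lines (sketch):
# forecasts plotted against the held-out test observations (sketch)
plot(model1b, type = 'forecast', newdata = data_test)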
- - -#> Out of sample DRPS:
-#> [1] 182.6177
+
+
As with the hindcast
function, we can use the
forecast
function to automatically extract the posterior
distributions for these predictions. This also returns an object of
class mvgam_forecast
, but now it will contain both the
hindcasts and forecasts for each series in the data:
fc <- forecast(model1b)
-str(fc)
-#> List of 16
-#> $ call :Class 'formula' language count ~ s(year_fac, bs = "re") - 1
-#> .. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
-#> $ trend_call : NULL
-#> $ family : chr "poisson"
-#> $ family_pars : NULL
-#> $ trend_model : chr "None"
-#> $ drift : logi FALSE
-#> $ use_lv : logi FALSE
-#> $ fit_engine : chr "stan"
-#> $ type : chr "response"
-#> $ series_names : Factor w/ 1 level "PP": 1
-#> $ train_observations:List of 1
-#> ..$ PP: int [1:160] 0 1 2 NA 10 NA NA 16 18 12 ...
-#> $ train_times : num [1:160] 1 2 3 4 5 6 7 8 9 10 ...
-#> $ test_observations :List of 1
-#> ..$ PP: int [1:39] NA 0 0 10 3 14 18 NA 28 46 ...
-#> $ test_times : num [1:39] 161 162 163 164 165 166 167 168 169 170 ...
-#> $ hindcasts :List of 1
-#> ..$ PP: num [1:2000, 1:160] 10 7 8 11 6 11 9 11 5 2 ...
-#> .. ..- attr(*, "dimnames")=List of 2
-#> .. .. ..$ : NULL
-#> .. .. ..$ : chr [1:160] "ypred[1,1]" "ypred[2,1]" "ypred[3,1]" "ypred[4,1]" ...
-#> $ forecasts :List of 1
-#> ..$ PP: num [1:2000, 1:39] 5 12 10 7 7 8 11 14 8 12 ...
-#> .. ..- attr(*, "dimnames")=List of 2
-#> .. .. ..$ : NULL
-#> .. .. ..$ : chr [1:39] "ypred[161,1]" "ypred[162,1]" "ypred[163,1]" "ypred[164,1]" ...
-#> - attr(*, "class")= chr "mvgam_forecast"
fc <- forecast(model1b)
+str(fc)
+#> List of 16
+#> $ call :Class 'formula' language count ~ s(year_fac, bs = "re") - 1
+#> .. ..- attr(*, ".Environment")=<environment: 0x0000024869b48078>
+#> $ trend_call : NULL
+#> $ family : chr "poisson"
+#> $ family_pars : NULL
+#> $ trend_model : chr "None"
+#> $ drift : logi FALSE
+#> $ use_lv : logi FALSE
+#> $ fit_engine : chr "stan"
+#> $ type : chr "response"
+#> $ series_names : Factor w/ 1 level "PP": 1
+#> $ train_observations:List of 1
+#> ..$ PP: int [1:160] 0 1 2 NA 10 NA NA 16 18 12 ...
+#> $ train_times : num [1:160] 1 2 3 4 5 6 7 8 9 10 ...
+#> $ test_observations :List of 1
+#> ..$ PP: int [1:39] NA 0 0 10 3 14 18 NA 28 46 ...
+#> $ test_times : num [1:39] 161 162 163 164 165 166 167 168 169 170 ...
+#> $ hindcasts :List of 1
+#> ..$ PP: num [1:2000, 1:160] 12 7 15 10 5 13 13 8 3 11 ...
+#> .. ..- attr(*, "dimnames")=List of 2
+#> .. .. ..$ : NULL
+#> .. .. ..$ : chr [1:160] "ypred[1,1]" "ypred[2,1]" "ypred[3,1]" "ypred[4,1]" ...
+#> $ forecasts :List of 1
+#> ..$ PP: num [1:2000, 1:39] 7 6 8 8 10 14 13 5 10 7 ...
+#> .. ..- attr(*, "dimnames")=List of 2
+#> .. .. ..$ : NULL
+#> .. .. ..$ : chr [1:39] "ypred[161,1]" "ypred[162,1]" "ypred[163,1]" "ypred[164,1]" ...
+#> - attr(*, "class")= chr "mvgam_forecast"
Next, fit a model that includes ndvi
as a linear predictor:
-model2 <- mvgam(count ~ s(year_fac, bs = 're') +
- ndvi - 1,
- family = poisson(),
- data = data_train,
- newdata = data_test)
model2 <- mvgam(count ~ s(year_fac, bs = 're') +
+ ndvi - 1,
+ family = poisson(),
+ data = data_train,
+ newdata = data_test)
The model can be described mathematically as follows: \[\begin{align*}
\boldsymbol{count}_t & \sim \text{Poisson}(\lambda_t) \\
log(\lambda_t) & = \beta_{year[year_t]} + \beta_{ndvi} * \boldsymbol{ndvi}_t
\end{align*}\]
@@ -1319,145 +1283,140 @@ Where the \(\beta_{year}\) effects
are the same as before but we now have another predictor \((\beta_{ndvi})\) that applies to the
Rather than printing the summary each time, we can also quickly look
at the posterior empirical quantiles for the fixed effect of
Look at the estimated effect of
This plot indicates a positive linear effect of
Look at the estimated effect of
The posterior distribution for the effect of
Given our model used a nonlinear link function (log link in this
@@ -1467,25 +1426,13 @@ Now it is easier to get a sense of the nonlinear but positive
-relationship estimated between
Adding predictors as “fixed” effects
ndvi
value at each timepoint \(t\). Inspect the summary of this model:
summary(model2)
-#> GAM formula:
-#> count ~ ndvi + s(year_fac, bs = "re") - 1
-#>
-#> Family:
-#> poisson
-#>
-#> Link function:
-#> log
-#>
-#> Trend model:
-#> None
-#>
-#> N series:
-#> 1
-#>
-#> N timepoints:
-#> 160
-#>
-#> Status:
-#> Fitted using Stan
-#> 4 chains, each with iter = 1000; warmup = 500; thin = 1
-#> Total post-warmup draws = 2000
-#>
-#>
-#> GAM coefficient (beta) estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> ndvi 0.32 0.39 0.46 1 1696
-#> s(year_fac).1 1.10 1.40 1.70 1 2512
-#> s(year_fac).2 1.80 2.00 2.20 1 2210
-#> s(year_fac).3 2.20 2.40 2.60 1 2109
-#> s(year_fac).4 2.30 2.50 2.70 1 1780
-#> s(year_fac).5 1.20 1.40 1.60 1 2257
-#> s(year_fac).6 1.00 1.30 1.50 1 2827
-#> s(year_fac).7 1.10 1.40 1.70 1 2492
-#> s(year_fac).8 2.10 2.30 2.50 1 2188
-#> s(year_fac).9 2.70 2.90 3.00 1 2014
-#> s(year_fac).10 2.00 2.20 2.40 1 2090
-#> s(year_fac).11 2.30 2.40 2.60 1 1675
-#> s(year_fac).12 2.50 2.70 2.80 1 2108
-#> s(year_fac).13 1.40 1.60 1.80 1 2161
-#> s(year_fac).14 0.46 2.00 3.20 1 1849
-#> s(year_fac).15 0.53 2.00 3.30 1 1731
-#> s(year_fac).16 0.53 2.00 3.30 1 1859
-#> s(year_fac).17 0.59 1.90 3.20 1 1761
-#>
-#> GAM group-level estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> mean(s(year_fac)) 1.6 2.00 2.3 1.01 397
-#> sd(s(year_fac)) 0.4 0.59 1.0 1.01 395
-#>
-#> Approximate significance of GAM smooths:
-#> edf Ref.df Chi.sq p-value
-#> s(year_fac) 11.2 17 3096 <2e-16 ***
-#> ---
-#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
-#>
-#> Stan MCMC diagnostics:
-#> n_eff / iter looks reasonable for all parameters
-#> Rhat looks reasonable for all parameters
-#> 0 of 2000 iterations ended with a divergence (0%)
-#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
-#> E-FMI indicated no pathological behavior
-#>
-#> Samples were drawn using NUTS(diag_e) at Tue Apr 16 1:00:50 PM 2024.
-#> For each parameter, n_eff is a crude measure of effective sample size,
-#> and Rhat is the potential scale reduction factor on split MCMC chains
-#> (at convergence, Rhat = 1)
summary(model2)
+#> GAM formula:
+#> count ~ ndvi + s(year_fac, bs = "re") - 1
+#> <environment: 0x0000024869b48078>
+#>
+#> Family:
+#> poisson
+#>
+#> Link function:
+#> log
+#>
+#> Trend model:
+#> None
+#>
+#> N series:
+#> 1
+#>
+#> N timepoints:
+#> 199
+#>
+#> Status:
+#> Fitted using Stan
+#> 4 chains, each with iter = 1000; warmup = 500; thin = 1
+#> Total post-warmup draws = 2000
+#>
+#>
+#> GAM coefficient (beta) estimates:
+#> 2.5% 50% 97.5% Rhat n_eff
+#> ndvi 0.32 0.39 0.46 1 1835
+#> s(year_fac).1 1.10 1.40 1.70 1 2267
+#> s(year_fac).2 1.80 2.00 2.20 1 2518
+#> s(year_fac).3 2.20 2.40 2.60 1 2140
+#> s(year_fac).4 2.30 2.50 2.70 1 1978
+#> s(year_fac).5 1.20 1.40 1.60 1 2341
+#> s(year_fac).6 1.00 1.30 1.50 1 2318
+#> s(year_fac).7 1.20 1.40 1.70 1 2447
+#> s(year_fac).8 2.10 2.30 2.50 1 2317
+#> s(year_fac).9 2.70 2.90 3.00 1 1916
+#> s(year_fac).10 2.00 2.20 2.40 1 2791
+#> s(year_fac).11 2.30 2.40 2.60 1 2214
+#> s(year_fac).12 2.50 2.70 2.80 1 2010
+#> s(year_fac).13 1.40 1.60 1.80 1 2976
+#> s(year_fac).14 0.68 2.00 3.30 1 1581
+#> s(year_fac).15 0.69 2.00 3.30 1 1874
+#> s(year_fac).16 0.56 2.00 3.40 1 1442
+#> s(year_fac).17 0.60 2.00 3.30 1 1671
+#>
+#> GAM group-level estimates:
+#> 2.5% 50% 97.5% Rhat n_eff
+#> mean(s(year_fac)) 1.6 2.0 2.3 1.01 417
+#> sd(s(year_fac)) 0.4 0.6 1.0 1.01 417
+#>
+#> Approximate significance of GAM smooths:
+#> edf Ref.df Chi.sq p-value
+#> s(year_fac) 10.9 17 265 <2e-16 ***
+#> ---
+#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
+#>
+#> Stan MCMC diagnostics:
+#> n_eff / iter looks reasonable for all parameters
+#> Rhat looks reasonable for all parameters
+#> 0 of 2000 iterations ended with a divergence (0%)
+#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
+#> E-FMI indicated no pathological behavior
+#>
+#> Samples were drawn using NUTS(diag_e) at Wed Sep 04 11:32:01 AM 2024.
+#> For each parameter, n_eff is a crude measure of effective sample size,
+#> and Rhat is the potential scale reduction factor on split MCMC chains
+#> (at convergence, Rhat = 1)
ndvi
(and other linear predictor coefficients) using
coef
:
coef(model2)
-#> 2.5% 50% 97.5% Rhat n_eff
-#> ndvi 0.3198694 0.3899835 0.4571083 1 1696
-#> s(year_fac).1 1.1176373 1.4085900 1.6603838 1 2512
-#> s(year_fac).2 1.8008470 2.0005000 2.2003670 1 2210
-#> s(year_fac).3 2.1842727 2.3822950 2.5699363 1 2109
-#> s(year_fac).4 2.3267037 2.5022700 2.6847912 1 1780
-#> s(year_fac).5 1.1945853 1.4215950 1.6492038 1 2257
-#> s(year_fac).6 1.0332160 1.2743050 1.5091052 1 2827
-#> s(year_fac).7 1.1467567 1.4119100 1.6751850 1 2492
-#> s(year_fac).8 2.0710285 2.2713050 2.4596285 1 2188
-#> s(year_fac).9 2.7198967 2.8557300 2.9874662 1 2014
-#> s(year_fac).10 1.9798730 2.1799600 2.3932595 1 2090
-#> s(year_fac).11 2.2734940 2.4374700 2.6130482 1 1675
-#> s(year_fac).12 2.5421157 2.6935350 2.8431822 1 2108
-#> s(year_fac).13 1.3786087 1.6177850 1.8495872 1 2161
-#> s(year_fac).14 0.4621041 1.9744700 3.2480377 1 1849
-#> s(year_fac).15 0.5293684 2.0014200 3.2766722 1 1731
-#> s(year_fac).16 0.5285142 1.9786450 3.2859085 1 1859
-#> s(year_fac).17 0.5909969 1.9462850 3.2306940 1 1761
ndvi
using
-plot.mvgam
with type = 'pterms'
ndvi
on
-log(counts)
. But it may be easier to visualise using a
-histogram, especially for parametric (linear) effects. This can be done
-by first extracting the posterior coefficients as we did in the first
-example:
beta_post <- as.data.frame(model2, variable = 'betas')
-dplyr::glimpse(beta_post)
-#> Rows: 2,000
-#> Columns: 18
-#> $ ndvi <dbl> 0.330568, 0.398734, 0.357498, 0.484288, 0.380087, 0.3…
-#> $ `s(year_fac).1` <dbl> 1.55868, 1.27949, 1.24414, 1.02997, 1.64712, 1.07519,…
-#> $ `s(year_fac).2` <dbl> 1.98967, 2.00846, 2.07493, 1.84431, 2.01590, 2.16466,…
-#> $ `s(year_fac).3` <dbl> 2.41434, 2.16020, 2.67324, 2.33332, 2.32415, 2.45516,…
-#> $ `s(year_fac).4` <dbl> 2.62215, 2.53992, 2.50659, 2.23671, 2.56663, 2.40054,…
-#> $ `s(year_fac).5` <dbl> 1.37221, 1.44795, 1.53019, 1.27623, 1.50771, 1.49515,…
-#> $ `s(year_fac).6` <dbl> 1.323980, 1.220200, 1.165610, 1.271620, 1.193820, 1.3…
-#> $ `s(year_fac).7` <dbl> 1.52005, 1.30735, 1.42566, 1.13335, 1.61581, 1.31853,…
-#> $ `s(year_fac).8` <dbl> 2.40223, 2.20021, 2.44366, 2.17192, 2.20837, 2.33066,…
-#> $ `s(year_fac).9` <dbl> 2.91580, 2.90942, 2.87679, 2.64941, 2.85401, 2.78744,…
-#> $ `s(year_fac).10` <dbl> 2.46559, 2.01466, 2.08319, 2.01400, 2.22965, 2.26523,…
-#> $ `s(year_fac).11` <dbl> 2.52118, 2.45406, 2.46667, 2.20664, 2.42495, 2.46256,…
-#> $ `s(year_fac).12` <dbl> 2.72360, 2.63546, 2.86718, 2.59035, 2.76576, 2.56130,…
-#> $ `s(year_fac).13` <dbl> 1.67388, 1.50790, 1.52463, 1.39004, 1.72927, 1.61023,…
-#> $ `s(year_fac).14` <dbl> 2.583650, 2.034240, 1.819820, 1.579280, 2.426880, 1.8…
-#> $ `s(year_fac).15` <dbl> 2.57365, 2.28723, 1.67404, 1.46796, 2.49512, 2.71230,…
-#> $ `s(year_fac).16` <dbl> 1.801660, 2.185540, 1.756500, 2.098760, 2.270640, 1.8…
-#> $ `s(year_fac).17` <dbl> 0.886081, 3.409300, -0.371795, 2.494990, 1.822150, 2.…
coef(model2)
+#> 2.5% 50% 97.5% Rhat n_eff
+#> ndvi 0.3219980 0.3897445 0.4574249 1 1835
+#> s(year_fac).1 1.1251775 1.3947750 1.6786482 1 2267
+#> s(year_fac).2 1.8029418 2.0028550 2.1983995 1 2518
+#> s(year_fac).3 2.1828568 2.3833850 2.5667087 1 2140
+#> s(year_fac).4 2.3216057 2.5041200 2.6876810 1 1978
+#> s(year_fac).5 1.1947695 1.4253400 1.6404350 1 2341
+#> s(year_fac).6 1.0302375 1.2719450 1.5071408 1 2318
+#> s(year_fac).7 1.1527560 1.4237300 1.6631115 1 2447
+#> s(year_fac).8 2.1024232 2.2693150 2.4510250 1 2317
+#> s(year_fac).9 2.7252610 2.8546400 2.9843943 1 1916
+#> s(year_fac).10 1.9848580 2.1821250 2.3831660 1 2791
+#> s(year_fac).11 2.2655225 2.4353150 2.6026625 1 2214
+#> s(year_fac).12 2.5445065 2.6914450 2.8342325 1 2010
+#> s(year_fac).13 1.3758873 1.6138700 1.8464015 1 2976
+#> s(year_fac).14 0.6763885 2.0087850 3.2876045 1 1581
+#> s(year_fac).15 0.6927259 1.9904050 3.3384072 1 1874
+#> s(year_fac).16 0.5608287 1.9907700 3.3807122 1 1442
+#> s(year_fac).17 0.5989812 2.0276550 3.3404602 1 1671
ndvi
using a
+histogram. This can be done by first extracting the posterior
+coefficients:
beta_post <- as.data.frame(model2, variable = 'betas')
+dplyr::glimpse(beta_post)
+#> Rows: 2,000
+#> Columns: 18
+#> $ ndvi <dbl> 0.356033, 0.419265, 0.401340, 0.408547, 0.365358, 0.4…
+#> $ `s(year_fac).1` <dbl> 1.35402, 1.49105, 1.26309, 1.22981, 1.55560, 1.46103,…
+#> $ `s(year_fac).2` <dbl> 2.08371, 2.03387, 1.81765, 1.82714, 2.20838, 2.12063,…
+#> $ `s(year_fac).3` <dbl> 2.43563, 2.31586, 2.36554, 2.40869, 2.38362, 2.24695,…
+#> $ `s(year_fac).4` <dbl> 2.66322, 2.44635, 2.48168, 2.50480, 2.57476, 2.42963,…
+#> $ `s(year_fac).5` <dbl> 1.40050, 1.33274, 1.38421, 1.36805, 1.40633, 1.34501,…
+#> $ `s(year_fac).6` <dbl> 1.431430, 1.341260, 1.332540, 1.300360, 1.243430, 1.2…
+#> $ `s(year_fac).7` <dbl> 1.46134, 1.25454, 1.42857, 1.41534, 1.40782, 1.40564,…
+#> $ `s(year_fac).8` <dbl> 2.38557, 2.27800, 2.25584, 2.26500, 2.31796, 2.20621,…
+#> $ `s(year_fac).9` <dbl> 2.80751, 2.83031, 2.73530, 2.78705, 2.84959, 2.77223,…
+#> $ `s(year_fac).10` <dbl> 2.09385, 2.08026, 2.22424, 2.23312, 2.18318, 2.09513,…
+#> $ `s(year_fac).11` <dbl> 2.41232, 2.36077, 2.43681, 2.47770, 2.51130, 2.44691,…
+#> $ `s(year_fac).12` <dbl> 2.74478, 2.67146, 2.68431, 2.72280, 2.80983, 2.60829,…
+#> $ `s(year_fac).13` <dbl> 1.63171, 1.55937, 1.71905, 1.70544, 1.52311, 1.48743,…
+#> $ `s(year_fac).14` <dbl> 2.628480, 1.679080, 1.817260, 1.805210, 1.732590, 2.2…
+#> $ `s(year_fac).15` <dbl> 2.5601600, 1.3652300, 1.7714600, 1.7648100, 2.4456900…
+#> $ `s(year_fac).16` <dbl> 1.345990, 2.422970, 1.949050, 1.983840, 2.160030, 1.8…
+#> $ `s(year_fac).17` <dbl> 2.206790, 1.674940, 1.699170, 1.657840, 2.982300, 1.1…
ndvi
is
stored in the ndvi
column. A quick histogram confirms our
inference that log(counts)
respond positively to increases
in ndvi
:hist(beta_post$ndvi,
- xlim = c(-1 * max(abs(beta_post$ndvi)),
- max(abs(beta_post$ndvi))),
- col = 'darkred',
- border = 'white',
- xlab = expression(beta[NDVI]),
- ylab = '',
- yaxt = 'n',
- main = '',
- lwd = 2)
-abline(v = 0, lwd = 2.5)
hist(beta_post$ndvi,
+ xlim = c(-1 * max(abs(beta_post$ndvi)),
+ max(abs(beta_post$ndvi))),
+ col = 'darkred',
+ border = 'white',
+ xlab = expression(beta[NDVI]),
+ ylab = '',
+ yaxt = 'n',
+ main = '',
+ lwd = 2)
+abline(v = 0, lwd = 2.5)
marginaleffects
support
this relatively straightforward. Objects of class mvgam can
be used with marginaleffects
to inspect contrasts,
scenario-based predictions, conditional and marginal effects, all on the
-outcome scale. Here we will use the plot_predictions
-function from marginaleffects
to inspect the conditional
-effect of ndvi
(use ?plot_predictions
for
-guidance on how to modify these plots):
-plot_predictions(model2,
- condition = "ndvi",
- # include the observed count values
- # as points, and show rugs for the observed
- # ndvi and count values on the axes
- points = 0.5, rug = TRUE)
ndvi
and count
.
-Like brms
, mvgam
has the simple
+outcome scale. Like brms
, mvgam
has the simple
conditional_effects
function to make quick and informative
-plots for main effects. This will likely be your go-to function for
-quickly understanding patterns from fitted mvgam
models
marginaleffects
+support. This will likely be your go-to function for quickly
+understanding patterns from fitted mvgam
models
+
+
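A quick sketch of that workflow for the current model (assuming the fitted object is model2; the plot = FALSE pattern matches the calls shown later in this diff):
# compute conditional effects on the outcome scale and store the
# resulting plot objects rather than printing them immediately
ce_plots <- plot(conditional_effects(model2), plot = FALSE)
ce_plots[[1]]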
-1
in the formula now):
-model3 <- mvgam(count ~ s(time, bs = 'bs', k = 15) +
- ndvi,
- family = poisson(),
- data = data_train,
- newdata = data_test)
model3 <- mvgam(count ~ s(time, bs = 'bs', k = 15) +
+ ndvi,
+ family = poisson(),
+ data = data_train,
+ newdata = data_test)
The model can be described mathematically as follows: \[\begin{align*}
\boldsymbol{count}_t & \sim \text{Poisson}(\lambda_t) \\
log(\lambda_t) & = f(\boldsymbol{time})_t + \beta_{ndvi} * \boldsymbol{ndvi}_t
\end{align*}\]
@@ -1543,178 +1490,149 @@ Adding predictors as smooths
wiggly spline. Note that sometimes there are multiple smoothing
penalties that contribute to the covariance matrix, but I am only
showing one here for simplicity. View the summary as before
summary(model3)
-#> GAM formula:
-#> count ~ s(time, bs = "bs", k = 15) + ndvi
-#>
-#> Family:
-#> poisson
-#>
-#> Link function:
-#> log
-#>
-#> Trend model:
-#> None
-#>
-#> N series:
-#> 1
-#>
-#> N timepoints:
-#> 160
-#>
-#> Status:
-#> Fitted using Stan
-#> 4 chains, each with iter = 1000; warmup = 500; thin = 1
-#> Total post-warmup draws = 2000
-#>
-#>
-#> GAM coefficient (beta) estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> (Intercept) 2.00 2.10 2.200 1.00 903
-#> ndvi 0.26 0.33 0.390 1.00 942
-#> s(time).1 -2.10 -1.10 0.029 1.01 484
-#> s(time).2 0.45 1.30 2.400 1.01 411
-#> s(time).3 -0.43 0.45 1.500 1.02 347
-#> s(time).4 1.60 2.50 3.600 1.02 342
-#> s(time).5 -1.10 -0.22 0.880 1.02 375
-#> s(time).6 -0.53 0.36 1.600 1.01 352
-#> s(time).7 -1.50 -0.51 0.560 1.01 406
-#> s(time).8 0.63 1.50 2.600 1.02 340
-#> s(time).9 1.20 2.10 3.200 1.02 346
-#> s(time).10 -0.34 0.54 1.600 1.01 364
-#> s(time).11 0.92 1.80 2.900 1.02 332
-#> s(time).12 0.67 1.50 2.500 1.01 398
-#> s(time).13 -1.20 -0.32 0.700 1.01 420
-#> s(time).14 -7.90 -4.20 -1.200 1.01 414
-#>
-#> Approximate significance of GAM smooths:
-#> edf Ref.df Chi.sq p-value
-#> s(time) 9.41 14 790 <2e-16 ***
-#> ---
-#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
-#>
-#> Stan MCMC diagnostics:
-#> n_eff / iter looks reasonable for all parameters
-#> Rhat looks reasonable for all parameters
-#> 0 of 2000 iterations ended with a divergence (0%)
-#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
-#> E-FMI indicated no pathological behavior
-#>
-#> Samples were drawn using NUTS(diag_e) at Tue Apr 16 1:01:29 PM 2024.
-#> For each parameter, n_eff is a crude measure of effective sample size,
-#> and Rhat is the potential scale reduction factor on split MCMC chains
-#> (at convergence, Rhat = 1)
summary(model3)
+#> GAM formula:
+#> count ~ s(time, bs = "bs", k = 15) + ndvi
+#> <environment: 0x0000024869b48078>
+#>
+#> Family:
+#> poisson
+#>
+#> Link function:
+#> log
+#>
+#> Trend model:
+#> None
+#>
+#> N series:
+#> 1
+#>
+#> N timepoints:
+#> 199
+#>
+#> Status:
+#> Fitted using Stan
+#> 4 chains, each with iter = 1000; warmup = 500; thin = 1
+#> Total post-warmup draws = 2000
+#>
+#>
+#> GAM coefficient (beta) estimates:
+#> 2.5% 50% 97.5% Rhat n_eff
+#> (Intercept) 2.00 2.10 2.2000 1.00 815
+#> ndvi 0.26 0.33 0.4000 1.01 856
+#> s(time).1 -2.10 -1.10 0.0073 1.01 513
+#> s(time).2 0.48 1.30 2.3000 1.01 433
+#> s(time).3 -0.49 0.43 1.5000 1.01 389
+#> s(time).4 1.60 2.40 3.5000 1.01 375
+#> s(time).5 -1.10 -0.23 0.8300 1.01 399
+#> s(time).6 -0.54 0.37 1.5000 1.01 415
+#> s(time).7 -1.50 -0.54 0.5000 1.01 423
+#> s(time).8 0.62 1.50 2.5000 1.01 378
+#> s(time).9 1.20 2.00 3.1000 1.01 379
+#> s(time).10 -0.31 0.52 1.6000 1.01 380
+#> s(time).11 0.80 1.70 2.8000 1.01 377
+#> s(time).12 0.71 1.50 2.4000 1.01 399
+#> s(time).13 -1.20 -0.35 0.6400 1.01 497
+#> s(time).14 -7.50 -4.10 -1.2000 1.01 490
+#>
+#> Approximate significance of GAM smooths:
+#> edf Ref.df Chi.sq p-value
+#> s(time) 9.83 14 64.4 <2e-16 ***
+#> ---
+#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
+#>
+#> Stan MCMC diagnostics:
+#> n_eff / iter looks reasonable for all parameters
+#> Rhat looks reasonable for all parameters
+#> 0 of 2000 iterations ended with a divergence (0%)
+#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
+#> E-FMI indicated no pathological behavior
+#>
+#> Samples were drawn using NUTS(diag_e) at Wed Sep 04 11:32:49 AM 2024.
+#> For each parameter, n_eff is a crude measure of effective sample size,
+#> and Rhat is the potential scale reduction factor on split MCMC chains
+#> (at convergence, Rhat = 1)
The summary above now contains posterior estimates for the smoothing
parameters as well as the basis coefficients for the nonlinear effect of
-time
. We can visualize the conditional time
-effect using the plot
function with
-type = 'smooths'
:
By default this plot shows posterior empirical quantiles, but it can also be helpful to view some realizations of the underlying function (here, each line is a different potential curve drawn from the posterior of all possible curves):
A useful question when modelling using GAMs is to identify where the function is changing most rapidly. To address this, we can plot estimated 1st derivatives of the spline:
Here, values above >0 indicate the function was increasing at that time point, while values <0 indicate the function was declining. The most rapid declines appear to have been happening around timepoints 50 and again toward the end of the training period, for example.
Use conditional_effects again for useful plots on the outcome scale:
Or on the link scale:
+time. We can visualize conditional_effects as
+before:
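A rough sketch of these plotting options (assuming the fitted object is model3; exact arguments may differ across mvgam versions):
# posterior quantiles for the estimated smooth of time
plot(model3, type = 'smooths')
# conditional effects of the predictors on the outcome (response) scale
plot(conditional_effects(model3))
# and on the link (log) scale
plot(conditional_effects(model3, type = 'link'))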
Inspect the underlying Stan
code to gain some idea of
how the spline is being penalized:
code(model3)
-#> // Stan model code generated by package mvgam
-#> data {
-#> int<lower=0> total_obs; // total number of observations
-#> int<lower=0> n; // number of timepoints per series
-#> int<lower=0> n_sp; // number of smoothing parameters
-#> int<lower=0> n_series; // number of series
-#> int<lower=0> num_basis; // total number of basis coefficients
-#> vector[num_basis] zero; // prior locations for basis coefficients
-#> matrix[total_obs, num_basis] X; // mgcv GAM design matrix
-#> array[n, n_series] int<lower=0> ytimes; // time-ordered matrix (which col in X belongs to each [time, series] observation?)
-#> matrix[14, 28] S1; // mgcv smooth penalty matrix S1
-#> int<lower=0> n_nonmissing; // number of nonmissing observations
-#> array[n_nonmissing] int<lower=0> flat_ys; // flattened nonmissing observations
-#> matrix[n_nonmissing, num_basis] flat_xs; // X values for nonmissing observations
-#> array[n_nonmissing] int<lower=0> obs_ind; // indices of nonmissing observations
-#> }
-#> parameters {
-#> // raw basis coefficients
-#> vector[num_basis] b_raw;
-#>
-#> // smoothing parameters
-#> vector<lower=0>[n_sp] lambda;
-#> }
-#> transformed parameters {
-#> // basis coefficients
-#> vector[num_basis] b;
-#> b[1 : num_basis] = b_raw[1 : num_basis];
-#> }
-#> model {
-#> // prior for (Intercept)...
-#> b_raw[1] ~ student_t(3, 2.6, 2.5);
-#>
-#> // prior for ndvi...
-#> b_raw[2] ~ student_t(3, 0, 2);
-#>
-#> // prior for s(time)...
-#> b_raw[3 : 16] ~ multi_normal_prec(zero[3 : 16],
-#> S1[1 : 14, 1 : 14] * lambda[1]
-#> + S1[1 : 14, 15 : 28] * lambda[2]);
-#>
-#> // priors for smoothing parameters
-#> lambda ~ normal(5, 30);
-#> {
-#> // likelihood functions
-#> flat_ys ~ poisson_log_glm(flat_xs, 0.0, b);
-#> }
-#> }
-#> generated quantities {
-#> vector[total_obs] eta;
-#> matrix[n, n_series] mus;
-#> vector[n_sp] rho;
-#> array[n, n_series] int ypred;
-#> rho = log(lambda);
-#>
-#> // posterior predictions
-#> eta = X * b;
-#> for (s in 1 : n_series) {
-#> mus[1 : n, s] = eta[ytimes[1 : n, s]];
-#> ypred[1 : n, s] = poisson_log_rng(mus[1 : n, s]);
-#> }
-#> }
code(model3)
+#> // Stan model code generated by package mvgam
+#> data {
+#> int<lower=0> total_obs; // total number of observations
+#> int<lower=0> n; // number of timepoints per series
+#> int<lower=0> n_sp; // number of smoothing parameters
+#> int<lower=0> n_series; // number of series
+#> int<lower=0> num_basis; // total number of basis coefficients
+#> vector[num_basis] zero; // prior locations for basis coefficients
+#> matrix[total_obs, num_basis] X; // mgcv GAM design matrix
+#> array[n, n_series] int<lower=0> ytimes; // time-ordered matrix (which col in X belongs to each [time, series] observation?)
+#> matrix[14, 28] S1; // mgcv smooth penalty matrix S1
+#> int<lower=0> n_nonmissing; // number of nonmissing observations
+#> array[n_nonmissing] int<lower=0> flat_ys; // flattened nonmissing observations
+#> matrix[n_nonmissing, num_basis] flat_xs; // X values for nonmissing observations
+#> array[n_nonmissing] int<lower=0> obs_ind; // indices of nonmissing observations
+#> }
+#> parameters {
+#> // raw basis coefficients
+#> vector[num_basis] b_raw;
+#>
+#> // smoothing parameters
+#> vector<lower=0>[n_sp] lambda;
+#> }
+#> transformed parameters {
+#> // basis coefficients
+#> vector[num_basis] b;
+#> b[1 : num_basis] = b_raw[1 : num_basis];
+#> }
+#> model {
+#> // prior for (Intercept)...
+#> b_raw[1] ~ student_t(3, 2.6, 2.5);
+#>
+#> // prior for ndvi...
+#> b_raw[2] ~ student_t(3, 0, 2);
+#>
+#> // prior for s(time)...
+#> b_raw[3 : 16] ~ multi_normal_prec(zero[3 : 16],
+#> S1[1 : 14, 1 : 14] * lambda[1]
+#> + S1[1 : 14, 15 : 28] * lambda[2]);
+#>
+#> // priors for smoothing parameters
+#> lambda ~ normal(5, 30);
+#> {
+#> // likelihood functions
+#> flat_ys ~ poisson_log_glm(flat_xs, 0.0, b);
+#> }
+#> }
+#> generated quantities {
+#> vector[total_obs] eta;
+#> matrix[n, n_series] mus;
+#> vector[n_sp] rho;
+#> array[n, n_series] int ypred;
+#> rho = log(lambda);
+#>
+#> // posterior predictions
+#> eta = X * b;
+#> for (s in 1 : n_series) {
+#> mus[1 : n, s] = eta[ytimes[1 : n, s]];
+#> ypred[1 : n, s] = poisson_log_rng(mus[1 : n, s]);
+#> }
+#> }
The line below // prior for s(time)...
shows how the
spline basis coefficients are drawn from a zero-centred multivariate
normal distribution. The precision matrix \(S\) is penalized by two different smoothing
parameters (the \(\lambda\)’s) to
enforce smoothness and reduce overfitting
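In equation form, this prior can be sketched as follows (here \(S_1\) and \(S_2\) denote the two penalty components that the Stan code stores as blocks of the S1 matrix):
\[\begin{align*}
\boldsymbol{\beta}_{smooth} & \sim \text{MVNormal}(\boldsymbol{0}, (\lambda_1 S_1 + \lambda_2 S_2)^{-1})
\end{align*}\]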
mvgam
Forecasts from the above model are not ideal:
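For example (a sketch, assuming model3 and data_test as defined above):
# forecasts extrapolate the temporal spline beyond the training data
plot(model3, type = 'forecast', newdata = data_test)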
-#> Out of sample DRPS:
-#> [1] 288.3844
Why is this happening? The forecasts are driven almost entirely by variation in the temporal spline, which is extrapolating linearly forever beyond the edge of the training data. Any slight
@@ -1723,14 +1641,14 @@
mvgam
plot_mvgam_smooth(model3, smooth = 's(time)',
- # feed newdata to the plot function to generate
- # predictions of the temporal smooth to the end of the
- # testing period
- newdata = data.frame(time = 1:max(data_test$time),
- ndvi = 0))
-abline(v = max(data_train$time), lty = 'dashed', lwd = 2)
plot_mvgam_smooth(model3, smooth = 's(time)',
+ # feed newdata to the plot function to generate
+ # predictions of the temporal smooth to the end of the
+ # testing period
+ newdata = data.frame(time = 1:max(data_test$time),
+ ndvi = 0))
+abline(v = max(data_train$time), lty = 'dashed', lwd = 2)
This model is not doing well. Clearly we need to somehow account for
the strong temporal autocorrelation when modelling these data without
using a smooth function of time
. Now onto another prominent
@@ -1745,11 +1663,11 @@
mvgam
mvgam
can include combinations of smooths and dynamic
components:
-model4 <- mvgam(count ~ s(ndvi, k = 6),
- family = poisson(),
- data = data_train,
- newdata = data_test,
- trend_model = 'AR1')
model4 <- mvgam(count ~ s(ndvi, k = 6),
+ family = poisson(),
+ data = data_train,
+ newdata = data_test,
+ trend_model = 'AR1')
The model can be described mathematically as follows: \[\begin{align*}
\boldsymbol{count}_t & \sim \text{Poisson}(\lambda_t) \\
log(\lambda_t) & = f(\boldsymbol{ndvi})_t + z_t \\
z_t & \sim \text{Normal}(ar1 * z_{t-1}, \sigma_{error})
\end{align*}\]
@@ -1767,84 +1685,76 @@ Latent dynamics in
more realistic estimates of the residual autocorrelation parameters.
Summarise the model to see how it now returns posterior summaries for
the latent AR1 process:
mvgam
summary(model4)
-#> GAM formula:
-#> count ~ s(ndvi, k = 6)
-#>
-#> Family:
-#> poisson
-#>
-#> Link function:
-#> log
-#>
-#> Trend model:
-#> AR1
-#>
-#> N series:
-#> 1
-#>
-#> N timepoints:
-#> 160
-#>
-#> Status:
-#> Fitted using Stan
-#> 4 chains, each with iter = 1000; warmup = 500; thin = 1
-#> Total post-warmup draws = 2000
-#>
-#>
-#> GAM coefficient (beta) estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> (Intercept) 1.200 1.9000 2.500 1.03 63
-#> s(ndvi).1 -0.066 0.0100 0.160 1.01 318
-#> s(ndvi).2 -0.110 0.0190 0.340 1.00 286
-#> s(ndvi).3 -0.048 -0.0019 0.051 1.00 560
-#> s(ndvi).4 -0.210 0.1200 1.500 1.01 198
-#> s(ndvi).5 -0.079 0.1500 0.360 1.01 350
-#>
-#> Approximate significance of GAM smooths:
-#> edf Ref.df Chi.sq p-value
-#> s(ndvi) 2.26 5 87.8 0.1
-#>
-#> Latent trend parameter AR estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> ar1[1] 0.70 0.81 0.92 1.01 234
-#> sigma[1] 0.68 0.80 0.96 1.00 488
-#>
-#> Stan MCMC diagnostics:
-#> n_eff / iter looks reasonable for all parameters
-#> Rhat looks reasonable for all parameters
-#> 0 of 2000 iterations ended with a divergence (0%)
-#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
-#> E-FMI indicated no pathological behavior
-#>
-#> Samples were drawn using NUTS(diag_e) at Tue Apr 16 1:02:26 PM 2024.
-#> For each parameter, n_eff is a crude measure of effective sample size,
-#> and Rhat is the potential scale reduction factor on split MCMC chains
-#> (at convergence, Rhat = 1)
View conditional smooths for the ndvi
effect:
summary(model4)
+#> GAM formula:
+#> count ~ s(ndvi, k = 6)
+#> <environment: 0x0000024869b48078>
+#>
+#> Family:
+#> poisson
+#>
+#> Link function:
+#> log
+#>
+#> Trend model:
+#> AR1
+#>
+#> N series:
+#> 1
+#>
+#> N timepoints:
+#> 199
+#>
+#> Status:
+#> Fitted using Stan
+#> 4 chains, each with iter = 1000; warmup = 500; thin = 1
+#> Total post-warmup draws = 2000
+#>
+#>
+#> GAM coefficient (beta) estimates:
+#> 2.5% 50% 97.5% Rhat n_eff
+#> (Intercept) 1.100 2.0000 2.600 1.13 19
+#> s(ndvi).1 -0.180 -0.0110 0.073 1.01 335
+#> s(ndvi).2 -0.130 0.0200 0.300 1.01 381
+#> s(ndvi).3 -0.052 -0.0018 0.040 1.00 893
+#> s(ndvi).4 -0.210 0.1300 1.300 1.02 253
+#> s(ndvi).5 -0.080 0.1500 0.350 1.00 407
+#>
+#> Approximate significance of GAM smooths:
+#> edf Ref.df Chi.sq p-value
+#> s(ndvi) 2.47 5 5.7 0.34
+#>
+#> Latent trend parameter AR estimates:
+#> 2.5% 50% 97.5% Rhat n_eff
+#> ar1[1] 0.70 0.81 0.92 1 416
+#> sigma[1] 0.68 0.79 0.95 1 378
+#>
+#> Stan MCMC diagnostics:
+#> n_eff / iter looks reasonable for all parameters
+#> Rhats above 1.05 found for 94 parameters
+#> *Diagnose further to investigate why the chains have not mixed
+#> 0 of 2000 iterations ended with a divergence (0%)
+#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
+#> E-FMI indicated no pathological behavior
+#>
+#> Samples were drawn using NUTS(diag_e) at Wed Sep 04 11:34:00 AM 2024.
+#> For each parameter, n_eff is a crude measure of effective sample size,
+#> and Rhat is the potential scale reduction factor on split MCMC chains
+#> (at convergence, Rhat = 1)
View posterior hindcasts / forecasts and compare against the out of sample test data
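A sketch of one way to do this (assuming model4 and data_test as above):
# hindcasts and forecasts from the AR1 model, compared against the test set
plot(model4, type = 'forecast', newdata = data_test)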
-#> Out of sample DRPS:
-#> [1] 150.5241
The trend is evolving as an AR1 process, which we can also view:
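For example (a sketch; the type = 'trend' option assumes the same plot.mvgam interface used above):
# posterior estimates of the latent AR1 trend, including the forecast horizon
plot(model4, type = 'trend', newdata = data_test)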
In-sample model performance can be interrogated using leave-one-out
cross-validation utilities from the loo
package (a higher
value is preferred for this metric):
loo_compare(model3, model4)
-#> Warning: Some Pareto k diagnostic values are too high. See help('pareto-k-diagnostic') for details.
-
-#> Warning: Some Pareto k diagnostic values are too high. See help('pareto-k-diagnostic') for details.
-#> elpd_diff se_diff
-#> model4 0.0 0.0
-#> model3 -558.9 66.4
The higher estimated log predictive density (ELPD) value for the dynamic model suggests it provides a better fit to the in-sample data.
@@ -1853,12 +1763,12 @@mvgam
forecast
and score
functions. Here we will
compare models based on their Discrete Ranked Probability Scores (a
lower value is preferred for this metric)
-fc_mod3 <- forecast(model3)
-fc_mod4 <- forecast(model4)
-score_mod3 <- score(fc_mod3, score = 'drps')
-score_mod4 <- score(fc_mod4, score = 'drps')
-sum(score_mod4$PP$score, na.rm = TRUE) - sum(score_mod3$PP$score, na.rm = TRUE)
-#> [1] -137.8603
fc_mod3 <- forecast(model3)
+fc_mod4 <- forecast(model4)
+score_mod3 <- score(fc_mod3, score = 'drps')
+score_mod4 <- score(fc_mod4, score = 'drps')
+sum(score_mod4$PP$score, na.rm = TRUE) - sum(score_mod3$PP$score, na.rm = TRUE)
+#> [1] -132.6078
A strongly negative value here suggests the score for the dynamic model (model 4) is much smaller than the score for the model with a smooth function of time (model 3)
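A quick sketch of how the two sets of scores could be compared across the forecast horizon (this assumes the score_mod3 and score_mod4 objects computed above, whose PP data frames contain a numeric score column):
# positive differences favour the dynamic model (model 4)
plot(score_mod3$PP$score - score_mod4$PP$score,
     type = 'h', xlab = 'Forecast horizon', ylab = 'DRPS difference')
abline(h = 0, lty = 'dashed')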
diff --git a/inst/doc/nmixtures.R b/inst/doc/nmixtures.R index a4e7a766..b1000e6e 100644 --- a/inst/doc/nmixtures.R +++ b/inst/doc/nmixtures.R @@ -1,16 +1,20 @@ -## ----echo = FALSE------------------------------------------------------------- -NOT_CRAN <- identical(tolower(Sys.getenv("NOT_CRAN")), "true") +params <- +list(EVAL = TRUE) + +## ---- echo = FALSE------------------------------------------------------------ knitr::opts_chunk$set( collapse = TRUE, comment = "#>", - purl = NOT_CRAN, - eval = NOT_CRAN + message = FALSE, + warning = FALSE, + eval = if (isTRUE(exists("params"))) params$EVAL else FALSE ) + ## ----setup, include=FALSE----------------------------------------------------- knitr::opts_chunk$set( echo = TRUE, - dpi = 150, + dpi = 100, fig.asp = 0.8, fig.width = 6, out.width = "60%", @@ -33,6 +37,7 @@ options(ggplot2.discrete.colour = c("#A25050", 'darkred', "#010048")) + ## ----------------------------------------------------------------------------- set.seed(999) # Simulate observations for species 1, which shows a declining trend and 0.7 detection probability @@ -92,16 +97,19 @@ testdat = testdat %>% cap = 50) %>% dplyr::select(-replicate)) + ## ----------------------------------------------------------------------------- testdat$species <- factor(testdat$species, levels = unique(testdat$species)) testdat$series <- factor(testdat$series, levels = unique(testdat$series)) + ## ----------------------------------------------------------------------------- dplyr::glimpse(testdat) head(testdat, 12) + ## ----------------------------------------------------------------------------- testdat %>% # each unique combination of site*species is a separate process @@ -110,6 +118,7 @@ testdat %>% dplyr::distinct() -> trend_map trend_map + ## ----include = FALSE, results='hide'------------------------------------------ mod <- mvgam( # the observation formula sets up linear predictors for @@ -132,48 +141,55 @@ mod <- mvgam( prior(normal(1, 1.5), class = Intercept_trend)), samples = 1000) + ## ----eval = FALSE------------------------------------------------------------- -# mod <- mvgam( -# # the observation formula sets up linear predictors for -# # detection probability on the logit scale -# formula = obs ~ species - 1, -# -# # the trend_formula sets up the linear predictors for -# # the latent abundance processes on the log scale -# trend_formula = ~ s(time, by = trend, k = 4) + species, -# -# # the trend_map takes care of the mapping -# trend_map = trend_map, -# -# # nmix() family and data -# family = nmix(), -# data = testdat, -# -# # priors can be set in the usual way -# priors = c(prior(std_normal(), class = b), -# prior(normal(1, 1.5), class = Intercept_trend)), -# samples = 1000) +## mod <- mvgam( +## # the observation formula sets up linear predictors for +## # detection probability on the logit scale +## formula = obs ~ species - 1, +## +## # the trend_formula sets up the linear predictors for +## # the latent abundance processes on the log scale +## trend_formula = ~ s(time, by = trend, k = 4) + species, +## +## # the trend_map takes care of the mapping +## trend_map = trend_map, +## +## # nmix() family and data +## family = nmix(), +## data = testdat, +## +## # priors can be set in the usual way +## priors = c(prior(std_normal(), class = b), +## prior(normal(1, 1.5), class = Intercept_trend)), +## samples = 1000) + ## ----------------------------------------------------------------------------- code(mod) + ## 
----------------------------------------------------------------------------- summary(mod) + ## ----------------------------------------------------------------------------- loo(mod) + ## ----------------------------------------------------------------------------- plot(mod, type = 'smooths', trend_effects = TRUE) + ## ----------------------------------------------------------------------------- -plot_predictions(mod, condition = 'species', +marginaleffects::plot_predictions(mod, condition = 'species', type = 'detection') + ylab('Pr(detection)') + ylim(c(0, 1)) + theme_classic() + theme(legend.position = 'none') + ## ----------------------------------------------------------------------------- hc <- hindcast(mod, type = 'latent_N') @@ -227,10 +243,12 @@ plot_latentN = function(hindcasts, data, species = 'sp_1'){ title = species) } + ## ----------------------------------------------------------------------------- plot_latentN(hc, testdat, species = 'sp_1') plot_latentN(hc, testdat, species = 'sp_2') + ## ----------------------------------------------------------------------------- # Date link load(url('https://github.com/doserjef/spAbundance/raw/main/data/dataNMixSim.rda')) @@ -251,6 +269,7 @@ det.cov[is.na(det.cov)] <- rnorm(length(which(is.na(det.cov)))) det.cov2 <- dataNMixSim$det.covs$det.cov.2 det.cov2[is.na(det.cov2)] <- rnorm(length(which(is.na(det.cov2)))) + ## ----------------------------------------------------------------------------- mod_data <- do.call(rbind, lapply(1:NROW(data.one.sp$y), function(x){ @@ -269,11 +288,13 @@ mod_data <- do.call(rbind, time = 1, cap = max(data.one.sp$y, na.rm = TRUE) + 20) + ## ----------------------------------------------------------------------------- NROW(mod_data) dplyr::glimpse(mod_data) head(mod_data) + ## ----------------------------------------------------------------------------- mod_data %>% # each unique combination of site*species is a separate process @@ -285,6 +306,7 @@ trend_map %>% dplyr::arrange(trend) %>% head(12) + ## ----include = FALSE, results='hide'------------------------------------------ mod <- mvgam( # effects of covariates on detection probability; @@ -317,43 +339,47 @@ mod <- mvgam( residuals = FALSE, samples = 1000) + ## ----eval=FALSE--------------------------------------------------------------- -# mod <- mvgam( -# # effects of covariates on detection probability; -# # here we use penalized splines for both continuous covariates -# formula = y ~ s(det_cov, k = 4) + s(det_cov2, k = 4), -# -# # effects of the covariates on latent abundance; -# # here we use a penalized spline for the continuous covariate and -# # hierarchical intercepts for the factor covariate -# trend_formula = ~ s(abund_cov, k = 4) + -# s(abund_fac, bs = 're'), -# -# # link multiple observations to each site -# trend_map = trend_map, -# -# # nmix() family and supplied data -# family = nmix(), -# data = mod_data, -# -# # standard normal priors on key regression parameters -# priors = c(prior(std_normal(), class = 'b'), -# prior(std_normal(), class = 'Intercept'), -# prior(std_normal(), class = 'Intercept_trend'), -# prior(std_normal(), class = 'sigma_raw_trend')), -# -# # use Stan's variational inference for quicker results -# algorithm = 'meanfield', -# -# # no need to compute "series-level" residuals -# residuals = FALSE, -# samples = 1000) +## mod <- mvgam( +## # effects of covariates on detection probability; +## # here we use penalized splines for both continuous covariates +## formula = y ~ s(det_cov, k = 4) + s(det_cov2, k = 4), +## +## # 
effects of the covariates on latent abundance; +## # here we use a penalized spline for the continuous covariate and +## # hierarchical intercepts for the factor covariate +## trend_formula = ~ s(abund_cov, k = 4) + +## s(abund_fac, bs = 're'), +## +## # link multiple observations to each site +## trend_map = trend_map, +## +## # nmix() family and supplied data +## family = nmix(), +## data = mod_data, +## +## # standard normal priors on key regression parameters +## priors = c(prior(std_normal(), class = 'b'), +## prior(std_normal(), class = 'Intercept'), +## prior(std_normal(), class = 'Intercept_trend'), +## prior(std_normal(), class = 'sigma_raw_trend')), +## +## # use Stan's variational inference for quicker results +## algorithm = 'meanfield', +## +## # no need to compute "series-level" residuals +## residuals = FALSE, +## samples = 1000) + ## ----------------------------------------------------------------------------- summary(mod, include_betas = FALSE) + ## ----------------------------------------------------------------------------- -avg_predictions(mod, type = 'detection') +marginaleffects::avg_predictions(mod, type = 'detection') + ## ----------------------------------------------------------------------------- abund_plots <- plot(conditional_effects(mod, @@ -362,14 +388,17 @@ abund_plots <- plot(conditional_effects(mod, 'abund_fac')), plot = FALSE) + ## ----------------------------------------------------------------------------- abund_plots[[1]] + ylab('Expected latent abundance') + ## ----------------------------------------------------------------------------- abund_plots[[2]] + ylab('Expected latent abundance') + ## ----------------------------------------------------------------------------- det_plots <- plot(conditional_effects(mod, type = 'detection', @@ -377,17 +406,19 @@ det_plots <- plot(conditional_effects(mod, 'det_cov2')), plot = FALSE) + ## ----------------------------------------------------------------------------- det_plots[[1]] + ylab('Pr(detection)') det_plots[[2]] + ylab('Pr(detection)') + ## ----------------------------------------------------------------------------- fivenum_round = function(x)round(fivenum(x, na.rm = TRUE), 2) -plot_predictions(mod, - newdata = datagrid(det_cov = unique, +marginaleffects::plot_predictions(mod, + newdata = marginaleffects::datagrid(det_cov = unique, det_cov2 = fivenum_round), by = c('det_cov', 'det_cov2'), type = 'detection') + diff --git a/inst/doc/nmixtures.Rmd b/inst/doc/nmixtures.Rmd index 62d4a08e..62f291d1 100644 --- a/inst/doc/nmixtures.Rmd +++ b/inst/doc/nmixtures.Rmd @@ -9,21 +9,23 @@ vignette: > %\VignetteIndexEntry{N-mixtures in mvgam} %\VignetteEngine{knitr::rmarkdown} \usepackage[utf8]{inputenc} +params: + EVAL: !r identical(tolower(Sys.getenv("NOT_CRAN")), "true") --- ```{r, echo = FALSE} -NOT_CRAN <- identical(tolower(Sys.getenv("NOT_CRAN")), "true") knitr::opts_chunk$set( collapse = TRUE, comment = "#>", - purl = NOT_CRAN, - eval = NOT_CRAN + message = FALSE, + warning = FALSE, + eval = if (isTRUE(exists("params"))) params$EVAL else FALSE ) ``` ```{r setup, include=FALSE} knitr::opts_chunk$set( echo = TRUE, - dpi = 150, + dpi = 100, fig.asp = 0.8, fig.width = 6, out.width = "60%", @@ -229,7 +231,7 @@ plot(mod, type = 'smooths', trend_effects = TRUE) `marginaleffects` support allows for more useful prediction-based interrogations on different scales (though note that at the time of writing this Vignette, you must have the development version of `marginaleffects` installed for `nmix()` models to be 
supported; use `remotes::install_github('vincentarelbundock/marginaleffects')` to install). Objects that use family `nmix()` have a few additional prediction scales that can be used (i.e. `link`, `response`, `detection` or `latent_N`). For example, here are the estimated detection probabilities per species, which show that the model has done a nice job of estimating these parameters: ```{r} -plot_predictions(mod, condition = 'species', +marginaleffects::plot_predictions(mod, condition = 'species', type = 'detection') + ylab('Pr(detection)') + ylim(c(0, 1)) + @@ -302,7 +304,7 @@ We can see that estimates for both species have correctly captured the true temp ## Example 2: a larger survey with possible nonlinear effects -Now for another example with a larger dataset. We will use data from [Jeff Doser's simulation example from the wonderful `spAbundance` package](https://www.jeffdoser.com/files/spabundance-web/articles/nmixturemodels){target="_blank"}. The simulated data include one continuous site-level covariate, one factor site-level covariate and two continuous sample-level covariates. This example will allow us to examine how we can include possibly nonlinear effects in the latent process and detection probability models. +Now for another example with a larger dataset. We will use data from [Jeff Doser's simulation example from the wonderful `spAbundance` package](https://doserlab.com/files/spabundance-web/articles/nmixturemodels){target="_blank"}. The simulated data include one continuous site-level covariate, one factor site-level covariate and two continuous sample-level covariates. This example will allow us to examine how we can include possibly nonlinear effects in the latent process and detection probability models. Download the data and grab observations / covariate measurements for one species ```{r} @@ -440,7 +442,7 @@ summary(mod, include_betas = FALSE) Again we can make use of `marginaleffects` support for interrogating the model through targeted predictions. First, we can inspect the estimated average detection probability ```{r} -avg_predictions(mod, type = 'detection') +marginaleffects::avg_predictions(mod, type = 'detection') ``` Next investigate estimated effects of covariates on latent abundance using the `conditional_effects()` function and specifying `type = 'link'`; this will return plots on the expectation scale @@ -485,8 +487,8 @@ More targeted predictions are also easy with `marginaleffects` support. For exam ```{r} fivenum_round = function(x)round(fivenum(x, na.rm = TRUE), 2) -plot_predictions(mod, - newdata = datagrid(det_cov = unique, +marginaleffects::plot_predictions(mod, + newdata = marginaleffects::datagrid(det_cov = unique, det_cov2 = fivenum_round), by = c('det_cov', 'det_cov2'), type = 'detection') + @@ -501,7 +503,7 @@ The following papers and resources offer useful material about N-mixture models Guélat, Jérôme, and Kéry, Marc. “[Effects of Spatial Autocorrelation and Imperfect Detection on Species Distribution Models.](https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/2041-210X.12983)” *Methods in Ecology and Evolution* 9 (2018): 1614–25. -Kéry, Marc, and Royle Andrew J. "[Applied hierarchical modeling in ecology: Analysis of distribution, abundance and species richness in R and BUGS: Volume 2: Dynamic and advanced models](https://www.sciencedirect.com/book/9780128237687/applied-hierarchical-modeling-in-ecology-analysis-of-distribution-abundance-and-species-richness-in-r-and-bugs)". London, UK: Academic Press (2020). 
+Kéry, Marc, and Royle Andrew J. "[Applied hierarchical modeling in ecology: Analysis of distribution, abundance and species richness in R and BUGS: Volume 2: Dynamic and advanced models](https://shop.elsevier.com/books/applied-hierarchical-modeling-in-ecology-analysis-of-distribution-abundance-and-species-richness-in-r-and-bugs/kery/978-0-12-809585-0)". London, UK: Academic Press (2020). Royle, Andrew J. "[N‐mixture models for estimating population size from spatially replicated counts.](https://onlinelibrary.wiley.com/doi/full/10.1111/j.0006-341X.2004.00142.x)" *Biometrics* 60.1 (2004): 108-115. diff --git a/inst/doc/nmixtures.html b/inst/doc/nmixtures.html index f92147dd..b4acfda4 100644 --- a/inst/doc/nmixtures.html +++ b/inst/doc/nmixtures.html @@ -12,7 +12,7 @@ - +nmix()
familysummary(mod)
#> GAM observation formula:
#> obs ~ species - 1
-#>
-#> GAM process formula:
-#> ~s(time, by = trend, k = 4) + species
-#>
-#> Family:
-#> nmix
-#>
-#> Link function:
-#> log
-#>
-#> Trend model:
-#> None
-#>
-#> N process models:
-#> 2
-#>
-#> N series:
-#> 10
-#>
-#> N timepoints:
-#> 6
-#>
-#> Status:
-#> Fitted using Stan
-#> 4 chains, each with iter = 1500; warmup = 500; thin = 1
-#> Total post-warmup draws = 4000
-#>
-#>
-#> GAM observation model coefficient (beta) estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> speciessp_1 -0.28 0.7200 1.40 1 1361
-#> speciessp_2 -1.20 -0.0075 0.89 1 1675
-#>
-#> GAM process model coefficient (beta) estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> (Intercept)_trend 2.700 3.0000 3.400 1.00 1148
-#> speciessp_2_trend -1.200 -0.6100 0.190 1.00 1487
-#> s(time):trendtrend1.1_trend -0.081 0.0130 0.200 1.00 800
-#> s(time):trendtrend1.2_trend -0.230 0.0077 0.310 1.00 1409
-#> s(time):trendtrend1.3_trend -0.460 -0.2500 -0.038 1.00 1699
-#> s(time):trendtrend2.1_trend -0.220 -0.0130 0.095 1.00 995
-#> s(time):trendtrend2.2_trend -0.190 0.0320 0.500 1.01 1071
-#> s(time):trendtrend2.3_trend 0.064 0.3300 0.640 1.00 2268
-#>
-#> Approximate significance of GAM process smooths:
-#> edf Ref.df F p-value
-#> s(time):seriestrend1 1.25 3 0.19 0.83
-#> s(time):seriestrend2 1.07 3 0.39 0.92
-#>
-#> Stan MCMC diagnostics:
-#> n_eff / iter looks reasonable for all parameters
-#> Rhat looks reasonable for all parameters
-#> 0 of 4000 iterations ended with a divergence (0%)
-#> 0 of 4000 iterations saturated the maximum tree depth of 12 (0%)
-#> E-FMI indicated no pathological behavior
-#>
-#> Samples were drawn using NUTS(diag_e) at Tue Apr 16 1:04:54 PM 2024.
-#> For each parameter, n_eff is a crude measure of effective sample size,
-#> and Rhat is the potential scale reduction factor on split MCMC chains
-#> (at convergence, Rhat = 1)
loo()
functionality works just as it does for all
mvgam
models to aid in model comparison / selection (though
note that Pareto K values often give warnings for mixture models so
these may not be too helpful)
loo(mod)
-#> Warning: Some Pareto k diagnostic values are too high. See help('pareto-k-diagnostic') for details.
-#>
-#> Computed from 4000 by 60 log-likelihood matrix
-#>
-#> Estimate SE
-#> elpd_loo -230.4 13.8
-#> p_loo 83.3 12.7
-#> looic 460.9 27.5
-#> ------
-#> Monte Carlo SE of elpd_loo is NA.
-#>
-#> Pareto k diagnostic values:
-#> Count Pct. Min. n_eff
-#> (-Inf, 0.5] (good) 25 41.7% 1141
-#> (0.5, 0.7] (ok) 5 8.3% 390
-#> (0.7, 1] (bad) 7 11.7% 13
-#> (1, Inf) (very bad) 23 38.3% 2
-#> See help('pareto-k-diagnostic') for details.
Plot the estimated smooths of time from each species’ latent abundance process (on the log scale)
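As shown in the accompanying R code above, this plot can be produced with:
# estimated smooths of time for each species' latent abundance process
plot(mod, type = 'smooths', trend_effects = TRUE)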
- +marginaleffects
support allows for more useful
prediction-based interrogations on different scales (though note that at
the time of writing this Vignette, you must have the development version
@@ -810,13 +811,13 @@
nmix()
family
plot_predictions(mod, condition = 'species',
+marginaleffects::plot_predictions(mod, condition = 'species',
type = 'detection') +
ylab('Pr(detection)') +
ylim(c(0, 1)) +
theme_classic() +
theme(legend.position = 'none')
-
+
A common goal in N-mixture modelling is to estimate the true latent
abundance. The model has automatically generated predictions for the
unknown latent abundance that are conditional on the observations. We
@@ -877,9 +878,9 @@
Modelling with the nmix()
family
black line shows the true latent abundance, and the ribbons show
credible intervals of our estimates:
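As shown in the accompanying R code above, those latent abundance estimates can be extracted and plotted with (the plot_latentN() helper is defined in that script):
# hindcasts of the unknown latent abundances, conditional on the observations
hc <- hindcast(mod, type = 'latent_N')
plot_latentN(hc, testdat, species = 'sp_1')
plot_latentN(hc, testdat, species = 'sp_2')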
-
+
-
+
We can see that estimates for both species have correctly captured
the true temporal variation and magnitudes in abundance
nmix()
familyNow for another example with a larger dataset. We will use data from
-Jeff Doser’s simulation example from the wonderful
+Jeff Doser’s simulation example from the wonderful
spAbundance
package. The simulated data include one
continuous site-level covariate, one factor site-level covariate and two
continuous sample-level covariates. This example will allow us to
@@ -944,8 +945,8 @@
summary(mod, include_betas = FALSE)
#> GAM observation formula:
#> y ~ s(det_cov, k = 3) + s(det_cov2, k = 3)
-#>
-#> GAM process formula:
-#> ~s(abund_cov, k = 3) + s(abund_fac, bs = "re")
-#>
-#> Family:
-#> nmix
-#>
-#> Link function:
-#> log
-#>
-#> Trend model:
-#> None
-#>
-#> N process models:
-#> 225
-#>
-#> N series:
-#> 675
-#>
-#> N timepoints:
-#> 1
-#>
-#> Status:
-#> Fitted using Stan
-#> 1 chains, each with iter = 1000; warmup = ; thin = 1
-#> Total post-warmup draws = 1000
-#>
-#>
-#> GAM observation model coefficient (beta) estimates:
-#> 2.5% 50% 97.5% Rhat n.eff
-#> (Intercept) 0.052 0.4 0.71 NaN NaN
-#>
-#> Approximate significance of GAM observation smooths:
-#> edf Ref.df Chi.sq p-value
-#> s(det_cov) 1.22 2 52.3 0.0011 **
-#> s(det_cov2) 1.07 2 307.1 <2e-16 ***
-#> ---
-#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
-#>
-#> GAM process model coefficient (beta) estimates:
-#> 2.5% 50% 97.5% Rhat n.eff
-#> (Intercept)_trend -0.25 -0.081 0.079 NaN NaN
-#>
-#> GAM process model group-level estimates:
-#> 2.5% 50% 97.5% Rhat n.eff
-#> mean(s(abund_fac))_trend -0.18 0.0038 0.19 NaN NaN
-#> sd(s(abund_fac))_trend 0.26 0.3900 0.56 NaN NaN
-#>
-#> Approximate significance of GAM process smooths:
-#> edf Ref.df F p-value
-#> s(abund_cov) 1.19 2 2.38 0.299
-#> s(abund_fac) 8.82 10 2.79 0.025 *
-#> ---
-#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
-#>
-#> Posterior approximation used: no diagnostics to compute
Again we can make use of marginaleffects
support for
interrogating the model through targeted predictions. First, we can
inspect the estimated average detection probability
avg_predictions(mod, type = 'detection')
+marginaleffects::avg_predictions(mod, type = 'detection')
#>
#> Estimate 2.5 % 97.5 %
-#> 0.579 0.51 0.644
+#> 0.576 0.512 0.634
#>
#> Columns: estimate, conf.low, conf.high
#> Type: detection
@@ -1116,12 +1119,12 @@ Example 2: a larger survey with possible nonlinear effects
abundance
-
+
The effect of the factor covariate on expected latent abundance,
estimated as a hierarchical random effect
-
+
Now we can investigate estimated effects of covariates on detection
probability using type = 'detection'
det_plots <- plot(conditional_effects(mod,
@@ -1135,24 +1138,24 @@ Example 2: a larger survey with possible nonlinear effects
the probability scale is more intuitive and useful
-
+
-
+
More targeted predictions are also easy with
marginaleffects
support. For example, we can ask: How does
detection probability change as we change both detection
covariates?
fivenum_round = function(x)round(fivenum(x, na.rm = TRUE), 2)
-plot_predictions(mod,
- newdata = datagrid(det_cov = unique,
+marginaleffects::plot_predictions(mod,
+ newdata = marginaleffects::datagrid(det_cov = unique,
det_cov2 = fivenum_round),
by = c('det_cov', 'det_cov2'),
type = 'detection') +
theme_classic() +
ylab('Pr(detection)')
-
+
The model has found support for some important covariate effects, but
of course we’d want to interrogate how well the model predicts and think
about possible spatial effects to capture unmodelled variation in latent
@@ -1167,7 +1170,7 @@
Further reading
of Spatial Autocorrelation and Imperfect Detection on Species
Distribution Models.” Methods in Ecology and Evolution 9
(2018): 1614–25.
-Kéry, Marc, and Royle Andrew J. “Applied
+Kéry, Marc, and Royle Andrew J. “Applied
hierarchical modeling in ecology: Analysis of distribution, abundance
and species richness in R and BUGS: Volume 2: Dynamic and advanced
models”. London, UK: Academic Press (2020).
diff --git a/inst/doc/shared_states.R b/inst/doc/shared_states.R
index 3619bf06..c15a17a7 100644
--- a/inst/doc/shared_states.R
+++ b/inst/doc/shared_states.R
@@ -1,10 +1,13 @@
-## ----echo = FALSE-------------------------------------------------------------
-NOT_CRAN <- identical(tolower(Sys.getenv("NOT_CRAN")), "true")
+params <-
+list(EVAL = TRUE)
+
+## ---- echo = FALSE------------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>",
- purl = NOT_CRAN,
- eval = NOT_CRAN
+ message = FALSE,
+ warning = FALSE,
+ eval = if (isTRUE(exists("params"))) params$EVAL else FALSE
)
@@ -180,7 +183,7 @@ mod <- mvgam(formula =
trend_formula =
# formula for the latent signal, which can depend
# nonlinearly on productivity
- ~ s(productivity, k = 8),
+ ~ s(productivity, k = 8) - 1,
trend_model =
# in addition to productivity effects, the signal is
@@ -218,7 +221,7 @@ mod <- mvgam(formula =
## trend_formula =
## # formula for the latent signal, which can depend
## # nonlinearly on productivity
-## ~ s(productivity, k = 8),
+## ~ s(productivity, k = 8) - 1,
##
## trend_model =
## # in addition to productivity effects, the signal is
diff --git a/inst/doc/shared_states.Rmd b/inst/doc/shared_states.Rmd
index bc79aa71..17b1e625 100644
--- a/inst/doc/shared_states.Rmd
+++ b/inst/doc/shared_states.Rmd
@@ -9,14 +9,16 @@ vignette: >
%\VignetteIndexEntry{Shared latent states in mvgam}
%\VignetteEngine{knitr::rmarkdown}
\usepackage[utf8]{inputenc}
+params:
+ EVAL: !r identical(tolower(Sys.getenv("NOT_CRAN")), "true")
---
```{r, echo = FALSE}
-NOT_CRAN <- identical(tolower(Sys.getenv("NOT_CRAN")), "true")
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>",
- purl = NOT_CRAN,
- eval = NOT_CRAN
+ message = FALSE,
+ warning = FALSE,
+ eval = if (isTRUE(exists("params"))) params$EVAL else FALSE
)
```
@@ -215,7 +217,7 @@ mod <- mvgam(formula =
trend_formula =
# formula for the latent signal, which can depend
# nonlinearly on productivity
- ~ s(productivity, k = 8),
+ ~ s(productivity, k = 8) - 1,
trend_model =
# in addition to productivity effects, the signal is
@@ -253,7 +255,7 @@ mod <- mvgam(formula =
trend_formula =
# formula for the latent signal, which can depend
# nonlinearly on productivity
- ~ s(productivity, k = 8),
+ ~ s(productivity, k = 8) - 1,
trend_model =
# in addition to productivity effects, the signal is
diff --git a/inst/doc/shared_states.html b/inst/doc/shared_states.html
index db2dac3b..6786b351 100644
--- a/inst/doc/shared_states.html
+++ b/inst/doc/shared_states.html
@@ -12,7 +12,7 @@
-
+
Shared latent states in mvgam
@@ -340,7 +340,7 @@
Shared latent states in mvgam
Nicholas J Clark
-2024-07-01
+2024-09-04
@@ -544,36 +544,39 @@ Checking trend_map
with
#>
#> // dynamic process models
#>
-#> // prior for s(season)_trend...
-#> b_raw_trend[1 : 4] ~ multi_normal_prec(zero_trend[1 : 4],
-#> S_trend1[1 : 4, 1 : 4]
-#> * lambda_trend[1]);
-#> lambda_trend ~ normal(5, 30);
-#> {
-#> // likelihood functions
-#> vector[n_nonmissing] flat_trends;
-#> flat_trends = to_vector(trend)[obs_ind];
-#> flat_ys ~ poisson_log_glm(append_col(flat_xs, flat_trends), 0.0,
-#> append_row(b, 1.0));
-#> }
-#> }
-#> generated quantities {
-#> vector[total_obs] eta;
-#> matrix[n, n_series] mus;
-#> vector[n_sp_trend] rho_trend;
-#> vector[n_lv] penalty;
-#> array[n, n_series] int ypred;
-#> penalty = 1.0 / (sigma .* sigma);
-#> rho_trend = log(lambda_trend);
-#>
-#> matrix[n_series, n_lv] lv_coefs = Z;
-#> // posterior predictions
-#> eta = X * b;
-#> for (s in 1 : n_series) {
-#> mus[1 : n, s] = eta[ytimes[1 : n, s]] + trend[1 : n, s];
-#> ypred[1 : n, s] = poisson_log_rng(mus[1 : n, s]);
-#> }
-#> }
+#> // prior for (Intercept)_trend...
+#> b_raw_trend[1] ~ student_t(3, 0, 2);
+#>
+#> // prior for s(season)_trend...
+#> b_raw_trend[2 : 5] ~ multi_normal_prec(zero_trend[2 : 5],
+#> S_trend1[1 : 4, 1 : 4]
+#> * lambda_trend[1]);
+#> lambda_trend ~ normal(5, 30);
+#> {
+#> // likelihood functions
+#> vector[n_nonmissing] flat_trends;
+#> flat_trends = to_vector(trend)[obs_ind];
+#> flat_ys ~ poisson_log_glm(append_col(flat_xs, flat_trends), 0.0,
+#> append_row(b, 1.0));
+#> }
+#> }
+#> generated quantities {
+#> vector[total_obs] eta;
+#> matrix[n, n_series] mus;
+#> vector[n_sp_trend] rho_trend;
+#> vector[n_lv] penalty;
+#> array[n, n_series] int ypred;
+#> penalty = 1.0 / (sigma .* sigma);
+#> rho_trend = log(lambda_trend);
+#>
+#> matrix[n_series, n_lv] lv_coefs = Z;
+#> // posterior predictions
+#> eta = X * b;
+#> for (s in 1 : n_series) {
+#> mus[1 : n, s] = eta[ytimes[1 : n, s]] + trend[1 : n, s];
+#> ypred[1 : n, s] = poisson_log_rng(mus[1 : n, s]);
+#> }
+#> }
Notice the line that states “lv_coefs = Z;”. This uses the supplied
\(Z\) matrix to construct the loading
coefficients. The supplied matrix now looks exactly like what you’d use
@@ -604,11 +607,11 @@
Fitting and inspecting the model
summary(full_mod)
#> GAM observation formula:
#> y ~ series - 1
-#> <environment: 0x000001f52b9e3130>
+#> <environment: 0x00000245d0e24ff8>
#>
#> GAM process formula:
#> ~s(season, bs = "cc", k = 6)
-#> <environment: 0x000001f52b9e3130>
+#> <environment: 0x00000245d0e24ff8>
#>
#> Family:
#> poisson
@@ -636,50 +639,51 @@ Fitting and inspecting the model
#>
#> GAM observation model coefficient (beta) estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> seriesseries_1 -0.14 0.084 0.29 1.00 1720
-#> seriesseries_2 0.91 1.100 1.20 1.00 1374
-#> seriesseries_3 1.90 2.100 2.30 1.01 447
+#> seriesseries_1 -2.80 -0.67 1.5 1 931
+#> seriesseries_2 -1.80 0.31 2.5 1 924
+#> seriesseries_3 -0.84 1.30 3.4 1 920
#>
#> Process model AR parameter estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> ar1[1] -0.72 -0.430 -0.037 1.01 560
-#> ar1[2] -0.30 -0.017 0.270 1.01 286
+#> ar1[1] -0.73 -0.430 -0.056 1.00 666
+#> ar1[2] -0.30 -0.019 0.250 1.01 499
#>
#> Process error parameter estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> sigma[1] 0.34 0.49 0.65 1 819
-#> sigma[2] 0.59 0.73 0.90 1 573
+#> sigma[1] 0.33 0.49 0.67 1 854
+#> sigma[2] 0.59 0.73 0.91 1 755
#>
#> GAM process model coefficient (beta) estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> s(season).1_trend -0.20 -0.003 0.21 1.01 857
-#> s(season).2_trend -0.27 -0.046 0.17 1.00 918
-#> s(season).3_trend -0.15 0.068 0.28 1.00 850
-#> s(season).4_trend -0.14 0.064 0.27 1.00 972
-#>
-#> Approximate significance of GAM process smooths:
-#> edf Ref.df Chi.sq p-value
-#> s(season) 2.33 4 0.38 0.93
-#>
-#> Stan MCMC diagnostics:
-#> n_eff / iter looks reasonable for all parameters
-#> Rhat looks reasonable for all parameters
-#> 0 of 2000 iterations ended with a divergence (0%)
-#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
-#> E-FMI indicated no pathological behavior
-#>
-#> Samples were drawn using NUTS(diag_e) at Mon Jul 01 7:31:17 AM 2024.
-#> For each parameter, n_eff is a crude measure of effective sample size,
-#> and Rhat is the potential scale reduction factor on split MCMC chains
-#> (at convergence, Rhat = 1)
+#> 2.5% 50% 97.5% Rhat n_eff
+#> (Intercept)_trend -1.40 0.7800 2.90 1 921
+#> s(season).1_trend -0.21 -0.0072 0.21 1 1822
+#> s(season).2_trend -0.30 -0.0480 0.18 1 1414
+#> s(season).3_trend -0.16 0.0680 0.30 1 1664
+#> s(season).4_trend -0.14 0.0660 0.29 1 1505
+#>
+#> Approximate significance of GAM process smooths:
+#> edf Ref.df Chi.sq p-value
+#> s(season) 1.48 4 0.67 0.93
+#>
+#> Stan MCMC diagnostics:
+#> n_eff / iter looks reasonable for all parameters
+#> Rhat looks reasonable for all parameters
+#> 0 of 2000 iterations ended with a divergence (0%)
+#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
+#> E-FMI indicated no pathological behavior
+#>
+#> Samples were drawn using NUTS(diag_e) at Wed Sep 04 11:49:01 AM 2024.
+#> For each parameter, n_eff is a crude measure of effective sample size,
+#> and Rhat is the potential scale reduction factor on split MCMC chains
+#> (at convergence, Rhat = 1)
Both series 1 and 2 share the exact same latent process estimates, while the estimates for series 3 are different:
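Those estimates appear in the plots that follow. The trend_map that enforces this sharing is not printed in this extract; an illustrative example of the kind of mapping that produces it (hypothetical values, following the series / trend column format documented in ?mvgam) is:

```r
# series_1 and series_2 load on the same latent process,
# while series_3 gets its own
trend_map <- data.frame(series = c('series_1', 'series_2', 'series_3'),
                        trend = c(1, 1, 2))
```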
- + - + - +However, forecasts for series 1 and 2 will differ because they have different intercepts in the observation model
@@ -782,7 +786,7 @@conditional_effects
:
-
+
conditional_effects
is simply a wrapper to the more
flexible plot_predictions
function from the
marginaleffects
package. We can get more useful plots of
these effects using this function for further customisation:
require(marginaleffects)
-#> Loading required package: marginaleffects
-plot_predictions(mod,
- condition = c('temperature', 'series', 'series'),
- points = 0.5) +
- theme(legend.position = 'none')
We have successfully estimated effects, some of them nonlinear, that impact the hidden process AND the observations. All in a single joint model. But there can always be challenges with these models, @@ -924,7 +927,7 @@
dynamic()
functionsummary(mod, include_betas = FALSE)
#> GAM formula:
#> out ~ s(time, by = temp, bs = "gp", m = c(-2, 8, 2), k = 40)
-#> <environment: 0x0000014d37f0f110>
+#> <environment: 0x0000026c107b3fd8>
#>
#> Family:
#> gaussian
@@ -486,15 +486,15 @@ The dynamic()
function
#>
#> Observation error parameter estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> sigma_obs[1] 0.23 0.25 0.28 1 2026
+#> sigma_obs[1] 0.23 0.25 0.28 1 1709
#>
#> GAM coefficient (beta) estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> (Intercept) 4 4 4.1 1 2640
+#> (Intercept) 4 4 4.1 1 1960
#>
#> Approximate significance of GAM smooths:
-#> edf Ref.df Chi.sq p-value
-#> s(time):temp 15.4 40 173 <2e-16 ***
+#> edf Ref.df Chi.sq p-value
+#> s(time):temp 17 40 164 <2e-16 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
@@ -505,7 +505,7 @@ The dynamic()
function
#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
#> E-FMI indicated no pathological behavior
#>
-#> Samples were drawn using NUTS(diag_e) at Mon Jul 01 7:35:21 AM 2024.
+#> Samples were drawn using NUTS(diag_e) at Wed Sep 04 11:52:26 AM 2024.
#> For each parameter, n_eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor on split MCMC chains
#> (at convergence, Rhat = 1)
dynamic()
functionWe can also use plot_predictions
from the
marginaleffects
package to visualise the time-varying
coefficient for what the effect would be estimated to be at different
values of \(temperature\):
require(marginaleffects)
-#> Loading required package: marginaleffects
-range_round = function(x){
- round(range(x, na.rm = TRUE), 2)
-}
-plot_predictions(mod,
- newdata = datagrid(time = unique,
- temp = range_round),
- by = c('time', 'temp', 'temp'),
- type = 'link')
This results in sensible forecasts of the observations as well
- +plot(fc) +The syntax is very similar if we wish to estimate the parameters of
the underlying Gaussian Process, this time using a Hilbert space
approximation. We simply omit the rho
argument in
@@ -559,7 +556,7 @@
dynamic()
functionsummary(mod, include_betas = FALSE)
#> GAM formula:
#> out ~ gp(time, by = temp, c = 5/4, k = 40, scale = TRUE)
-#> <environment: 0x0000014d37f0f110>
+#> <environment: 0x0000026c107b3fd8>
#>
#> Family:
#> gaussian
@@ -584,16 +581,16 @@ The dynamic()
function
#>
#> Observation error parameter estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> sigma_obs[1] 0.24 0.26 0.3 1 2183
+#> sigma_obs[1] 0.24 0.26 0.3 1 2285
#>
#> GAM coefficient (beta) estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> (Intercept) 4 4 4.1 1 2733
+#> (Intercept) 4 4 4.1 1 3056
#>
#> GAM gp term marginal deviation (alpha) and length scale (rho) estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> alpha_gp(time):temp 0.620 0.890 1.400 1.01 539
-#> rho_gp(time):temp 0.026 0.053 0.069 1.00 628
+#> alpha_gp(time):temp 0.630 0.880 1.400 1.00 619
+#> rho_gp(time):temp 0.026 0.053 0.069 1.01 487
#>
#> Stan MCMC diagnostics:
#> n_eff / iter looks reasonable for all parameters
@@ -602,7 +599,7 @@ The dynamic()
function
#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
#> E-FMI indicated no pathological behavior
#>
-#> Samples were drawn using NUTS(diag_e) at Mon Jul 01 7:36:09 AM 2024.
+#> Samples were drawn using NUTS(diag_e) at Wed Sep 04 11:53:17 AM 2024.
#> For each parameter, n_eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor on split MCMC chains
#> (at convergence, Rhat = 1)
dynamic()
functionsummary(mod0)
#> GAM formula:
#> survival ~ 1
-#> <environment: 0x0000014d37f0f110>
+#> <environment: 0x0000026c107b3fd8>
#>
#> Family:
#> beta
@@ -715,32 +712,33 @@ A State-Space Beta regression
#>
#> Observation precision parameter estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> phi[1] 95 280 630 1.02 271
+#> phi[1] 98 270 650 1.01 272
#>
#> GAM coefficient (beta) estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> (Intercept) -4.7 -4.4 -4 1 625
+#> (Intercept) -4.7 -4.4 -4.1 1 570
#>
#> Latent trend parameter AR estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> ar1[1] -0.230 0.67 0.98 1.01 415
-#> sigma[1] 0.073 0.47 0.72 1.02 213
+#> 2.5% 50% 97.5% Rhat n_eff
+#> ar1[1] 0.037 0.69 0.98 1.00 570
+#> sigma[1] 0.120 0.45 0.73 1.01 225
#>
#> Stan MCMC diagnostics:
#> n_eff / iter looks reasonable for all parameters
#> Rhat looks reasonable for all parameters
-#> 0 of 2000 iterations ended with a divergence (0%)
-#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
-#> E-FMI indicated no pathological behavior
-#>
-#> Samples were drawn using NUTS(diag_e) at Mon Jul 01 7:36:57 AM 2024.
-#> For each parameter, n_eff is a crude measure of effective sample size,
-#> and Rhat is the potential scale reduction factor on split MCMC chains
-#> (at convergence, Rhat = 1)
A plot of the underlying dynamic component shows how it has easily handled the temporal evolution of the time series:
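That plot is not reproduced in this diff; a sketch of the call that would draw it, using the type argument of the S3 plot() method, is:

```r
# plot posterior estimates of the latent AR1 trend from the first model
plot(mod0, type = 'trend')
```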
- +summary(mod1, include_betas = FALSE)
#> GAM observation formula:
#> survival ~ 1
-#> <environment: 0x0000014d37f0f110>
+#> <environment: 0x0000026c107b3fd8>
#>
#> GAM process formula:
#> ~dynamic(CUI.apr, k = 25, scale = FALSE)
-#> <environment: 0x0000014d37f0f110>
+#> <environment: 0x0000026c107b3fd8>
#>
#> Family:
#> beta
@@ -796,43 +794,47 @@ Including time-varying upwelling effects
#>
#> Observation precision parameter estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> phi[1] 160 350 690 1 557
+#> phi[1] 180 360 670 1.01 504
#>
#> GAM observation model coefficient (beta) estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> (Intercept) -4.7 -4 -2.6 1 331
+#> 2.5% 50% 97.5% Rhat n_eff
+#> (Intercept) -6.1 -2.4 1.8 1 1605
#>
#> Process model AR parameter estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> ar1[1] 0.46 0.89 0.99 1.01 364
+#> ar1[1] 0.48 0.89 1 1.01 681
#>
#> Process error parameter estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> sigma[1] 0.18 0.35 0.58 1 596
+#> sigma[1] 0.18 0.35 0.57 1.02 488
#>
-#> GAM process model gp term marginal deviation (alpha) and length scale (rho) estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> alpha_gp_time_byCUI_apr_trend 0.02 0.3 1.2 1 760
-#> rho_gp_time_byCUI_apr_trend 1.30 5.5 28.0 1 674
-#>
-#> Stan MCMC diagnostics:
-#> n_eff / iter looks reasonable for all parameters
-#> Rhat looks reasonable for all parameters
-#> 79 of 2000 iterations ended with a divergence (3.95%)
-#> *Try running with larger adapt_delta to remove the divergences
-#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
-#> E-FMI indicated no pathological behavior
-#>
-#> Samples were drawn using NUTS(diag_e) at Mon Jul 01 7:37:32 AM 2024.
-#> For each parameter, n_eff is a crude measure of effective sample size,
-#> and Rhat is the potential scale reduction factor on split MCMC chains
-#> (at convergence, Rhat = 1)
The estimates for the underlying dynamic process, and for the hindcasts, haven’t changed much:
- + - +But the process error parameter \(\sigma\) is slightly smaller for this model than for the first model:
# Extract estimates of the process error 'sigma' for each model
@@ -847,13 +849,13 @@ Including time-varying upwelling effects
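# (illustrative sketch only, not the vignette's hidden extraction step:
#  assumes mvgam's as.data.frame method accepts a 'variable' argument
#  for selecting posterior draws by name)
mod0_sigma <- as.data.frame(mod0, variable = 'sigma[1]')
mod0_sigma$model <- 'Mod0'
mod1_sigma <- as.data.frame(mod1, variable = 'sigma[1]')
mod1_sigma$model <- 'Mod1'
sigmas <- rbind(mod0_sigma, mod1_sigma)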
ggplot(sigmas, aes(y = `sigma[1]`, fill = model)) +
geom_density(alpha = 0.3, colour = NA) +
coord_flip()
Why does the process error not need to be as flexible in the second
model? Because the estimates of this dynamic process are now informed
partly by the time-varying effect of upwelling, which we can visualise
on the link scale using plot()
:
loo
package:
loo_compare(mod0, mod1)
-#> Warning: Some Pareto k diagnostic values are too high. See help('pareto-k-diagnostic') for details.
-
-#> Warning: Some Pareto k diagnostic values are too high. See help('pareto-k-diagnostic') for details.
-#> elpd_diff se_diff
-#> mod1 0.0 0.0
-#> mod0 -6.5 2.7
The second model has the larger Expected Log Predictive Density (ELPD), meaning that it is slightly favoured over the simpler model that did not include the time-varying upwelling effect. However, the two @@ -889,9 +888,9 @@
The model with the time-varying upwelling effect tends to provide better 1-step ahead forecasts, with a higher total forecast ELPD
+#> [1] 41.14095
We can also plot the ELPDs for each model as a contrast. Here, values less than zero suggest the time-varying predictor model (Mod1) gives better 1-step ahead forecasts:
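The plotting code for this contrast is not shown in the extract. A rough sketch of how such a contrast could be computed, assuming forecast objects for the two models (hypothetical names fc_mod0 and fc_mod1) and that mvgam's score() accepts score = 'elpd', is:

```r
# per-horizon ELPD contrast (Mod0 - Mod1); values below zero favour Mod1
elpd_diff <- score(fc_mod0, score = 'elpd')$all_series$score -
  score(fc_mod1, score = 'elpd')$all_series$score
plot(elpd_diff, pch = 16, col = 'darkred', bty = 'l',
     xlab = 'Forecast horizon',
     ylab = expression(ELPD[Mod0]~-~ELPD[Mod1]))
abline(h = 0, lty = 'dashed')
```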
@@ -903,7 +902,7 @@A useful exercise to further expand this model would be to think about what kinds of predictors might impact measurement error, which could easily be implemented into the observation formula in diff --git a/inst/doc/trend_formulas.R b/inst/doc/trend_formulas.R index 98066ef4..6d625b4a 100644 --- a/inst/doc/trend_formulas.R +++ b/inst/doc/trend_formulas.R @@ -1,10 +1,13 @@ +params <- +list(EVAL = TRUE) + ## ---- echo = FALSE------------------------------------------------------------ -NOT_CRAN <- identical(tolower(Sys.getenv("NOT_CRAN")), "true") knitr::opts_chunk$set( collapse = TRUE, comment = "#>", - purl = NOT_CRAN, - eval = NOT_CRAN + message = FALSE, + warning = FALSE, + eval = if (isTRUE(exists("params"))) params$EVAL else FALSE ) diff --git a/inst/doc/trend_formulas.Rmd b/inst/doc/trend_formulas.Rmd index 8d636d2e..836a5d60 100644 --- a/inst/doc/trend_formulas.Rmd +++ b/inst/doc/trend_formulas.Rmd @@ -9,14 +9,16 @@ vignette: > %\VignetteIndexEntry{State-Space models in mvgam} %\VignetteEngine{knitr::rmarkdown} \usepackage[utf8]{inputenc} +params: + EVAL: !r identical(tolower(Sys.getenv("NOT_CRAN")), "true") --- ```{r, echo = FALSE} -NOT_CRAN <- identical(tolower(Sys.getenv("NOT_CRAN")), "true") knitr::opts_chunk$set( collapse = TRUE, comment = "#>", - purl = NOT_CRAN, - eval = NOT_CRAN + message = FALSE, + warning = FALSE, + eval = if (isTRUE(exists("params"))) params$EVAL else FALSE ) ``` @@ -459,7 +461,7 @@ Auger‐Méthé, Marie, et al. ["A guide to state–space modeling of ecological Heaps, Sarah E. "[Enforcing stationarity through the prior in vector autoregressions.](https://www.tandfonline.com/doi/full/10.1080/10618600.2022.2079648)" *Journal of Computational and Graphical Statistics* 32.1 (2023): 74-83. -Hannaford, Naomi E., et al. "[A sparse Bayesian hierarchical vector autoregressive model for microbial dynamics in a wastewater treatment plant.](https://www.sciencedirect.com/science/article/pii/S0167947322002390)" *Computational Statistics & Data Analysis* 179 (2023): 107659. +Hannaford, Naomi E., et al. "[A sparse Bayesian hierarchical vector autoregressive model for microbial dynamics in a wastewater treatment plant.](https://doi.org/10.1016/j.csda.2022.107659)" *Computational Statistics & Data Analysis* 179 (2023): 107659. Holmes, Elizabeth E., Eric J. Ward, and Wills Kellie. "[MARSS: multivariate autoregressive state-space models for analyzing time-series data.](https://journal.r-project.org/archive/2012/RJ-2012-002/index.html)" *R Journal*. 4.1 (2012): 11. diff --git a/inst/doc/trend_formulas.html b/inst/doc/trend_formulas.html index 15dcaae4..f3c77a5c 100644 --- a/inst/doc/trend_formulas.html +++ b/inst/doc/trend_formulas.html @@ -368,7 +368,7 @@
plankton_data %>%
dplyr::filter(series == 'Diatoms') %>%
@@ -538,36 +533,30 @@ Capturing seasonality
The “global” tensor product smooth function can be quickly
visualized:
-
+
On this plot, red indicates below-average linear predictors and white
indicates above-average. We can then plot the deviation smooths for a
few algal groups to see how they vary from the “global” pattern:
-
+
-
+
These multidimensional smooths have done a good job of capturing the
seasonal variation in our observations:
-
-
-
-
-
-
+
+
+
+
+
+
This basic model gives us confidence that we can capture the seasonal
variation in the observations. But the model has not captured the
remaining temporal dynamics, which is obvious when we inspect Dunn-Smyth
residuals for a few series:
-
+
-
+
summary(var_mod, include_betas = FALSE)
#> GAM observation formula:
#> y ~ 1
-#> <environment: 0x0000020ef28a3068>
+#> <environment: 0x0000024fa27bd008>
#>
#> GAM process formula:
#> ~te(temp, month, k = c(4, 4)) + te(temp, month, k = c(4, 4),
#> by = trend) - 1
-#> <environment: 0x0000020ef28a3068>
+#> <environment: 0x0000024fa27bd008>
#>
#> Family:
#> gaussian
@@ -752,101 +741,98 @@ Inspecting SS models
#>
#> Observation error parameter estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> sigma_obs[1] 0.20 0.25 0.34 1.00 481
-#> sigma_obs[2] 0.25 0.40 0.54 1.05 113
-#> sigma_obs[3] 0.41 0.64 0.81 1.08 84
-#> sigma_obs[4] 0.24 0.37 0.50 1.02 270
-#> sigma_obs[5] 0.31 0.43 0.54 1.01 268
+#> sigma_obs[1] 0.20 0.26 0.35 1.00 420
+#> sigma_obs[2] 0.25 0.40 0.54 1.01 162
+#> sigma_obs[3] 0.43 0.63 0.79 1.02 133
+#> sigma_obs[4] 0.25 0.37 0.50 1.02 275
+#> sigma_obs[5] 0.31 0.43 0.54 1.01 278
#>
#> GAM observation model coefficient (beta) estimates:
#> 2.5% 50% 97.5% Rhat n_eff
#> (Intercept) 0 0 0 NaN NaN
#>
#> Process model VAR parameter estimates:
-#> 2.5% 50% 97.5% Rhat n_eff
-#> A[1,1] -0.072 0.47000 0.830 1.05 151
-#> A[1,2] -0.370 -0.04000 0.200 1.01 448
-#> A[1,3] -0.540 -0.04500 0.360 1.02 232
-#> A[1,4] -0.290 0.03900 0.460 1.02 439
-#> A[1,5] -0.072 0.15000 0.560 1.03 183
-#> A[2,1] -0.150 0.01800 0.210 1.00 677
-#> A[2,2] 0.620 0.79000 0.920 1.01 412
-#> A[2,3] -0.400 -0.14000 0.042 1.01 295
-#> A[2,4] -0.045 0.12000 0.350 1.01 344
-#> A[2,5] -0.057 0.05800 0.200 1.01 641
-#> A[3,1] -0.480 0.00015 0.390 1.03 71
-#> A[3,2] -0.500 -0.22000 0.023 1.02 193
-#> A[3,3] 0.066 0.41000 0.710 1.01 259
-#> A[3,4] -0.020 0.24000 0.630 1.02 227
-#> A[3,5] -0.051 0.14000 0.410 1.02 195
-#> A[4,1] -0.220 0.05100 0.290 1.03 141
-#> A[4,2] -0.100 0.05300 0.260 1.01 402
-#> A[4,3] -0.420 -0.12000 0.120 1.02 363
-#> A[4,4] 0.480 0.74000 0.940 1.01 322
-#> A[4,5] -0.200 -0.03100 0.120 1.01 693
-#> A[5,1] -0.360 0.06400 0.430 1.03 99
-#> A[5,2] -0.430 -0.13000 0.074 1.01 230
-#> A[5,3] -0.610 -0.20000 0.110 1.02 232
-#> A[5,4] -0.061 0.19000 0.620 1.01 250
-#> A[5,5] 0.530 0.75000 0.970 1.02 273
+#> 2.5% 50% 97.5% Rhat n_eff
+#> A[1,1] 0.012 0.5000 0.830 1.01 138
+#> A[1,2] -0.390 -0.0340 0.200 1.02 356
+#> A[1,3] -0.500 -0.0340 0.360 1.02 262
+#> A[1,4] -0.280 0.0240 0.400 1.02 409
+#> A[1,5] -0.090 0.1400 0.550 1.01 280
+#> A[2,1] -0.140 0.0120 0.180 1.00 773
+#> A[2,2] 0.620 0.7900 0.920 1.01 331
+#> A[2,3] -0.410 -0.1200 0.048 1.00 394
+#> A[2,4] -0.047 0.1100 0.340 1.01 321
+#> A[2,5] -0.050 0.0590 0.190 1.00 681
+#> A[3,1] -0.280 0.0098 0.300 1.03 202
+#> A[3,2] -0.530 -0.1900 0.036 1.02 154
+#> A[3,3] 0.071 0.4300 0.740 1.02 224
+#> A[3,4] -0.033 0.2100 0.650 1.02 179
+#> A[3,5] -0.059 0.1200 0.400 1.03 220
+#> A[4,1] -0.140 0.0450 0.270 1.00 415
+#> A[4,2] -0.110 0.0500 0.260 1.01 362
+#> A[4,3] -0.450 -0.1100 0.130 1.02 229
+#> A[4,4] 0.500 0.7400 0.940 1.01 307
+#> A[4,5] -0.200 -0.0300 0.130 1.00 747
+#> A[5,1] -0.200 0.0620 0.420 1.01 338
+#> A[5,2] -0.420 -0.1100 0.086 1.03 154
+#> A[5,3] -0.650 -0.1700 0.130 1.02 180
+#> A[5,4] -0.067 0.1800 0.620 1.03 171
+#> A[5,5] 0.540 0.7400 0.940 1.01 300
#>
#> Process error parameter estimates:
#> 2.5% 50% 97.5% Rhat n_eff
-#> Sigma[1,1] 0.037 0.28 0.65 1.11 64
+#> Sigma[1,1] 0.067 0.29 0.64 1.02 83
#> Sigma[1,2] 0.000 0.00 0.00 NaN NaN
#> Sigma[1,3] 0.000 0.00 0.00 NaN NaN
#> Sigma[1,4] 0.000 0.00 0.00 NaN NaN
#> Sigma[1,5] 0.000 0.00 0.00 NaN NaN
#> Sigma[2,1] 0.000 0.00 0.00 NaN NaN
-#> Sigma[2,2] 0.067 0.11 0.19 1.00 505
+#> Sigma[2,2] 0.065 0.11 0.18 1.01 367
#> Sigma[2,3] 0.000 0.00 0.00 NaN NaN
#> Sigma[2,4] 0.000 0.00 0.00 NaN NaN
#> Sigma[2,5] 0.000 0.00 0.00 NaN NaN
#> Sigma[3,1] 0.000 0.00 0.00 NaN NaN
#> Sigma[3,2] 0.000 0.00 0.00 NaN NaN
-#> Sigma[3,3] 0.062 0.15 0.29 1.05 93
+#> Sigma[3,3] 0.055 0.16 0.31 1.01 155
#> Sigma[3,4] 0.000 0.00 0.00 NaN NaN
#> Sigma[3,5] 0.000 0.00 0.00 NaN NaN
#> Sigma[4,1] 0.000 0.00 0.00 NaN NaN
#> Sigma[4,2] 0.000 0.00 0.00 NaN NaN
#> Sigma[4,3] 0.000 0.00 0.00 NaN NaN
-#> Sigma[4,4] 0.052 0.13 0.26 1.01 201
+#> Sigma[4,4] 0.053 0.13 0.26 1.01 200
#> Sigma[4,5] 0.000 0.00 0.00 NaN NaN
#> Sigma[5,1] 0.000 0.00 0.00 NaN NaN
#> Sigma[5,2] 0.000 0.00 0.00 NaN NaN
#> Sigma[5,3] 0.000 0.00 0.00 NaN NaN
#> Sigma[5,4] 0.000 0.00 0.00 NaN NaN
-#> Sigma[5,5] 0.110 0.21 0.35 1.03 210
+#> Sigma[5,5] 0.100 0.21 0.35 1.01 261
#>
#> Approximate significance of GAM process smooths:
-#> edf Ref.df Chi.sq p-value
-#> te(temp,month) 2.67 15 38.52 0.405
-#> te(temp,month):seriestrend1 1.77 15 1.73 1.000
-#> te(temp,month):seriestrend2 2.18 15 4.07 0.995
-#> te(temp,month):seriestrend3 4.07 15 51.07 0.059 .
-#> te(temp,month):seriestrend4 3.72 15 6.98 0.825
-#> te(temp,month):seriestrend5 1.85 15 5.15 0.998
-#> ---
-#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
-#>
-#> Stan MCMC diagnostics:
-#> n_eff / iter looks reasonable for all parameters
-#> Rhats above 1.05 found for 11 parameters
-#> *Diagnose further to investigate why the chains have not mixed
-#> 0 of 2000 iterations ended with a divergence (0%)
-#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
-#> E-FMI indicated no pathological behavior
-#>
-#> Samples were drawn using NUTS(diag_e) at Wed Sep 04 9:30:41 AM 2024.
-#> For each parameter, n_eff is a crude measure of effective sample size,
-#> and Rhat is the potential scale reduction factor on split MCMC chains
-#> (at convergence, Rhat = 1)
The convergence of this model isn’t fabulous (more on this in a
moment). But we can again plot the smooth functions, which this time
operate on the process model. We can see the same plot using
trend_effects = TRUE
in the plotting functions:
The VAR matrix is of particular interest here, as it captures lagged
dependencies and cross-dependencies in the latent process model.
Unfortunately bayesplot
doesn’t know this is a matrix of
@@ -862,7 +848,7 @@
There is a lot happening in this matrix. Each cell captures the lagged effect of the process in the column on the process in the row in the next timestep. So for example, the effect in cell [1,3] shows how an @@ -883,12 +869,12 @@
The observation error estimates \((\sigma_{obs})\) represent how much the model thinks we might miss the true count when we take our imperfect measurements:
- +These are still a bit hard to identify overall, especially when trying to estimate both process and observation error. Often we need to make some strong assumptions about which of these is more important for @@ -930,7 +916,7 @@
This symmetric matrix tells us there is support for correlated process errors, as several of the off-diagonal entries are strongly non-zero. But it is easier to interpret these estimates if we convert @@ -943,11 +929,11 @@
Plot the expected responses of the remaining series to a positive shock for series 3 (Greens)
- +This series of plots makes it clear that some of the other series would be expected to show both instantaneous responses to a shock for the Greens (due to their correlated process errors) as well as delayed @@ -1000,7 +986,7 @@
And we can also compute the energy score for out of sample forecasts to get a sense of which model provides forecasts that are better calibrated:
@@ -1014,7 +1000,7 @@The models tend to provide similar forecasts, though the correlated error model does slightly better overall. We would probably need to use a more extensive rolling forecast evaluation exercise if we felt like we @@ -1038,10 +1024,10 @@
Hannaford, Naomi E., et al. “A -sparse Bayesian hierarchical vector autoregressive model for microbial -dynamics in a wastewater treatment plant.” Computational -Statistics & Data Analysis 179 (2023): 107659.
+Hannaford, Naomi E., et al. “A sparse Bayesian +hierarchical vector autoregressive model for microbial dynamics in a +wastewater treatment plant.” Computational Statistics & Data +Analysis 179 (2023): 107659.
Holmes, Elizabeth E., Eric J. Ward, and Wills Kellie. “MARSS: multivariate autoregressive state-space models for analyzing time-series data.” R Journal. 4.1 (2012): 11.
diff --git a/vignettes/SS_model.svg b/vignettes/SS_model.svg index 415097bf..f6441719 100644 --- a/vignettes/SS_model.svg +++ b/vignettes/SS_model.svg @@ -5,12 +5,36 @@ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:svg="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg" + xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd" + xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape" style="overflow:hidden" id="svg81" version="1.1" overflow="hidden" height="324.50705" - width="945.35211"> + width="945.35211" + sodipodi:docname="SS_model.svg" + inkscape:version="0.92.4 (5da689c313, 2019-01-14)"> +The purpose of this vignette is to show how the mvgam
+package can be used to fit and interrogate State-Space models with
+nonlinear effects.
State-Space models allow us to separately make inferences about the
+underlying dynamic process model that we are interested in
+(i.e. the evolution of a time series or a collection of time series) and
+the observation model (i.e. the way that we survey / measure
+this underlying process). This is extremely useful in ecology because
+our observations are always imperfect / noisy measurements of the thing
+we are interested in measuring. It is also helpful because we often know
+that some covariates will impact our ability to measure accurately
+(i.e. we cannot take accurate counts of rodents if there is a
+thunderstorm happening) while other covariates impact the underlying
+process (it is highly unlikely that rodent abundance responds to one
+storm, but instead probably responds to longer-term weather and climate
+variation). A State-Space model allows us to model both components in a
+single unified modelling framework. A major advantage of
+mvgam
is that it can include nonlinear effects and random
+effects in BOTH model components while also capturing dynamic
+processes.
The data we will use to illustrate how we can fit State-Space models
+in mvgam
are from a long-term monitoring study of plankton
+counts (cells per mL) taken from Lake Washington in Washington, USA. The
+data are available as part of the MARSS
package and can be
+downloaded using the following:
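The download step itself is not reproduced in this extract. A minimal sketch, assuming the MARSS package is installed and that its lakeWAplankton dataset provides the lakeWAplanktonTrans matrix used below, is:

```r
# load the transformed Lake Washington plankton data supplied by MARSS
data(lakeWAplankton, package = 'MARSS')

# the five algal groups used throughout this vignette
outcomes <- c('Greens', 'Bluegreens', 'Diatoms', 'Unicells', 'Other.algae')
```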
We will work with five different groups of plankton:
+ +As usual, preparing the data into the correct format for
+mvgam
modelling takes a little bit of wrangling in
+dplyr
:
# loop across each plankton group to create the long datframe
+plankton_data <- do.call(rbind, lapply(outcomes, function(x){
+
+ # create a group-specific dataframe with counts labelled 'y'
+ # and the group name in the 'series' variable
+ data.frame(year = lakeWAplanktonTrans[, 'Year'],
+ month = lakeWAplanktonTrans[, 'Month'],
+ y = lakeWAplanktonTrans[, x],
+ series = x,
+ temp = lakeWAplanktonTrans[, 'Temp'])})) %>%
+
+ # change the 'series' label to a factor
+ dplyr::mutate(series = factor(series)) %>%
+
+ # filter to only include some years in the data
+ dplyr::filter(year >= 1965 & year < 1975) %>%
+ dplyr::arrange(year, month) %>%
+ dplyr::group_by(series) %>%
+
+ # z-score the counts so they are approximately standard normal
+ dplyr::mutate(y = as.vector(scale(y))) %>%
+
+ # add the time indicator
+ dplyr::mutate(time = dplyr::row_number()) %>%
+ dplyr::ungroup()
Inspect the data structure
+head(plankton_data)
+#> # A tibble: 6 × 6
+#> year month y series temp time
+#> <dbl> <dbl> <dbl> <fct> <dbl> <int>
+#> 1 1965 1 -0.542 Greens -1.23 1
+#> 2 1965 1 -0.344 Bluegreens -1.23 1
+#> 3 1965 1 -0.0768 Diatoms -1.23 1
+#> 4 1965 1 -1.52 Unicells -1.23 1
+#> 5 1965 1 -0.491 Other.algae -1.23 1
+#> 6 1965 2 NA Greens -1.32 2
dplyr::glimpse(plankton_data)
+#> Rows: 600
+#> Columns: 6
+#> $ year <dbl> 1965, 1965, 1965, 1965, 1965, 1965, 1965, 1965, 1965, 1965, 196…
+#> $ month <dbl> 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 5, …
+#> $ y <dbl> -0.54241769, -0.34410776, -0.07684901, -1.52243490, -0.49055442…
+#> $ series <fct> Greens, Bluegreens, Diatoms, Unicells, Other.algae, Greens, Blu…
+#> $ temp <dbl> -1.2306562, -1.2306562, -1.2306562, -1.2306562, -1.2306562, -1.…
+#> $ time <int> 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 5, …
Note that we have z-scored the counts in this example as that will +make it easier to specify priors (though this is not completely +necessary; it is often better to build a model that respects the +properties of the actual outcome variables)
+ + +We have some missing observations, but this isn’t an issue for
+modelling in mvgam
. A useful property to understand about
+these counts is that they tend to be highly seasonal. Below are some
+plots of z-scored counts against the z-scored temperature measurements
+in the lake for each month:
plankton_data %>%
+ dplyr::filter(series == 'Other.algae') %>%
+ ggplot(aes(x = time, y = temp)) +
+ geom_line(size = 1.1) +
+ geom_line(aes(y = y), col = 'white',
+ size = 1.3) +
+ geom_line(aes(y = y), col = 'darkred',
+ size = 1.1) +
+ ylab('z-score') +
+ xlab('Time') +
+ ggtitle('Temperature (black) vs Other algae (red)')
plankton_data %>%
+ dplyr::filter(series == 'Diatoms') %>%
+ ggplot(aes(x = time, y = temp)) +
+ geom_line(size = 1.1) +
+ geom_line(aes(y = y), col = 'white',
+ size = 1.3) +
+ geom_line(aes(y = y), col = 'darkred',
+ size = 1.1) +
+ ylab('z-score') +
+ xlab('Time') +
+ ggtitle('Temperature (black) vs Diatoms (red)')
We will have to try and capture this seasonality in our process +model, which should be easy to do given the flexibility of GAMs. Next we +will split the data into training and testing splits:
+plankton_train <- plankton_data %>%
+ dplyr::filter(time <= 112)
+plankton_test <- plankton_data %>%
+ dplyr::filter(time > 112)
Now time to fit some models. This requires a bit of thinking about
+how we can best tackle the seasonal variation and the likely dependence
+structure in the data. These algae are interacting as part of a complex
+system within the same lake, so we certainly expect there to be some
+lagged cross-dependencies underlying their dynamics. But if we do not
+capture the seasonal variation, our multivariate dynamic model will be
+forced to try and capture it, which could lead to poor convergence and
+unstable results (we could feasibly capture cyclic dynamics with a more
+complex multi-species Lotka-Volterra model, but ordinary differential
+equation approaches are beyond the scope of mvgam
).
First we will fit a model that does not include a dynamic component,
+just to see if it can reproduce the seasonal variation in the
+observations. This model introduces hierarchical multidimensional
+smooths, where all time series share a “global” tensor product of the
+month
and temp
variables, capturing our
+expectation that algal seasonality responds to temperature variation.
+But this response should depend on when in the year these temperatures
+are recorded (i.e. a response to warm temperatures in Spring should be
+different to a response to warm temperatures in Autumn). The model also
+fits series-specific deviation smooths (i.e. one tensor product per
+series) to capture how each algal group’s seasonality differs from the
+overall “global” seasonality. Note that we do not include
+series-specific intercepts in this model because each series was
+z-scored to have a mean of 0.
notrend_mod <- mvgam(y ~
+ # tensor of temp and month to capture
+ # "global" seasonality
+ te(temp, month, k = c(4, 4)) +
+
+ # series-specific deviation tensor products
+ te(temp, month, k = c(4, 4), by = series) - 1,
+ family = gaussian(),
+ data = plankton_train,
+ newdata = plankton_test,
+ trend_model = 'None')
The “global” tensor product smooth function can be quickly +visualized:
+ + +On this plot, red indicates below-average linear predictors and white +indicates above-average. We can then plot the deviation smooths for a +few algal groups to see how they vary from the “global” pattern:
+ + + + +These multidimensional smooths have done a good job of capturing the +seasonal variation in our observations:
+ + + + + + +This basic model gives us confidence that we can capture the seasonal +variation in the observations. But the model has not captured the +remaining temporal dynamics, which is obvious when we inspect Dunn-Smyth +residuals for a few series:
+ + + + +Now it is time to get into multivariate State-Space models. We will +fit two models that can both incorporate lagged cross-dependencies in +the latent process models. The first model assumes that the process +errors operate independently from one another, while the second assumes +that there may be contemporaneous correlations in the process errors. +Both models include a Vector Autoregressive component for the process +means, and so both can model complex community dynamics. The models can +be described mathematically as follows:
\[\begin{align*}
\boldsymbol{count}_t & \sim \text{Normal}(\mu_{obs[t]}, \sigma_{obs}) \\
\mu_{obs[t]} & = process_t \\
process_t & \sim \text{MVNormal}(\mu_{process[t]}, \Sigma_{process}) \\
\mu_{process[t]} & = A * process_{t-1} + f_{global}(\boldsymbol{month},\boldsymbol{temp})_t + f_{series}(\boldsymbol{month},\boldsymbol{temp})_t \\
f_{global}(\boldsymbol{month},\boldsymbol{temp}) & = \sum_{k=1}^{K}b_{global} * \beta_{global} \\
f_{series}(\boldsymbol{month},\boldsymbol{temp}) & = \sum_{k=1}^{K}b_{series} * \beta_{series}
\end{align*}\]
+Here you can see that there are no terms in the observation model
+apart from the underlying process model. But we could easily add
+covariates into the observation model if we felt that they could explain
+some of the systematic observation errors. We also assume independent
+observation processes (there is no covariance structure in the
+observation errors \(\sigma_{obs}\)).
+At present, mvgam
does not support multivariate observation
+models. But this feature will be added in future versions. However the
+underlying process model is multivariate, and there is a lot going on
+here. This component has a Vector Autoregressive part, where the process
+mean at time \(t\) \((\mu_{process[t]})\) is a vector that
+evolves as a function of where the vector-valued process model was at
+time \(t-1\). The \(A\) matrix captures these dynamics with
+self-dependencies on the diagonal and possibly asymmetric
+cross-dependencies on the off-diagonals, while also incorporating the
+nonlinear smooth functions that capture seasonality for each series. The
+contemporaneous process errors are modeled by \(\Sigma_{process}\), which can be
+constrained so that process errors are independent (i.e. setting the
+off-diagonals to 0) or can be fully parameterized using a Cholesky
+decomposition (using Stan
’s \(LKJcorr\) distribution to place a prior on
+the strength of inter-species correlations). For those that are
+interested in the inner-workings, mvgam
makes use of a
+recent breakthrough by Sarah
+Heaps to enforce stationarity of Bayesian VAR processes. This is
+advantageous as we often don’t expect forecast variance to increase
+without bound forever into the future, but many estimated VARs tend to
+behave this way.
Ok that was a lot to take in. Let’s fit some models to try and
+inspect what is going on and what they assume. But first, we need to
+update mvgam
’s default priors for the observation and
+process errors. By default, mvgam
uses a fairly wide
+Student-T prior on these parameters to avoid being overly informative.
+But our observations are z-scored and so we do not expect very large
+process or observation errors. However, we also do not expect very small
+observation errors either as we know these measurements are not perfect.
+So let’s update the priors for these parameters. In doing so, you will
+get to see how the formula for the latent process (i.e. trend) model is
+used in mvgam
:
priors <- get_mvgam_priors(
+ # observation formula, which has no terms in it
+ y ~ -1,
+
+ # process model formula, which includes the smooth functions
+ trend_formula = ~ te(temp, month, k = c(4, 4)) +
+ te(temp, month, k = c(4, 4), by = trend) - 1,
+
+ # VAR1 model with uncorrelated process errors
+ trend_model = VAR(),
+ family = gaussian(),
+ data = plankton_train)
Get names of all parameters whose priors can be modified:
+priors[, 3]
+#> [1] "(Intercept)"
+#> [2] "process error sd"
+#> [3] "diagonal autocorrelation population mean"
+#> [4] "off-diagonal autocorrelation population mean"
+#> [5] "diagonal autocorrelation population variance"
+#> [6] "off-diagonal autocorrelation population variance"
+#> [7] "shape1 for diagonal autocorrelation precision"
+#> [8] "shape1 for off-diagonal autocorrelation precision"
+#> [9] "shape2 for diagonal autocorrelation precision"
+#> [10] "shape2 for off-diagonal autocorrelation precision"
+#> [11] "observation error sd"
+#> [12] "te(temp,month) smooth parameters, te(temp,month):trendtrend1 smooth parameters, te(temp,month):trendtrend2 smooth parameters, te(temp,month):trendtrend3 smooth parameters, te(temp,month):trendtrend4 smooth parameters, te(temp,month):trendtrend5 smooth parameters"
And their default prior distributions:
+priors[, 4]
+#> [1] "(Intercept) ~ student_t(3, -0.1, 2.5);"
+#> [2] "sigma ~ student_t(3, 0, 2.5);"
+#> [3] "es[1] = 0;"
+#> [4] "es[2] = 0;"
+#> [5] "fs[1] = sqrt(0.455);"
+#> [6] "fs[2] = sqrt(0.455);"
+#> [7] "gs[1] = 1.365;"
+#> [8] "gs[2] = 1.365;"
+#> [9] "hs[1] = 0.071175;"
+#> [10] "hs[2] = 0.071175;"
+#> [11] "sigma_obs ~ student_t(3, 0, 2.5);"
+#> [12] "lambda_trend ~ normal(5, 30);"
Setting priors is easy in mvgam
as you can use
+brms
routines. Here we use more informative Normal priors
+for both error components, but we impose a lower bound of 0.2 for the
+observation errors:
priors <- c(prior(normal(0.5, 0.1), class = sigma_obs, lb = 0.2),
+ prior(normal(0.5, 0.25), class = sigma))
You may have noticed something else unique about this model: there is
+no intercept term in the observation formula. This is because a shared
+intercept parameter can sometimes be unidentifiable with respect to the
+latent VAR process, particularly if our series have similar long-run
+averages (which they do in this case because they were z-scored). We
+will often get better convergence in these State-Space models if we drop
+this parameter. mvgam
accomplishes this by fixing the
+coefficient for the intercept to zero. Now we can fit the first model,
+which assumes that process errors are contemporaneously uncorrelated:
var_mod <- mvgam(
+ # observation formula, which is empty
+ y ~ -1,
+
+ # process model formula, which includes the smooth functions
+ trend_formula = ~ te(temp, month, k = c(4, 4)) +
+ te(temp, month, k = c(4, 4), by = trend) - 1,
+
+ # VAR1 model with uncorrelated process errors
+ trend_model = VAR(),
+ family = gaussian(),
+ data = plankton_train,
+ newdata = plankton_test,
+
+ # include the updated priors
+ priors = priors,
+ silent = 2)
This model’s summary is a bit different to other mvgam
+summaries. It separates parameters based on whether they belong to the
+observation model or to the latent process model. This is because we may
+often have covariates that impact the observations but not the latent
+process, so we can have fairly complex models for each component. You
+will notice that some parameters have not fully converged, particularly
+for the VAR coefficients (called A
in the output) and for
+the process errors (Sigma
). Note that we set
+include_betas = FALSE
to stop the summary from printing
+output for all of the spline coefficients, which can be dense and hard
+to interpret:
summary(var_mod, include_betas = FALSE)
+#> GAM observation formula:
+#> y ~ 1
+#> <environment: 0x0000024fa27bd008>
+#>
+#> GAM process formula:
+#> ~te(temp, month, k = c(4, 4)) + te(temp, month, k = c(4, 4),
+#> by = trend) - 1
+#> <environment: 0x0000024fa27bd008>
+#>
+#> Family:
+#> gaussian
+#>
+#> Link function:
+#> identity
+#>
+#> Trend model:
+#> VAR()
+#>
+#> N process models:
+#> 5
+#>
+#> N series:
+#> 5
+#>
+#> N timepoints:
+#> 120
+#>
+#> Status:
+#> Fitted using Stan
+#> 4 chains, each with iter = 1500; warmup = 1000; thin = 1
+#> Total post-warmup draws = 2000
+#>
+#>
+#> Observation error parameter estimates:
+#> 2.5% 50% 97.5% Rhat n_eff
+#> sigma_obs[1] 0.20 0.26 0.35 1.00 420
+#> sigma_obs[2] 0.25 0.40 0.54 1.01 162
+#> sigma_obs[3] 0.43 0.63 0.79 1.02 133
+#> sigma_obs[4] 0.25 0.37 0.50 1.02 275
+#> sigma_obs[5] 0.31 0.43 0.54 1.01 278
+#>
+#> GAM observation model coefficient (beta) estimates:
+#> 2.5% 50% 97.5% Rhat n_eff
+#> (Intercept) 0 0 0 NaN NaN
+#>
+#> Process model VAR parameter estimates:
+#> 2.5% 50% 97.5% Rhat n_eff
+#> A[1,1] 0.012 0.5000 0.830 1.01 138
+#> A[1,2] -0.390 -0.0340 0.200 1.02 356
+#> A[1,3] -0.500 -0.0340 0.360 1.02 262
+#> A[1,4] -0.280 0.0240 0.400 1.02 409
+#> A[1,5] -0.090 0.1400 0.550 1.01 280
+#> A[2,1] -0.140 0.0120 0.180 1.00 773
+#> A[2,2] 0.620 0.7900 0.920 1.01 331
+#> A[2,3] -0.410 -0.1200 0.048 1.00 394
+#> A[2,4] -0.047 0.1100 0.340 1.01 321
+#> A[2,5] -0.050 0.0590 0.190 1.00 681
+#> A[3,1] -0.280 0.0098 0.300 1.03 202
+#> A[3,2] -0.530 -0.1900 0.036 1.02 154
+#> A[3,3] 0.071 0.4300 0.740 1.02 224
+#> A[3,4] -0.033 0.2100 0.650 1.02 179
+#> A[3,5] -0.059 0.1200 0.400 1.03 220
+#> A[4,1] -0.140 0.0450 0.270 1.00 415
+#> A[4,2] -0.110 0.0500 0.260 1.01 362
+#> A[4,3] -0.450 -0.1100 0.130 1.02 229
+#> A[4,4] 0.500 0.7400 0.940 1.01 307
+#> A[4,5] -0.200 -0.0300 0.130 1.00 747
+#> A[5,1] -0.200 0.0620 0.420 1.01 338
+#> A[5,2] -0.420 -0.1100 0.086 1.03 154
+#> A[5,3] -0.650 -0.1700 0.130 1.02 180
+#> A[5,4] -0.067 0.1800 0.620 1.03 171
+#> A[5,5] 0.540 0.7400 0.940 1.01 300
+#>
+#> Process error parameter estimates:
+#> 2.5% 50% 97.5% Rhat n_eff
+#> Sigma[1,1] 0.067 0.29 0.64 1.02 83
+#> Sigma[1,2] 0.000 0.00 0.00 NaN NaN
+#> Sigma[1,3] 0.000 0.00 0.00 NaN NaN
+#> Sigma[1,4] 0.000 0.00 0.00 NaN NaN
+#> Sigma[1,5] 0.000 0.00 0.00 NaN NaN
+#> Sigma[2,1] 0.000 0.00 0.00 NaN NaN
+#> Sigma[2,2] 0.065 0.11 0.18 1.01 367
+#> Sigma[2,3] 0.000 0.00 0.00 NaN NaN
+#> Sigma[2,4] 0.000 0.00 0.00 NaN NaN
+#> Sigma[2,5] 0.000 0.00 0.00 NaN NaN
+#> Sigma[3,1] 0.000 0.00 0.00 NaN NaN
+#> Sigma[3,2] 0.000 0.00 0.00 NaN NaN
+#> Sigma[3,3] 0.055 0.16 0.31 1.01 155
+#> Sigma[3,4] 0.000 0.00 0.00 NaN NaN
+#> Sigma[3,5] 0.000 0.00 0.00 NaN NaN
+#> Sigma[4,1] 0.000 0.00 0.00 NaN NaN
+#> Sigma[4,2] 0.000 0.00 0.00 NaN NaN
+#> Sigma[4,3] 0.000 0.00 0.00 NaN NaN
+#> Sigma[4,4] 0.053 0.13 0.26 1.01 200
+#> Sigma[4,5] 0.000 0.00 0.00 NaN NaN
+#> Sigma[5,1] 0.000 0.00 0.00 NaN NaN
+#> Sigma[5,2] 0.000 0.00 0.00 NaN NaN
+#> Sigma[5,3] 0.000 0.00 0.00 NaN NaN
+#> Sigma[5,4] 0.000 0.00 0.00 NaN NaN
+#> Sigma[5,5] 0.100 0.21 0.35 1.01 261
+#>
+#> Approximate significance of GAM process smooths:
+#> edf Ref.df Chi.sq p-value
+#> te(temp,month) 3.28 15 35.44 0.37
+#> te(temp,month):seriestrend1 1.97 15 1.68 1.00
+#> te(temp,month):seriestrend2 1.45 15 5.82 1.00
+#> te(temp,month):seriestrend3 5.20 15 57.17 0.51
+#> te(temp,month):seriestrend4 2.72 15 8.58 0.96
+#> te(temp,month):seriestrend5 1.31 15 6.01 1.00
+#>
+#> Stan MCMC diagnostics:
+#> n_eff / iter looks reasonable for all parameters
+#> Rhat looks reasonable for all parameters
+#> 0 of 2000 iterations ended with a divergence (0%)
+#> 0 of 2000 iterations saturated the maximum tree depth of 12 (0%)
+#> E-FMI indicated no pathological behavior
+#>
+#> Samples were drawn using NUTS(diag_e) at Wed Sep 04 12:15:10 PM 2024.
+#> For each parameter, n_eff is a crude measure of effective sample size,
+#> and Rhat is the potential scale reduction factor on split MCMC chains
+#> (at convergence, Rhat = 1)
The convergence of this model isn’t fabulous (more on this in a
+moment). But we can again plot the smooth functions, which this time
+operate on the process model. We can see the same plot using
+trend_effects = TRUE
in the plotting functions:
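That figure is not reproduced here, but the call mirrors the observation-model version shown earlier, e.g. (a sketch using the arguments mentioned above):

```r
# plot the process-model smooths rather than the observation-model ones
plot(var_mod, type = 'smooths', trend_effects = TRUE)
```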
The VAR matrix is of particular interest here, as it captures lagged
+dependencies and cross-dependencies in the latent process model.
+Unfortunately bayesplot
doesn’t know this is a matrix of
+parameters so what we see is actually the transpose of the VAR matrix. A
+little bit of wrangling gives us these histograms in the correct
+order:
A_pars <- matrix(NA, nrow = 5, ncol = 5)
+for(i in 1:5){
+ for(j in 1:5){
+ A_pars[i, j] <- paste0('A[', i, ',', j, ']')
+ }
+}
+mcmc_plot(var_mod,
+ variable = as.vector(t(A_pars)),
+ type = 'hist')
There is a lot happening in this matrix. Each cell captures the +lagged effect of the process in the column on the process in the row in +the next timestep. So for example, the effect in cell [1,3] shows how an +increase in the process for series 3 (Greens) at time \(t\) is expected to impact the process for +series 1 (Bluegreens) at time \(t+1\). +The latent process model is now capturing these effects and the smooth +seasonal effects.
+The process error \((\Sigma)\) +captures unmodelled variation in the process models. Again, we fixed the +off-diagonals to 0, so the histograms for these will look like flat +boxes:
+Sigma_pars <- matrix(NA, nrow = 5, ncol = 5)
+for(i in 1:5){
+ for(j in 1:5){
+ Sigma_pars[i, j] <- paste0('Sigma[', i, ',', j, ']')
+ }
+}
+mcmc_plot(var_mod,
+ variable = as.vector(t(Sigma_pars)),
+ type = 'hist')
The observation error estimates \((\sigma_{obs})\) represent how much the +model thinks we might miss the true count when we take our imperfect +measurements:
+ + +These are still a bit hard to identify overall, especially when +trying to estimate both process and observation error. Often we need to +make some strong assumptions about which of these is more important for +determining unexplained variation in our observations.
+Because Vector Autoregressions can capture complex lagged
+dependencies, it is often difficult to understand how the member time
+series are thought to interact with one another. A method that is
+commonly used to directly test for possible interactions is to compute
+an Impulse
+Response Function (IRF). If \(h\)
+represents the simulated forecast horizon, an IRF asks how each of the
+remaining series might respond over times \((t+1):h\) if a focal series is given an
+innovation “shock” at time \(t = 0\).
+mvgam
can compute Generalized and Orthogonalized IRFs from
+models that included latent VAR dynamics. We simply feed the fitted
+model to the irf()
function and then use the S3
+plot()
function to view the estimated responses. By
+default, irf()
will compute IRFs by separately imposing
+positive shocks of one standard deviation to each series in the VAR
+process. Here we compute Generalized IRFs over a horizon of 12
+timesteps:
Plot the expected responses of the remaining series to a positive +shock for series 3 (Greens)
+ + +This series of plots makes it clear that some of the other series +would be expected to show both instantaneous responses to a shock for +the Greens (due to their correlated process errors) as well as delayed +and nonlinear responses over time (due to the complex lagged dependence +structure captured by the \(A\) +matrix). This hopefully makes it clear why IRFs are an important tool in +the analysis of multivariate autoregressive models.
+But which model is better? We can compute the variogram score for out +of sample forecasts to get a sense of which model does a better job of +capturing the dependence structure in the true evaluation set:
+# create forecast objects for each model
+fcvar <- forecast(var_mod)
+fcvarcor <- forecast(varcor_mod)
+
+# plot the difference in variogram scores; a negative value means the VAR1cor model is better, while a positive value means the VAR1 model is better
+diff_scores <- score(fcvarcor, score = 'variogram')$all_series$score -
+ score(fcvar, score = 'variogram')$all_series$score
+plot(diff_scores, pch = 16, cex = 1.25, col = 'darkred',
+ ylim = c(-1*max(abs(diff_scores), na.rm = TRUE),
+ max(abs(diff_scores), na.rm = TRUE)),
+ bty = 'l',
+ xlab = 'Forecast horizon',
+ ylab = expression(variogram[VAR1cor]~-~variogram[VAR1]))
+abline(h = 0, lty = 'dashed')
And we can also compute the energy score for out of sample forecasts +to get a sense of which model provides forecasts that are better +calibrated:
+# plot the difference in energy scores; a negative value means the VAR1cor model is better, while a positive value means the VAR1 model is better
+diff_scores <- score(fcvarcor, score = 'energy')$all_series$score -
+ score(fcvar, score = 'energy')$all_series$score
+plot(diff_scores, pch = 16, cex = 1.25, col = 'darkred',
+ ylim = c(-1*max(abs(diff_scores), na.rm = TRUE),
+ max(abs(diff_scores), na.rm = TRUE)),
+ bty = 'l',
+ xlab = 'Forecast horizon',
+ ylab = expression(energy[VAR1cor]~-~energy[VAR1]))
+abline(h = 0, lty = 'dashed')
The models tend to provide similar forecasts, though the correlated
+error model does slightly better overall. We would probably need to use
+a more extensive rolling forecast evaluation exercise if we felt like we
+needed to only choose one for production. mvgam
offers some
+utilities for doing this (i.e. see ?lfo_cv
for guidance).
+Alternatively, we could use forecasts from both models by
+creating an evenly-weighted ensemble forecast distribution. This
+capability is available using the ensemble()
function in
+mvgam
(see ?ensemble
for guidance).
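A sketch of what that might look like, assuming ensemble() accepts the two forecast objects created above and returns another forecast object that can be plotted or scored, is:

```r
# combine the two forecast distributions into an evenly-weighted ensemble
ens_fc <- ensemble(fcvar, fcvarcor)
plot(ens_fc, series = 3)
```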
The following papers and resources offer a lot of useful material +about multivariate State-Space models and how they can be applied in +practice:
+Auger‐Méthé, Marie, et al. “A +guide to state–space modeling of ecological time series.” +Ecological Monographs 91.4 (2021): e01470.
+Heaps, Sarah E. “Enforcing +stationarity through the prior in vector autoregressions.” +Journal of Computational and Graphical Statistics 32.1 (2023): +74-83.
+Hannaford, Naomi E., et al. “A sparse Bayesian +hierarchical vector autoregressive model for microbial dynamics in a +wastewater treatment plant.” Computational Statistics & Data +Analysis 179 (2023): 107659.
+Holmes, Elizabeth E., Eric J. Ward, and Wills Kellie. “MARSS: +multivariate autoregressive state-space models for analyzing time-series +data.” R Journal. 4.1 (2012): 11.
+Ward, Eric J., et al. “Inferring +spatial structure from time‐series data: using multivariate state‐space +models to detect metapopulation structure of California sea lions in the +Gulf of California, Mexico.” Journal of Applied Ecology +47.1 (2010): 47-56.
+I’m actively seeking PhD students and other researchers to work in
+the areas of ecological forecasting, multivariate model evaluation and
+development of mvgam
. Please reach out if you are
+interested (n.clark’at’uq.edu.au)