Recompiled
robjhyndman committed Jan 10, 2024
1 parent 258c524 commit 75267f5
Showing 19 changed files with 277 additions and 0 deletions.
15 changes: 15 additions & 0 deletions _freeze/assignments/A1/execute-results/html.json
@@ -0,0 +1,15 @@
{
"hash": "52ed2c72c91ae4f8daa8336e7db58c5e",
"result": {
"engine": "knitr",
"markdown": "---\ntitle: Assignment 1\n---\n\n\n**You must provide forecasts for the following items:**\n\n 1. Google closing stock price on 20 March 2024 [[Data](https://finance.yahoo.com/quote/GOOG/)].\n 2. Maximum temperature at Melbourne airport on 10 April 2024 [[Data](http://www.bom.gov.au/climate/dwo/IDCJDW3049.latest.shtml)].\n 3. The difference in points (Collingwood minus Essendon) scored in the AFL match between Collingwood and Essendon for the Anzac Day clash. 25 April 2024 [[Data](https://en.wikipedia.org/wiki/Anzac_Day_match)].\n 4. The seasonally adjusted estimate of total employment for April 2024. ABS CAT 6202, to be released around mid May 2024 [[Data](https://www.abs.gov.au/statistics/labour/employment-and-unemployment/labour-force-australia/latest-release)].\n 5. Google closing stock price on 22 May 2024 [[Data](https://finance.yahoo.com/quote/GOOG/)].\n\n**For each of these, give a point forecast and an 80% prediction interval, and explain in a couple of sentences how each was obtained.**\n\n* The [Data] links give you possible data to start with, but you are free to use any data you like.\n* There is no need to use any fancy models or sophisticated methods. Simple is better for this assignment. The methods you use should be understandable to any high school student.\n* Full marks will be awarded if you submit the required information, and are able to meaningfully justify your results in a couple of sentences in each case.\n* Once the true values in each case are available, we will come back to this exercise and see who did the best using the scoring method described in class.\n* The student with the lowest score is the winner of our forecasting competition, and will win a $50 cash prize.\n* The assignment mark is not dependent on your score.\n\n\n<br><br><hr><b>Due: 8 March 2024</b><br><a href=https://learning.monash.edu/mod/assign/view.php?id=???? class = 'badge badge-large badge-blue'><font size='+2'>&nbsp;&nbsp;<b>Submit</b>&nbsp;&nbsp;</font><br></a>\n",
"supporting": [],
"filters": [
"rmarkdown/pagebreak.lua"
],
"includes": {},
"engineDependencies": {},
"preserve": {},
"postProcess": true
}
}
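One simple approach consistent with the brief above is to treat past observations as an empirical forecast distribution and read off its quantiles. A minimal sketch in R for item 2 (maximum temperature); the file name and column name are hypothetical, not supplied data:

```r
# Hypothetical sketch: point forecast and 80% prediction interval taken
# from the empirical distribution of past April maximum temperatures.
# "melbourne_airport_april.csv" and `max_temp` are assumed names.
library(dplyr)
temps <- readr::read_csv("melbourne_airport_april.csv")
temps |>
  summarise(
    point = median(max_temp),           # point forecast
    lower = quantile(max_temp, 0.10),   # lower bound of 80% interval
    upper = quantile(max_temp, 0.90)    # upper bound of 80% interval
  )
```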
15 changes: 15 additions & 0 deletions _freeze/assignments/A2/execute-results/html.json
@@ -0,0 +1,15 @@
{
"hash": "2fdc228899b4120f50b1f353ed12bfc8",
"result": {
"engine": "knitr",
"markdown": "---\ntitle: Assignment 2\n---\n\n\nThis assignment will use the same data that you will use in the [retail project](Project.qmd) later in semester. Each student will use a different time series, selected using their student ID number as follows.\n\n```r\n# Replace the seed with your student ID\nset.seed(12345678)\nretail <- readr::read_rds(\"https://bit.ly/monashretaildata\") |>\n filter(`Series ID` == sample(`Series ID`, 1))\n```\n\n 1. Plot your time series using the `autoplot()` command. What do you learn from the plot?\n 2. Plot your time series using the `gg_season()` command. What do you learn from the plot?\n 3. Plot your time series using the `gg_subseries()` command. What do you learn from the plot?\n 4. Find an appropriate Box-Cox transformation for your data and explain why you have chosen the particular transformation parameter $\\lambda$.\n 5. Produce a plot of an STL decomposition of the transformed data. What do you learn from the plot?\n\nYou need to submit one Rmarkdown or Quarto file which implements all steps above.\n\nTo receive full marks, the Rmd or qmd file must compile without errors.\n\n\n<br><br><hr><b>Due: 22 March 2024</b><br><a href=https://learning.monash.edu/mod/assign/view.php?id=2034165 class = 'badge badge-large badge-blue'><font size='+2'>&nbsp;&nbsp;<b>Submit</b>&nbsp;&nbsp;</font><br></a>\n",
"supporting": [],
"filters": [
"rmarkdown/pagebreak.lua"
],
"includes": {},
"engineDependencies": {},
"preserve": {},
"postProcess": true
}
}
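The five steps above map directly onto fpp3 functions. A minimal sketch, assuming the selected series has a `Turnover` column (an assumption about the data, not stated in the assignment):

```r
library(fpp3)

# Steps 1-3: exploratory plots
retail |> autoplot(Turnover)
retail |> gg_season(Turnover)
retail |> gg_subseries(Turnover)

# Step 4: Box-Cox parameter chosen by the Guerrero method
lambda <- retail |>
  features(Turnover, features = guerrero) |>
  pull(lambda_guerrero)

# Step 5: STL decomposition of the transformed series
retail |>
  model(STL(box_cox(Turnover, lambda))) |>
  components() |>
  autoplot()
```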
15 changes: 15 additions & 0 deletions _freeze/assignments/A3/execute-results/html.json
@@ -0,0 +1,15 @@
{
"hash": "dd25dc06ba2854b756fac0005bb4a47e",
"result": {
"engine": "knitr",
"markdown": "---\ntitle: Assignment 3\n---\n\n\nThis assignment will use national population data from 1960 -- 2022. Each student will use a different time series, selected using their student ID number as follows.\n\n```r\n# Replace seed with your student ID\nset.seed(12345678)\npop <- readr::read_rds(\"https://bit.ly/monashpopulationdata\") |>\n filter(Country == sample(Country, 1))\n```\n\nPopulation should be modelled as a logarithm as it increases exponentially.\n\n1. Using a test set of 2018--2022, fit an ETS model chosen automatically, and three benchmark methods to the training data. Which gives the best forecasts on the test set, based on RMSE?\n2. Check the residuals from the best model using an ACF plot and a Ljung-Box test. Do the residuals appear to be white noise?\n3. Now use time-series cross-validation with a minimum sample size of 15 years, a step size of 1 year, and a forecast horizon of 5 years. Calculate the RMSE of the results. Does it change the conclusion you reach based on the test set?\n4. Which of these two methods of computing accuracy is more reliable? Why?\n\nSubmit an Rmd or qmd file which carries out the above analysis. You need to submit one file which implements all steps above.\n\nTo receive full marks, the Rmd or qmd file must compile without errors.\n\n\n<br><br><hr><b>Due: 12 April 2024</b><br><a href=https://learning.monash.edu/mod/assign/view.php?id=2034169 class = 'badge badge-large badge-blue'><font size='+2'>&nbsp;&nbsp;<b>Submit</b>&nbsp;&nbsp;</font><br></a>\n",
"supporting": [],
"filters": [
"rmarkdown/pagebreak.lua"
],
"includes": {},
"engineDependencies": {},
"preserve": {},
"postProcess": true
}
}
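A sketch of how questions 1 and 3 might be set up with fable, assuming `pop` is a tsibble indexed by `Year` with a `Population` column (the column names are assumptions):

```r
library(fpp3)

# Q1: ETS plus three benchmarks on a training set ending in 2017
train <- pop |> filter(Year <= 2017)
fit <- train |>
  model(
    ets   = ETS(log(Population)),
    naive = NAIVE(log(Population)),
    drift = RW(log(Population) ~ drift()),
    mean  = MEAN(log(Population))
  )
fit |> forecast(h = 5) |> accuracy(pop)   # RMSE on the 2018-2022 test set

# Q3: time-series cross-validation (minimum 15 years, step 1, horizon 5)
pop |>
  stretch_tsibble(.init = 15, .step = 1) |>
  model(ETS(log(Population))) |>
  forecast(h = 5) |>
  accuracy(pop)
```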
15 changes: 15 additions & 0 deletions _freeze/assignments/A4/execute-results/html.json
@@ -0,0 +1,15 @@
{
"hash": "ecd870b2226cc85092e37213bea2523e",
"result": {
"engine": "knitr",
"markdown": "---\ntitle: Assignment 4\n---\n\n\n## Background\n\nHere is a function that generates data from an AR(1) model starting with the first value set to 0\n\n```r\ngenerate_ar1 <- function(n = 100, c = 0, phi, sigma = 1) {\n # Generate errors\n error <- rnorm(n, mean = 0, sd = sigma)\n # Set up vector for the response with initial values set to 0\n y <- rep(0, n)\n # Generate remaining observations\n for(i in seq(2, length = n-1)) {\n y[i] <- c + phi * y[i-1] + error[i]\n }\n return(y)\n}\n```\n\nHere `n` is the number of observations to simulate, `c` is the constant, `phi` is the AR coefficient, and `sigma` is the standard deviation of the noise. The following example shows the function being used to generate 50 observations\n\n```r\nlibrary(fpp3)\ntsibble(time = 1:50, y = generate_ar1(n=50, c=1, phi=0.8), index = time) |>\n autoplot(y)\n```\n\n## Instructions\n\n<ol>\n<li> Modify the `generate_ar1` function to generate data from any ARMA(p,q) model with parameters to be specified by the user. The first line of your function definition should be\n\n ```r\n generate_arma <- function(n = 100, c = 0, phi = NULL, theta = NULL, sigma = 1)\n ```\n\n Here `phi` and `theta` are vectors of AR and MA coefficients. Your function should return a numeric vector of length `n`.\n\n For example `generate_arma(n = 50, c = 2, phi = c(0.4, -0.6))` should return 50 observations generated from the model\n $$y_t = 2 + 0.4y_{t-1} - 0.6y_{t-2} + \\varepsilon_t$$\n where $\\varepsilon \\sim N(0,1)$.\n\n<li> The noise should be generated using the `rnorm()` function.\n\n<li> Your function should check stationarity and invertibility conditions and return an error if either condition is not satisfied. You can use the `stop()` function to generate an error. The model will be stationary if the following expression returns `TRUE`:\n\n ```r\n !any(abs(polyroot(c(1,-phi))) <= 1)\n ```\n\n The MA parameters will be invertible if the following expression returns `TRUE`:\n\n ```r\n !any(abs(polyroot(c(1,theta))) <= 1)\n ```\n\n<li> The above function sets the first value of every series to 0. Your function should fix this problem by generating more observations than required and then discarding the first few observations. You will need to consider how many observations to discard, to prevent the returned series from being affected by the initial values. Test that it is working by checking that the first few values of the series are close to the mean of the series, even when `c` is a large value.\n</ol>\n\nPlease submit your solution as a .R file.\n\n\n<br><br><hr><b>Due: 3 May 2024</b><br><a href=https://learning.monash.edu/mod/assign/view.php?id=2034170 class = 'badge badge-large badge-blue'><font size='+2'>&nbsp;&nbsp;<b>Submit</b>&nbsp;&nbsp;</font><br></a>\n",
"supporting": [],
"filters": [
"rmarkdown/pagebreak.lua"
],
"includes": {},
"engineDependencies": {},
"preserve": {},
"postProcess": true
}
}
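The two `polyroot()` expressions quoted in the instructions can be tried directly. For the AR(2) example above with `phi = c(0.4, -0.6)`, both roots of the AR polynomial lie outside the unit circle, so the stationarity check passes:

```r
phi <- c(0.4, -0.6)
abs(polyroot(c(1, -phi)))              # moduli of both roots exceed 1
!any(abs(polyroot(c(1, -phi))) <= 1)   # TRUE, so the model is stationary
```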
15 changes: 15 additions & 0 deletions _freeze/assignments/Project/execute-results/html.json
@@ -0,0 +1,15 @@
{
"hash": "2dc1e4f426524f9526895c6bbcaf5b29",
"result": {
"engine": "knitr",
"markdown": "---\ntitle: Retail Project\n---\n\n\n**Objective:** To forecast a real time series using ETS and ARIMA models.\n\n**Data:** Each student will be use a different time series, selected using their student ID number as follows. This is the same series that you used in [Assignment 2](A2.qmd).\n\n```r\n# Use your student ID as the seed\nset.seed(12345678)\nretail <- readr::read_rds(\"https://bit.ly/monashretaildata\") |>\n filter(`Series ID` == sample(`Series ID`, 1))\n```\n\n**Assignment value:** This assignment is worth 20% of the overall unit assessment.\n\n**Report:**\n\nYou should produce forecasts of the series using ETS and ARIMA models. Write a report in Rmarkdown or Quarto format of your analysis explaining carefully what you have done and why you have done it. Your report should include the following elements.\n\n* A discussion of the statistical features of the original data, including the effect of COVID-19 on your series. [4 marks]\n* Explanation of transformations and differencing used. You should use a unit-root test as part of the discussion. [5 marks]\n* A description of the methodology used to create a short-list of appropriate ARIMA models and ETS models. Include discussion of AIC values as well as results from applying the models to a test-set consisting of the last 24 months of the data provided. [6 marks]\n* Choose one ARIMA model and one ETS model based on this analysis and show parameter estimates, residual diagnostics, forecasts and prediction intervals for both models. Diagnostic checking for both models should include ACF graphs and the Ljung-Box test. [8 marks]\n* Comparison of the results from each of your preferred models. Which method do you think gives the better forecasts? Explain with reference to the test-set. [2 marks]\n* Apply your two chosen models to the full data set, re-estimating the parameters but not changing the model structure. Produce out-of-sample point forecasts and 80% prediction intervals for each model for two years past the end of the data provided. [4 marks]\n* Obtain up-to-date data from the [ABS website](https://www.abs.gov.au/statistics/industry/retail-and-wholesale-trade/retail-trade-australia) (Table 11). You may need to use the previous release of data, rather than the latest release. Compare your forecasts with the actual numbers. How well did you do? [5 marks]\n* A discussion of benefits and limitations of the models for your data. [3 marks]\n* Graphs should be properly labelled, including appropriate units of measurement. [3 marks]\n\n**Notes**\n\n* Your submission must include the Rmarkdown or Quarto file (.Rmd or .qmd), and should run without error.\n* There will be a 5 marks penalty if file does not run without error.\n* You may also include a knitted version of the document (HTML preferred), but it is not required.\n* When using the updated ABS data set, do not edit the downloaded file in any way.\n* There is no need to provide the updated ABS data with your submission.\n\n\n<br><br><hr><b>Due: 24 May 2024</b><br><a href=https://learning.monash.edu/mod/assign/view.php?id=2034167 class = 'badge badge-large badge-blue'><font size='+2'>&nbsp;&nbsp;<b>Submit</b>&nbsp;&nbsp;</font><br></a>\n",
"supporting": [],
"filters": [
"rmarkdown/pagebreak.lua"
],
"includes": {},
"engineDependencies": {},
"preserve": {},
"postProcess": true
}
}
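A minimal sketch of the core model-fitting steps with fable, not a complete project solution; `Turnover` is an assumed column name, and the log transformation stands in for whatever transformation your own analysis justifies:

```r
library(fpp3)

# Test set: the last 24 months of the data provided
train <- retail |> slice(1:(n() - 24))

# One ETS and one ARIMA model (automatic selection shown here;
# a real short-list would compare several specifications by AICc)
fit <- train |>
  model(
    ets   = ETS(log(Turnover)),
    arima = ARIMA(log(Turnover))
  )

# Residual diagnostics: ACF plot and Ljung-Box test
fit |> select(arima) |> gg_tsresiduals()
fit |> augment() |> features(.innov, ljung_box, lag = 24)

# Test-set accuracy and 80% prediction intervals
fit |> forecast(h = 24) |> accuracy(retail)
fit |> forecast(h = 24) |> hilo(level = 80)
```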
15 changes: 15 additions & 0 deletions _freeze/index/execute-results/html.json
@@ -0,0 +1,15 @@
{
"hash": "0350eee142980c8e32a56080d2456a4e",
"result": {
"engine": "knitr",
"markdown": "---\ntitle: \"ETC3550/5550 Applied forecasting\"\n---\n\n\n\n\n## Lecturer/Chief Examiner\n\n* [**Rob J Hyndman**](https://robjhyndman.com). Email: [[email protected]](mailto:[email protected])\n\n## Tutors\n\n* [**Mitchell O'Hara-Wild**](https://mitchelloharawild.com). Email: [Mitch.O'[email protected]](mailto:Mitch.O'[email protected])\n* Elena Sanina\n* Xiaoqian Wang\n* Yangzhuoran (Fin) Yang\n* Zhixiang (Elvis) Yang\n\n## Consultations\n\n* Rob\n* Mitch\n* Elena\n* Elvis\n* Fin\n* Xiaoqian\n\n## Weekly schedule\n\n* Pre-recorded lectures: 1 hour per week ([Slides](https://github.com/robjhyndman/fpp3_slides))\n* In-person lectures: 9am Fridays, [Central 1 Lecture Theatre, 25 Exhibition Walk](https://maps.app.goo.gl/RKdmJq2tBfw8ViNT9).\n* Tutorials: 1.5 hours in class per week\n\n\n\n\n|Date |Topic |Chapter |Assessments |\n|:------|:-----------------------------------|:--------------------------------|:--------------|\n|26 Feb |[Introduction to forecasting and R](./week1.html)|[1. Getting started](https://OTexts.com/fpp3/intro.html)| |\n|04 Mar |[Time series graphics](./week2.html)|[2. Time series graphics](https://OTexts.com/fpp3/graphics.html)|[Assignment 1](assignments/A1.qmd)|\n|11 Mar |[Time series decomposition](./week3.html)|[3. Time series decomposition](https://OTexts.com/fpp3/decomposition.html)| |\n|18 Mar |[The forecaster's toolbox](./week4.html)|[5. The forecaster's toolbox](https://OTexts.com/fpp3/toolbox.html)|[Assignment 2](assignments/A2.qmd)|\n|25 Mar |[Exponential smoothing](./week5.html)|[8. Exponential smoothing](https://OTexts.com/fpp3/expsmooth.html)| |\n|01 Apr |Mid-semester break | | |\n|08 Apr |[Exponential smoothing](./week6.html)|[8. Exponential smoothing](https://OTexts.com/fpp3/expsmooth.html)|[Assignment 3](assignments/A3.qmd)|\n|15 Apr |[ARIMA models](./week7.html) |[9. ARIMA models](https://OTexts.com/fpp3/arima.html)| |\n|22 Apr |[ARIMA models](./week8.html) |[9. ARIMA models](https://OTexts.com/fpp3/arima.html)| |\n|29 Apr |[ARIMA models](./week9.html) |[9. ARIMA models](https://OTexts.com/fpp3/arima.html)|[Assignment 4](assignments/A4.qmd)|\n|06 May |[Multiple regression and forecasting](./week10.html)|[7. Time series regression models](https://OTexts.com/fpp3/regression.html)| |\n|13 May |[Dynamic regression](./week11.html) |[10. Dynamic regression models](https://OTexts.com/fpp3/dynamic.html)| |\n|20 May |[Dynamic regression](./week12.html) |[10. Dynamic regression models](https://OTexts.com/fpp3/dynamic.html)|[Retail Project](assignments/Project.qmd)|\n\n\n## Assessments\n\nFinal exam 60%, project 20%, other assignments 20%\n\n## R package installation\n\nHere is the code to install the R packages we will be using in this unit.\n\n```r\ninstall.packages(c(\"tidyverse\",\"fpp3\", \"GGally\"), dependencies = TRUE)\n```\n",
"supporting": [],
"filters": [
"rmarkdown/pagebreak.lua"
],
"includes": {},
"engineDependencies": {},
"preserve": {},
"postProcess": true
}
}
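After installation, a single call attaches the packages used throughout the unit:

```r
library(fpp3)  # attaches tsibble, feasts, fable, and the core tidyverse packages
```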