Question on test metrics evaluation #1446
I want to evaluate two different models that I have tried in NeuralProphet. The first is a global-local model set with these hyperparameters:
trend_global_local="local",
season_global_local="local",
The second is a simple model without these parameters, with the regular parameters set according to the tutorial.
While evaluating the test metrics for the global-local model, I see this log message:
WARNING - (NP.forecaster.test) - Note that the metrics are displayed in normalized scale because of local normalization.
I want to compare the two models on the basis of their test metrics, so both models' test metrics need to be produced on the same scale. I would like either to enable normalization of the test metrics for model 2 or to disable it for model 1. How can I achieve this?

Replies: 1 comment

The metrics should be on the same scale since NeuralProphet normalizes the data by default. You could also use error metrics such as MASE or MSSE, which are not sensitive to scaling.
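To illustrate why MASE sidesteps the scaling issue: it divides the test-set MAE by the in-sample MAE of a naive seasonal forecast, so multiplying the whole series by a constant leaves the metric unchanged. A minimal pure-Python sketch (the `mase` helper and its seasonal period `m` are illustrative, not part of the NeuralProphet API):

```python
def mase(y_train, y_true, y_pred, m=1):
    """Mean Absolute Scaled Error (Hyndman & Koehler).

    Divides the test-set MAE by the in-sample MAE of a naive
    seasonal forecast (predicting the value from m steps back),
    so the result is insensitive to the scale of the series.
    """
    # Test-set mean absolute error.
    mae_test = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
    # In-sample MAE of the naive lag-m forecast on the training data.
    naive_mae = sum(
        abs(y_train[i] - y_train[i - m]) for i in range(m, len(y_train))
    ) / (len(y_train) - m)
    return mae_test / naive_mae

# Example: the training series grows by 1 per step, so the naive
# lag-1 MAE is 1.0 and MASE equals the raw test MAE.
print(mase([1, 2, 3, 4, 5], y_true=[6, 7], y_pred=[5.5, 7.5]))  # → 0.5
```

Because the numerator and denominator are in the same units, computing MASE on each model's denormalized predictions gives figures you can compare directly, regardless of whether `test()` reported its metrics in normalized scale.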