
Reproducing Results #12

Open
camerons1967 opened this issue Nov 6, 2022 · 3 comments

Comments

@camerons1967

Hello,

I downloaded the repository to my computer and tried to reproduce the results that were published in the paper for the traffic dataset with a prediction window of length 96. I ran the code with the following args:

--hyperopt_max_evals 10
--experiment_id run_1

But the results were 0.504 for the MSE and 0.311 for the MAE, which is significantly worse than I was expecting. Is there anything else that needs to be done before running the code and training the model in order to reproduce the results?

Thanks in advance!

@kdgutier
Collaborator

kdgutier commented Nov 7, 2022

Hi @camerons1967,

We recommend trying this in a Google Colab; it is a lot faster. Here is a link to the ETT replication.

Hyperopt's Bayesian optimization can still get stuck in bad local optima, so you may need to increase the number of hyperparameter exploration steps.
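For example, re-running with a larger search budget would look like this (the value 30 and the experiment id are just illustrations; only the two flags shown above come from this thread):

--hyperopt_max_evals 30
--experiment_id run_2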

@omerlux

omerlux commented Nov 27, 2022

Hi,
I've tried to work with the notebook and the code. The notebook reproduces the results, but the code doesn't.
What is the reason for this issue? Is there something missing in the repository?

Thanks!

@kdgutier
Collaborator

Hi @omerlux,

We conducted all of our experiments on GPUs (a lot faster). One explanation is that the initialization of the network's parameters and floating-point rounding differ between GPUs and CPUs.
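For intuition, here is a minimal PyTorch sketch (not taken from this repository, purely an illustration) of how CPU and CUDA random number generators diverge even with identical seeds:

```python
import torch

# Seed every device's generator with the same value.
torch.manual_seed(0)
w_cpu = torch.randn(4, 4)                      # drawn from the CPU generator

if torch.cuda.is_available():
    torch.manual_seed(0)
    w_gpu = torch.randn(4, 4, device="cuda")   # drawn from the CUDA generator

    # Even with the same seed, the CPU and CUDA streams produce different
    # numbers, so the network starts training from a different point on each
    # device (and float32 reductions can also round differently).
    print(torch.allclose(w_cpu, w_gpu.cpu()))  # typically False
```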

Nevertheless, N-HiTS should be able to achieve similar results on the CPU if you let the hyperparameter optimization algorithm run longer to escape bad local optima.
