[BUG] TFT: Target Included in Historic Encoder #1820
Comments
FYI @jdb78, any insights?
Hi @fkiraly, the conclusion I've come to through extensive testing is that even though you are forced to have non-NaN values for the target variable when instantiating the dataloader for inference, if you don't assign the target explicitly under `time_varying_unknown_reals`, its past values are not actually used by the model. It would be grand if I could have an absolute confirmation of this, though.
The target variable is generally not used in inference but only for training and evaluation. If you add it separately to the unknown reals, it will use historic data but make a cutoff for inference. There is one exception, and that is normalization. So make sure to use a prefitted target normalizer or the `EncoderNormalizer`. That happens automatically if you construct your test dataset from the training dataset with `.from_dataset`.
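The point about a prefitted normalizer can be illustrated without the library. The sketch below is plain Python, not pytorch-forecasting's actual implementation: scaling parameters are estimated once on the training targets and reused at inference, instead of being re-estimated on the placeholder targets the inference dataloader is forced to contain.

```python
# Plain-Python illustration of a "prefitted" target normalizer
# (standardization); names and values are made up for the example.

def fit_normalizer(values):
    """Return (mean, std) estimated from training targets."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return mean, max(var ** 0.5, 1e-8)

def transform(values, params):
    """Scale values with previously fitted parameters."""
    mean, std = params
    return [(v - mean) / std for v in values]

train_target = [10.0, 12.0, 14.0, 16.0]
params = fit_normalizer(train_target)  # fitted once, on training data only

# At inference the target column may be dummy values (e.g. zeros); refitting
# the normalizer on them would produce nonsense scaling, so reuse `params`.
inference_dummy = [0.0, 0.0, 0.0]
scaled = transform(inference_dummy, params)
```

This is the failure mode `.from_dataset` guards against: building the inference dataset from scratch would refit the normalizer on the new (possibly dummy) targets.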
Hi @jdb78, thanks for the confirmation. On your comment: if I use target normalisation, will this effectively be 'baked in' to the model, or would I need to supply the normalisation weights using historic data? For instance, if I'm doing inference with a model trained using `GroupNormalizer`, is the normalisation associated with the specific group and baked in, or will it be determined from the new input data, assuming all group ids are known? As a side note, funnily enough I've noticed for my use case that not using any normalisation actually results in better performance. Using these deep learning packages can be quite opaque, as not all functionality is clearly defined in the docs.
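A rough sketch of how a group-wise target normalizer behaves may clarify the "baked in" question. This is plain Python, not `GroupNormalizer` itself, and assumes the standard fit/transform pattern: parameters are fitted per group id at training time and simply looked up at inference, so for known group ids they are effectively baked in.

```python
# Sketch of group-wise normalization: per-group (mean, std) fitted once,
# then looked up by group id at inference. Group names are hypothetical.

def fit_group_normalizer(rows):
    """rows: iterable of (group_id, target). Returns {group_id: (mean, std)}."""
    by_group = {}
    for gid, y in rows:
        by_group.setdefault(gid, []).append(y)
    params = {}
    for gid, ys in by_group.items():
        mean = sum(ys) / len(ys)
        var = sum((y - mean) ** 2 for y in ys) / len(ys)
        params[gid] = (mean, max(var ** 0.5, 1e-8))
    return params

train_rows = [("store_a", 100.0), ("store_a", 120.0),
              ("store_b", 10.0), ("store_b", 14.0)]
params = fit_group_normalizer(train_rows)

# Inference: scaling comes from the stored lookup, not from the new data.
gid, y_new = "store_b", 11.0
mean, std = params[gid]
y_scaled = (y_new - mean) / std
```

Under this pattern, new data for a known group is scaled with the training-time parameters; an unseen group id would have no entry and would need separate handling.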
I'm busy implementing a TFT for a use case where the target variable will be unavailable at inference time. At first I assumed that if the target is excluded from `time_varying_unknown_reals`, the past encoder doesn't consider it at all, but upon further reading it seems that the target variable is handled differently. Is there any clean way to configure the TFT or `TimeSeriesDataSet` to explicitly avoid any notion of the past targets? In my use case specifically, we only want to predict one step into the future.
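For concreteness, a minimal sketch of the dataset configuration in question, with made-up column names, lengths, and data. It shows the target left out of `time_varying_unknown_reals` and a single-step prediction horizon; whether this fully keeps past targets out of the encoder is exactly what this issue asks.

```python
import pandas as pd
from pytorch_forecasting import TimeSeriesDataSet

# Toy single-series frame; all names and values are illustrative only.
df = pd.DataFrame({
    "time_idx": list(range(30)),
    "series_id": ["a"] * 30,
    "target": [float(i) for i in range(30)],
})

dataset = TimeSeriesDataSet(
    df,
    time_idx="time_idx",
    target="target",
    group_ids=["series_id"],
    max_encoder_length=12,
    max_prediction_length=1,        # one step ahead, as in the use case
    time_varying_known_reals=["time_idx"],
    time_varying_unknown_reals=[],  # target deliberately excluded
)
```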
I also found this thread, but I can't deduce anything conclusive from it.