Add a feature to save/load a learned model #23
Comments
I'm reopening this; I think there could be some simplification in the loading process. At the moment, loading a previously trained model needs the training texts (to build the tokenizer in the required state), the exact config used for training (which can be loaded from JSON), the checkpoint folder (from Lightning), and finally setting the trained flag:

```python
torchft_model = torchFastText.from_json('torchft_model')
torchft_model.build(np.asarray(X_train), lr=0.1)
torchft_model.load_from_checkpoint('lightning_logs/version_1/checkpoints/epoch=0-step=1705.ckpt')
torchft_model.trained = True
```

Did I miss something? Omitting the requirement to have the training texts would make the models more portable across environments, I think.
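One way to drop the training-texts requirement would be to persist the tokenizer's state alongside the config. A minimal stdlib-only sketch of the idea (the helper names, `vocab`, and `config` fields are illustrative, not the torchFastText API):

```python
import json
import tempfile
import os

# Hypothetical sketch: save everything needed to rebuild the tokenizer,
# so loading a trained model does not require the original training corpus.
def save_tokenizer_state(vocab, config, path):
    """Write the vocabulary and training config to a single JSON file."""
    with open(path, "w") as f:
        json.dump({"vocab": vocab, "config": config}, f)

def load_tokenizer_state(path):
    """Rebuild tokenizer state from disk, without the training texts."""
    with open(path) as f:
        state = json.load(f)
    return state["vocab"], state["config"]

# Round trip through a temporary file
path = os.path.join(tempfile.mkdtemp(), "tok_state.json")
save_tokenizer_state({"hello": 0, "world": 1}, {"lr": 0.1}, path)
vocab, config = load_tokenizer_state(path)
```

A model exported this way could be rebuilt on any machine from the JSON file plus the checkpoint alone.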
Another thing: loading a model trained with CUDA and using `predict` gives the following error: […]

Adding the line […] fixes it.
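The exact error and fix are missing above, but a common failure when loading a CUDA-trained checkpoint on a CPU-only machine is a `RuntimeError` from `torch.load` complaining that CUDA is unavailable. The usual remedy (an assumption here, since the actual line was not captured) is to pass `map_location`:

```python
import torch
import tempfile
import os

# Sketch: load a checkpoint saved on a GPU machine onto CPU.
# `map_location` remaps all storages to the CPU; without it, torch.load
# raises a RuntimeError when the checkpoint contains CUDA tensors and
# no GPU is available.
path = os.path.join(tempfile.mkdtemp(), "ckpt.pt")
torch.save({"weight": torch.zeros(2, 2)}, path)
loaded = torch.load(path, map_location=torch.device("cpu"))
```

Lightning's `load_from_checkpoint` accepts the same `map_location` argument and forwards it to `torch.load`.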
We need to be able to save a model after training so it can be reused later. Maybe by using: https://pytorch.org/tutorials/beginner/saving_loading_models.html
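The pattern recommended in that tutorial is to save only the model's `state_dict` and load it into a freshly constructed model. A minimal sketch (the `TinyModel` class is illustrative, not the project's actual model):

```python
import torch
import torch.nn as nn
import tempfile
import os

class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 2)

    def forward(self, x):
        return self.linear(x)

model = TinyModel()

# Save only the learned parameters, not the whole pickled object
path = os.path.join(tempfile.mkdtemp(), "model.pt")
torch.save(model.state_dict(), path)

# Rebuild the architecture, then restore the weights into it
restored = TinyModel()
restored.load_state_dict(torch.load(path))
restored.eval()  # switch to inference mode before predicting
```

Saving the `state_dict` rather than the full module keeps the file independent of the class's import path, which helps portability across environments.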