Hi Team,
Thanks a lot for this.
A few questions:
Is the speedup GPU-only, or is CPU inference also boosted?
Could an inference example with T5/BART summarization from Hugging Face be provided in a Colab notebook or similar? That would make it easier to adopt.
Sorry if this is a bit of a stretch to request. Thanks for reading.
Here is a Colab notebook for running inference with BART:
https://github.com/microsoft/fastseq/blob/main/examples/EL-attention/README.md#example-usage
Hi, I tried to install using pip, but I got this error:

    ModuleNotFoundError                       Traceback (most recent call last)
    Cell In[2], line 1
    ----> 1 import fastseq

    File /opt/conda/lib/python3.10/site-packages/fastseq/__init__.py:9
          6 from fastseq.logging import set_default_log_level
          7 set_default_log_level()
    ----> 9 import fastseq.models  # pylint: disable=wrong-import-position
         10 import fastseq.optimizer  # pylint: disable=wrong-import-position

    ModuleNotFoundError: No module named 'fastseq.models'
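An error like this usually means the installed wheel did not ship the `fastseq.models` subpackage. One way to check, before importing anything, is to probe for the submodule with the standard library's `importlib.util.find_spec`. This is a hedged diagnostic sketch: since `fastseq` may not be installed in your environment, the demo calls below use a stdlib package (`json`) as a stand-in; substitute `"fastseq"` and `"models"` to test the actual report.

```python
import importlib.util

def has_submodule(package: str, submodule: str) -> bool:
    """Return True if `package.submodule` can be located on disk,
    without executing the package's import-time side effects for
    the submodule itself."""
    # find_spec returns None when the submodule cannot be found;
    # it raises ModuleNotFoundError only if the parent package is missing.
    return importlib.util.find_spec(f"{package}.{submodule}") is not None

# Stand-in demo with a stdlib package (swap in "fastseq", "models" locally):
print(has_submodule("json", "decoder"))  # a submodule that exists -> True
print(has_submodule("json", "missing"))  # a submodule that does not -> False
```

If the check comes back False for the installed `fastseq`, the pip package is incomplete for your setup and installing from the GitHub source tree may be the workaround.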