
Any end-end inference example with Google Colab & HuggingFace #92

Open

ramsrigouthamg opened this issue Jun 10, 2021 · 2 comments

@ramsrigouthamg

Hi Team,

Thanks a lot for this.

A few questions:

  1. Is the speedup only for GPU, or is CPU inference boosted as well?

  2. Could an inference example with T5/BART summarization from Hugging Face be provided as a Colab notebook or similar? That would make it easier to adopt.

Sorry if this is a bit of a stretch to request. I appreciate you reading this.

@yuyan2do
Member

Here is a Colab notebook that does inference with BART:

https://github.com/microsoft/fastseq/blob/main/examples/EL-attention/README.md#example-usage

@teakay

teakay commented Jun 22, 2023

Hi, I tried to install using pip, but I got this error:

```
ModuleNotFoundError                       Traceback (most recent call last)
Cell In[2], line 1
----> 1 import fastseq

File /opt/conda/lib/python3.10/site-packages/fastseq/__init__.py:9
      6 from fastseq.logging import set_default_log_level
      7 set_default_log_level()
----> 9 import fastseq.models  # pylint: disable=wrong-import-position
     10 import fastseq.optimizer  # pylint: disable=wrong-import-position

ModuleNotFoundError: No module named 'fastseq.models'
```
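One way to narrow this down is to check whether the pip-installed package actually ships the subpackage the import is failing on. This is a generic diagnostic sketch (the `has_submodule` helper is my own, not part of fastseq or its docs), using only the standard library:

```python
import importlib.util

def has_submodule(package: str, submodule: str) -> bool:
    """Return True if `package.submodule` can be located without importing it."""
    try:
        # find_spec locates the module on disk without executing its code,
        # so this avoids triggering the failing import chain itself.
        return importlib.util.find_spec(f"{package}.{submodule}") is not None
    except ModuleNotFoundError:
        # Raised when the parent package itself cannot be imported.
        return False

# Check which fastseq subpackages the pip install actually provided.
for sub in ("models", "optimizer", "logging"):
    print(f"fastseq.{sub}:", has_submodule("fastseq", sub))
```

If `fastseq.models` shows up as missing while `fastseq.logging` is present, the wheel likely installed only partially; reinstalling (`pip install --force-reinstall fastseq`) or installing from source may help, though I haven't verified which fix applies here.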
