[docs] Redesign #31757
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available for 30 days after the last update.
Like! 👍
```yaml
      title: Pipelines for webserver inference
    - local: add_new_pipeline
      title: How to add a pipeline to 🤗 Transformers?
    - title: LLMs
```
A type of model that's becoming increasingly common is the VLM: VLMs are the same as LLMs, but they also accept image inputs.
Would it make sense to call this section "LLMs and VLMs"?
For sure! Let's rename the section when we have some VLM-specific docs?
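For context, a rename like this would be a one-line change in the docs `_toctree.yml`. A hypothetical sketch of what the renamed section could look like (the `local` values and nested entries below are illustrative, not the actual toctree contents):

```yaml
# Hypothetical _toctree.yml fragment showing the section rename
# once VLM-specific docs exist. File names are placeholders.
- title: LLMs and VLMs
  sections:
    - local: llm_tutorial
      title: Text generation
    - local: vlm_tutorial
      title: Vision-language models
```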
Indeed easier to read ❤️. But there are a few places that need to be moved, if I understand correctly?
I've kicked off the redesign with the "Get Started" section. Feel free to review this section while I start on the next one (Base classes)! The main changes are:
Hi, I'm back with an update! I've wrapped up the technical guides in the Models section. The main focus is on how to load, customize, share, and contribute a model, basically a one-stop section for all your general model docs. The Load and Contribute docs have more significant changes. I'll circle back to the more conceptual docs later and also create some visual diagrams in Figma. Next up, I'll start working on the Preprocessors section. 🙂
Finished the first draft of the Tokenizers doc, and I'm pretty excited that it reduces "content creep" from 6 different docs to just 1! 😎
The first draft of the practical guides in the base classes section is now finished! Please feel free to check it out and leave any comments or feedback (not sure why the feature extractor and processor docs aren't showing in the preview at the moment). 😄 I'll start working on the inference section after I review the first draft.
* toctree * not-doctested.txt * collapse sections * feedback * update * rewrite get started sections * fixes * fix * loading models * fix * customize models * share * fix link * contribute part 1 * contribute pt 2 * fix toctree * tokenization pt 1 * Add new model (huggingface#32615) * v1 - working version * fix * fix * fix * fix * rename to correct name * fix title * fixup * rename files * fix * add copied from on tests * rename to `FalconMamba` everywhere and fix bugs * fix quantization + accelerate * fix copies * add `torch.compile` support * fix tests * fix tests and add slow tests * copies on config * merge the latest changes * fix tests * add few lines about instruct * Apply suggestions from code review Co-authored-by: Arthur <[email protected]> * fix * fix tests --------- Co-authored-by: Arthur <[email protected]> * "to be not" -> "not to be" (huggingface#32636) * "to be not" -> "not to be" * Update sam.md * Update trainer.py * Update modeling_utils.py * Update test_modeling_utils.py * Update test_modeling_utils.py * fix hfoption tag * tokenization pt. 2 * image processor * fix toctree * backbones * feature extractor * fix file name * processor * update not-doctested * update * make style * fix toctree * revision * make fixup * fix toctree * fix * make style * fix hfoption tag * pipeline * pipeline gradio * pipeline web server * add pipeline * fix toctree * not-doctested * prompting * llm optims * fix toctree * fixes * cache * text generation * fix * chat pipeline * chat stuff * xla * torch.compile * cpu inference * toctree * gpu inference * agents and tools * gguf/tiktoken * finetune * toctree * trainer * trainer pt 2 * optims * optimizers * accelerate * parallelism * fsdp * update * distributed cpu * hardware training * gpu training * gpu training 2 * peft * distrib debug * deepspeed 1 * deepspeed 2 * chat toctree * quant pt 1 * quant pt 2 * fix toctree * fix * fix * quant pt 3 * quant pt 4 * serialization * torchscript * scripts * tpu * review * model addition timeline * modular * more reviews * reviews * fix toctree * reviews reviews * continue reviews * more reviews * modular transformers * more review * zamba2 * fix * all frameworks * pytorch * supported model frameworks * flashattention * rm check_table * not-doctested.txt * rm check_support_list.py * feedback * updates/feedback * review * feedback * fix * update * feedback * updates * update --------- Co-authored-by: Younes Belkada <[email protected]> Co-authored-by: Arthur <[email protected]> Co-authored-by: Quentin Gallouédec <[email protected]>
Ok guys, let me, the self-taught developer, have it: roast my ass! Just provide the much-needed mentorship afterwards?! 🫣😬🥸 I luv u guys 🥹
The main goal of this PR is to redesign the Transformers docs to:
This PR proposes a potential structure for achieving 2 and 3. Once the structure is in place, each doc will be rewritten to achieve 1.
If you're interested in more details about the redesign's motivation, please read this blog post. If you want more details about 1, 2, and 3, please read this post and this one too.
All feedback, alternative structures, and comments are welcome! Thanks 🙂