NeMo Framework is NVIDIA's GPU-accelerated, end-to-end training framework for large language models (LLMs), multi-modal models, and speech models. It enables seamless scaling of training workloads (both pretraining and post-training) from a single GPU to thousand-node clusters, for both 🤗Hugging Face/PyTorch and Megatron models. This GitHub organization includes a suite of libraries and recipe collections to help users train models from end to end.
NeMo Framework is also a part of the NVIDIA NeMo software suite for managing the AI agent lifecycle.
Visit the individual repos to find out more 🔍, raise 🐛, contribute ✍️ and participate in discussion forums 🗣️!
- NeMo Megatron-Bridge (PyT native loop, Megatron-core backend training)
- NeMo AutoModel (PyT native loop, PyTorch backend training)
- NeMo RL (PyT native loop, with both PyTorch and Megatron-core backends)
- NeMo Curator
- NeMo Eval
- NeMo Export-Deploy
- NeMo Run
- Previous NeMo (with Lightning) (the previous NeMo 1.x/2.x repo with the Lightning training loop, which will be added to the GitHub Org and repurposed to focus on Speech)
- NeMo Guardrails (to be added to the GitHub Org)
- NeMo Speech (to be added to the GitHub Org)
- NeMo Skills (to be added to the GitHub Org)
- NeMo VFM (coming up - PyT native loop, both Megatron-core and PyTorch backends)
📢 Also take a look at our blogs for the latest exciting things that we are working on!
The NeMo GitHub Org and its repo collections were created to address the following problems:
- Need for composability: The Previous NeMo is monolithic and encompasses too many things, making it hard for users to find what they need. Container size is also an issue. Breaking the monolithic repo down into a series of function-focused repos makes code easier to discover.
- Need for customizability: The Previous NeMo uses PyTorch Lightning as the default trainer loop, which provides some out-of-the-box functionality but makes customization hard. NeMo Megatron-Bridge, NeMo AutoModel, and NeMo RL have adopted PyTorch-native custom loops to improve flexibility and ease of use for developers.
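To illustrate what "PyTorch-native custom loop" means in practice, here is a minimal sketch of such a loop in plain PyTorch. This is an illustrative toy (a linear model on synthetic data), not actual NeMo Megatron-Bridge, AutoModel, or RL code; the point is that with no trainer framework in the way, every step (forward, backward, optimizer update) is explicit and easy to customize.

```python
# Minimal sketch of a PyTorch-native custom training loop (illustrative only;
# not the actual NeMo code). A toy linear model and synthetic regression data
# stand in for a real LLM and dataset.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(8, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# Synthetic data: targets are the sum of the input features.
x = torch.randn(64, 8)
y = x.sum(dim=1, keepdim=True)

losses = []
for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()      # full control: insert grad clipping, logging, etc. here
    optimizer.step()
    losses.append(loss.item())

print(f"first loss {losses[0]:.3f}, last loss {losses[-1]:.3f}")
```

Because the loop is just Python, customizations such as gradient accumulation, custom checkpointing, or RL-style reward computation slot in as ordinary code rather than framework callbacks.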
To learn more about NVIDIA NeMo Framework and all of its component libraries, please refer to the NeMo Framework User Guide, which includes a quick start guide, tutorials, model-specific recipes, best-practice guides, and performance benchmarks.
Apache 2.0 licensed with third-party attributions documented in each repository.