NVIDIA NeMo Framework

NeMo Framework is NVIDIA's GPU-accelerated, end-to-end training framework for large language models (LLMs), multi-modal models, and speech models. It enables seamless scaling of training (both pretraining and post-training) workloads from a single GPU to thousand-node clusters, for both 🤗Hugging Face/PyTorch and Megatron models. This GitHub organization includes a suite of libraries and recipe collections to help users train models from end to end.

NeMo Framework is also a part of the NVIDIA NeMo software suite for managing the AI agent lifecycle.


Figure 1. NeMo Framework Repo Overview

Visit the individual repos to find out more 🔍, raise bugs 🐛, contribute ✍️, and participate in discussion forums 🗣️!

📢 Also take a look at our blogs for the latest exciting things that we are working on!

Background and motivation

The NeMo GitHub org and its repo collections were created to address the following problems:

  • Need for composability: The previous NeMo repo was monolithic and encompassed too many things, making it hard for users to find what they needed; container size was also an issue. Breaking the monolithic repo into a series of functionally focused repos facilitates code discovery.
  • Need for customizability: The previous NeMo used PyTorch Lightning as the default trainer loop, which provides some out-of-the-box functionality but makes customization hard. NeMo Megatron-Bridge, NeMo AutoModel, and NeMo RL have adopted PyTorch-native custom training loops to improve flexibility and ease of use for developers.
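The custom-loop point above can be illustrated with a toy, framework-free sketch (plain Python rather than actual PyTorch; the `train` helper, the data, and the learning rate are invented for this example). The idea is that the developer writes the forward/backward/update steps directly, instead of handing control to a Trainer abstraction:

```python
# Toy "custom training loop": the developer owns the step order
# (forward pass, loss gradient, parameter update) rather than
# delegating it to a framework-managed trainer. Pure-Python stand-in
# for the PyTorch-native loops mentioned above; all names and numbers
# here are illustrative, not from any NeMo repo.

def train(steps=100, lr=0.05):
    data = [(x, 2.0 * x) for x in range(1, 5)]  # learn y = 2x
    w = 0.0                                     # single learnable weight
    for _ in range(steps):
        for x, y in data:
            pred = w * x                # forward pass
            grad = 2 * (pred - y) * x  # d/dw of squared error
            w -= lr * grad              # optimizer step (plain SGD)
    return w

print(round(train(), 3))  # prints 2.0 (converges to the true slope)
```

Because every step is ordinary code, inserting custom logging, gradient manipulation, or an unusual update schedule is just an edit to the loop body, which is the flexibility argument behind the PyTorch-native approach.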

Documentation

To learn more about NVIDIA NeMo Framework and all of its component libraries, please refer to the NeMo Framework User Guide, which includes a quick start guide, tutorials, model-specific recipes, best-practice guides, and performance benchmarks.

License

Apache 2.0 licensed with third-party attributions documented in each repository.

Pinned repositories

  1. Curator (Public): Scalable data preprocessing and curation toolkit for LLMs. Python, 1.1k stars, 166 forks.

  2. RL (Public): Scalable toolkit for efficient model reinforcement. Python, 822 stars, 118 forks.

  3. Automodel (Public): Fine-tune any Hugging Face LLM or VLM on day 0 using PyTorch-native features for GPU-accelerated distributed training with superior performance and memory efficiency. Python, 54 stars, 7 forks.

  4. Megatron-Bridge (Public): Training library for Megatron-based models. Python, 40 stars, 11 forks.

Repositories

The organization hosts 9 repositories in total.
