Run GPU supported LLM inside container with devcontainer #143

Merged
merged 7 commits into from
Feb 1, 2025
update intro
Signed-off-by: Kiran1689 <kirannaragund197@gmail.com>
Kiran1689 committed Jan 31, 2025
commit 2b3244aceb9eadb3fb60f5ae8b32024a49ee507d
@@ -10,22 +10,20 @@ tags: ["huggingface", "daytona", "llm"]

# Introduction

Running large language models ([LLMs](../definitions/20241219_definition_llm.md)) inside a [containerized](../definitions/20240819_definition_containerization.md) environment provides flexibility, portability, and GPU acceleration, making it easier to manage dependencies and optimize performance.
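As a minimal illustration of what GPU-enabled containerization looks like (a sketch, not the exact configuration built later in this guide; the image name is just an example), a dev container can request GPU access through `hostRequirements` and Docker's `--gpus` flag in `devcontainer.json`:

```json
{
  // Example only: any CUDA-enabled base image works here
  "name": "llm-gpu",
  "image": "nvcr.io/nvidia/pytorch:24.01-py3",
  // Ask the host (or cloud workspace) for a GPU
  "hostRequirements": { "gpu": true },
  // Pass all host GPUs through to the container
  "runArgs": ["--gpus", "all"]
}
```

`hostRequirements.gpu` lets tools like Daytona or Codespaces pick GPU-capable machines, while `runArgs` handles the actual device passthrough at `docker run` time.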

This guide walks you through setting up and running the Mamba-Codestral-7B-v0.1 model,
hosted on [Hugging Face](https://huggingface.co/mistralai/Mamba-Codestral-7B-v0.1), inside a container using a [dev container](../definitions/20240819_definition_development%20container.md).
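As a preview of what running the model looks like once the container is up, here is a hedged sketch assuming the `transformers`, `torch`, and `accelerate` packages are installed and a CUDA-capable GPU is available (the guide's exact steps come later):

```python
# Sketch: load Mamba-Codestral-7B from Hugging Face and generate a completion.
# Assumes `pip install transformers torch accelerate` and a CUDA-capable GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mamba-Codestral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce GPU memory use
    device_map="auto",          # let accelerate place weights on the GPU
)

# Ask the code model to continue a function definition
inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Downloading the ~7B-parameter weights and running generation requires significant disk space and GPU memory, which is exactly why a reproducible, GPU-passthrough container environment is useful here.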

## TL;DR

- **Required Tools & Knowledge**: Tools and skills needed for hands-on learning.
- **Overview of Mamba-Codestral-7B**: Overview and key features of the Mamba-Codestral-7B model.
- **Preparation Steps**: What needs to be set up before starting the guide.
- **Setting Up the Dev Container & Project Repository**: Detailed steps for preparing the development environment.
- **Running LLM in Daytona**: Running the model within a Daytona workspace using the dev container.
- **Confirmation**: Confirming everything works as expected.
- **Conclusion**: Key takeaways.

## Prerequisites