Run a GPU-supported LLM inside a container with devcontainer #4
Comments
/bounty $300

💎 $300 bounty • Daytona

Steps to solve:
1. If no one is assigned to the issue, feel free to tackle it without confirmation from us, after registering your attempt.
2. In the event that multiple PRs are made by different people, we will generally accept the one with the cleanest code.
3. Please respect others by working only on PRs that you are allowed to submit attempts to. E.g. if you have reached the limit of active attempts, please wait until you are able to submit a new PR.
4. If you cannot submit an attempt, you will not receive a payout.

Thank you for contributing to daytonaio/content!

/attempt #4

Hi @nkkko
/attempt #4

💡 @Kiran1689 submitted a pull request that claims the bounty. You can visit your bounty board to reward.
Content Type
Guide
Article Description
You need to get https://huggingface.co/mistralai/Mamba-Codestral-7B-v0.1 running inside the devcontainer in Daytona and write about it.
Include several short sample Python scripts.
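As a starting point for the guide, the devcontainer could expose the host GPU with a configuration along these lines. This is a minimal sketch only: the base image, run arguments, and package list are illustrative choices, and it assumes Docker with the NVIDIA Container Toolkit is available on the host.

```json
{
  "name": "mamba-codestral",
  "image": "mcr.microsoft.com/devcontainers/python:3.11",
  // Hypothetical GPU setup: pass the host GPU through to the container.
  "hostRequirements": { "gpu": "optional" },
  "runArgs": ["--gpus", "all"],
  // Illustrative dependency install; the article should pin exact versions.
  "postCreateCommand": "pip install torch transformers"
}
```

The guide can then verify GPU visibility inside the workspace (e.g. with `nvidia-smi`) before pulling the model.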
Target Audience
Developers interested in integrating LLMs
References/Resources
No response
Examples
Examples of simple Python scripts
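One such sample script might load the model with the Hugging Face `transformers` library and generate a completion. This is a hedged sketch, not a verified recipe: the prompt template and the `build_prompt` helper are hypothetical, and it assumes `torch`, `transformers`, and the model's own dependencies are installed in the devcontainer with a CUDA GPU available.

```python
def build_prompt(instruction: str, code_context: str = "") -> str:
    """Format a plain-text prompt for the code model (hypothetical template)."""
    if code_context:
        return f"{code_context}\n\n# Task: {instruction}\n"
    return f"# Task: {instruction}\n"


def generate(instruction: str, max_new_tokens: int = 128) -> str:
    # Heavy imports are kept inside the function so the prompt helper above
    # stays importable without the GPU dependencies.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "mistralai/Mamba-Codestral-7B-v0.1"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("write a function that reverses a string"))
```

The final article would expand this into a few focused variants (e.g. streaming output, code completion with surrounding context passed via `code_context`).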
Special Instructions
No response