# feat: Update docs/ai.md to include Ollama Quadlet instructions #93


## Running Ollama as a Service

Since Alpaca doesn't expose an API, if you need applications other than Alpaca to interact with your Ollama instance (for example, an IDE), you should consider installing Ollama in a [container](https://hub.docker.com/r/ollama/ollama).

### Quadlet (recommended)

See the [Bazzite Quadlet Docs](https://docs.bazzite.gg/Installing_and_Managing_Software/Quadlet/) for background on Quadlets. Create the following unit file at `~/.config/containers/systemd/ollama.container`:

```ini
[Unit]
Description=Ollama Service
After=network.target local-fs.target

[Container]
Image=ollama/ollama:latest
ContainerName=ollama
AutoUpdate=registry
PublishPort=11434:11434
Volume=%h/.ollama:/.ollama
RemapUsers=keep-id
RunInit=yes
NoNewPrivileges=no
PodmanArgs=--userns=keep-id
PodmanArgs=--group-add=keep-groups
PodmanArgs=--ulimit=host
PodmanArgs=--security-opt=label=disable
PodmanArgs=--cgroupns=host

# Nvidia
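# Keep only the device lines matching your GPU. The Nvidia CDI device below
# assumes a generated CDI spec (e.g.: sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml)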
AddDevice=nvidia.com/gpu=all

# AMD
AddDevice=/dev/dri
AddDevice=/dev/kfd

[Service]
Restart=always
TimeoutStartSec=60s
# Ensure there's a userland podman.sock
ExecStartPre=/bin/systemctl --user enable --now podman.socket
# Ensure that the dir exists
ExecStartPre=-mkdir -p %h/.ollama

[Install]
WantedBy=default.target
```
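
Before reloading systemd, you can optionally check the unit for errors by running Quadlet's generator in dry-run mode (the generator path below is where podman typically installs it; adjust if your system differs):

```sh
❯ /usr/libexec/podman/quadlet -dryrun -user
```

Then reload the user manager and start the service: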

```sh
❯ systemctl --user daemon-reload

# start the Ollama Quadlet service for the current session
❯ systemctl --user start ollama
❯ systemctl --user status ollama

# start the Ollama Quadlet service automatically after reboot
❯ systemctl --user enable ollama

# verify the connection by listing installed models
❯ ollama list

# download and run model https://ollama.com/search
❯ ollama run <model>
```
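
Other applications (such as an IDE) talk to this instance through Ollama's HTTP API on port 11434. As a quick sanity check that the API is reachable, assuming the default port mapping above and with `<model>` being any model you've already pulled:

```sh
❯ curl http://localhost:11434/api/tags
❯ curl http://localhost:11434/api/generate -d '{"model": "<model>", "prompt": "Hello"}'
```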

### Docker Compose

To use Docker Compose instead, first configure Docker to use the NVIDIA drivers (which come preinstalled with Bluefin):

```bash
sudo nvidia-ctk runtime configure --runtime=docker
```

…collapsed diff hunk: the rest of the setup and the `docker-compose.yml` contents, ending with the `- gpu` capability line…
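
For orientation, a minimal `docker-compose.yml` along these lines (service and volume names are illustrative, not necessarily those in the collapsed hunk) reserves the GPU like so:

```yaml
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"          # same port the CLI and API clients use
    volumes:
      - ollama:/root/.ollama   # persist downloaded models
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities:
                - gpu
volumes:
  ollama:
```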

Finally, open a terminal in the folder containing the file you just created and start the container with:

```bash
…
```