
Commit 51768e9

Merge pull request #82 from LeonardoMantovani/patch-1
Added ollama through docker in AI docs
2 parents d298339 + 40b12a5

1 file changed (+36 −6 lines)

docs/ai.md (+36 −6)
@@ -7,15 +7,45 @@ slug: /ai

GPU Acceleration for both Nvidia and AMD is included out of the box and usually does not require any extra setup.

-### Desktop integration
+### Ollama GUI

-[Install alpaca](https://flathub.org/apps/com.jeffser.Alpaca) to use a native desktop application. If you prefer an all-GUI solution just use Alpaca and manage your models from within the application. Alpaca supports Nvidia and AMD acceleration natively and _includes ollama_.
+[Install Alpaca](https://flathub.org/apps/com.jeffser.Alpaca) to manage and chat with your LLMs from within a native desktop application. Alpaca supports Nvidia and AMD acceleration natively and *includes ollama*.

![image](https://github.com/user-attachments/assets/9fd38164-e2a9-4da1-9bcd-29e0e7add071)
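
A minimal install sketch, assuming Flathub is enabled (it ships enabled on Bluefin); the application ID is taken from the Flathub link above:

```bash
# Install Alpaca from Flathub (application ID from the link above)
flatpak install flathub com.jeffser.Alpaca

# Launch it from a terminal, if you prefer
flatpak run com.jeffser.Alpaca
```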

-### Ollama CLI
+### Ollama API

-If you prefer a CLI or to interact with the ollama server from your scripts, then you can install ollama with either
+Since Alpaca doesn't expose an API, if you need applications other than Alpaca (for example an IDE) to interact with your ollama instance, you should consider installing ollama [in a docker container](https://hub.docker.com/r/ollama/ollama).

-- `brew install ollama` (recommended) with [Homebrew](https://formulae.brew.sh/formula/ollama)
-- The installation script from the [ollama website](https://ollama.com), but will require some manual tweaks in their docs to get it in your path.

+To do so, first configure docker to use the nvidia drivers (which come preinstalled with Bluefin):
+
+```bash
+sudo nvidia-ctk runtime configure --runtime=docker
+sudo systemctl restart docker
+```
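
A quick sanity check that the runtime switch took effect; the CUDA image tag below is only an illustrative choice:

```bash
# After the restart, the runtimes reported by docker should include "nvidia"
docker info | grep -i runtimes

# Optionally confirm GPU access from inside a throwaway container
# (the CUDA image tag is only an example)
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```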

+Then, choose a folder in which to install the ollama container (for example `~/Containers/ollama`) and inside it create a new file named `docker-compose.yaml` with the following content:
+
+```yaml
+---
+services:
+  ollama:
+    image: ollama/ollama
+    container_name: ollama
+    restart: unless-stopped
+    ports:
+      - 11434:11434
+    volumes:
+      - ./ollama_v:/root/.ollama
+    deploy:
+      resources:
+        reservations:
+          devices:
+            - capabilities:
+                - gpu
+```
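
Optionally, the file can be validated from that same folder before starting anything; this extra step is a convenience, not something the guide requires:

```bash
# Print the fully-resolved compose configuration; errors here usually point to indentation mistakes
docker compose config
```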

+Finally, open a terminal in the folder containing the file just created and start the container with
+
+```bash
+docker compose up -d
+```
+
+and your ollama instance should be up and running at `http://127.0.0.1:11434`!
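
A short usage sketch from there; the model name is only an example, and the container name matches the compose file above:

```bash
# The bare endpoint answers with "Ollama is running" when the container is healthy
curl http://127.0.0.1:11434

# Pull and chat with a model inside the container (model name is just an example)
docker exec -it ollama ollama pull llama3.2
docker exec -it ollama ollama run llama3.2

# List the locally available models through the HTTP API
curl http://127.0.0.1:11434/api/tags
```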

+> **NOTE:** If you still want to use Alpaca as one of the ways of interacting with Ollama, open the application, go to *Preferences*, toggle the option *Use the Remote Connection to Ollama*, and in the dialog that pops up specify the endpoint above (`http://127.0.0.1:11434`) as the *Server URL* (leave *Bearer Token* empty), then press *Connect*.
+> This way you should be able to manage the models installed in your ollama container and chat with them from the Alpaca GUI.
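
For the non-GUI clients mentioned earlier (an IDE or scripts), the same endpoint can be called directly. A minimal sketch against ollama's generate API, assuming a model such as the example above has already been pulled:

```bash
# Simple one-shot completion against the local ollama API (model name is an example)
curl http://127.0.0.1:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```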
