docs/ai.md
GPU acceleration for both Nvidia and AMD is included out of the box and usually does not require any extra setup.
### Ollama GUI
[Install Alpaca](https://flathub.org/apps/com.jeffser.Alpaca) to manage and chat with your LLM models from within a native desktop application. Alpaca supports Nvidia and AMD acceleration natively and *includes ollama*.
Since Alpaca doesn't expose an API, if you need applications other than Alpaca (for example an IDE) to interact with your ollama instance, you should consider installing ollama [in a Docker container](https://hub.docker.com/r/ollama/ollama).
To do so, first configure Docker to use the Nvidia drivers that come preinstalled with Bluefin.
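One common way to do this, assuming the NVIDIA Container Toolkit (`nvidia-ctk`) is available, is:

```bash
# Register the Nvidia runtime with Docker (NVIDIA Container Toolkit; assumed here)
sudo nvidia-ctk runtime configure --runtime=docker

# Restart Docker so it picks up the new runtime configuration
sudo systemctl restart docker
```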
Then, choose a folder in which to install the ollama container (for example `~/Containers/ollama`), and inside it create a new file named `docker-compose.yaml` with the following content:
```yaml
---
services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    restart: unless-stopped
    ports:
      - 11434:11434
    volumes:
      - ./ollama_v:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - capabilities:
                - gpu
```
Finally, open a terminal in the folder containing the file you just created and start the container with
```bash
docker compose up -d
```
and your ollama instance should be up and running at `http://127.0.0.1:11434`!
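To confirm the server is reachable, a couple of quick checks (the model name below is only an example) could look like:

```bash
# The root endpoint should answer with "Ollama is running"
curl http://127.0.0.1:11434

# Pull and chat with a model inside the container
# ("llama3.2" is just an example model name)
docker exec -it ollama ollama run llama3.2
```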
> **NOTE:** If you still want to use Alpaca as one of the ways of interacting with Ollama, open the application, go to *Preferences*, and toggle *Use the Remote Connection to Ollama*. In the dialog that pops up, enter the endpoint above (`http://127.0.0.1:11434`) as the *Server URL*, leave *Bearer Token* empty, and press *Connect*.
> This way you should be able to manage the models installed on your ollama container and chat with them from the Alpaca GUI.