File changed: docs/content/docs/getting-started/quickstart.md (+16, −12)
| Model | Category | Docker command |
| --- | --- | --- |
|[phi-2](https://huggingface.co/microsoft/phi-2)|[LLM]({{%relref "docs/features/text-generation" %}}) |```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg-core phi-2```|
|🌋 [llava](https://github.com/SkunkworksAI/BakLLaVA)|[Multimodal LLM]({{%relref "docs/features/gpt-vision" %}}) |```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg-core llava```|
|[mistral-openorca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)|[LLM]({{%relref "docs/features/text-generation" %}}) |```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg-core mistral-openorca```|
|[bert-cpp](https://github.com/skeskinen/bert.cpp)|[Embeddings]({{%relref "docs/features/embeddings" %}}) |```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg-core bert-cpp```|
|[all-minilm-l6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)|[Embeddings]({{%relref "docs/features/embeddings" %}}) |```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg all-minilm-l6-v2```|
| whisper-base |[Audio to Text]({{%relref "docs/features/audio-to-text" %}}) |```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg-core whisper-base```|
| rhasspy-voice-en-us-amy |[Text to Audio]({{%relref "docs/features/text-to-audio" %}}) |```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg-core rhasspy-voice-en-us-amy```|
|🐸 [coqui](https://github.com/coqui-ai/TTS)|[Text to Audio]({{%relref "docs/features/text-to-audio" %}}) |```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg coqui```|
|🐶 [bark](https://github.com/suno-ai/bark)|[Text to Audio]({{%relref "docs/features/text-to-audio" %}}) |```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg bark```|
|🔊 [vall-e-x](https://github.com/Plachtaa/VALL-E-X)|[Text to Audio]({{%relref "docs/features/text-to-audio" %}}) |```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg vall-e-x```|
| mixtral-instruct Mixtral-8x7B-Instruct-v0.1 |[LLM]({{%relref "docs/features/text-generation" %}}) |```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg-core mixtral-instruct```|
|[tinyllama-chat](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF) ([original model](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.3))|[LLM]({{%relref "docs/features/text-generation" %}}) |```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg-core tinyllama-chat```|
|[dolphin-2.5-mixtral-8x7b](https://huggingface.co/TheBloke/dolphin-2.5-mixtral-8x7b-GGUF)|[LLM]({{%relref "docs/features/text-generation" %}}) |```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg-core dolphin-2.5-mixtral-8x7b```|
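Once one of the containers above is running, LocalAI exposes an OpenAI-compatible API on the mapped port (8080 in these commands). As a minimal sketch, assuming the default base URL `http://localhost:8080` and the `phi-2` model name from the table, a chat-completion request can be constructed with the standard library alone:

```python
import json
import urllib.request

# Assumption: LocalAI serves an OpenAI-compatible API on the port
# mapped by the `docker run -p 8080:8080 ...` commands above.
BASE_URL = "http://localhost:8080"

def chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for the /v1/chat/completions endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = chat_request("phi-2", "How are you?")
print(req.full_url)                    # http://localhost:8080/v1/chat/completions
print(json.loads(req.data)["model"])   # phi-2
```

Sending the request with `urllib.request.urlopen(req)` (or an equivalent `curl` call) returns a standard chat-completion JSON response; the same shape works for any LLM row in the table, swapping only the model name.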

| Model | Category | Docker command |
| --- | --- | --- |
|[phi-2](https://huggingface.co/microsoft/phi-2)|[LLM]({{%relref "docs/features/text-generation" %}}) |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11-core phi-2```|
|🌋 [llava](https://github.com/SkunkworksAI/BakLLaVA)|[Multimodal LLM]({{%relref "docs/features/gpt-vision" %}}) |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11-core llava```|
|[mistral-openorca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)|[LLM]({{%relref "docs/features/text-generation" %}}) |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11-core mistral-openorca```|
|[bert-cpp](https://github.com/skeskinen/bert.cpp)|[Embeddings]({{%relref "docs/features/embeddings" %}}) |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11-core bert-cpp```|
|[all-minilm-l6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)|[Embeddings]({{%relref "docs/features/embeddings" %}}) |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11 all-minilm-l6-v2```|
| whisper-base |[Audio to Text]({{%relref "docs/features/audio-to-text" %}}) |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11-core whisper-base```|
| rhasspy-voice-en-us-amy |[Text to Audio]({{%relref "docs/features/text-to-audio" %}}) |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11-core rhasspy-voice-en-us-amy```|
|🐸 [coqui](https://github.com/coqui-ai/TTS)|[Text to Audio]({{%relref "docs/features/text-to-audio" %}}) |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11 coqui```|
|🐶 [bark](https://github.com/suno-ai/bark)|[Text to Audio]({{%relref "docs/features/text-to-audio" %}}) |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11 bark```|
|🔊 [vall-e-x](https://github.com/Plachtaa/VALL-E-X)|[Text to Audio]({{%relref "docs/features/text-to-audio" %}}) |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11 vall-e-x```|
| mixtral-instruct Mixtral-8x7B-Instruct-v0.1 |[LLM]({{%relref "docs/features/text-generation" %}}) |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11-core mixtral-instruct```|
|[tinyllama-chat](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF) ([original model](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.3))|[LLM]({{%relref "docs/features/text-generation" %}}) |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11-core tinyllama-chat```|
|[dolphin-2.5-mixtral-8x7b](https://huggingface.co/TheBloke/dolphin-2.5-mixtral-8x7b-GGUF)|[LLM]({{%relref "docs/features/text-generation" %}}) |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11-core dolphin-2.5-mixtral-8x7b```|
| 🐍 [mamba](https://github.com/state-spaces/mamba)|[LLM]({{%relref "docs/features/text-generation" %}}) |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11 mamba-chat```|
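
The embedding models in the table (`bert-cpp`, `all-minilm-l6-v2`) are queried through the same OpenAI-style surface, via the `/v1/embeddings` endpoint. A minimal sketch of the request body, assuming the model name exactly as listed above:

```python
import json

def embeddings_payload(model: str, texts: list[str]) -> str:
    """Serialize an OpenAI-style /v1/embeddings request body."""
    return json.dumps({"model": model, "input": texts})

# Assumption: the server started from the table row is reachable on
# localhost:8080; POST this body to /v1/embeddings with
# Content-Type: application/json.
body = embeddings_payload("all-minilm-l6-v2", ["LocalAI quickstart"])
print(body)
```

The response follows the usual embeddings schema, with the vectors under `data[i].embedding`.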
{{% /tab %}}

| Model | Category | Docker command |
| --- | --- | --- |
|[phi-2](https://huggingface.co/microsoft/phi-2)|[LLM]({{%relref "docs/features/text-generation" %}}) |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda12-core phi-2```|
|🌋 [llava](https://github.com/SkunkworksAI/BakLLaVA)|[Multimodal LLM]({{%relref "docs/features/gpt-vision" %}}) |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda12-core llava```|
|[mistral-openorca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)|[LLM]({{%relref "docs/features/text-generation" %}}) |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda12-core mistral-openorca```|
|[bert-cpp](https://github.com/skeskinen/bert.cpp)|[Embeddings]({{%relref "docs/features/embeddings" %}}) |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda12-core bert-cpp```|
|[all-minilm-l6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)|[Embeddings]({{%relref "docs/features/embeddings" %}}) |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda12 all-minilm-l6-v2```|
| whisper-base |[Audio to Text]({{%relref "docs/features/audio-to-text" %}}) |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda12-core whisper-base```|
| rhasspy-voice-en-us-amy |[Text to Audio]({{%relref "docs/features/text-to-audio" %}}) |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda12-core rhasspy-voice-en-us-amy```|
|🐸 [coqui](https://github.com/coqui-ai/TTS)|[Text to Audio]({{%relref "docs/features/text-to-audio" %}}) |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda12 coqui```|
|🐶 [bark](https://github.com/suno-ai/bark)|[Text to Audio]({{%relref "docs/features/text-to-audio" %}}) |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda12 bark```|
|🔊 [vall-e-x](https://github.com/Plachtaa/VALL-E-X)|[Text to Audio]({{%relref "docs/features/text-to-audio" %}}) |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda12 vall-e-x```|
| mixtral-instruct Mixtral-8x7B-Instruct-v0.1 |[LLM]({{%relref "docs/features/text-generation" %}}) |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda12-core mixtral-instruct```|
|[tinyllama-chat](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF) ([original model](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.3))|[LLM]({{%relref "docs/features/text-generation" %}}) |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda12-core tinyllama-chat```|
|[dolphin-2.5-mixtral-8x7b](https://huggingface.co/TheBloke/dolphin-2.5-mixtral-8x7b-GGUF)|[LLM]({{%relref "docs/features/text-generation" %}}) |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda12-core dolphin-2.5-mixtral-8x7b```|
| 🐍 [mamba](https://github.com/state-spaces/mamba)|[LLM]({{%relref "docs/features/text-generation" %}}) |```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda12 mamba-chat```|