This repository was archived by the owner on Oct 6, 2025. It is now read-only.
Add a readme for clarity #130
Merged
# Docker Model CLI

A powerful command-line interface for managing, running, packaging, and deploying AI/ML models with Docker. The CLI lets you install and control the Docker Model Runner, interact with models, manage model artifacts, and integrate with OpenAI and other backends, all from your terminal.

## Features
- **Install Model Runner**: Easily set up the Docker Model Runner for local or cloud environments, with GPU support.
- **Run Models**: Execute models with prompts or in interactive chat mode, supporting multiline input and OpenAI-style backends.
- **List Models**: View all models available locally or via OpenAI, with options for JSON and quiet output.
- **Package Models**: Convert GGUF files into Docker model OCI artifacts and push them to registries, including license and context-size options.
- **Configure Models**: Set runtime flags and context sizes for models.
- **Logs & Status**: Stream logs and check the status of the Model Runner and individual models.
- **Tag, Pull, Push, Remove, Unload**: Full lifecycle management for model artifacts.
- **Compose & Desktop Integration**: Advanced orchestration and desktop support for model backends.

## Installation
1. **Clone the repo:**
   ```bash
   git clone https://github.com/docker/model-cli.git
   cd model-cli
   ```
2. **Build the CLI:**
   ```bash
   make build
   ```
3. **Install Model Runner:**
   ```bash
   ./model install-runner
   ```
   Use `--gpu cuda` for GPU support, or `--gpu auto` for automatic detection.

## Usage
Run `./model --help` to see all commands and options.

### Common Commands
- `model install-runner` — Install the Docker Model Runner
- `model run MODEL [PROMPT]` — Run a model with a prompt or enter chat mode
- `model list` — List available models
- `model package --gguf <path> --push <target>` — Package and push a model
- `model logs` — View logs
- `model status` — Check runner status
- `model configure MODEL [flags]` — Configure model runtime
- `model unload MODEL` — Unload a model
- `model tag SOURCE TARGET` — Tag a model
- `model pull MODEL` — Pull a model
- `model push MODEL` — Push a model
- `model rm MODEL` — Remove a model
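Putting the commands above together, a typical lifecycle might look like the following sketch. The model name `ai/example` is a placeholder, not a real artifact; substitute a name from `model list` or your registry.

```shell
# Hypothetical end-to-end session; assumes the Model Runner is already
# installed via `./model install-runner` and "ai/example" is a placeholder.
./model pull ai/example                        # fetch the model artifact
./model run ai/example "What is a container?"  # one-shot prompt
./model status                                 # check the Model Runner
./model unload ai/example                      # stop serving the model
./model rm ai/example                          # remove the local artifact
```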

## Example: Interactive Chat
```bash
./model run llama.cpp "What is the capital of France?"
```
Or enter chat mode:
```bash
./model run llama.cpp
Interactive chat mode started. Type '/bye' to exit.
> """
Tell me a joke.
"""
```

## Advanced
- **Compose Integration:**
  Use `model compose up` to orchestrate model backends with Docker Compose.
- **OpenAI Backend:**
  Use `--backend openai` and provide your API key to work with OpenAI-compatible models.
- **Packaging:**
  Add licenses and set the context size when packaging models for distribution.
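As a sketch of the packaging flow, using only the flags shown under Common Commands. The paths and the registry target are placeholders, and the exact license and context-size flag names are not shown here, so check `./model package --help` for them.

```shell
# Package a local GGUF file as a Docker model OCI artifact and push it
# to a registry; "registry.example.com/my-org/my-model" is illustrative.
./model package \
  --gguf ./weights/my-model.gguf \
  --push registry.example.com/my-org/my-model:latest
```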

## Development
- **Run unit tests:**
  ```bash
  make unit-tests
  ```
- **Generate docs:**
  ```bash
  make docs
  ```

## License
[Apache 2.0](LICENSE)

---

Docker Model CLI: the fastest way to run, manage, and ship AI models with Docker.