Labels
- Docker: Support for Docker containerization
- Local Models: Running NemoClaw with local models
- NV QA: Bugs found by the NVIDIA QA Team
- bug: Something isn't working
Description
Ollama is detected because localhost:11434 responds, but onboarding later fails because containers cannot reach host.openshell.internal:11434.
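To illustrate why the host-side detection succeeds while the in-container call fails: a socket bound to 127.0.0.1 lives only in the host's loopback interface, so a Docker container (which has its own network namespace) cannot reach it, even through a host-gateway alias like host.openshell.internal. A minimal sketch of this distinction, classifying a bind address as reported by `ss -ltnp` (the function name and parsing format are illustrative, not part of NemoClaw):

```python
# Sketch: decide whether a service bound to a given "addr:port" string
# (as shown by `ss -ltnp`) could be reached from a Docker container via
# the host gateway alias. Loopback binds (127.0.0.1, ::1) cannot.
import ipaddress

def reachable_from_container(listen_addr: str) -> bool:
    host, _, _port = listen_addr.rpartition(":")
    host = host.strip("[]")  # unwrap IPv6 brackets, e.g. "[::1]"
    ip = ipaddress.ip_address(host)
    # Loopback addresses are invisible outside the host's own
    # network namespace; wildcard or LAN binds are reachable.
    return not ip.is_loopback

print(reachable_from_container("127.0.0.1:11434"))  # -> False (the bug)
print(reachable_from_container("0.0.0.0:11434"))    # -> True
```

This matches the observed behavior: `curl http://localhost:11434/api/tags` succeeds on the host, but the same endpoint is unreachable from inside the onboarding container.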
Reproduction Steps
- Start from an Ubuntu machine with Docker CE installed and running.
- Install Ollama.
- Start Ollama in a host-only way so it binds to loopback, for example using the default system service or any mode that results in:
  ss -ltnp | grep 11434
  showing:
  127.0.0.1:11434
- Verify that Ollama appears healthy from the host:
  curl http://localhost:11434/api/tags
- Ensure at least one Ollama model exists, for example:
  ollama pull nemotron-3-nano:30b
  ollama list
- Run the NemoClaw installer:
  curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash
- During onboarding:
  - enter a valid sandbox name
  - choose Local Ollama
  - choose the available Ollama model
- Continue until onboarding reaches inference provider setup.
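For context on the loopback-only bind in the steps above: Ollama's systemd service reads the `OLLAMA_HOST` environment variable, so a drop-in override (a sketch, assuming the stock `ollama.service` unit name) switches it to binding all interfaces. This is useful for confirming the container-reachability hypothesis, not proposed as the fix:

```
# /etc/systemd/system/ollama.service.d/override.conf (sketch)
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
```

After `sudo systemctl daemon-reload && sudo systemctl restart ollama`, `ss -ltnp | grep 11434` should show `0.0.0.0:11434` instead of `127.0.0.1:11434`, and onboarding can then reach host.openshell.internal:11434 from the container.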
Environment
- Openshell: 0.0.11
- nemoclaw: 0.1.0 (main: dbfd78c)
- OS: Ubuntu
- Container runtime: Docker CE
- GPU: Available
- Ollama: Installed
- Ollama models: None (ollama list is empty)
- NemoClaw install method: curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash
Debug Output
Checklist
- I confirmed this bug is reproducible
- I searched existing issues and this is not a duplicate