If your local model is exposed over an OpenAI-compatible API, you can add a new model entry to the config. For instance, I installed a model in LM Studio and added:

```json
{
  "Local-Qwen-3-coder-30b": {
    "type": "custom_openai",
    "name": "qwen/qwen3-coder-30b",
    "custom_endpoint": {
      "url": "http://localhost:1234/v1"
    },
    "context_length": 65536
  }
}
```

and everything worked mostly out of the box. I had to tweak the model down to a 65k context window so it would fit in memory alongside the model, the `name` property needs to be the model name you see in LM Studio, and the `type` should be `custom_openai`. If, like me, you run code-puppy and other tools from a Linux image on WSL2, you may need extra steps to get localhost to forward correctly to a server run from the Windows side. For me, that meant adding to the
Just wanted to start a discussion about supporting locally hosted models on the desktop through Ollama. Do you have that on the roadmap? I am interested in contributing to this community as well.
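For what it's worth, Ollama also exposes an OpenAI-compatible API (by default at `http://localhost:11434/v1`), so if the `custom_openai` approach described in the reply above works for you, a hypothetical Ollama entry along the same lines might look like this (model name and context length are illustrative, not tested):

```json
{
  "Local-Llama-3-8b": {
    "type": "custom_openai",
    "name": "llama3:8b",
    "custom_endpoint": {
      "url": "http://localhost:11434/v1"
    },
    "context_length": 8192
  }
}
```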