Replies: 13 comments 7 replies
-
I have a negative experience with getting started on Linux. When I first launched the CLI it caused excessive disk load while showing nothing. After five minutes of an empty console it showed a prompt asking me to either log in to Continue or use an Anthropic key. I wanted to use local models, and neither of those options suited me, so I tried to exit the CLI, but it reacted to neither Ctrl-C nor Ctrl-D. The only way to exit was to send SIGKILL. I hope the next updates will make it at least usable.
-
I'm experiencing the same issue as @Igorgro: the CLI often gets stuck with no response, it is hard to exit even by pressing Cmd+C/Ctrl+C, and it takes a long time to quit after entering /exit.
-
Trying the continue.dev CLI on Debian; some feedback:

1. The cn CLI ignores the stream: false directive from the config.yaml file (it always uses streaming; it would be nice to also support non-streaming if possible).
2. I have not found an Ollama model that works well with the cn CLI: either you get "Error: 400 registry.ollama.ai/library/ does not support tools", or the model does poorly at function calling (like Qwen 2.5 Coder). A section in the docs with suggested models would be awesome.

Some positive feedback: it worked great with Gemini 2.5 Pro.
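For reference, a minimal sketch of where the ignored directive might sit in config.yaml; the placement of stream under defaultCompletionOptions mirrors the other per-model options in this thread and is an assumption, as are the model details:

```yaml
models:
  - name: my-model                  # placeholder entry
    provider: ollama
    model: qwen2.5-coder:7b         # placeholder model tag
    apiBase: http://localhost:11434
    defaultCompletionOptions:
      stream: false                 # reportedly ignored by the cn CLI
```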
-
Continue CLI v1.5.10 is not recognizing the MCP servers in the config.
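For anyone trying to reproduce this, an mcpServers block of the shape shown in the config.yaml reference; the server entry itself is a placeholder, since the reporter's actual entries were not included:

```yaml
mcpServers:
  - name: filesystem                               # placeholder MCP server
    command: npx
    args:
      - "-y"
      - "@modelcontextprotocol/server-filesystem"
      - "/path/to/project"
```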
-
For me it does not work at all. I have a config that works perfectly fine in the VSCode extension but not at all in the CLI. See my question here: #8628
-
ok
-
Continue CLI v1.5.14 doesn't load custom prompts from the .continue/prompts directory, while the IDE version does. Is there a plan to support this functionality in the CLI in the future?
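For context, a sketch of the kind of prompt file the IDE picks up from that directory. The frontmatter-plus-body layout follows the prompt-file format described in the docs as I recall it, so treat the details as unverified; the file name and contents are invented:

```
# .continue/prompts/review.prompt (hypothetical example)
name: review
description: Review the selected code for bugs
---
Review the following code and point out bugs, missed edge cases, and style issues.
```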
-
Here's a polite GitHub issue summary that captures all the problems you've encountered:

Title: CLI setup wizard lacks "Use Local Models" option and doesn't detect existing config

Description: First, thank you for creating Continue.dev! I'm excited to use it with local models like Ollama, but I've encountered some UX issues with the CLI setup that make it difficult to get started without cloud services.

Problems:
- The first-run wizard only offers logging in to Continue or entering an Anthropic API key; there is no "Use Local Models" option.
- The wizard does not detect an existing local-model configuration.

Current workaround:
Suggested improvements:
Environment:

Again, thank you for this project! These changes would make the local-first experience much smoother for new users.
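For reference, the kind of local-only config.yaml the wizard could offer to generate or detect, modeled on the Ollama configs posted later in this thread; host, port, and model tag are placeholders:

```yaml
name: local-default
version: 0.0.1
schema: v1
allowAnonymousTelemetry: false
models:
  - name: local-model               # placeholder
    provider: ollama
    model: qwen2.5-coder:7b         # placeholder model tag
    apiBase: http://localhost:11434
    roles: [chat, edit, apply]
```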
-
Here's a polite GitHub issue that captures the problem:

Title: Allow configurable environment variable handling

Description: Thank you for creating Continue! I'm excited to use it with local models, but I've encountered a significant UX issue with how Continue handles environment variables.

The Problem: Continue currently searches for and uses ANTHROPIC_API_KEY from the environment and from .env files. This creates conflicts when projects already use .env files with that variable for their own tooling.

Why This Matters: Many development tools (including aider and others) rely on the same well-known variables.

Suggested Solutions:
Current Workarounds:

Impact: This affects any Continue user whose projects already use .env files with these variables.

Again, thank you for this excellent project! This change would make the local-first experience much smoother and prevent conflicts with existing project tooling.

Environment:
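To make the requested direction concrete, a purely hypothetical config.yaml opt-out; neither key below exists in Continue's schema today, they only illustrate the shape of a possible fix:

```yaml
# Hypothetical keys, not part of Continue's current config schema
env:
  loadDotEnv: false                            # don't read project .env files
  apiKeyVariable: CONTINUE_ANTHROPIC_API_KEY   # namespaced variable to read instead
```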
-
Based on the documentation, here's a polished bug report for you:

Title: Misleading property name: defaultCompletionOptions

Description: The configuration property defaultCompletionOptions is misleadingly named.

Current Behavior: When configuring Ollama models (or any provider), parameters like num_ctx are set like this:

```yaml
models:
  - name: gpt-oss
    provider: ollama
    model: gpt-oss:20b-fp16
    apiBase: http://192.168.1.69:11435
    defaultCompletionOptions:
      num_ctx: 32768 # This applies to ALL roles, not just completion
```

This didn't work, even though it was recommended by the Documentation Agent.

The Problem:
Expected Behavior: The property name should accurately reflect its scope.
Suggested alternatives:
Suggested Solution:
Additional Context: According to the config.yaml reference,
Impact: This naming issue affects user experience and leads to:

Thank you for considering this improvement to make Continue's configuration more intuitive!

Edit: It seems
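To illustrate one direction a rename could take, a hypothetical role-scoped layout; roleOptions is not current Continue syntax and only sketches the idea:

```yaml
models:
  - name: gpt-oss
    provider: ollama
    model: gpt-oss:20b-fp16
    roleOptions:            # hypothetical key, not in the current schema
      chat:
        num_ctx: 32768
      edit:
        num_ctx: 16384
```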
-
Here's a bug report you can post on the GitHub tracker:

Title: TUI mode shows diff but no permission prompts or interactive UI in xterm

Description: I'm using the cn CLI in TUI mode.

Expected Behavior:
Actual Behavior:
Environment:

Configuration:

```yaml
name: default
version: 0.0.1
schema: v1
allowAnonymousTelemetry: false
models:
  - name: gpt-oss
    provider: ollama
    model: gpt-oss:20b-fp16
    apiBase: http://192.168.69.69:11435
    requestOptions:
      extraBodyProperties:
        num_ctx: 32768
    defaultCompletionOptions:
      num_ctx: 32768
    roles:
      - chat
      - edit
      - apply
embeddingsProvider:
  provider: ollama
  model: qwen3-vl:2b-instruct-q4_K_M
  apiBase: http://192.168.69.169:11434
```

Questions:

The TUI appears to be missing the permission interface entirely, making it impossible to approve file writes even with the corresponding flag set.

Thank you for your help!
-
Here's a complete, polite bug report for GitHub:

Title: Unexpected Network Requests to Amazon Servers with Local-Only Configuration

Description: I've configured Continue with allowAnonymousTelemetry: false and only local Ollama models, yet I still observe outbound requests to Amazon servers.

Configuration:

```yaml
name: default
version: 0.0.1
schema: v1
allowAnonymousTelemetry: false
models:
  - name: gpt-oss
    provider: ollama
    model: gpt-oss:20b-fp16
    apiBase: http://10.X.X.X:11435
    roles: [chat, edit, apply]
  - name: Qwen3 Embedder
    provider: ollama
    model: qwen3-vl:2b-instruct-q4_K_M
    apiBase: http://10.X.X.X:11434
    roles: [embed]
```

Expected Behavior: With telemetry disabled and only local Ollama models configured, Continue should not make any external network requests, including to Amazon servers.

Actual Behavior: Network monitoring shows connections to Amazon servers despite the local-only configuration.

Additional Context: I've verified that:

This may indicate unexpected behavior that warrants investigation. Thank you for looking into this issue.
-
🚀 We just launched Continue CLI - async AI agents in your terminal!
We'd love your feedback on the new CLI tool for building async coding agents. What workflows would you automate? Any feature requests or use cases we should consider?
Try it: npm i -g @continuedev/cli
Docs: https://docs.continue.dev/guides/cli
Announcement: https://x.com/continuedev/status/1958591429804269906