Error: No tokenizer found for model with Ollama #291
Comments
Happens to me too with groq.

Happens to me too with gpt-4o-mini.
This error occurs here (Line 101 in 94eeb2f) because this library relies on https://github.com/zurawiki/tiktoken-rs to count and limit prompt tokens. Both libraries are intended to be used exclusively with OpenAI models.
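The failure mode can be sketched with a stdlib-only model of that lookup: tiktoken-rs resolves a tokenizer by matching the model name against known OpenAI model prefixes, so any non-OpenAI name falls through and produces the error. The prefix table below is an illustrative subset for demonstration, not the library's actual code:

```rust
// Hypothetical sketch of prefix-based tokenizer lookup, mirroring the idea
// behind tiktoken-rs's model-name matching. Names are illustrative.
fn tokenizer_for(model: &str) -> Option<&'static str> {
    // Subset of OpenAI model prefixes and their encodings (for illustration).
    // Longer prefixes must come first so "gpt-4o" wins over "gpt-4".
    const PREFIXES: &[(&str, &str)] = &[
        ("gpt-4o", "o200k_base"),
        ("gpt-4", "cl100k_base"),
        ("gpt-3.5-turbo", "cl100k_base"),
    ];
    PREFIXES
        .iter()
        .find(|(prefix, _)| model.starts_with(prefix))
        .map(|(_, encoding)| *encoding)
}

fn main() {
    // OpenAI names resolve to an encoding.
    assert_eq!(tokenizer_for("gpt-4o-mini"), Some("o200k_base"));
    // Non-OpenAI names match no prefix -> "No tokenizer found for model".
    assert_eq!(tokenizer_for("llama3.1:latest"), None);
    println!("ok");
}
```

This is also why the alias workaround below works: a model renamed to start with `gpt-4` matches an OpenAI prefix, so token counting succeeds even though the underlying model is not an OpenAI one.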
For ollama users, there's a workaround that involves creating an alias for a target model. Here's how you can do it:

```shell
$ ollama cp qwen2.5-coder:7b gpt-4--gptcommit-workaround-alias--qwen2.5-coder:7b
```

```toml
# gptcommit/config.toml
[openai]
api_base = "http://localhost:11434/v1"
api_key = ""
model = "gpt-4--gptcommit-workaround-alias--qwen2.5-coder:7b"
retries = 2
proxy = ""
```

To remove the alias when you no longer need it, use the following command:

```shell
$ ollama rm gpt-4--gptcommit-workaround-alias--qwen2.5-coder:7b
```

Only the alias will be removed, not the original model. Also note that you can use any name or prefix from the list at Line 95 in 94eeb2f.
Describe the bug

Trying to use ollama, but it failed with Error: No tokenizer found for model; tried to change the model, but still got the same error.

To Reproduce

1. ollama run llama3.1:latest
2. Edit ~/.config/gptcommit/config.toml and add the above config
3. git add something and execute git commit
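The config referenced in the steps is not included in the report; based on the workaround posted in the comments, a minimal `~/.config/gptcommit/config.toml` pointing at Ollama's OpenAI-compatible endpoint would look roughly like this (an assumed shape, not the reporter's actual file):

```toml
# Assumed config, mirroring the workaround posted in the comments
[openai]
api_base = "http://localhost:11434/v1"
api_key = ""
# A non-OpenAI model name like this triggers "No tokenizer found for model"
model = "llama3.1:latest"
```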
Expected behavior

Works correctly and without error.