Local LLM could not connect #205
Thanks for the feedback. Sorry you have run into an issue. Which model are you trying to use?
Thank you. I'm trying to use bling-sheared-llama-1.3b-0.1 with the "Load model directly" snippet:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("llmware/bling-sheared-llama-1.3b-0.1")
model = AutoModelForCausalLM.from_pretrained("llmware/bling-sheared-llama-1.3b-0.1")
```

I changed the path to my C drive, but I am getting an error. It seems like I have to have a Hugging Face API token?
Hmm, can you provide the error? I ran those three lines of code, and it seems to download the model fine.
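For reference, `from_pretrained` also accepts a local directory, so a model that has already been downloaded can be loaded without any Hugging Face token. A minimal sketch, assuming the model files (config, tokenizer files, and weights) sit in a hypothetical folder on the C drive:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Hypothetical local folder containing config.json, tokenizer files, and the weights
local_path = "C:/models/bling-sheared-llama-1.3b-0.1"

# Passing a local directory skips the Hugging Face Hub entirely, so no token is needed
tokenizer = AutoTokenizer.from_pretrained(local_path)
model = AutoModelForCausalLM.from_pretrained(local_path)
```

A token is normally only required for gated repositories; the llmware bling models appear to be public, so downloading by repo id should not require one either.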
Were you able to get the model running? Were you able to run the model outside the framework with the ollama command?
If I understand the OP correctly, then I want to know this, too. How do I load a model from a non-standard location on my local drive? It's a GGUF, and it's not in the Hugging Face cache system at all. load_model() seems to expect a Hugging Face model path. Reference issue: #433
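As a possible workaround while that question stays open, a GGUF file at an arbitrary path can be loaded directly with llama-cpp-python instead of going through llmware's `load_model()`. A minimal sketch, with a hypothetical file path and illustrative parameters:

```python
# Workaround sketch (not llmware's API): load a GGUF file from any local path
# with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="D:/models/my-model-q4_k_m.gguf",  # hypothetical local GGUF file
    n_ctx=2048,                                   # context window size
)

# Run a simple completion to confirm the model loads and responds
output = llm("Q: What is a GGUF file? A:", max_tokens=64)
print(output["choices"][0]["text"])
```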
Have you solved the problem?
Hi, thank you for this wonderful code. I have downloaded the model from Hugging Face, but when I try to load it through the prompt load step, I am not able to.
Can you please help me? I don't want to load the model through a Hugging Face API key.
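For comparison, a sketch of what the load-by-name flow might look like, assuming `Prompt().load_model()` resolves a catalog model name and that no Hugging Face key is required for public llmware models; the argument and return-field names below are assumptions, not a verified recipe:

```python
from llmware.prompts import Prompt

# Assumed: load_model() pulls a public llmware model by its catalog name, no API key
prompter = Prompt().load_model("llmware/bling-sheared-llama-1.3b-0.1")

# Assumed: prompt_main() runs a single question against an optional context passage
response = prompter.prompt_main(
    "What is the total amount of the invoice?",
    context="The invoice total is $4,500, due within 30 days.",
)
print(response["llm_response"])  # assumed key for the generated answer
```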