Split LLM server #1906
ramashishb-yubi asked this question in Q&A · Unanswered
Hi,

First of all, thanks for creating this wonderful repo! I was able to get privateGPT running on a local server. I need help running it in a slightly different configuration, with the LLM served from a separate machine. Has anyone tried this, or does anyone have an idea of how it can be done? Any help is appreciated!

Thanks,
Ramashish

Replies: 1 comment 1 reply

- Yes, install Ollama and fetch the model there. Then configure privateGPT to point at it.
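In case it helps anyone wiring this up, here is a minimal sketch of the two halves of that setup. The hostname `model-server` and the model names are placeholders, and the settings keys follow the Ollama profile (`settings-ollama.yaml`) that the privateGPT repo ships; verify them against the version you are running.

On the machine that will host the LLM:

```bash
# On the model server: install Ollama and fetch the models
# (install script URL from https://ollama.com; model names are examples).
curl -fsSL https://ollama.com/install.sh | sh
ollama pull mistral
ollama pull nomic-embed-text

# Ollama binds to 127.0.0.1 by default; expose it on the network
# so privateGPT can reach it from the other machine. If the installer
# set up a systemd service, set OLLAMA_HOST in the service environment instead.
OLLAMA_HOST=0.0.0.0:11434 ollama serve
```

On the machine running privateGPT, point the Ollama profile at that server:

```yaml
# settings-ollama.yaml (privateGPT side) -- key names follow the repo's
# Ollama profile; double-check against your privateGPT version.
llm:
  mode: ollama
embedding:
  mode: ollama
ollama:
  llm_model: mistral                 # must match a model pulled on the server
  embedding_model: nomic-embed-text
  api_base: http://model-server:11434  # placeholder hostname of the Ollama box
```

With that profile in place, starting privateGPT with `PGPT_PROFILES=ollama` should route all LLM and embedding calls to the remote Ollama server instead of a local one.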