Help serving vllm with kuberay #11957
torsteinelv asked this question in Q&A · Unanswered
Hello,
I have been trying to get this setup working for a long time, and I am about to give up. I have been searching and reading through other people's issues and deployments, and I see that when people ask for help on KubeRay's GitHub page they are often told to ask on vLLM's page instead, so I hope someone here will be able to assist.
Currently I have 3 nodes with 1 GPU each. I am able to get the deployment started across all 3 nodes, but I get an internal server error when I make a request to the OpenAI API. I was hoping someone could help me here :)
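For reference, this is roughly the kind of request I am making against vLLM's OpenAI-compatible endpoint, using the official openai Python client. The service address and model name below are placeholders, not my real values:

```python
# Minimal sketch of a request to the OpenAI-compatible server that vLLM exposes.
# The base_url and model are placeholders; the real values come from my
# RayService's Kubernetes service and the model the server was started with.
from openai import OpenAI

client = OpenAI(
    base_url="http://<rayservice-serve-svc>:8000/v1",  # hypothetical service address
    api_key="EMPTY",  # vLLM's server does not require a real key by default
)

response = client.chat.completions.create(
    model="<served-model-name>",  # must match the model name the server is serving
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```

This call is what returns the internal server error described above.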
current setup:
code
error: