Error connecting to local llm #8856
ashish-kumar-hpe asked this question in Help
Hi,
I have configured my local VS Code Continue extension to point to a local NVIDIA NIM CodeLlama model, and my configuration looks something like this:
name: Config
version: 1.0.0
schema: v1
assistants:
  model: CodeLlama
models:
  - provider: ollama
    model: codellama/codellama-13b-instruct
    apiBase: https://ashish-code-llama-1.project-user-ashish-kumar.serving.adt-alto01-ingress.us.rdlabs.hpecorp.net/v1
    apiKey: '**************'
    title: codellama:13b
    roles:
I am getting the following error while connecting to the LLM:
request to https://ashish-code-llama-1.project-user-ashish-kumar.serving.adt-alto01-ingress.us.rdlabs.hpecorp.net/v1/api/chat failed, reason: unable to verify the first certificate
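For context, "unable to verify the first certificate" is the message Node.js produces when it cannot build a trusted chain for the server's certificate, typically because the server does not send its intermediate certificate or because the certificate is signed by an internal CA that Node does not trust by default. Here is a minimal diagnostic sketch (TypeScript on Node.js; port 443 and the printed fields are only illustrative) showing how one could inspect why the chain fails on the same machine:

```typescript
// Diagnostic sketch only: connect with verification disabled and print why Node
// would normally reject the certificate chain for this host.
import * as tls from "node:tls";

const host =
  "ashish-code-llama-1.project-user-ashish-kumar.serving.adt-alto01-ingress.us.rdlabs.hpecorp.net";

const socket = tls.connect(
  { host, port: 443, servername: host, rejectUnauthorized: false },
  () => {
    console.log("authorized:", socket.authorized);     // false if the chain cannot be verified
    console.log("reason:", socket.authorizationError);  // e.g. UNABLE_TO_VERIFY_LEAF_SIGNATURE
    console.log("issuer:", socket.getPeerCertificate().issuer);
    socket.end();
  }
);
```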
My question is: how can I configure my SSL certificate, since my endpoint is served over HTTPS?
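To make the question more concrete: what I am trying to achieve is the equivalent of the following at the Node.js level, i.e. pointing the HTTPS client at the CA that signed my endpoint's certificate. The CA file path and the request path below are placeholders, and I do not know which Continue option, if any, maps to this:

```typescript
// Sketch of the intent only: trust my internal CA when talking to this host.
// "/path/to/internal-ca.pem" is a placeholder for the CA that signed the endpoint's cert.
import * as https from "node:https";
import * as fs from "node:fs";

const ca = fs.readFileSync("/path/to/internal-ca.pem");

https.get(
  {
    host: "ashish-code-llama-1.project-user-ashish-kumar.serving.adt-alto01-ingress.us.rdlabs.hpecorp.net",
    path: "/v1/models", // placeholder path, used here only to exercise the TLS handshake
    ca,                 // additional CA certificate(s) to trust for this request
  },
  (res) => {
    console.log("status:", res.statusCode); // reaching here means the chain verified
    res.resume();
  }
);
```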