README.md (+4 −2: 4 additions & 2 deletions)
@@ -207,7 +207,8 @@ The gguf-converted files for this model can be found here: [functionary-7b-v1](h
     messages = [
         {
             "role": "system",
-            "content": "A chat between a curious user and an artificial intelligence assitant. The assistant gives helpful, detailed, and polite answers to the user's questions. The assistant callse functions with appropriate input when necessary"
+            "content": "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. The assistant calls functions with appropriate input when necessary"
+
         },
         {
             "role": "user",
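The hunk above corrects two typos ("assitant" → "assistant", "callse" → "calls") in the system prompt of a chat `messages` list. A minimal sketch of the resulting payload, with a hypothetical user turn that is not part of this diff:

```python
# Chat messages using the corrected system prompt from the hunk above.
messages = [
    {
        "role": "system",
        "content": "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. The assistant calls functions with appropriate input when necessary",
    },
    {
        "role": "user",
        "content": "What is the weather in Berlin?",  # hypothetical user turn
    },
]

# The fixed wording no longer contains the old typos.
assert "assitant" not in messages[0]["content"]
assert "callse" not in messages[0]["content"]
```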
@@ -265,7 +266,8 @@ Then you'll need to use a custom chat handler to load the clip model and process
 >>> llm = Llama(
       model_path="./path/to/llava/llama-model.gguf",
       chat_handler=chat_handler,
-      n_ctx=2048# n_ctx should be increased to accomodate the image embedding
+      n_ctx=2048, # n_ctx should be increased to accomodate the image embedding
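The second hunk adds a trailing comma after `n_ctx=2048`. Without it, the end-of-line comment swallows the argument separator, so any keyword argument on the following line becomes a syntax error. A quick check with the standard-library `ast` module, using a hypothetical follow-on argument (`logits_all=True`, not shown in this diff):

```python
import ast

# The call as it stood before the fix, with a hypothetical argument after n_ctx.
broken = """
llm = Llama(
    chat_handler=chat_handler,
    n_ctx=2048# n_ctx should be increased to accomodate the image embedding
    logits_all=True
)
"""

# Applying the diff's one-character fix: insert the missing comma.
fixed = broken.replace("n_ctx=2048#", "n_ctx=2048, #")

def parses(src: str) -> bool:
    """Return True if src is syntactically valid Python."""
    try:
        ast.parse(src)
        return True
    except SyntaxError:
        return False

assert not parses(broken)  # comma missing: SyntaxError
assert parses(fixed)       # comma restored: parses cleanly
```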