NEWS.md

# ollamar (development version)
- `generate()` and `chat()` support [structured output](https://ollama.com/blog/structured-outputs) via `format` parameter.
- `test_connection()` returns a boolean instead of an `httr2` object. #29
- `chat()` supports [tool calling](https://ollama.com/blog/tool-support) via `tools` parameter. Added `get_tool_calls()` helper function to process tool calls (see the sketch below). #30
- Simplify README and add a Get started vignette with more examples.
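
A minimal sketch of the new tool-calling interface is shown below. The tool definition mirrors the JSON schema format of the [Ollama REST API](https://github.com/ollama/ollama/blob/main/docs/api.md); the `get_weather` tool and the exact way `get_tool_calls()` consumes the chat response are illustrative assumptions, not documented behavior.

```{r eval=FALSE}
# define a tool as a nested list, mirroring the Ollama REST API's JSON schema
# format; get_weather is a hypothetical tool used only for illustration
tools <- list(
    list(
        type = "function",
        "function" = list(
            name = "get_weather",
            description = "Get the current weather for a city",
            parameters = list(
                type = "object",
                properties = list(city = list(type = "string")),
                required = list("city")
            )
        )
    )
)

msg <- create_message("What is the weather in Toronto?")
resp <- chat("llama3.1", msg, tools = tools)  # assumes the default output type
get_tool_calls(resp)  # extract any tool calls from the response
```
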
# ollamar 1.2.1
- `generate()` and `chat()` accept multiple images as prompts/messages.

README.Rmd

To use this R library, ensure the [Ollama](https://ollama.com) app is installed. Ollama can use GPUs for accelerating LLM inference. See [Ollama GPU documentation](https://github.com/ollama/ollama/blob/main/docs/gpu.md) for more information.
See [Ollama's GitHub page](https://github.com/ollama/ollama) for more information. This library uses the [Ollama REST API (see documentation for details)](https://github.com/ollama/ollama/blob/main/docs/api.md) and was last tested on Ollama v0.5.4.
> Note: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.

### Structured outputs

The `chat()` and `generate()` functions support [structured outputs](https://ollama.com/blog/structured-outputs), making it possible to constrain a model's output to a specified format defined by a JSON schema (R list).

```{r eval=FALSE}
# define a JSON schema as a list to constrain a model's output
format <- list(
    type = "object",
    properties = list(
        name = list(type = "string"),
        capital = list(type = "string"),
        languages = list(type = "array",
                         items = list(type = "string")
        )
    ),
    required = list("name", "capital", "languages")
)

generate("llama3.1", "tell me about Canada", output = "structured", format = format)

msg <- create_message("tell me about Canada")
chat("llama3.1", msg, format = format, output = "structured")
```

### Parallel requests
For the `generate()` and `chat()` endpoints/functions, you can specify `output = 'req'` so that the functions return `httr2_request` objects instead of `httr2_response` objects.
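
As a minimal sketch (assuming the `httr2` package is installed and an Ollama server is running locally), you can build several requests with `output = 'req'`, perform them concurrently with `httr2::req_perform_parallel()`, and then process each response, here with ollamar's `resp_process()` on the assumption that it accepts the same `output` values as `generate()`:

```{r eval=FALSE}
library(httr2)

prompts <- c("tell me a 5-word story", "tell me a 5-word joke")

# build the requests without performing them
reqs <- lapply(prompts, function(p) generate("llama3.1", p, output = "req"))

# perform all requests in parallel, then extract the text of each response
resps <- req_perform_parallel(reqs)
texts <- lapply(resps, resp_process, output = "text")
```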