Commit 5bac155

quaffspring-builds authored and committed
Use inline literal in documents
`https://example.com` is rendered as a link whose text is `example.com`, without the https scheme; we should use an inline literal here so the full URL appears verbatim. See https://docs.asciidoctor.org/asciidoc/latest/syntax-quick-reference/#literals-and-source-code

Signed-off-by: Yanming Zhou <[email protected]>
(cherry picked from commit 54f5127)
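As a minimal sketch of the fix (the URL below is taken from the DeepSeek page touched by this commit), the `+...+` inline passthrough inside a monospace span keeps Asciidoctor from substituting a link:

```asciidoc
// Auto-linked: Asciidoctor turns the bare URL into a link, and the
// displayed link text can drop the https:// scheme.
The default endpoint is `https://api.deepseek.com`.

// Inline passthrough: the +...+ markers suppress link substitution,
// so the full URL is shown verbatim in monospace.
The default endpoint is `+https://api.deepseek.com+`.
```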
1 parent d075519 commit 5bac155

File tree

8 files changed: +17 -17 lines changed

spring-ai-docs/src/main/antora/modules/ROOT/pages/api/chat/deepseek-chat.adoc

Lines changed: 4 additions & 4 deletions
@@ -102,7 +102,7 @@ The prefix `spring.ai.deepseek` is used as the property prefix that lets you con
 |====
 | Property | Description | Default

-| spring.ai.deepseek.base-url | The URL to connect to | https://api.deepseek.com
+| spring.ai.deepseek.base-url | The URL to connect to | `+https://api.deepseek.com+`
 | spring.ai.deepseek.api-key | The API Key | -
 |====

@@ -115,10 +115,10 @@ The prefix `spring.ai.deepseek.chat` is the property prefix that lets you config
 | Property | Description | Default

 | spring.ai.deepseek.chat.enabled | Enables the DeepSeek chat model. | true
-| spring.ai.deepseek.chat.base-url | Optionally overrides the spring.ai.deepseek.base-url to provide a chat-specific URL | https://api.deepseek.com/
+| spring.ai.deepseek.chat.base-url | Optionally overrides the spring.ai.deepseek.base-url to provide a chat-specific URL | `+https://api.deepseek.com/+`
 | spring.ai.deepseek.chat.api-key | Optionally overrides the spring.ai.deepseek.api-key to provide a chat-specific API key | -
-| spring.ai.deepseek.chat.completions-path | The path to the chat completions endpoint | /chat/completions
-| spring.ai.deepseek.chat.beta-prefix-path | The prefix path to the beta feature endpoint | /beta
+| spring.ai.deepseek.chat.completions-path | The path to the chat completions endpoint | `/chat/completions`
+| spring.ai.deepseek.chat.beta-prefix-path | The prefix path to the beta feature endpoint | `/beta`
 | spring.ai.deepseek.chat.options.model | ID of the model to use. You can use either deepseek-reasoner or deepseek-chat. | deepseek-chat
 | spring.ai.deepseek.chat.options.frequencyPenalty | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. | 0.0f
 | spring.ai.deepseek.chat.options.maxTokens | The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. | -

spring-ai-docs/src/main/antora/modules/ROOT/pages/api/chat/groq-chat.adoc

Lines changed: 3 additions & 3 deletions
@@ -23,7 +23,7 @@ Visit https://console.groq.com/keys[here] to create an API Key.
 The Spring AI project defines a configuration property named `spring.ai.openai.api-key` that you should set to the value of the `API Key` obtained from groq.com.

 * **Set the Groq URL**:
-You have to set the `spring.ai.openai.base-url` property to `https://api.groq.com/openai`.
+You have to set the `spring.ai.openai.base-url` property to `+https://api.groq.com/openai+`.

 * **Select a Groq Model**:
 Use the `spring.ai.openai.chat.model=<model name>` property to select from the available https://console.groq.com/docs/models[Groq Models].

@@ -139,7 +139,7 @@ The prefix `spring.ai.openai` is used as the property prefix that lets you conne
 |====
 | Property | Description | Default

-| spring.ai.openai.base-url | The URL to connect to. Must be set to `https://api.groq.com/openai` | -
+| spring.ai.openai.base-url | The URL to connect to. Must be set to `+https://api.groq.com/openai+` | -
 | spring.ai.openai.api-key | The Groq API Key | -
 |====

@@ -165,7 +165,7 @@ The prefix `spring.ai.openai.chat` is the property prefix that lets you configur

 | spring.ai.openai.chat.enabled (Removed and no longer valid) | Enable OpenAI chat model. | true
 | spring.ai.openai.chat | Enable OpenAI chat model. | openai
-| spring.ai.openai.chat.base-url | Optional overrides the spring.ai.openai.base-url to provide chat specific url. Must be set to `https://api.groq.com/openai` | -
+| spring.ai.openai.chat.base-url | Optional overrides the spring.ai.openai.base-url to provide chat specific url. Must be set to `+https://api.groq.com/openai+` | -
 | spring.ai.openai.chat.api-key | Optional overrides the spring.ai.openai.api-key to provide chat specific api-key | -
 | spring.ai.openai.chat.options.model | The https://console.groq.com/docs/models[available model] names are `llama3-8b-8192`, `llama3-70b-8192`, `mixtral-8x7b-32768`, `gemma2-9b-it`. | -
 | spring.ai.openai.chat.options.temperature | The sampling temperature to use that controls the apparent creativity of generated completions. Higher values will make output more random while lower values will make results more focused and deterministic. It is not recommended to modify temperature and top_p for the same completions request as the interaction of these two settings is difficult to predict. | 0.8

spring-ai-docs/src/main/antora/modules/ROOT/pages/api/chat/nvidia-chat.adoc

Lines changed: 3 additions & 3 deletions
@@ -3,7 +3,7 @@
 https://docs.api.nvidia.com/nim/reference/llm-apis[NVIDIA LLM API] is a proxy AI Inference Engine offering a wide range of models from link:https://docs.api.nvidia.com/nim/reference/llm-apis#models[various providers].

 Spring AI integrates with the NVIDIA LLM API by reusing the existing xref::api/chat/openai-chat.adoc[OpenAI] client.
-For this you need to set the base-url to `https://integrate.api.nvidia.com`, select one of the provided https://docs.api.nvidia.com/nim/reference/llm-apis#model[LLM models] and get an `api-key` for it.
+For this you need to set the base-url to `+https://integrate.api.nvidia.com+`, select one of the provided https://docs.api.nvidia.com/nim/reference/llm-apis#model[LLM models] and get an `api-key` for it.

 image::spring-ai-nvidia-llm-api-1.jpg[w=800,align="center"]

@@ -77,7 +77,7 @@ The prefix `spring.ai.openai` is used as the property prefix that lets you conne
 |====
 | Property | Description | Default

-| spring.ai.openai.base-url | The URL to connect to. Must be set to `https://integrate.api.nvidia.com` | -
+| spring.ai.openai.base-url | The URL to connect to. Must be set to `+https://integrate.api.nvidia.com+` | -
 | spring.ai.openai.api-key | The NVIDIA API Key | -
 |====

@@ -102,7 +102,7 @@ The prefix `spring.ai.openai.chat` is the property prefix that lets you configur

 | spring.ai.openai.chat.enabled (Removed and no longer valid) | Enable OpenAI chat model. | true
 | spring.ai.model.chat | Enable OpenAI chat model. | openai
-| spring.ai.openai.chat.base-url | Optional overrides the spring.ai.openai.base-url to provide chat specific url. Must be set to `https://integrate.api.nvidia.com` | -
+| spring.ai.openai.chat.base-url | Optional overrides the spring.ai.openai.base-url to provide chat specific url. Must be set to `+https://integrate.api.nvidia.com+` | -
 | spring.ai.openai.chat.api-key | Optional overrides the spring.ai.openai.api-key to provide chat specific api-key | -
 | spring.ai.openai.chat.options.model | The link:https://docs.api.nvidia.com/nim/reference/llm-apis#models[NVIDIA LLM model] to use | -
 | spring.ai.openai.chat.options.temperature | The sampling temperature to use that controls the apparent creativity of generated completions. Higher values will make output more random while lower values will make results more focused and deterministic. It is not recommended to modify temperature and top_p for the same completions request as the interaction of these two settings is difficult to predict. | 0.8

spring-ai-docs/src/main/antora/modules/ROOT/pages/api/chat/ollama-chat.adoc

Lines changed: 1 addition & 1 deletion
@@ -72,7 +72,7 @@ The prefix `spring.ai.ollama` is the property prefix to configure the connection
 [cols="3,6,1", stripes=even]
 |====
 | Property | Description | Default
-| spring.ai.ollama.base-url | Base URL where Ollama API server is running. | `http://localhost:11434`
+| spring.ai.ollama.base-url | Base URL where Ollama API server is running. | `+http://localhost:11434+`
 |====

 Here are the properties for initializing the Ollama integration and xref:auto-pulling-models[auto-pulling models].

spring-ai-docs/src/main/antora/modules/ROOT/pages/api/chat/perplexity-chat.adoc

Lines changed: 3 additions & 3 deletions
@@ -21,7 +21,7 @@ Visit https://docs.perplexity.ai/guides/getting-started[here] to create an API K
 Configure it using the `spring.ai.openai.api-key` property in your Spring AI project.

 * **Set the Perplexity Base URL**:
-Set the `spring.ai.openai.base-url` property to `https://api.perplexity.ai`.
+Set the `spring.ai.openai.base-url` property to `+https://api.perplexity.ai+`.

 * **Select a Perplexity Model**:
 Use the `spring.ai.openai.chat.model=<model name>` property to specify the model.

@@ -146,7 +146,7 @@ The prefix `spring.ai.openai` is used as the property prefix that lets you conne
 |====
 | Property | Description | Default

-| spring.ai.openai.base-url | The URL to connect to. Must be set to `https://api.perplexity.ai` | -
+| spring.ai.openai.base-url | The URL to connect to. Must be set to `+https://api.perplexity.ai+` | -
 | spring.ai.openai.chat.api-key | Your Perplexity API Key | -
 |====

@@ -171,7 +171,7 @@ The prefix `spring.ai.openai.chat` is the property prefix that lets you configur

 | spring.ai.model.chat | Enable OpenAI chat model. | openai
 | spring.ai.openai.chat.model | One of the supported https://docs.perplexity.ai/guides/model-cards[Perplexity models]. Example: `llama-3.1-sonar-small-128k-online`. | -
-| spring.ai.openai.chat.base-url | Optional overrides the spring.ai.openai.base-url to provide chat specific url. Must be set to `https://api.perplexity.ai` | -
+| spring.ai.openai.chat.base-url | Optional overrides the spring.ai.openai.base-url to provide chat specific url. Must be set to `+https://api.perplexity.ai+` | -
 | spring.ai.openai.chat.completions-path | Must be set to `/chat/completions` | `/v1/chat/completions`
 | spring.ai.openai.chat.options.temperature | The amount of randomness in the response, valued between 0 inclusive and 2 exclusive. Higher values are more random, and lower values are more deterministic. Required range: `0 < x < 2`. | 0.2
 | spring.ai.openai.chat.options.frequencyPenalty | A multiplicative penalty greater than 0. Values greater than 1.0 penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. A value of 1.0 means no penalty. Incompatible with presence_penalty. Required range: `x > 0`. | 1

spring-ai-docs/src/main/antora/modules/ROOT/pages/api/embeddings/ollama-embeddings.adoc

Lines changed: 1 addition & 1 deletion
@@ -76,7 +76,7 @@ The prefix `spring.ai.ollama` is the property prefix to configure the connection
 [cols="3,6,1"]
 |====
 | Property | Description | Default
-| spring.ai.ollama.base-url | Base URL where Ollama API server is running. | `http://localhost:11434`
+| spring.ai.ollama.base-url | Base URL where Ollama API server is running. | `+http://localhost:11434+`
 |====

 Here are the properties for initializing the Ollama integration and xref:auto-pulling-models[auto-pulling models].

spring-ai-docs/src/main/antora/modules/ROOT/pages/api/image/stabilityai-image.adoc

Lines changed: 1 addition & 1 deletion
@@ -102,7 +102,7 @@ The prefix `spring.ai.stabilityai.image` is the property prefix that lets you co

 | spring.ai.stabilityai.image.enabled (Removed and no longer valid) | Enable Stability AI image model. | true
 | spring.ai.model.image | Enable Stability AI image model. | stabilityai
-| spring.ai.stabilityai.image.base-url | Optional overrides the spring.ai.openai.base-url to provide a specific url | `https://api.stability.ai/v1`
+| spring.ai.stabilityai.image.base-url | Optional overrides the spring.ai.openai.base-url to provide a specific url | `+https://api.stability.ai/v1+`
 | spring.ai.stabilityai.image.api-key | Optional overrides the spring.ai.openai.api-key to provide a specific api-key | -
 | spring.ai.stabilityai.image.option.n | The number of images to be generated. Must be between 1 and 10. | 1
 | spring.ai.stabilityai.image.option.model | The engine/model to use in Stability AI. The model is passed in the URL as a path parameter. | `stable-diffusion-v1-6`

spring-ai-docs/src/main/antora/modules/ROOT/pages/api/vectordbs/elasticsearch.adoc

Lines changed: 1 addition & 1 deletion
@@ -138,7 +138,7 @@ The Spring Boot properties starting with `spring.elasticsearch.*` are used to co
 | `spring.elasticsearch.connection-timeout` | Connection timeout used when communicating with Elasticsearch. | `1s`
 | `spring.elasticsearch.password` | Password for authentication with Elasticsearch. | -
 | `spring.elasticsearch.username` | Username for authentication with Elasticsearch.| -
-| `spring.elasticsearch.uris` | Comma-separated list of the Elasticsearch instances to use. | `http://localhost:9200`
+| `spring.elasticsearch.uris` | Comma-separated list of the Elasticsearch instances to use. | `+http://localhost:9200+`
 | `spring.elasticsearch.path-prefix` | Prefix added to the path of every request sent to Elasticsearch. | -
 | `spring.elasticsearch.restclient.sniffer.delay-after-failure` | Delay of a sniff execution scheduled after a failure.| `1m`
 | `spring.elasticsearch.restclient.sniffer.interval` | Interval between consecutive ordinary sniff executions. | `5m`
