Skip to content

Commit 8f9bb56

Browse files
minor tweaks
1 parent 362c1db commit 8f9bb56

File tree

4 files changed

+12
-10
lines changed

4 files changed

+12
-10
lines changed

docs/assets/invariant.css

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -967,7 +967,7 @@ ul.md-nav__list {
967967
.code-caption {
968968
font-size: 0.65rem;
969969
color: #666;
970-
margin-top: -0.9rem;
970+
margin-top: -0.5rem;
971971
padding-left: 4px;
972972
font-style: italic;
973973
}

docs/guardrails/llm.md

Lines changed: 7 additions & 7 deletions
Original file line numberDiff line numberDiff line change
@@ -1,11 +1,11 @@
11
---
2-
title: LLM
3-
description: Call an LLM on a given prompt.
2+
title: LLM-as-Guardrail
3+
description: Invoke an model to validate an action.
44
---
55

6-
# LLM
6+
# LLM-as-Guardrail
77
<div class='subtitle'>
8-
Call LLM on a given prompt.
8+
Invoke an model to validate a response or action.
99
</div>
1010

1111
During policy execution, you can call an LLM with custom prompts allowing for more flexible rules. LLMs are very powerful, especially in contexts where it is hard to state strict and deterministic rules or when some rudimentary thinking is needed.
@@ -41,9 +41,9 @@ Function to run an LLM in the policy execution.
4141
| `str` | The LLM response. |
4242

4343
### Prompt Injection Detector
44-
The `llm` function can be used instead of the `prompt-injection` function as a prompt injection detector. This is generally not recommended due to higher latency, but, in some contexts, it can be valuable to adjust the prompt to steer the behavior of the detector.
44+
For instance, the `llm` function can be used instead of [`prompt_injection`](./prompt-injections.md), to serve as a prompt injection detector. This is generally not recommended due to higher latency, but, in some contexts, it can be valuable to adjust the prompt to steer the behavior of the detector.
4545

46-
**Example:** Prompt Injection.
46+
**Example:** Detect a prompt injection in a tool's output.
4747
```guardrail
4848
from invariant import llm
4949

@@ -102,4 +102,4 @@ raise "Found prompt injection in tool output" if:
102102
}
103103
]
104104
```
105-
<div class="code-caption"> Detect prompt injection. </div>
105+
<div class="code-caption"> Detects a prompt injection hidden in a tool's output. </div>

docs/guardrails/sentence_similarity.md

Lines changed: 3 additions & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -8,7 +8,9 @@ description: Detect semantically similar sentences.
88
Detect semantically similar sentences.
99
</div>
1010

11-
Keywords are a simple way to flag potentially sensitive content in text, but they don’t always capture the full meaning. In cases where you need a deeper understanding of the content, semantic similarity is more effective. is_similar provides fuzzy matching between strings using sentence embedding models to detect whether two pieces of text are semantically alike.
11+
Keywords are a simple way to flag potentially sensitive content in text, but they don’t always capture the full meaning. In cases where you need a deeper understanding of the content, semantic similarity is more effective.
12+
13+
`is_similar` provides fuzzy matching between strings using sentence embedding models to detect whether two pieces of text are semantically alike.
1214

1315

1416
## is_similar

mkdocs.yml

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -110,7 +110,7 @@ nav:
110110
- Copyrighted Content: guardrails/copyright.md
111111
- Secret Tokens and Credentials: guardrails/secrets.md
112112
- Sentence Similarity: guardrails/sentence_similarity.md
113-
- LLM Calls: guardrails/llm.md
113+
- LLM-as-Guardrail: guardrails/llm.md
114114
- Guardrails in Gateway: guardrails/gateway.md
115115
- Guardrails in Explorer: guardrails/explorer.md
116116
- Rule Writing Reference: guardrails/rules.md

0 commit comments

Comments
 (0)