From b9e7d0145547554e2dec4e68fc7546f565e11b7f Mon Sep 17 00:00:00 2001 From: Zayd Simjee Date: Mon, 2 Sep 2024 14:00:48 -0700 Subject: [PATCH 01/13] guardrails rails integ --- .../nemo_guardrails/guardrails_rails.md | 79 +++++++++++++++++++ .../nemo_guardrails/rails_guardrails.md | 0 2 files changed, 79 insertions(+) create mode 100644 docs/integrations/nemo_guardrails/guardrails_rails.md create mode 100644 docs/integrations/nemo_guardrails/rails_guardrails.md diff --git a/docs/integrations/nemo_guardrails/guardrails_rails.md b/docs/integrations/nemo_guardrails/guardrails_rails.md new file mode 100644 index 000000000..914ae8f95 --- /dev/null +++ b/docs/integrations/nemo_guardrails/guardrails_rails.md @@ -0,0 +1,79 @@ +This guide will teach you how to add guardrails configurations built with NeMo Guardrails to your Guardrails AI application. + +# Overview + +The Guardrails AI library provides a Rails integration that allows you to use a Rails application as an LLM callable. The result is a Rails application whose completions are validated using a Guardrails AI guard configuration. + +We start by defining a Guardrails AI Guard and a Rails configuration. + +```python +from nemoguardrails import LLMRails, RailsConfig +from guardrails import Guard + +# Load a guardrails configuration from the specified path. +config = RailsConfig.from_path("PATH/TO/CONFIG") +rails = LLMRails(config) + +# Define a guardrails guard. +guard = Guard().use( + ToxicLanguage() +) +``` + +Then, we have the guard validate the completions generated by the Rails application. + +```python +from guardrails import RailsGuard +railsguard = RailsGuard(rails, guard) + +result = railsguard( + messages=[{ + "role":"user", + "content":"Hello! What can you do for me?" + }] +) +``` + +The `RailsGuard` class is a wrapper around the Guard class. 
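The delegate-then-validate flow described above can be sketched in plain Python with toy stand-ins (none of these classes are the real guardrails or nemoguardrails implementations; they only illustrate the control flow the wrapper provides):

```python
# Toy sketch of the wrapper pattern: generate with a "rails" app, then
# validate the completion with a "guard". Illustrative stand-ins only.
from dataclasses import dataclass


@dataclass
class ValidationOutcome:
    raw_llm_output: str
    validated_output: str
    validation_passed: bool


class FakeRails:
    """Stand-in for an LLMRails app: turns messages into a completion."""

    def generate(self, messages):
        return {"content": "Hello! I can answer questions."}


class FakeGuard:
    """Stand-in for a Guard: checks a completion against a word list."""

    def validate(self, text):
        return not any(bad in text.lower() for bad in ("slur", "profanity"))


class RailsGuardSketch:
    """Wraps a rails app so its completions pass through a guard."""

    def __init__(self, rails, guard):
        self.rails = rails
        self.guard = guard

    def __call__(self, messages):
        raw = self.rails.generate(messages)["content"]
        passed = self.guard.validate(raw)
        return ValidationOutcome(raw, raw if passed else "", passed)


result = RailsGuardSketch(FakeRails(), FakeGuard())(
    [{"role": "user", "content": "Hello! What can you do for me?"}]
)
print(result.validation_passed)  # → True
```

The real classes carry far more behavior (streaming, metadata, reasks); the sketch only shows the shape of the call.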
Just like a Guard, it can be [called](https://www.guardrailsai.com/docs/api_reference_markdown/guards#__call__) with similar parameters to the OpenAI completions API. It also returns a `ValidationOutcome` object (or an iterable, in streaming cases). That object can be destructured to get the raw output, the validated output, and other metadata. + +Here, `raw_llm_output` is the output returned by the NeMo Guardrails Rails: + +``` +result.raw_llm_output +result.validated_output +result.validation_passed +``` + +## Expected NeMo Guardrails Rails output + +The NeMo Guardrails Rails may return any serializable type expressible in Python using native types or Pydantic. The output must conform to the datatypes expected by the specified Guard. If the output is structured, make sure to initialize the Guardrails AI Guard using Pydantic, [following this guide](https://www.guardrailsai.com/docs/how_to_guides/generate_structured_data). + +# Integration with the NeMo Guardrails server + +To wrap a call to the NeMo Guardrails server, we can leverage its OpenAI-style API endpoint. We can talk to this endpoint directly through the Guard, setting the correct `endpoint` and `config_id`. + + +First, start the NeMo Guardrails server: + +```bash +nemoguardrails server [--config PATH/TO/CONFIGS] [--port PORT] +``` + +Then, talk to it using the Guard: + +```python +from guardrails import Guard +from guardrails.hub import ToxicLanguage + +guard = Guard().use( + ToxicLanguage() +) + +# invoke the guard using the endpoint and config_id +guard( + endpoint="http://localhost:8000/v1/chat/completions", + config_id="CONFIG_ID", + messages=[{ + "role":"user", + "content":"Hello! What can you do for me?" 
+ }] +) +``` \ No newline at end of file diff --git a/docs/integrations/nemo_guardrails/rails_guardrails.md b/docs/integrations/nemo_guardrails/rails_guardrails.md new file mode 100644 index 000000000..e69de29bb From cac51206a7dc676d07d09abc141cbdb96244e34b Mon Sep 17 00:00:00 2001 From: Zayd Simjee Date: Wed, 4 Sep 2024 11:51:19 -0700 Subject: [PATCH 02/13] add server details --- .../nemo_guardrails/guardrails_rails.md | 9 ++-- .../nemo_guardrails/rails_guardrails.md | 52 +++++++++++++++++++ .../using_rails_as_a_validator.md | 3 ++ 3 files changed, 61 insertions(+), 3 deletions(-) create mode 100644 docs/integrations/nemo_guardrails/using_rails_as_a_validator.md diff --git a/docs/integrations/nemo_guardrails/guardrails_rails.md b/docs/integrations/nemo_guardrails/guardrails_rails.md index 914ae8f95..c5c42e55a 100644 --- a/docs/integrations/nemo_guardrails/guardrails_rails.md +++ b/docs/integrations/nemo_guardrails/guardrails_rails.md @@ -4,11 +4,14 @@ This guide will teach you how to add guardrails configurations built with NeMo G The Guardrails AI library provides a Rails integration that allows you to use a Rails application as an LLM callable. This will result in a Rails application that generates completions that are validated using a GuardrailsAI guard configuration. -We start by defining a Guardrails AI Guard and a Rails configuration. +We start by defining a Guardrails AI Guard and a Rails configuration. We'll also install the [ToxicLanguage validator](https://hub.guardrailsai.com/validator/guardrails/toxic_language) from the [Guardrails AI Hub](https://hub.guardrailsai.com/). ```python from nemoguardrails import LLMRails, RailsConfig -from guardrails import Guard +from guardrails import Guard, install + +install("hub://guardrails/toxic_language") +from guardrails.hub import ToxicLanguage # Load a guardrails configuration from the specified path. config = RailsConfig.from_path("PATH/TO/CONFIG") @@ -76,4 +79,4 @@ guard( "content":"Hello! 
What can you do for me?" }] ) -``` \ No newline at end of file +``` diff --git a/docs/integrations/nemo_guardrails/rails_guardrails.md b/docs/integrations/nemo_guardrails/rails_guardrails.md index e69de29bb..4acfcdace 100644 --- a/docs/integrations/nemo_guardrails/rails_guardrails.md +++ b/docs/integrations/nemo_guardrails/rails_guardrails.md @@ -0,0 +1,52 @@ +::: +note: This will exist in the NeMo Guardrails docs +::: + + +# Introduction + +Integrating Guardrails AI with NeMo Guardrails combines the strengths of both frameworks: + +Guardrails AI's extensive hub of validators can enhance NeMo Guardrails' input and output checking capabilities. +NeMo Guardrails' flexible configuration system can provide a powerful context for applying Guardrails AI validators. +Users of both frameworks can benefit from a seamless integration, reducing development time and improving overall safety measures. +This integration allows developers to leverage the best features of both frameworks, creating more robust and secure LLM applications. + +# Overview +This document provides a guide to using a Guardrails AI Guard as an action within a NeMo Guardrails Rails application. This can be done either by defining an entire Guard and registering it, or by registering a validator directly. + +## Registering a Guard as an action + +First, we install our validators and define our Guard. + +```python +from guardrails import Guard, install +install("hub://guardrails/toxic_language") +from guardrails.hub import ToxicLanguage + +guard = Guard().use( + ToxicLanguage() +) +``` + +Next, we register our `guard` using the nemoguardrails registration API. + +```python +from nemoguardrails import RailsConfig, LLMRails + +config = RailsConfig.from_path("path/to/config") +rails = LLMRails(config) + +rails.register_action(guard, "custom_guard_action") +``` + +Now, the `custom_guard_action` can be used as an action within the Rails specification. 
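To make the registration mechanics concrete, here is a toy dispatch sketch (illustrative only, not the actual nemoguardrails internals — the `MiniRails` class and the lambda guard are hypothetical stand-ins):

```python
# Toy sketch of action registration and dispatch: a flow step like
# `$result = execute custom_guard_action` conceptually looks up the
# registered callable by name and invokes it.
class MiniRails:
    def __init__(self):
        self._actions = {}

    def register_action(self, action, name):
        # Store the callable under the name flows refer to.
        self._actions[name] = action

    def execute(self, name, text):
        # Invoke the named action, as a flow step would.
        return self._actions[name](text)


rails = MiniRails()

# A guard-like callable: True when the text passes validation.
rails.register_action(lambda text: "badword" not in text, "custom_guard_action")

result = rails.execute("custom_guard_action", "hello world")
print(result)  # → True
```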
This action can be used on input or output, and may be used in any number of flows. + +```colang +define flow + ... + $result = execute custom_guard_action + ... +``` + + diff --git a/docs/integrations/nemo_guardrails/using_rails_as_a_validator.md b/docs/integrations/nemo_guardrails/using_rails_as_a_validator.md new file mode 100644 index 000000000..7aca91573 --- /dev/null +++ b/docs/integrations/nemo_guardrails/using_rails_as_a_validator.md @@ -0,0 +1,3 @@ +# Placeholder + +The idea here is to use an entire Rails app as a validator. The Rails app would have to return results in either the ValidationOutcome format or a boolean indicating whether the validation passed. \ No newline at end of file From fd54b9732e64cf27293475a41ed108f83e589aae Mon Sep 17 00:00:00 2001 From: zsimjee Date: Mon, 16 Sep 2024 17:00:59 -0700 Subject: [PATCH 03/13] WIP --- docs/integrations/nemo_guardrails/config.yml | 4 + .../nemoguardrails_in_guardrails.ipynb | 111 ++++++++++++++++++ docs/integrations/nemo_guardrails/nm_gr.py | 28 +++++ guardrails/Untitled-1.ipynb | 77 ++++++++++++ .../integrations/nemoguardrails/__init__.py | 0 .../async_nemoguardrails_guard.py | 0 .../nemoguardrails/nemoguardrails_guard.py | 79 +++++++++++++ guardrails/llm_providers.py | 2 +- pyproject.toml | 6 + 9 files changed, 306 insertions(+), 1 deletion(-) create mode 100644 docs/integrations/nemo_guardrails/config.yml create mode 100644 docs/integrations/nemo_guardrails/nemoguardrails_in_guardrails.ipynb create mode 100644 docs/integrations/nemo_guardrails/nm_gr.py create mode 100644 guardrails/Untitled-1.ipynb create mode 100644 guardrails/integrations/nemoguardrails/__init__.py create mode 100644 guardrails/integrations/nemoguardrails/async_nemoguardrails_guard.py create mode 100644 guardrails/integrations/nemoguardrails/nemoguardrails_guard.py diff --git a/docs/integrations/nemo_guardrails/config.yml b/docs/integrations/nemo_guardrails/config.yml new file mode 100644 index 000000000..2002a9690 --- /dev/null 
+++ b/docs/integrations/nemo_guardrails/config.yml @@ -0,0 +1,4 @@ +models: + - type: main + engine: openai + model: gpt-3.5-turbo-instruct \ No newline at end of file diff --git a/docs/integrations/nemo_guardrails/nemoguardrails_in_guardrails.ipynb b/docs/integrations/nemo_guardrails/nemoguardrails_in_guardrails.ipynb new file mode 100644 index 000000000..84a668d84 --- /dev/null +++ b/docs/integrations/nemo_guardrails/nemoguardrails_in_guardrails.ipynb @@ -0,0 +1,111 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "/home/zayd/workspace/guardrails/.venv/lib/python3.11/site-packages/nemoguardrails/rails/llm/llmrails.py:371: LangChainDeprecationWarning: The class `OpenAI` was deprecated in LangChain 0.0.10 and will be removed in 1.0. An updated version of the class exists in the langchain-openai package and should be used instead. To use it run `pip install -U langchain-openai` and import as `from langchain_openai import OpenAI`.\n", + " self.llm = provider_cls(**kwargs)\n" + ] + }, + { + "data": { + "application/vnd.jupyter.widget-view+json": { + "model_id": "4c455303c28c41dab4750619d100d92c", + "version_major": 2, + "version_minor": 0 + }, + "text/plain": [ + "Fetching 5 files: 0%| | 0/5 [00:00 47\u001b[0m \u001b[43msync_test\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\n", + "Cell \u001b[0;32mIn[1], line 39\u001b[0m, in \u001b[0;36msync_test\u001b[0;34m()\u001b[0m\n\u001b[1;32m 38\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21msync_test\u001b[39m():\n\u001b[0;32m---> 39\u001b[0m \u001b[43mrails\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mgenerate\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 40\u001b[0m \u001b[43m \u001b[49m\u001b[43mmessages\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43m[\u001b[49m\u001b[43m{\u001b[49m\n\u001b[1;32m 41\u001b[0m \u001b[43m 
\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mrole\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43muser\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m 42\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mcontent\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mGenerate a list of words that I should ban at work. Make sure to spell out all the words. Return only the list of words without preamble or postamble. Make sure to return ONLY the list of words. This list should include profanity, slurs, and other offensive language.\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\n\u001b[1;32m 43\u001b[0m \u001b[43m \u001b[49m\u001b[43m}\u001b[49m\u001b[43m]\u001b[49m\n\u001b[1;32m 44\u001b[0m \u001b[43m \u001b[49m\u001b[43m)\u001b[49m\n", + "File \u001b[0;32m~/workspace/guardrails/.venv/lib/python3.11/site-packages/nemoguardrails/rails/llm/llmrails.py:879\u001b[0m, in \u001b[0;36mLLMRails.generate\u001b[0;34m(self, prompt, messages, return_context, options, state)\u001b[0m\n\u001b[1;32m 876\u001b[0m \u001b[38;5;250m\u001b[39m\u001b[38;5;124;03m\"\"\"Synchronous version of generate_async.\"\"\"\u001b[39;00m\n\u001b[1;32m 878\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m check_sync_call_from_async_loop():\n\u001b[0;32m--> 879\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mRuntimeError\u001b[39;00m(\n\u001b[1;32m 880\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mYou are using the sync `generate` inside async code. 
\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 881\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mYou should replace with `await generate_async(...)` or use `nest_asyncio.apply()`.\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 882\u001b[0m )\n\u001b[1;32m 884\u001b[0m loop \u001b[38;5;241m=\u001b[39m get_or_create_event_loop()\n\u001b[1;32m 886\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m loop\u001b[38;5;241m.\u001b[39mrun_until_complete(\n\u001b[1;32m 887\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mgenerate_async(\n\u001b[1;32m 888\u001b[0m prompt\u001b[38;5;241m=\u001b[39mprompt,\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 893\u001b[0m )\n\u001b[1;32m 894\u001b[0m )\n", + "\u001b[0;31mRuntimeError\u001b[0m: You are using the sync `generate` inside async code. You should replace with `await generate_async(...)` or use `nest_asyncio.apply()`." + ] + } + ], + "source": [ + "from nemoguardrails import LLMRails, RailsConfig\n", + "from guardrails import AsyncGuard, install\n", + "\n", + "# install(\"hub://guardrails/toxic_language\", install_local_models=False)\n", + "from guardrails.hub import ToxicLanguage\n", + "\n", + "# Load a guardrails configuration from the specified path.\n", + "config = RailsConfig.from_path(\"./config.yml\")\n", + "rails = LLMRails(config)\n", + "\n", + "# Define a guardrails guard.\n", + "guard = AsyncGuard().use(\n", + " ToxicLanguage()\n", + ")\n", + "\n", + "\n", + "# await rails.generate_async(\n", + "# messages=[{\n", + "# \"role\": \"user\",\n", + "# \"content\": \"Generate a list of words that I should ban at work. Make sure to spell out all the words. Return only the list of words without preamble or postamble. Make sure to return ONLY the list of words. 
This list should include profanity, slurs, and other offensive language.\"\n", + "# }]\n", + "# )\n", + "\n", + "async def custom_callable(prompt: str):\n", + " return (await rails.generate_async(\n", + " messages=[{\n", + " \"role\": \"user\",\n", + " \"content\": prompt\n", + " }]))['content']\n", + "\n", + "res = await guard(\n", + " llm_api=custom_callable,\n", + " prompt=\"Generate a list of words that I should ban at work. Make sure to spell out all the words. Return only the list of words without preamble or postamble. Make sure to return ONLY the list of words. This list should include profanity, slurs, and other offensive language.\"\n", + ")\n", + "print(res.validation_passed)\n" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": ".venv", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.7" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/docs/integrations/nemo_guardrails/nm_gr.py b/docs/integrations/nemo_guardrails/nm_gr.py new file mode 100644 index 000000000..e3c6cfe2b --- /dev/null +++ b/docs/integrations/nemo_guardrails/nm_gr.py @@ -0,0 +1,28 @@ +from nemoguardrails import LLMRails, RailsConfig +from guardrails.integrations.nemoguardrails.nemoguardrails_guard import ( + NemoguardrailsGuard, +) + +# install("hub://guardrails/toxic_language", install_local_models=False) +from guardrails.hub import ToxicLanguage # noqa: E402 + +# Load a guardrails configuration from the specified path. 
+config = RailsConfig.from_path("./config.yml") +nemorails = LLMRails(config) + +# define your guard and validator(s) +guard = NemoguardrailsGuard(nemorails=nemorails).use(ToxicLanguage()) + +res = guard( + messages=[ + { + "role": "user", + "content": """Generate a list of words that I should ban at work. + Make sure to spell out all the words. Return only the list of words without + preamble or postamble. Make sure to return ONLY the list of words. + This list should include profanity, slurs, and other offensive language.""", + } + ] +) + +print(res.validation_passed) # type: ignore diff --git a/guardrails/Untitled-1.ipynb b/guardrails/Untitled-1.ipynb new file mode 100644 index 000000000..b8cae8d87 --- /dev/null +++ b/guardrails/Untitled-1.ipynb @@ -0,0 +1,77 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [ + { + "ename": "", + "evalue": "", + "output_type": "error", + "traceback": [ + "\u001b[1;31mThe kernel failed to start as 'MappingProxyType' could not be imported from 'most likely due to a circular import'.\n", + "\u001b[1;31mClick here for more info." + ] + } + ], + "source": [ + "from guardrails import Guard\n", + "\n", + "guard = Guard.fetch_guard(\n", + " name=\"nsfw-guard\"\n", + ")\n", + "\n", + "guard(\n", + " model='gpt-3.5-turbo',\n", + " messages=[{\n", + " \"role\": \"user\",\n", + " \"content\": \"I'm, like, so horny right now.\"\n", + " }]\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [ + { + "ename": "", + "evalue": "", + "output_type": "error", + "traceback": [ + "\u001b[1;31mThe kernel failed to start as 'MappingProxyType' could not be imported from 'most likely due to a circular import'.\n", + "\u001b[1;31mClick here for more info." 
+ ] + } + ], + "source": [ + "from guardrails import GuardClient\n", + "\n", + "client = GuardClient(name=\"nsfw-guard\")" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": ".venv", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.7" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/guardrails/integrations/nemoguardrails/__init__.py b/guardrails/integrations/nemoguardrails/__init__.py new file mode 100644 index 000000000..e69de29bb diff --git a/guardrails/integrations/nemoguardrails/async_nemoguardrails_guard.py b/guardrails/integrations/nemoguardrails/async_nemoguardrails_guard.py new file mode 100644 index 000000000..e69de29bb diff --git a/guardrails/integrations/nemoguardrails/nemoguardrails_guard.py b/guardrails/integrations/nemoguardrails/nemoguardrails_guard.py new file mode 100644 index 000000000..9e8998b49 --- /dev/null +++ b/guardrails/integrations/nemoguardrails/nemoguardrails_guard.py @@ -0,0 +1,79 @@ +from typing import Callable, Iterable, Optional, Union +from typing_extensions import deprecated + +from guardrails.classes.output_type import OT +from guardrails.classes.validation_outcome import ValidationOutcome + +from guardrails import Guard +from nemoguardrails import LLMRails + + +class NemoguardrailsGuard(Guard): + def __init__( + self, + nemorails: LLMRails, + *args, + **kwargs, + ): + super().__init__(*args, **kwargs) + self._nemorails = nemorails + + def __call__( + self, llm_api: Optional[Callable] = None, *args, **kwargs + ) -> Union[ValidationOutcome[OT], Iterable[ValidationOutcome[OT]]]: + # peel llm_api off of kwargs + llm_api = kwargs.pop("llm_api", None) + + # if llm_api is defined, throw an error + if llm_api is not None: + raise ValueError( + """llm_api should not 
be passed to a NemoguardrailsGuard object. + The Nemoguardrails LLMRails object passed in will be used as the LLM.""" + ) + + # peel off messages from kwargs + messages = kwargs.get("messages", None) + + # if messages is not defined, throw an error + if messages is None: + raise ValueError( + """messages should be passed to a NemoguardrailsGuard object. + The messages to be passed to the LLM should be passed in as a list of + dictionaries, where each dictionary has a 'role' key and a 'content' key.""" + ) + + # create the callable + def custom_callable(**kwargs): + # .generate doesn't like temp + kwargs.pop("temperature", None) + + msg_history = kwargs.pop("msg_history", None) + messages = ( + msg_history + if kwargs.get("messages") is None + else kwargs.get("messages") + ) + + prompt = kwargs.get("prompt") + + if messages is not None: + kwargs["messages"] = messages + + if (messages is None) and (prompt is None): + raise ValueError("""messages or prompt should be passed.""") + + return (self._nemorails.generate(**kwargs))["content"] # type: ignore + + return super().__call__(llm_api=custom_callable, *args, **kwargs) + + def from_pydantic(self, *args, **kwargs): + pass + + @deprecated( + "This method has been deprecated. Please use the main constructor `NemoGuardrailsGuard(nemorails=nemorails)` or the `from_pydantic` method.", + ) + def from_rail_string(cls, *args, **kwargs): + raise NotImplementedError("""\ +`from_rail_string` is not implemented for NemoguardrailsGuard. +We recommend using the main constructor `NemoGuardrailsGuard(nemorails=nemorails)` +or the `from_pydantic` method.""") diff --git a/guardrails/llm_providers.py b/guardrails/llm_providers.py index d601d228a..4c76c340e 100644 --- a/guardrails/llm_providers.py +++ b/guardrails/llm_providers.py @@ -159,7 +159,7 @@ def _invoke_llm( text=..., instructions=..., msg_history=..., - temperature=..., + =..., ... 
) ``` diff --git a/pyproject.toml b/pyproject.toml index 16f740e82..bd73c568e 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -105,6 +105,12 @@ pillow = "^10.1.0" cairosvg = "^2.7.1" mkdocs-glightbox = "^0.3.4" +[tool.poetry.group.nemoguardrails] +optional = true + +[tool.poetry.group.nemoguardrails.dependencies] +nemoguardrails = "0.9.1.1" + [[tool.poetry.source]] name = "PyPI" From 59f2be50b4978e7e15d47157d91ead712c3ebd0c Mon Sep 17 00:00:00 2001 From: zsimjee Date: Wed, 18 Sep 2024 12:50:38 -0700 Subject: [PATCH 04/13] wip --- docs/integrations/nemo_guardrails/nm_gr.py | 185 ++++++++++++++++- .../nemoguardrails/nemoguardrails_guard.py | 188 +++++++++++++++--- 2 files changed, 337 insertions(+), 36 deletions(-) diff --git a/docs/integrations/nemo_guardrails/nm_gr.py b/docs/integrations/nemo_guardrails/nm_gr.py index e3c6cfe2b..1bc6aaa97 100644 --- a/docs/integrations/nemo_guardrails/nm_gr.py +++ b/docs/integrations/nemo_guardrails/nm_gr.py @@ -1,7 +1,11 @@ +from typing import List from nemoguardrails import LLMRails, RailsConfig +from guardrails import Guard from guardrails.integrations.nemoguardrails.nemoguardrails_guard import ( NemoguardrailsGuard, ) +from pydantic import BaseModel, Field +import rich # install("hub://guardrails/toxic_language", install_local_models=False) from guardrails.hub import ToxicLanguage # noqa: E402 @@ -13,16 +17,175 @@ # define your guard and validator(s) guard = NemoguardrailsGuard(nemorails=nemorails).use(ToxicLanguage()) -res = guard( - messages=[ - { - "role": "user", - "content": """Generate a list of words that I should ban at work. - Make sure to spell out all the words. Return only the list of words without - preamble or postamble. Make sure to return ONLY the list of words. - This list should include profanity, slurs, and other offensive language.""", - } - ] +# res = guard( +# messages=[ +# { +# "role": "user", +# "content": """Generate a list of words that I should ban at work. +# Make sure to spell out all the words. 
Return only the list of words without +# preamble or postamble. Make sure to return ONLY the list of words. +# This list should include profanity, slurs, and other offensive language.""", +# } +# ], +# ) + +# print(res.validation_passed) # type: ignore + +# rich.print(guard.history.last.tree) + +# res = guard( +# messages=[ +# { +# "role": "user", +# "content": """Say hi in the nicest way you can""", +# } +# ] +# ) + +# print(res.validation_passed) # type: ignore + +class StructuredOutput(BaseModel): + nontoxic: str = Field(validators=[ToxicLanguage()]) + +# pydantic_guard = NemoguardrailsGuard.from_pydantic( +# nemorails=nemorails, +# output_class=StructuredOutput, +# ) + +# res = pydantic_guard( +# messages=[ +# { +# "role": "system", +# "content": """Only respond in JSON. The JSON should be formatted as follows: + +# {'nontoxic': response to the prompt} + +# AGAIN, be sure to respond ONLY with valid JSON. Do not include any other text. +# """, +# }, +# { +# "role": "user", +# "content": """Generate a list of words that I should ban at work. +# Make sure to spell out all the words. Return only the list of words without +# preamble or postamble. Make sure to return ONLY the list of words. +# This list should include profanity, slurs, and other offensive language.""", +# } +# ], +# num_reasks=1 +# ) + +# print(res.validation_passed) # type: ignore + +# rich.print(pydantic_guard.history.last.tree) + + + +# class StructuredOutput(BaseModel): +# nontoxic: str = Field(validators=[ToxicLanguage()]) + +# pydantic_guard = Guard.from_pydantic( +# # nemorails=nemorails, +# output_class=StructuredOutput, +# ) + +# g = Guard() +# g( +# model="gpt-3.5-turbo", +# messages=[ +# # { +# # "role": "system", +# # "content": """Only respond in JSON. The JSON should be formatted as follows: + +# # {'nontoxic': response to the prompt} + +# # AGAIN, be sure to respond ONLY with valid JSON. Do not include any other text. 
+# # """, +# # }, +# { +# "role": "user", +# "content": """Write pydantic structure about medical data. It should have hundreds of fields, with some nesting""", +# } +# ], +# # tools=pydantic_guard.json_function_calling_tool(), +# # tool_choice="required", +# # generate_kwargs={"temperature": 0.0}, +# ) +# # rich.print(pydantic_guard.history.last.tree) +# rich.print(g.history.last.tree) +# print(g.history.last.raw_outputs) + + +class MedicalData(BaseModel): + patient_id: int + patient_name: str + age: int + gender: str + address: str + contact_number: str + blood_type: str + allergies: List[str] + medications: List[str] + medical_history: List[str] + family_history: List[str] + insurance_provider: str + insurance_policy_number: str + emergency_contact_name: str + emergency_contact_number: str + primary_care_physician: str + last_checkup_date: str + next_checkup_date: str + weight: float + height: float + blood_pressure: str + heart_rate: int + respiratory_rate: int + temperature: float + symptoms: List[str] + diagnosis: str + treatment_plan: str + lab_results: List[str] + imaging_results: List[str] + follow_up_instructions: str + additional_notes: str + vaccination_records: List[str] + surgeries: List[str] + hospitalizations: List[str] + chronic_conditions: List[str] + lifestyle_factors: List[str] + exercise_routine: str + diet_plan: str + sleep_pattern: str + stress_level: str + smoking_status: str + alcohol_consumption: str + drug_usage: str + mental_health_history: List[str] + social_support_system: List[str] + living_situation: str + employment_status: str + education_level: str + income_level: str + access_to_healthcare: str + healthcare_preferences: List[str] + healthcare_goals: List[str] + healthcare_challenges: List[str] + healthcare_needs: List[str] + healthcare_preferences: List[str] + healthcare_expectations: List[str] + healthcare_satisfaction: str + healthcare_feedback: str + additional_information: str + # Add more fields as needed + + 
+med_pydantic_guard = Guard.from_pydantic( + output_class=MedicalData, ) -print(res.validation_passed) # type: ignore +import time + +init_time = time.time() + +med_pydantic_guard.json_function_calling_tool() + +print(f"Time taken: {time.time() - init_time}") \ No newline at end of file diff --git a/guardrails/integrations/nemoguardrails/nemoguardrails_guard.py b/guardrails/integrations/nemoguardrails/nemoguardrails_guard.py index 9e8998b49..0a5365642 100644 --- a/guardrails/integrations/nemoguardrails/nemoguardrails_guard.py +++ b/guardrails/integrations/nemoguardrails/nemoguardrails_guard.py @@ -1,12 +1,23 @@ -from typing import Callable, Iterable, Optional, Union +from typing import Callable, Dict, Iterable, List, Optional, Union, cast +import warnings from typing_extensions import deprecated -from guardrails.classes.output_type import OT +from guardrails.classes.execution.guard_execution_options import GuardExecutionOptions +from guardrails.classes.output_type import OT, OutputTypes from guardrails.classes.validation_outcome import ValidationOutcome from guardrails import Guard from nemoguardrails import LLMRails +from guardrails.formatters import get_formatter +from guardrails.formatters.base_formatter import BaseFormatter +from guardrails.schema.pydantic_schema import pydantic_model_to_schema +from guardrails.types.pydantic import ModelOrListOfModels + +from guardrails.stores.context import ( + Tracer +) + class NemoguardrailsGuard(Guard): def __init__( @@ -19,7 +30,7 @@ def __init__( self._nemorails = nemorails def __call__( - self, llm_api: Optional[Callable] = None, *args, **kwargs + self, llm_api: Optional[Callable] = None, generate_kwargs: Optional[Dict] = None, *args, **kwargs ) -> Union[ValidationOutcome[OT], Iterable[ValidationOutcome[OT]]]: # peel llm_api off of kwargs llm_api = kwargs.pop("llm_api", None) @@ -42,32 +53,150 @@ def __call__( dictionaries, where each dictionary has a 'role' key and a 'content' key.""" ) - # create the callable - def 
custom_callable(**kwargs): - # .generate doesn't like temp - kwargs.pop("temperature", None) - - msg_history = kwargs.pop("msg_history", None) - messages = ( - msg_history - if kwargs.get("messages") is None - else kwargs.get("messages") - ) - - prompt = kwargs.get("prompt") + def _custom_nemo_callable(*args, **kwargs): + return self._custom_nemo_callable(*args, generate_kwargs=generate_kwargs, **kwargs) - if messages is not None: - kwargs["messages"] = messages + return super().__call__(llm_api=_custom_nemo_callable, *args, **kwargs) - if (messages is None) and (prompt is None): - raise ValueError("""messages or prompt should be passed.""") - - return (self._nemorails.generate(**kwargs))["content"] # type: ignore + @classmethod + def from_pydantic( + cls, + nemorails: LLMRails, + output_class: ModelOrListOfModels, + *, + prompt: Optional[str] = None, + instructions: Optional[str] = None, + num_reasks: Optional[int] = None, + reask_prompt: Optional[str] = None, + reask_instructions: Optional[str] = None, + reask_messages: Optional[List[Dict]] = None, + messages: Optional[List[Dict]] = None, + tracer: Optional[Tracer] = None, + name: Optional[str] = None, + description: Optional[str] = None, + output_formatter: Optional[Union[str, BaseFormatter]] = None, + ): + """Create a Guard instance using a Pydantic model to specify the output + schema. + + Args: + output_class: (Union[Type[BaseModel], List[Type[BaseModel]]]): The pydantic model that describes + the desired structure of the output. + prompt (str, optional): The prompt used to generate the string. Defaults to None. + instructions (str, optional): Instructions for chat models. Defaults to None. + reask_prompt (str, optional): An alternative prompt to use during reasks. Defaults to None. + reask_instructions (str, optional): Alternative instructions to use during reasks. Defaults to None. + reask_messages (List[Dict], optional): A list of messages to use during reasks. Defaults to None. 
+ num_reasks (int, optional): The max times to re-ask the LLM if validation fails. Deprecated + tracer (Tracer, optional): An OpenTelemetry tracer to use for metrics and traces. Defaults to None. + name (str, optional): A unique name for this Guard. Defaults to `gr-` + the object id. + description (str, optional): A description for this Guard. Defaults to None. + output_formatter (str | Formatter, optional): 'none' (default), 'jsonformer', or a Guardrails Formatter. + """ # noqa + + if num_reasks: + warnings.warn( + "Setting num_reasks during initialization is deprecated" + " and will be removed in 0.6.x!" + "We recommend setting num_reasks when calling guard()" + " or guard.parse() instead." + "If you insist on setting it at the Guard level," + " use 'Guard.configure()'.", + DeprecationWarning, + ) - return super().__call__(llm_api=custom_callable, *args, **kwargs) + if reask_instructions: + warnings.warn( + "reask_instructions is deprecated and will be removed in 0.6.x!" + "Please be prepared to set reask_messages instead.", + DeprecationWarning, + ) + if reask_prompt: + warnings.warn( + "reask_prompt is deprecated and will be removed in 0.6.x!" 
+ "Please be prepared to set reask_messages instead.", + DeprecationWarning, + ) - def from_pydantic(self, *args, **kwargs): - pass + # We have to set the tracer in the ContextStore before the Rail, + # and therefore the Validators, are initialized + cls._set_tracer(cls, tracer) # type: ignore + + schema = pydantic_model_to_schema(output_class) + exec_opts = GuardExecutionOptions( + prompt=prompt, + instructions=instructions, + reask_prompt=reask_prompt, + reask_instructions=reask_instructions, + reask_messages=reask_messages, + messages=messages, + ) + + # TODO: This is the only line that's changed vs the parent Guard class + # Find a way to refactor this + guard = cls( + nemorails=nemorails, + name=name, + description=description, + output_schema=schema.json_schema, + validators=schema.validators, + ) + if schema.output_type == OutputTypes.LIST: + guard = cast(Guard[List], guard) + else: + guard = cast(Guard[Dict], guard) + guard.configure(num_reasks=num_reasks, tracer=tracer) + guard._validator_map = schema.validator_map + guard._exec_opts = exec_opts + guard._output_type = schema.output_type + guard._base_model = output_class + if isinstance(output_formatter, str): + if isinstance(output_class, list): + raise Exception("""Root-level arrays are not supported with the + jsonformer argument, but can be used with other json generation methods. + Omit the output_formatter argument to use the other methods.""") + output_formatter = get_formatter( + output_formatter, + schema=output_class.model_json_schema(), # type: ignore + ) + guard._output_formatter = output_formatter + guard._fill_validators() + return guard + + # create the callable + def _custom_nemo_callable(self, *args, generate_kwargs, **kwargs): + # .generate doesn't like temp + kwargs.pop("temperature", None) + + # msg_history, messages, prompt, and instruction all may or may not be present. 
+ # if none of them are present, raise an error + # if messages is present, use that + # if msg_history is present, use + + msg_history = kwargs.pop("msg_history", None) + messages = kwargs.pop("messages", None) + prompt = kwargs.pop("prompt", None) + instructions = kwargs.pop("instructions", None) + + if msg_history is not None and messages is None: + messages = msg_history + + if messages is None and msg_history is None: + messages = [] + if instructions is not None: + messages.append({"role": "system", "content": instructions}) + if prompt is not None: + messages.append({"role": "system", "content": prompt}) + + if messages is [] or messages is None: + raise ValueError("messages, prompt, or instructions should be passed during a call.") + + # kwargs["messages"] = messages + + # return (self._nemorails.generate(**kwargs))["content"] # type: ignore + if not generate_kwargs: + generate_kwargs = {} + return (self._nemorails.generate(messages=messages, **generate_kwargs))["content"] # type: ignore @deprecated( "This method has been deprecated. Please use the main constructor `NemoGuardrailsGuard(nemorails=nemorails)` or the `from_pydantic` method.", @@ -76,4 +205,13 @@ def from_rail_string(cls, *args, **kwargs): raise NotImplementedError("""\ `from_rail_string` is not implemented for NemoguardrailsGuard. We recommend using the main constructor `NemoGuardrailsGuard(nemorails=nemorails)` +or the `from_pydantic` method.""") + + @deprecated( + "This method has been deprecated. Please use the main constructor `NemoGuardrailsGuard(nemorails=nemorails)` or the `from_pydantic` method.", + ) + def from_rail(cls, *args, **kwargs): + raise NotImplementedError("""\ +`from_rail` is not implemented for NemoguardrailsGuard. 
+We recommend using the main constructor `NemoGuardrailsGuard(nemorails=nemorails)` or the `from_pydantic` method.""") From 48d1dafa74acae48a951261282b587847b3cc4e1 Mon Sep 17 00:00:00 2001 From: zsimjee Date: Thu, 17 Oct 2024 14:22:29 -0700 Subject: [PATCH 05/13] clean up integration --- guardrails/Untitled-1.ipynb | 77 ------------------- .../async_nemoguardrails_guard.py | 0 guardrails/llm_providers.py | 2 +- 3 files changed, 1 insertion(+), 78 deletions(-) delete mode 100644 guardrails/Untitled-1.ipynb delete mode 100644 guardrails/integrations/nemoguardrails/async_nemoguardrails_guard.py diff --git a/guardrails/Untitled-1.ipynb b/guardrails/Untitled-1.ipynb deleted file mode 100644 index b8cae8d87..000000000 --- a/guardrails/Untitled-1.ipynb +++ /dev/null @@ -1,77 +0,0 @@ -{ - "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [ - { - "ename": "", - "evalue": "", - "output_type": "error", - "traceback": [ - "\u001b[1;31mThe kernel failed to start as 'MappingProxyType' could not be imported from 'most likely due to a circular import'.\n", - "\u001b[1;31mClick here for more info." - ] - } - ], - "source": [ - "from guardrails import Guard\n", - "\n", - "guard = Guard.fetch_guard(\n", - " name=\"nsfw-guard\"\n", - ")\n", - "\n", - "guard(\n", - " model='gpt-3.5-turbo',\n", - " messages=[{\n", - " \"role\": \"user\",\n", - " \"content\": \"I'm, like, so horny right now.\"\n", - " }]\n", - ")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [ - { - "ename": "", - "evalue": "", - "output_type": "error", - "traceback": [ - "\u001b[1;31mThe kernel failed to start as 'MappingProxyType' could not be imported from 'most likely due to a circular import'.\n", - "\u001b[1;31mClick here for more info." 
- ] - } - ], - "source": [ - "from guardrails import GuardClient\n", - "\n", - "client = GuardClient(name=\"nsfw-guard\")" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": ".venv", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.11.7" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} diff --git a/guardrails/integrations/nemoguardrails/async_nemoguardrails_guard.py b/guardrails/integrations/nemoguardrails/async_nemoguardrails_guard.py deleted file mode 100644 index e69de29bb..000000000 diff --git a/guardrails/llm_providers.py b/guardrails/llm_providers.py index 4c76c340e..d601d228a 100644 --- a/guardrails/llm_providers.py +++ b/guardrails/llm_providers.py @@ -159,7 +159,7 @@ def _invoke_llm( text=..., instructions=..., msg_history=..., - =..., + temperature=..., ... 
     )
     ```

From 63aba28f07da8ce6a835445dfb5893d20e60d6aa Mon Sep 17 00:00:00 2001
From: zsimjee
Date: Thu, 17 Oct 2024 14:28:25 -0700
Subject: [PATCH 06/13] update docs with correct naming

---
 .../nemo_guardrails/guardrails_rails.md       |   8 +-
 .../nemoguardrails_in_guardrails.ipynb        | 111 ------------------
 .../using_rails_as_a_validator.md             |   3 -
 3 files changed, 5 insertions(+), 117 deletions(-)
 delete mode 100644 docs/integrations/nemo_guardrails/nemoguardrails_in_guardrails.ipynb
 delete mode 100644 docs/integrations/nemo_guardrails/using_rails_as_a_validator.md

diff --git a/docs/integrations/nemo_guardrails/guardrails_rails.md b/docs/integrations/nemo_guardrails/guardrails_rails.md
index c5c42e55a..f782f42ad 100644
--- a/docs/integrations/nemo_guardrails/guardrails_rails.md
+++ b/docs/integrations/nemo_guardrails/guardrails_rails.md
@@ -26,8 +26,10 @@ guard = Guard().use(
 Then, we have the guard validate the completions generated by the Rails application.
 
 ```python
-from guardrails import RailsGuard
-railsguard = RailsGuard(rails, guard)
+from guardrails.integrations.nemoguardrails.nemoguardrails_guard import (
+    NemoguardrailsGuard
+)
+railsguard = NemoguardrailsGuards(rails, guard)
 
 result = railsguard(
     messages: [{
@@ -37,7 +39,7 @@ result = railsguard(
 )
 ```
 
-The `RailsGuard` class is a wrapper around the Guard class. Just like a Guard, it can [called](https://www.guardrailsai.com/docs/api_reference_markdown/guards#__call__) with similar parameters to the OpenAI completions API. It also returns a `ValidationOutcome` object (or iterable, in streaming cases).
+The `NemoguardrailsGuard` class is a wrapper around the Guard class. Just like a Guard, it can be [called](https://www.guardrailsai.com/docs/api_reference_markdown/guards#__call__) with similar parameters to the OpenAI completions API. It also returns a `ValidationOutcome` object (or iterable, in streaming cases).
That object can be destructured to get the raw output, the validated output, and other metadata. Here, `raw_llm_output` is the output returned by the NeMo Guardrails Rails. diff --git a/docs/integrations/nemo_guardrails/nemoguardrails_in_guardrails.ipynb b/docs/integrations/nemo_guardrails/nemoguardrails_in_guardrails.ipynb deleted file mode 100644 index 84a668d84..000000000 --- a/docs/integrations/nemo_guardrails/nemoguardrails_in_guardrails.ipynb +++ /dev/null @@ -1,111 +0,0 @@ -{ - "cells": [ - { - "cell_type": "code", - "execution_count": 1, - "metadata": {}, - "outputs": [ - { - "name": "stderr", - "output_type": "stream", - "text": [ - "/home/zayd/workspace/guardrails/.venv/lib/python3.11/site-packages/nemoguardrails/rails/llm/llmrails.py:371: LangChainDeprecationWarning: The class `OpenAI` was deprecated in LangChain 0.0.10 and will be removed in 1.0. An updated version of the class exists in the langchain-openai package and should be used instead. To use it run `pip install -U langchain-openai` and import as `from langchain_openai import OpenAI`.\n", - " self.llm = provider_cls(**kwargs)\n" - ] - }, - { - "data": { - "application/vnd.jupyter.widget-view+json": { - "model_id": "4c455303c28c41dab4750619d100d92c", - "version_major": 2, - "version_minor": 0 - }, - "text/plain": [ - "Fetching 5 files: 0%| | 0/5 [00:00 47\u001b[0m \u001b[43msync_test\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\n", - "Cell \u001b[0;32mIn[1], line 39\u001b[0m, in \u001b[0;36msync_test\u001b[0;34m()\u001b[0m\n\u001b[1;32m 38\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21msync_test\u001b[39m():\n\u001b[0;32m---> 39\u001b[0m \u001b[43mrails\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mgenerate\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 40\u001b[0m \u001b[43m \u001b[49m\u001b[43mmessages\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43m[\u001b[49m\u001b[43m{\u001b[49m\n\u001b[1;32m 41\u001b[0m \u001b[43m 
\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mrole\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43muser\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m 42\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mcontent\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mGenerate a list of words that I should ban at work. Make sure to spell out all the words. Return only the list of words without preamble or postamble. Make sure to return ONLY the list of words. This list should include profanity, slurs, and other offensive language.\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\n\u001b[1;32m 43\u001b[0m \u001b[43m \u001b[49m\u001b[43m}\u001b[49m\u001b[43m]\u001b[49m\n\u001b[1;32m 44\u001b[0m \u001b[43m \u001b[49m\u001b[43m)\u001b[49m\n", - "File \u001b[0;32m~/workspace/guardrails/.venv/lib/python3.11/site-packages/nemoguardrails/rails/llm/llmrails.py:879\u001b[0m, in \u001b[0;36mLLMRails.generate\u001b[0;34m(self, prompt, messages, return_context, options, state)\u001b[0m\n\u001b[1;32m 876\u001b[0m \u001b[38;5;250m\u001b[39m\u001b[38;5;124;03m\"\"\"Synchronous version of generate_async.\"\"\"\u001b[39;00m\n\u001b[1;32m 878\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m check_sync_call_from_async_loop():\n\u001b[0;32m--> 879\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mRuntimeError\u001b[39;00m(\n\u001b[1;32m 880\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mYou are using the sync `generate` inside async code. 
\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 881\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mYou should replace with `await generate_async(...)` or use `nest_asyncio.apply()`.\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 882\u001b[0m )\n\u001b[1;32m 884\u001b[0m loop \u001b[38;5;241m=\u001b[39m get_or_create_event_loop()\n\u001b[1;32m 886\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m loop\u001b[38;5;241m.\u001b[39mrun_until_complete(\n\u001b[1;32m 887\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mgenerate_async(\n\u001b[1;32m 888\u001b[0m prompt\u001b[38;5;241m=\u001b[39mprompt,\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 893\u001b[0m )\n\u001b[1;32m 894\u001b[0m )\n", - "\u001b[0;31mRuntimeError\u001b[0m: You are using the sync `generate` inside async code. You should replace with `await generate_async(...)` or use `nest_asyncio.apply()`." - ] - } - ], - "source": [ - "from nemoguardrails import LLMRails, RailsConfig\n", - "from guardrails import AsyncGuard, install\n", - "\n", - "# install(\"hub://guardrails/toxic_language\", install_local_models=False)\n", - "from guardrails.hub import ToxicLanguage\n", - "\n", - "# Load a guardrails configuration from the specified path.\n", - "config = RailsConfig.from_path(\"./config.yml\")\n", - "rails = LLMRails(config)\n", - "\n", - "# Define a guardrails guard.\n", - "guard = AsyncGuard().use(\n", - " ToxicLanguage()\n", - ")\n", - "\n", - "\n", - "# await rails.generate_async(\n", - "# messages=[{\n", - "# \"role\": \"user\",\n", - "# \"content\": \"Generate a list of words that I should ban at work. Make sure to spell out all the words. Return only the list of words without preamble or postamble. Make sure to return ONLY the list of words. 
This list should include profanity, slurs, and other offensive language.\"\n", - "# }]\n", - "# )\n", - "\n", - "async def custom_callable(prompt: str):\n", - " return (await rails.generate_async(\n", - " messages=[{\n", - " \"role\": \"user\",\n", - " \"content\": prompt\n", - " }]))['content']\n", - "\n", - "res = await guard(\n", - " llm_api=custom_callable,\n", - " prompt=\"Generate a list of words that I should ban at work. Make sure to spell out all the words. Return only the list of words without preamble or postamble. Make sure to return ONLY the list of words. This list should include profanity, slurs, and other offensive language.\"\n", - ")\n", - "print(res.validation_passed)\n" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": ".venv", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.11.7" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} diff --git a/docs/integrations/nemo_guardrails/using_rails_as_a_validator.md b/docs/integrations/nemo_guardrails/using_rails_as_a_validator.md deleted file mode 100644 index 7aca91573..000000000 --- a/docs/integrations/nemo_guardrails/using_rails_as_a_validator.md +++ /dev/null @@ -1,3 +0,0 @@ -# Placeholder - -The idea here is to use an entire Rails app as a validator. The Rails app would have to return results in either the ValidatorOutcome format or a boolean indicating whether the validation passed or not. 
\ No newline at end of file

From 685bf3b249853eb2e69914df7592e4d13929a7d6 Mon Sep 17 00:00:00 2001
From: zsimjee
Date: Thu, 17 Oct 2024 14:30:47 -0700
Subject: [PATCH 07/13] correct nemoguardrails usage syntax

---
 docs/integrations/nemo_guardrails/guardrails_rails.md | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/docs/integrations/nemo_guardrails/guardrails_rails.md b/docs/integrations/nemo_guardrails/guardrails_rails.md
index f782f42ad..24715facc 100644
--- a/docs/integrations/nemo_guardrails/guardrails_rails.md
+++ b/docs/integrations/nemo_guardrails/guardrails_rails.md
@@ -16,11 +16,6 @@ from guardrails.hub import ToxicLanguage
 # Load a guardrails configuration from the specified path.
 config = RailsConfig.from_path("PATH/TO/CONFIG")
 rails = LLMRails(config)
-
-# Define a guardrails guard.
-guard = Guard().use(
-    ToxicLanguage()
-)
 ```
 
 Then, we have the guard validate the completions generated by the Rails application.
@@ -29,7 +24,7 @@ Then, we have the guard validate the completions generated by the Rails applicat
 from guardrails.integrations.nemoguardrails.nemoguardrails_guard import (
     NemoguardrailsGuard
 )
-railsguard = NemoguardrailsGuards(rails, guard)
+railsguard = NemoguardrailsGuard(rails).use(ToxicLanguage())
 
 result = railsguard(
     messages: [{

From ddd896bf81eff8b78584b490ddceb93e6dfdfbce Mon Sep 17 00:00:00 2001
From: zsimjee
Date: Thu, 17 Oct 2024 14:31:21 -0700
Subject: [PATCH 08/13] remove test file

---
 docs/integrations/nemo_guardrails/nm_gr.py | 191 ---------------------
 1 file changed, 191 deletions(-)
 delete mode 100644 docs/integrations/nemo_guardrails/nm_gr.py

diff --git a/docs/integrations/nemo_guardrails/nm_gr.py b/docs/integrations/nemo_guardrails/nm_gr.py
deleted file mode 100644
index 1bc6aaa97..000000000
--- a/docs/integrations/nemo_guardrails/nm_gr.py
+++ /dev/null
@@ -1,191 +0,0 @@
-from typing import List
-from nemoguardrails import LLMRails, RailsConfig
-from guardrails import Guard
-from 
guardrails.integrations.nemoguardrails.nemoguardrails_guard import ( - NemoguardrailsGuard, -) -from pydantic import BaseModel, Field -import rich - -# install("hub://guardrails/toxic_language", install_local_models=False) -from guardrails.hub import ToxicLanguage # noqa: E402 - -# Load a guardrails configuration from the specified path. -config = RailsConfig.from_path("./config.yml") -nemorails = LLMRails(config) - -# define your guard and validator(s) -guard = NemoguardrailsGuard(nemorails=nemorails).use(ToxicLanguage()) - -# res = guard( -# messages=[ -# { -# "role": "user", -# "content": """Generate a list of words that I should ban at work. -# Make sure to spell out all the words. Return only the list of words without -# preamble or postamble. Make sure to return ONLY the list of words. -# This list should include profanity, slurs, and other offensive language.""", -# } -# ], -# ) - -# print(res.validation_passed) # type: ignore - -# rich.print(guard.history.last.tree) - -# res = guard( -# messages=[ -# { -# "role": "user", -# "content": """Say hi in the nicest way you can""", -# } -# ] -# ) - -# print(res.validation_passed) # type: ignore - -class StructuredOutput(BaseModel): - nontoxic: str = Field(validators=[ToxicLanguage()]) - -# pydantic_guard = NemoguardrailsGuard.from_pydantic( -# nemorails=nemorails, -# output_class=StructuredOutput, -# ) - -# res = pydantic_guard( -# messages=[ -# { -# "role": "system", -# "content": """Only respond in JSON. The JSON should be formatted as follows: - -# {'nontoxic': response to the prompt} - -# AGAIN, be sure to respond ONLY with valid JSON. Do not include any other text. -# """, -# }, -# { -# "role": "user", -# "content": """Generate a list of words that I should ban at work. -# Make sure to spell out all the words. Return only the list of words without -# preamble or postamble. Make sure to return ONLY the list of words. 
-# This list should include profanity, slurs, and other offensive language.""", -# } -# ], -# num_reasks=1 -# ) - -# print(res.validation_passed) # type: ignore - -# rich.print(pydantic_guard.history.last.tree) - - - -# class StructuredOutput(BaseModel): -# nontoxic: str = Field(validators=[ToxicLanguage()]) - -# pydantic_guard = Guard.from_pydantic( -# # nemorails=nemorails, -# output_class=StructuredOutput, -# ) - -# g = Guard() -# g( -# model="gpt-3.5-turbo", -# messages=[ -# # { -# # "role": "system", -# # "content": """Only respond in JSON. The JSON should be formatted as follows: - -# # {'nontoxic': response to the prompt} - -# # AGAIN, be sure to respond ONLY with valid JSON. Do not include any other text. -# # """, -# # }, -# { -# "role": "user", -# "content": """Write pydantic structure about medical data. It should have hundreds of fields, with some nesting""", -# } -# ], -# # tools=pydantic_guard.json_function_calling_tool(), -# # tool_choice="required", -# # generate_kwargs={"temperature": 0.0}, -# ) -# # rich.print(pydantic_guard.history.last.tree) -# rich.print(g.history.last.tree) -# print(g.history.last.raw_outputs) - - -class MedicalData(BaseModel): - patient_id: int - patient_name: str - age: int - gender: str - address: str - contact_number: str - blood_type: str - allergies: List[str] - medications: List[str] - medical_history: List[str] - family_history: List[str] - insurance_provider: str - insurance_policy_number: str - emergency_contact_name: str - emergency_contact_number: str - primary_care_physician: str - last_checkup_date: str - next_checkup_date: str - weight: float - height: float - blood_pressure: str - heart_rate: int - respiratory_rate: int - temperature: float - symptoms: List[str] - diagnosis: str - treatment_plan: str - lab_results: List[str] - imaging_results: List[str] - follow_up_instructions: str - additional_notes: str - vaccination_records: List[str] - surgeries: List[str] - hospitalizations: List[str] - 
chronic_conditions: List[str] - lifestyle_factors: List[str] - exercise_routine: str - diet_plan: str - sleep_pattern: str - stress_level: str - smoking_status: str - alcohol_consumption: str - drug_usage: str - mental_health_history: List[str] - social_support_system: List[str] - living_situation: str - employment_status: str - education_level: str - income_level: str - access_to_healthcare: str - healthcare_preferences: List[str] - healthcare_goals: List[str] - healthcare_challenges: List[str] - healthcare_needs: List[str] - healthcare_preferences: List[str] - healthcare_expectations: List[str] - healthcare_satisfaction: str - healthcare_feedback: str - additional_information: str - # Add more fields as needed - - -med_pydantic_guard = Guard.from_pydantic( - output_class=MedicalData, -) - -import time - -init_time = time.time() - -med_pydantic_guard.json_function_calling_tool() - -print(f"Time taken: {time.time() - init_time}") \ No newline at end of file From db5544f225477b2f577ef233839d4e076ce59ffb Mon Sep 17 00:00:00 2001 From: Caleb Courier Date: Thu, 21 Nov 2024 10:39:16 -0600 Subject: [PATCH 09/13] 0.6.0 update --- guardrails/guard.py | 27 ++- .../nemoguardrails/nemoguardrails_guard.py | 187 ++++++++---------- 2 files changed, 105 insertions(+), 109 deletions(-) diff --git a/guardrails/guard.py b/guardrails/guard.py index 3976773f5..fc81ee3c8 100644 --- a/guardrails/guard.py +++ b/guardrails/guard.py @@ -380,7 +380,7 @@ def _for_rail_schema( name: Optional[str] = None, description: Optional[str] = None, ): - guard = cls( + guard = cls._init_guard_for_cls_method( name=name, description=description, output_schema=schema.json_schema, @@ -526,6 +526,25 @@ def for_rail_string( def from_pydantic(cls, output_class: ModelOrListOfModels, *args, **kwargs): return cls.for_pydantic(output_class, **kwargs) + @classmethod + def _init_guard_for_cls_method( + cls, + *, + id: Optional[str] = None, + name: Optional[str] = None, + description: Optional[str] = None, + 
validators: Optional[List[ValidatorReference]] = None, + output_schema: Optional[Dict[str, Any]] = None, + **kwargs, + ): + return cls( + id=id, + name=name, + description=description, + output_schema=output_schema, + validators=validators, + ) + @classmethod def for_pydantic( cls, @@ -538,6 +557,7 @@ def for_pydantic( name: Optional[str] = None, description: Optional[str] = None, output_formatter: Optional[Union[str, BaseFormatter]] = None, + **kwargs, ): """Create a Guard instance using a Pydantic model to specify the output schema. @@ -574,11 +594,12 @@ def for_pydantic( reask_messages=reask_messages, messages=messages, ) - guard = cls( + guard = cls._init_guard_for_cls_method( name=name, description=description, output_schema=schema.json_schema, validators=schema.validators, + **kwargs, ) if schema.output_type == OutputTypes.LIST: guard = cast(Guard[List], guard) @@ -1306,7 +1327,7 @@ def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional["Guard"]: i_guard.output_schema.to_dict() if i_guard.output_schema else None ) - guard = cls( + guard = cls._init_guard_for_cls_method( id=i_guard.id, name=i_guard.name, description=i_guard.description, diff --git a/guardrails/integrations/nemoguardrails/nemoguardrails_guard.py b/guardrails/integrations/nemoguardrails/nemoguardrails_guard.py index 0a5365642..bba0f4425 100644 --- a/guardrails/integrations/nemoguardrails/nemoguardrails_guard.py +++ b/guardrails/integrations/nemoguardrails/nemoguardrails_guard.py @@ -1,25 +1,27 @@ -from typing import Callable, Dict, Iterable, List, Optional, Union, cast -import warnings +from typing import Any, Callable, Dict, Generic, Iterable, List, Optional, Union, cast from typing_extensions import deprecated -from guardrails.classes.execution.guard_execution_options import GuardExecutionOptions from guardrails.classes.output_type import OT, OutputTypes from guardrails.classes.validation_outcome import ValidationOutcome +from guardrails.classes.validation.validator_reference import 
ValidatorReference from guardrails import Guard -from nemoguardrails import LLMRails -from guardrails.formatters import get_formatter from guardrails.formatters.base_formatter import BaseFormatter -from guardrails.schema.pydantic_schema import pydantic_model_to_schema from guardrails.types.pydantic import ModelOrListOfModels -from guardrails.stores.context import ( - Tracer -) +from guardrails.stores.context import Tracer + +try: + from nemoguardrails import LLMRails +except ImportError: + raise ImportError( + "Could not import nemoguardrails, please install it with " + "`pip install nemoguardrails`." + ) -class NemoguardrailsGuard(Guard): +class NemoguardrailsGuard(Guard, Generic[OT]): def __init__( self, nemorails: LLMRails, @@ -30,7 +32,11 @@ def __init__( self._nemorails = nemorails def __call__( - self, llm_api: Optional[Callable] = None, generate_kwargs: Optional[Dict] = None, *args, **kwargs + self, + llm_api: Optional[Callable] = None, + generate_kwargs: Optional[Dict] = None, + *args, + **kwargs, ) -> Union[ValidationOutcome[OT], Iterable[ValidationOutcome[OT]]]: # peel llm_api off of kwargs llm_api = kwargs.pop("llm_api", None) @@ -54,114 +60,61 @@ def __call__( ) def _custom_nemo_callable(*args, **kwargs): - return self._custom_nemo_callable(*args, generate_kwargs=generate_kwargs, **kwargs) + return self._custom_nemo_callable( + *args, generate_kwargs=generate_kwargs, **kwargs + ) return super().__call__(llm_api=_custom_nemo_callable, *args, **kwargs) @classmethod - def from_pydantic( + def _init_guard_for_cls_method( cls, + *, + name: Optional[str] = None, + description: Optional[str] = None, + validators: Optional[List[ValidatorReference]] = None, + output_schema: Optional[Dict[str, Any]] = None, nemorails: LLMRails, + **kwargs, + ): + return cls( + nemorails, + name=name, + description=description, + output_schema=output_schema, + validators=validators, + ) + + @classmethod + def for_pydantic( + cls, output_class: ModelOrListOfModels, + nemorails: 
LLMRails, *, - prompt: Optional[str] = None, - instructions: Optional[str] = None, num_reasks: Optional[int] = None, - reask_prompt: Optional[str] = None, - reask_instructions: Optional[str] = None, reask_messages: Optional[List[Dict]] = None, messages: Optional[List[Dict]] = None, tracer: Optional[Tracer] = None, name: Optional[str] = None, description: Optional[str] = None, output_formatter: Optional[Union[str, BaseFormatter]] = None, + **kwargs, ): - """Create a Guard instance using a Pydantic model to specify the output - schema. - - Args: - output_class: (Union[Type[BaseModel], List[Type[BaseModel]]]): The pydantic model that describes - the desired structure of the output. - prompt (str, optional): The prompt used to generate the string. Defaults to None. - instructions (str, optional): Instructions for chat models. Defaults to None. - reask_prompt (str, optional): An alternative prompt to use during reasks. Defaults to None. - reask_instructions (str, optional): Alternative instructions to use during reasks. Defaults to None. - reask_messages (List[Dict], optional): A list of messages to use during reasks. Defaults to None. - num_reasks (int, optional): The max times to re-ask the LLM if validation fails. Deprecated - tracer (Tracer, optional): An OpenTelemetry tracer to use for metrics and traces. Defaults to None. - name (str, optional): A unique name for this Guard. Defaults to `gr-` + the object id. - description (str, optional): A description for this Guard. Defaults to None. - output_formatter (str | Formatter, optional): 'none' (default), 'jsonformer', or a Guardrails Formatter. - """ # noqa - - if num_reasks: - warnings.warn( - "Setting num_reasks during initialization is deprecated" - " and will be removed in 0.6.x!" - "We recommend setting num_reasks when calling guard()" - " or guard.parse() instead." 
- "If you insist on setting it at the Guard level," - " use 'Guard.configure()'.", - DeprecationWarning, - ) - - if reask_instructions: - warnings.warn( - "reask_instructions is deprecated and will be removed in 0.6.x!" - "Please be prepared to set reask_messages instead.", - DeprecationWarning, - ) - if reask_prompt: - warnings.warn( - "reask_prompt is deprecated and will be removed in 0.6.x!" - "Please be prepared to set reask_messages instead.", - DeprecationWarning, - ) - - # We have to set the tracer in the ContextStore before the Rail, - # and therefore the Validators, are initialized - cls._set_tracer(cls, tracer) # type: ignore - - schema = pydantic_model_to_schema(output_class) - exec_opts = GuardExecutionOptions( - prompt=prompt, - instructions=instructions, - reask_prompt=reask_prompt, - reask_instructions=reask_instructions, - reask_messages=reask_messages, + guard = super().for_pydantic( + output_class, + num_reasks=num_reasks, messages=messages, - ) - - # TODO: This is the only line that's changed vs the parent Guard class - # Find a way to refactor this - guard = cls( - nemorails=nemorails, + reask_messages=reask_messages, + tracer=tracer, name=name, description=description, - output_schema=schema.json_schema, - validators=schema.validators, + output_formatter=output_formatter, + nemorails=nemorails, ) - if schema.output_type == OutputTypes.LIST: - guard = cast(Guard[List], guard) + if guard._output_type == OutputTypes.LIST: + return cast(NemoguardrailsGuard[List], guard) else: - guard = cast(Guard[Dict], guard) - guard.configure(num_reasks=num_reasks, tracer=tracer) - guard._validator_map = schema.validator_map - guard._exec_opts = exec_opts - guard._output_type = schema.output_type - guard._base_model = output_class - if isinstance(output_formatter, str): - if isinstance(output_class, list): - raise Exception("""Root-level arrays are not supported with the - jsonformer argument, but can be used with other json generation methods. 
- Omit the output_formatter argument to use the other methods.""") - output_formatter = get_formatter( - output_formatter, - schema=output_class.model_json_schema(), # type: ignore - ) - guard._output_formatter = output_formatter - guard._fill_validators() - return guard + return cast(NemoguardrailsGuard[Dict], guard) # create the callable def _custom_nemo_callable(self, *args, generate_kwargs, **kwargs): @@ -171,13 +124,13 @@ def _custom_nemo_callable(self, *args, generate_kwargs, **kwargs): # msg_history, messages, prompt, and instruction all may or may not be present. # if none of them are present, raise an error # if messages is present, use that - # if msg_history is present, use + # if msg_history is present, use msg_history = kwargs.pop("msg_history", None) messages = kwargs.pop("messages", None) prompt = kwargs.pop("prompt", None) instructions = kwargs.pop("instructions", None) - + if msg_history is not None and messages is None: messages = msg_history @@ -188,30 +141,52 @@ def _custom_nemo_callable(self, *args, generate_kwargs, **kwargs): if prompt is not None: messages.append({"role": "system", "content": prompt}) - if messages is [] or messages is None: - raise ValueError("messages, prompt, or instructions should be passed during a call.") - + if messages == [] or messages is None: + raise ValueError( + "messages, prompt, or instructions should be passed during a call." + ) + # kwargs["messages"] = messages # return (self._nemorails.generate(**kwargs))["content"] # type: ignore if not generate_kwargs: generate_kwargs = {} - return (self._nemorails.generate(messages=messages, **generate_kwargs))["content"] # type: ignore + return (self._nemorails.generate(messages=messages, **generate_kwargs))[ # type: ignore + "content" + ] @deprecated( - "This method has been deprecated. Please use the main constructor `NemoGuardrailsGuard(nemorails=nemorails)` or the `from_pydantic` method.", + "Use `for_rail_string` instead. 
This method will be removed in 0.6.x.", + category=None, ) + @classmethod def from_rail_string(cls, *args, **kwargs): raise NotImplementedError("""\ `from_rail_string` is not implemented for NemoguardrailsGuard. We recommend using the main constructor `NemoGuardrailsGuard(nemorails=nemorails)` +or the `from_pydantic` method.""") + + @classmethod + def for_rail_string(cls, *args, **kwargs): + raise NotImplementedError("""\ +`for_rail_string` is not implemented for NemoguardrailsGuard. +We recommend using the main constructor `NemoGuardrailsGuard(nemorails=nemorails)` or the `from_pydantic` method.""") @deprecated( - "This method has been deprecated. Please use the main constructor `NemoGuardrailsGuard(nemorails=nemorails)` or the `from_pydantic` method.", + "Use `for_rail` instead. This method will be removed in 0.6.x.", + category=None, ) + @classmethod def from_rail(cls, *args, **kwargs): raise NotImplementedError("""\ `from_rail` is not implemented for NemoguardrailsGuard. We recommend using the main constructor `NemoGuardrailsGuard(nemorails=nemorails)` +or the `from_pydantic` method.""") + + @classmethod + def for_rail(cls, *args, **kwargs): + raise NotImplementedError("""\ +`for_rail` is not implemented for NemoguardrailsGuard. 
+We recommend using the main constructor `NemoGuardrailsGuard(nemorails=nemorails)` or the `from_pydantic` method.""") From 060143e5870a68f190600abe2af90122c399b885 Mon Sep 17 00:00:00 2001 From: Caleb Courier Date: Thu, 21 Nov 2024 16:20:00 -0600 Subject: [PATCH 10/13] use lazy imports; remove nemo dev group bc nemo is locked below py 3.13 --- pyproject.toml | 6 ------ 1 file changed, 6 deletions(-) diff --git a/pyproject.toml b/pyproject.toml index 1a5e47435..da8518823 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -109,12 +109,6 @@ pillow = "^10.1.0" cairosvg = "^2.7.1" mkdocs-glightbox = "^0.3.4" -[tool.poetry.group.nemoguardrials] -optional = true - -[tool.poetry.group.nemoguardrials.dependencies] -nemoguardrials = "0.9.1.1" - [[tool.poetry.source]] name = "PyPI" From a1ab98c844b1f1cb7057b4398c999a9ffcd837e4 Mon Sep 17 00:00:00 2001 From: Caleb Courier Date: Thu, 21 Nov 2024 16:20:30 -0600 Subject: [PATCH 11/13] cleanup, enable async support --- guardrails/async_guard.py | 5 + .../integrations/nemoguardrails/__init__.py | 6 + .../nemoguardrails/nemoguardrails_guard.py | 130 +++++++++++------- 3 files changed, 94 insertions(+), 47 deletions(-) diff --git a/guardrails/async_guard.py b/guardrails/async_guard.py index c7ad495ea..1c98f9aaf 100644 --- a/guardrails/async_guard.py +++ b/guardrails/async_guard.py @@ -1,6 +1,7 @@ from builtins import id as object_id import contextvars import inspect +from guardrails.formatters.base_formatter import BaseFormatter from opentelemetry import context as otel_context from typing import ( Any, @@ -99,6 +100,8 @@ def for_pydantic( tracer: Optional[Tracer] = None, name: Optional[str] = None, description: Optional[str] = None, + output_formatter: Optional[Union[str, BaseFormatter]] = None, + **kwargs, ): guard = super().for_pydantic( output_class, @@ -108,6 +111,8 @@ def for_pydantic( tracer=tracer, name=name, description=description, + output_formatter=output_formatter, + **kwargs, ) if guard._output_type == 
OutputTypes.LIST: return cast(AsyncGuard[List], guard) diff --git a/guardrails/integrations/nemoguardrails/__init__.py b/guardrails/integrations/nemoguardrails/__init__.py index e69de29bb..f2ebf7223 100644 --- a/guardrails/integrations/nemoguardrails/__init__.py +++ b/guardrails/integrations/nemoguardrails/__init__.py @@ -0,0 +1,6 @@ +from guardrails.integrations.nemoguardrails.nemoguardrails_guard import ( + NemoguardrailsGuard, + AsyncNemoguardrailsGuard, +) + +__all__ = ["NemoguardrailsGuard", "AsyncNemoguardrailsGuard"] diff --git a/guardrails/integrations/nemoguardrails/nemoguardrails_guard.py b/guardrails/integrations/nemoguardrails/nemoguardrails_guard.py index bba0f4425..f5f6c74cb 100644 --- a/guardrails/integrations/nemoguardrails/nemoguardrails_guard.py +++ b/guardrails/integrations/nemoguardrails/nemoguardrails_guard.py @@ -1,11 +1,25 @@ -from typing import Any, Callable, Dict, Generic, Iterable, List, Optional, Union, cast +import inspect +from functools import partial +from typing import ( + Any, + AsyncIterator, + Awaitable, + Callable, + Dict, + Generic, + Iterable, + List, + Optional, + Union, + cast, +) from typing_extensions import deprecated from guardrails.classes.output_type import OT, OutputTypes from guardrails.classes.validation_outcome import ValidationOutcome from guardrails.classes.validation.validator_reference import ValidatorReference -from guardrails import Guard +from guardrails import Guard, AsyncGuard from guardrails.formatters.base_formatter import BaseFormatter from guardrails.types.pydantic import ModelOrListOfModels @@ -20,6 +34,17 @@ "`pip install nemoguardrails`." ) +try: + import nest_asyncio + + nest_asyncio.apply() + import asyncio +except ImportError: + raise ImportError( + "Could not import nest_asyncio, please install it with " + "`pip install nest_asyncio`." 
+ ) + class NemoguardrailsGuard(Guard, Generic[OT]): def __init__( @@ -30,6 +55,28 @@ def __init__( ): super().__init__(*args, **kwargs) self._nemorails = nemorails + self._generate = self._nemorails.generate + + def _custom_nemo_callable(self, *args, generate_kwargs, **kwargs): + # .generate doesn't like temp + kwargs.pop("temperature", None) + + messages = kwargs.pop("messages", None) + + if messages == [] or messages is None: + raise ValueError("messages must be passed during a call.") + + if not generate_kwargs: + generate_kwargs = {} + + response = self._generate(messages=messages, **generate_kwargs) + + if inspect.iscoroutine(response): + response = asyncio.run(response) + + return response[ # type: ignore + "content" + ] def __call__( self, @@ -59,12 +106,9 @@ def __call__( dictionaries, where each dictionary has a 'role' key and a 'content' key.""" ) - def _custom_nemo_callable(*args, **kwargs): - return self._custom_nemo_callable( - *args, generate_kwargs=generate_kwargs, **kwargs - ) + llm_api = partial(self._custom_nemo_callable, generate_kwargs=generate_kwargs) - return super().__call__(llm_api=_custom_nemo_callable, *args, **kwargs) + return super().__call__(llm_api=llm_api, *args, **kwargs) @classmethod def _init_guard_for_cls_method( @@ -89,8 +133,8 @@ def _init_guard_for_cls_method( def for_pydantic( cls, output_class: ModelOrListOfModels, - nemorails: LLMRails, *, + nemorails: LLMRails, num_reasks: Optional[int] = None, reask_messages: Optional[List[Dict]] = None, messages: Optional[List[Dict]] = None, @@ -116,45 +160,6 @@ def for_pydantic( else: return cast(NemoguardrailsGuard[Dict], guard) - # create the callable - def _custom_nemo_callable(self, *args, generate_kwargs, **kwargs): - # .generate doesn't like temp - kwargs.pop("temperature", None) - - # msg_history, messages, prompt, and instruction all may or may not be present. 
- # if none of them are present, raise an error - # if messages is present, use that - # if msg_history is present, use - - msg_history = kwargs.pop("msg_history", None) - messages = kwargs.pop("messages", None) - prompt = kwargs.pop("prompt", None) - instructions = kwargs.pop("instructions", None) - - if msg_history is not None and messages is None: - messages = msg_history - - if messages is None and msg_history is None: - messages = [] - if instructions is not None: - messages.append({"role": "system", "content": instructions}) - if prompt is not None: - messages.append({"role": "system", "content": prompt}) - - if messages == [] or messages is None: - raise ValueError( - "messages, prompt, or instructions should be passed during a call." - ) - - # kwargs["messages"] = messages - - # return (self._nemorails.generate(**kwargs))["content"] # type: ignore - if not generate_kwargs: - generate_kwargs = {} - return (self._nemorails.generate(messages=messages, **generate_kwargs))[ # type: ignore - "content" - ] - @deprecated( "Use `for_rail_string` instead. This method will be removed in 0.6.x.", category=None, @@ -190,3 +195,34 @@ def for_rail(cls, *args, **kwargs): `for_rail` is not implemented for NemoguardrailsGuard. 
We recommend using the main constructor `NemoGuardrailsGuard(nemorails=nemorails)` or the `from_pydantic` method.""") + + +class AsyncNemoguardrailsGuard(NemoguardrailsGuard, AsyncGuard, Generic[OT]): + def __init__( + self, + nemorails: LLMRails, + *args, + **kwargs, + ): + super().__init__(nemorails, *args, **kwargs) + self._generate = self._nemorails.generate_async + + async def _custom_nemo_callable(self, *args, generate_kwargs, **kwargs): + return super()._custom_nemo_callable( + *args, generate_kwargs=generate_kwargs, **kwargs + ) + + async def __call__( # type: ignore + self, + llm_api: Optional[Callable] = None, + generate_kwargs: Optional[Dict] = None, + *args, + **kwargs, + ) -> Union[ + ValidationOutcome[OT], + Awaitable[ValidationOutcome[OT]], + AsyncIterator[ValidationOutcome[OT]], + ]: + return await super().__call__( + llm_api=llm_api, generate_kwargs=generate_kwargs, *args, **kwargs + ) # type: ignore From f38e732b7b9e0dc131da9198400161526f3a983a Mon Sep 17 00:00:00 2001 From: Caleb Courier Date: Fri, 22 Nov 2024 16:53:13 -0600 Subject: [PATCH 12/13] sync docs --- docs/examples/guard-as-action.ipynb | 474 ++++++++++++++++++ docs/examples/rails-as-guard.ipynb | 271 ++++++++++ .../nemo_guardrails/guard_as_action.md | 171 +++++++ docs/integrations/nemo_guardrails/index.md | 73 +++ .../nemo_guardrails/rails_as_guard.md | 110 ++++ docusaurus/sidebars.js | 10 + 6 files changed, 1109 insertions(+) create mode 100644 docs/examples/guard-as-action.ipynb create mode 100644 docs/examples/rails-as-guard.ipynb create mode 100644 docs/integrations/nemo_guardrails/guard_as_action.md create mode 100644 docs/integrations/nemo_guardrails/index.md create mode 100644 docs/integrations/nemo_guardrails/rails_as_guard.md diff --git a/docs/examples/guard-as-action.ipynb b/docs/examples/guard-as-action.ipynb new file mode 100644 index 000000000..3861cfe2e --- /dev/null +++ b/docs/examples/guard-as-action.ipynb @@ -0,0 +1,474 @@ +{ + "cells": [ + { + "cell_type": "markdown", 
+ "id": "bda9eda8b4566a0d", + "metadata": { + "collapsed": false + }, + "source": [ + "# Guard as Actions\n", + "\n", + "This guide will teach you how to use a `Guard` with any of the 60+ GuardrailsAI Validators as an action inside a guardrails configuration. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a5ddc8b17af62afa", + "metadata": { + "ExecuteTime": { + "end_time": "2024-01-25T14:27:11.284164Z", + "start_time": "2024-01-25T14:27:11.025161Z" + }, + "collapsed": false + }, + "outputs": [], + "source": [ + "# Init: remove any existing configuration\n", + "! rm -r config\n", + "! mkdir config" + ] + }, + { + "cell_type": "markdown", + "id": "724db36201c3d409", + "metadata": { + "collapsed": false + }, + "source": [ + "## Prerequisites\n", + "\n", + "We'll be using an OpenAI model for our LLM in this guide, so set up an OpenAI API key, if not already set." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4e52b23b90077cf4", + "metadata": { + "ExecuteTime": { + "end_time": "2024-01-25T14:27:11.418023Z", + "start_time": "2024-01-25T14:27:11.286549Z" + }, + "collapsed": false + }, + "outputs": [], + "source": [ + "! export OPENAI_API_KEY=$OPENAI_API_KEY # Replace with your own key" + ] + }, + { + "cell_type": "markdown", + "id": "4b6fb59034bcb2bb", + "metadata": { + "collapsed": false + }, + "source": [ + "If you're running this inside a notebook, you also need to patch the AsyncIO loop." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "7ba19d5c8bdc57a3", + "metadata": { + "ExecuteTime": { + "end_time": "2024-01-25T14:27:13.693091Z", + "start_time": "2024-01-25T14:27:13.686555Z" + }, + "collapsed": false + }, + "outputs": [], + "source": [ + "import nest_asyncio\n", + "\n", + "nest_asyncio.apply()" + ] + }, + { + "cell_type": "markdown", + "id": "b8b27d3fa09bbe91", + "metadata": { + "collapsed": false + }, + "source": [ + "## Sample Guard\n", + "\n", + "Let's create a sample Guard that can detect PII. First, install guardrails-ai." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5925945d", + "metadata": {}, + "outputs": [], + "source": [ + "! pip install guardrails-ai -q" + ] + }, + { + "cell_type": "markdown", + "id": "2c8fc267", + "metadata": {}, + "source": [ + "Next configure the guardrails cli so we can install the validator we want to use from the Guardrails Hub." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "9d9cf415", + "metadata": {}, + "outputs": [], + "source": [ + "! guardrails configure" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "9a208f1c", + "metadata": {}, + "outputs": [], + "source": [ + "! guardrails hub install hub://guardrails/detect_pii --no-install-local-models -q" + ] + }, + { + "cell_type": "markdown", + "id": "61f4fff5", + "metadata": {}, + "source": [ + "Now we can define our Guard.\n", + "This Guard will use the DetectPII validator to safeguard against leaking personally identifiable information such as names, email addresses, etc..\n", + "\n", + "Once the Guard is defined, we can test it with a static value to make sure it's working how we would expect." 
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 12,
+   "id": "71aeb10e5fda9040",
+   "metadata": {
+    "ExecuteTime": {
+     "end_time": "2024-01-25T14:27:13.813566Z",
+     "start_time": "2024-01-25T14:27:13.693010Z"
+    },
+    "collapsed": false
+   },
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "ValidationOutcome(\n",
+      "    call_id='14534730096',\n",
+      "    raw_llm_output='My name is John Doe',\n",
+      "    validation_summaries=[\n",
+      "        ValidationSummary(\n",
+      "            validator_name='DetectPII',\n",
+      "            validator_status='fail',\n",
+      "            property_path='$',\n",
+      "            failure_reason='The following text in your response contains PII:\\nMy name is John Doe',\n",
+      "            error_spans=[\n",
+      "                ErrorSpan(start=11, end=19, reason='PII detected in John Doe')\n",
+      "            ]\n",
+      "        )\n",
+      "    ],\n",
+      "    validated_output='My name is ',\n",
+      "    reask=None,\n",
+      "    validation_passed=True,\n",
+      "    error=None\n",
+      ")\n"
+     ]
+    }
+   ],
+   "source": [
+    "from guardrails import Guard\n",
+    "from guardrails.hub import DetectPII\n",
+    "\n",
+    "g = Guard(name=\"pii_guard\").use(DetectPII([\"PERSON\", \"EMAIL_ADDRESS\"], on_fail=\"fix\"))\n",
+    "\n",
+    "print(g.validate(\"My name is John Doe\"))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "1a0725d977f5589b",
+   "metadata": {
+    "collapsed": false
+   },
+   "source": [
+    "## Guardrails Configuration \n",
+    "\n",
+    "Now we'll use the Guard we defined above to create an action and a flow. Since we're calling our guard \"pii_guard\", we'll use \"pii_guard_validate\" in order to see if the LLM output is safe."
+ ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "a27c15cf3919fa5", + "metadata": { + "ExecuteTime": { + "end_time": "2024-01-25T14:27:13.820255Z", + "start_time": "2024-01-25T14:27:13.814191Z" + }, + "collapsed": false + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Writing config/rails.co\n" + ] + } + ], + "source": [ + "%%writefile config/rails.co\n", + "\n", + "\n", + "define flow detect_pii\n", + " $output = execute pii_guard_validate(text=$bot_message)\n", + "\n", + " if not $output\n", + " bot refuse to respond\n", + " stop\n" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "53403afb1e1a4b9c", + "metadata": { + "ExecuteTime": { + "end_time": "2024-01-25T14:27:13.821992Z", + "start_time": "2024-01-25T14:27:13.817004Z" + }, + "collapsed": false + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Writing config/config.yml\n" + ] + } + ], + "source": [ + "%%writefile config/config.yml\n", + "models:\n", + " - type: main\n", + " engine: openai\n", + " model: gpt-3.5-turbo-instruct\n", + "\n", + "rails:\n", + " output:\n", + " flows:\n", + " - detect_pii" + ] + }, + { + "cell_type": "markdown", + "id": "d25b3725", + "metadata": {}, + "source": [ + "To hook the Guardrails AI guard up so that it can be read from Colang, we use the integration's `register_guardrails_guard_actions` function.\n", + "This takes a name and registers two actions:\n", + "\n", + "1. [guard_name]_validate: This action is used to detect validation failures in outputs\n", + "2. 
[guard_name]_fix: This action is used to automatically fix validation failures in outputs, when possible"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 7,
+   "id": "f2adca21d94e54b9",
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [
+    {
+     "name": "stderr",
+     "output_type": "stream",
+     "text": [
+      "Fetching 5 files: 100%|██████████| 5/5 [00:00<00:00, 109226.67it/s]\n"
+     ]
+    }
+   ],
+   "source": [
+    "from nemoguardrails import RailsConfig, LLMRails\n",
+    "from nemoguardrails.integrations.guardrails_ai.guard_actions import register_guardrails_guard_actions\n",
+    "\n",
+    "config = RailsConfig.from_path(\"./config\")\n",
+    "rails = LLMRails(config)\n",
+    "\n",
+    "register_guardrails_guard_actions(rails, g, \"pii_guard\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "ade12682dd9d8f0e",
+   "metadata": {
+    "collapsed": false
+   },
+   "source": [
+    "## Testing\n",
+    "\n",
+    "Let's try this out. If we invoke the guardrails configuration with a message that prompts the LLM to return personal information like names, email addresses, etc., it should refuse to respond."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 8,
+   "id": "394311174e678d96",
+   "metadata": {
+    "ExecuteTime": {
+     "end_time": "2024-01-25T14:27:18.524958Z",
+     "start_time": "2024-01-25T14:27:18.518176Z"
+    },
+    "collapsed": false
+   },
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "I'm sorry, I can't respond to that.\n"
+     ]
+    }
+   ],
+   "source": [
+    "response = rails.generate(\"Who is the current president of the United States, and what was their email address?\")\n",
+    "print(response)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "0d545fa7",
+   "metadata": {},
+   "source": [
+    "Great! So the validation-only flow works. Next let's try the fix flow."
+ ] + }, + { + "cell_type": "code", + "execution_count": 9, + "id": "62bac8d3", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Overwriting config/rails.co\n" + ] + } + ], + "source": [ + "%%writefile config/rails.co\n", + "\n", + "\n", + "define flow detect_pii\n", + " $output = execute pii_guard_fix(text=$bot_message)\n", + "\n", + " if not $output\n", + " bot refuse to respond\n", + " stop\n", + " else\n", + " $bot_message = $output\n" + ] + }, + { + "cell_type": "markdown", + "id": "2fa6d051", + "metadata": {}, + "source": [ + "If we send the same message, we should get a response this time, but any PII will be filtered out." + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "id": "ff14d3c0", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "The current president of the United States is . His official email address is . However, he also has a personal email address, which is .\n" + ] + } + ], + "source": [ + "config = RailsConfig.from_path(\"./config\")\n", + "rails = LLMRails(config)\n", + "\n", + "register_guardrails_guard_actions(rails, g, \"pii_guard\")\n", + "\n", + "response = rails.generate(\"Who is the current president of the United States, and what was their email address?\")\n", + "print(response)" + ] + }, + { + "cell_type": "markdown", + "id": "f6b457ce6e2957fd", + "metadata": { + "collapsed": false + }, + "source": [ + "If however, we prompt the LLM with a message that does not cause it to return PII, we should get the unaltered response." + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "id": "70409a3aafe89e95", + "metadata": { + "ExecuteTime": { + "end_time": "2024-01-25T14:29:15.370273Z", + "start_time": "2024-01-25T14:29:14.322661Z" + }, + "collapsed": false + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Hello there! 
How can I assist you?\n" + ] + } + ], + "source": [ + "response = rails.generate(\"Hello!\")\n", + "print(response)" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": ".venv", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.4" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/docs/examples/rails-as-guard.ipynb b/docs/examples/rails-as-guard.ipynb new file mode 100644 index 000000000..16aaf3cae --- /dev/null +++ b/docs/examples/rails-as-guard.ipynb @@ -0,0 +1,271 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Guardrails as Guards\n", + "This guide will teach you how to add NeMo Guardrails to a GuardrailsAI Guard." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Init: remove any existing configuration\n", + "! rm -r config\n", + "! mkdir config" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Prerequisites\n", + "\n", + "We'll be using an OpenAI model for our LLM in this guide, so set up an OpenAI API key, if not already set." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "! export OPENAI_API_KEY=$OPENAI_API_KEY # Replace with your own key" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "If you're running this inside a notebook, you also need to patch the AsyncIO loop." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [], + "source": [ + "import nest_asyncio\n", + "\n", + "nest_asyncio.apply()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Sample Guardrails\n", + "We'll start by creating a new guardrails configuration." + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Writing config/config.yml\n" + ] + } + ], + "source": [ + "%%writefile config/config.yml\n", + "models:\n", + " - type: main\n", + " engine: openai\n", + " model: gpt-3.5-turbo-instruct" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We'll do a quick test to make sure everything is working as expected." + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [ + { + "data": { + "application/vnd.jupyter.widget-view+json": { + "model_id": "db0d1ffa109e4961b4d6e19007d676a1", + "version_major": 2, + "version_minor": 0 + }, + "text/plain": [ + "Fetching 5 files: 0%| | 0/5 [00:00. His email address is . He can also be reached through his personal email at . Additionally, he is active on social media and can be contacted through his official Twitter account . Is there anything else you would like to know about President ?\n" + ] + } + ], + "source": [ + "response = guard(\n", + " messages=[{\n", + " \"role\": \"user\",\n", + " \"content\": \"Who is the current president of the United States, and what was their email address?\"\n", + " }]\n", + ")\n", + "\n", + "print(response.validated_output)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Great! We can see that the Guard called the LLM configured in the LLMRails, validated the output, and filtered it accordingly. If however, we prompt the LLM with a message that does not cause it to return PII, we should get the unaltered response." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Hi there! It's nice to meet you. My name is AI Assistant. How can I help you today?\n" + ] + } + ], + "source": [ + "response = guard(\n", + " messages=[{\n", + " \"role\": \"user\",\n", + " \"content\": \"Hello!\"\n", + " }]\n", + ")\n", + "\n", + "print(response.validated_output)" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": ".venv", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.4" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/docs/integrations/nemo_guardrails/guard_as_action.md b/docs/integrations/nemo_guardrails/guard_as_action.md new file mode 100644 index 000000000..7e0b334a2 --- /dev/null +++ b/docs/integrations/nemo_guardrails/guard_as_action.md @@ -0,0 +1,171 @@ +# Guard as Actions + +This guide will teach you how to use a `Guard` with any of the 60+ GuardrailsAI Validators as an action inside a NeMo Guardrails configuration. + +## Prerequisites + +We'll be using an OpenAI model for our LLM in this guide, so set up an OpenAI API key, if not already set. + +```bash +export OPENAI_API_KEY=$OPENAI_API_KEY # Replace with your own key +``` + +If you're running this inside a notebook, you also need to patch the AsyncIO loop. + +```python +import nest_asyncio + +nest_asyncio.apply() +``` + +## Sample Guard + +Let's create a sample Guard that can detect PII. First, install guardrails-ai. + +```bash +pip install guardrails-ai -q +``` + +Next configure the guardrails cli so we can install the validator we want to use from the Guardrails Hub. 
+
+```bash
+guardrails configure
+```
+
+```bash
+guardrails hub install hub://guardrails/detect_pii --no-install-local-models -q
+```
+
+Now we can define our Guard.
+This Guard will use the DetectPII validator to safeguard against leaking personally identifiable information such as names, email addresses, etc.
+
+Once the Guard is defined, we can test it with a static value to make sure it's working how we would expect.
+
+```python
+from guardrails import Guard
+from guardrails.hub import DetectPII
+
+g = Guard(name="pii_guard").use(DetectPII(["PERSON", "EMAIL_ADDRESS"], on_fail="fix"))
+
+print(g.validate("My name is John Doe"))
+```
+
+```
+ValidationOutcome(
+    call_id='14534730096',
+    raw_llm_output='My name is John Doe',
+    validation_summaries=[
+        ValidationSummary(
+            validator_name='DetectPII',
+            validator_status='fail',
+            property_path='$',
+            failure_reason='The following text in your response contains PII:\nMy name is John Doe',
+            error_spans=[
+                ErrorSpan(start=11, end=19, reason='PII detected in John Doe')
+            ]
+        )
+    ],
+    validated_output='My name is ',
+    reask=None,
+    validation_passed=True,
+    error=None
+)
+```
+
+## NeMo Guardrails Configuration
+
+Now we'll use the Guard we defined above to create an action and a flow. Since we're calling our guard "pii_guard", we'll use "pii_guard_validate" in order to see if the LLM output is safe.
+
+```colang
+define flow detect_pii
+    $output = execute pii_guard_validate(text=$bot_message)
+
+    if not $output
+        bot refuse to respond
+        stop
+```
+
+```yaml
+models:
+  - type: main
+    engine: openai
+    model: gpt-3.5-turbo-instruct
+
+rails:
+  output:
+    flows:
+      - detect_pii
+```
+
+To hook the Guardrails AI guard up so that it can be read from Colang, we use the integration's `register_guardrails_guard_actions` function.
+This takes a name and registers two actions:
+
+1. [guard_name]_validate: This action is used to detect validation failures in outputs
+2. [guard_name]_fix: This action is used to automatically fix validation failures in outputs, when possible
+
+```python
+from nemoguardrails import RailsConfig, LLMRails
+from nemoguardrails.integrations.guardrails_ai.guard_actions import register_guardrails_guard_actions
+
+config = RailsConfig.from_path("./config")
+rails = LLMRails(config)
+
+register_guardrails_guard_actions(rails, g, "pii_guard")
+```
+
+```
+Fetching 5 files: 100%|██████████| 5/5 [00:00<00:00, 109226.67it/s]
+```
+
+## Testing
+
+Let's try this out. If we invoke the NeMo Guardrails configuration with a message that prompts the LLM to return personal information like names, email addresses, etc., it should refuse to respond.
+
+```python
+response = rails.generate("Who is the current president of the United States, and what was their email address?")
+print(response)
+```
+
+```
+I'm sorry, I can't respond to that.
+```
+
+Great! So the validation-only flow works. Next let's try the fix flow.
+
+```colang
+define flow detect_pii
+    $output = execute pii_guard_fix(text=$bot_message)
+
+    if not $output
+        bot refuse to respond
+        stop
+    else
+        $bot_message = $output
+```
+
+If we send the same message, we should get a response this time, but any PII will be filtered out.
+
+```python
+config = RailsConfig.from_path("./config")
+rails = LLMRails(config)
+
+register_guardrails_guard_actions(rails, g, "pii_guard")
+
+response = rails.generate("Who is the current president of the United States, and what was their email address?")
+print(response)
+```
+
+```
+The current president of the United States is . His official email address is . However, he also has a personal email address, which is .
+```
+
+If however, we prompt the LLM with a message that does not cause it to return PII, we should get the unaltered response.
+
+```python
+response = rails.generate("Hello!")
+print(response)
+```
+
+```
+Hello there! How can I assist you?
+```
diff --git a/docs/integrations/nemo_guardrails/index.md b/docs/integrations/nemo_guardrails/index.md
new file mode 100644
index 000000000..3754feaba
--- /dev/null
+++ b/docs/integrations/nemo_guardrails/index.md
@@ -0,0 +1,73 @@
+:::
+note: This will exist in the NeMo Guardrails docs
+:::
+
+
+# Guardrails AI & NeMo Guardrails
+
+Integrating Guardrails AI with NeMo Guardrails combines the strengths of both frameworks:
+
+Guardrails AI's extensive hub of validators can enhance NeMo Guardrails' input and output checking capabilities.
+NeMo Guardrails' flexible configuration system can provide a powerful context for applying Guardrails AI validators.
+Users of both frameworks can benefit from a seamless integration, reducing development time and improving overall safety measures.
+This integration allows developers to leverage the best features of both frameworks, creating more robust and secure LLM applications.
+
+## Registering a Guard as an Action
+
+```bash
+guardrails hub install hub://guardrails/toxic_language
+```
+
+```python
+from guardrails import Guard
+from guardrails.hub import ToxicLanguage
+from nemoguardrails import RailsConfig, LLMRails
+from nemoguardrails.integrations.guardrails_ai.guard_actions import register_guardrails_guard_actions
+
+guard = Guard().use(
+    ToxicLanguage()
+)
+
+config = RailsConfig.from_path("path/to/config")
+rails = LLMRails(config)
+
+register_guardrails_guard_actions(rails, guard, "custom_guard_action")
+```
+
+Now, the `custom_guard_action` can be used as an action within the Rails specification. This action can be used on input or output, and may be used in any number of flows.
+
+```colang
+define flow
+  ...
+  $result = execute custom_guard_action
+  ...
+``` + +## Using LLMRails in a Guard + +```bash +guardrails hub install hub://guardrails/toxic_language +``` + +```yaml +# config.yml +models: + - type: main + engine: openai + model: gpt-3.5-turbo-instruct +``` + +```python +from guardrails import Guard +from guardrails.hub import ToxicLanguage +from nemoguardrails import RailsConfig, LLMRails +from guardrails.integrations.nemoguardrails import NemoguardrailsGuard + +config = RailsConfig.from_path("path/to/config") +rails = LLMRails(config) + +guard = NemoguardrailsGuard(rails) +guard.use( + ToxicLanguage() +) +``` diff --git a/docs/integrations/nemo_guardrails/rails_as_guard.md b/docs/integrations/nemo_guardrails/rails_as_guard.md new file mode 100644 index 000000000..7da83e194 --- /dev/null +++ b/docs/integrations/nemo_guardrails/rails_as_guard.md @@ -0,0 +1,110 @@ +# NeMo Guardrails as Guards +This guide will teach you how to add NeMo Guardrails to a GuardrailsAI Guard. + +## Prerequisites + +We'll be using an OpenAI model for our LLM in this guide, so set up an OpenAI API key, if not already set. + +```bash +export OPENAI_API_KEY=$OPENAI_API_KEY # Replace with your own key +``` + +If you're running this inside a notebook, you also need to patch the AsyncIO loop. + +```python +import nest_asyncio + +nest_asyncio.apply() +``` + +## Sample NeMo Guardrails +We'll start by creating a new NeMo Guardrails configuration. + +```yaml +models: + - type: main + engine: openai + model: gpt-3.5-turbo-instruct +``` + +We'll do a quick test to make sure everything is working as expected. + +```python +from nemoguardrails import RailsConfig, LLMRails + +config = RailsConfig.from_path("./config") +rails = LLMRails(config) + +response = rails.generate("Hello!") + +print(response) +``` + +``` + Fetching 5 files: 0%| | 0/5 [00:00. His email address is . He can also be reached through his personal email at . Additionally, he is active on social media and can be contacted through his official Twitter account . 
Is there anything else you would like to know about President <PERSON>?
```

Great! We can see that the Guard called the LLM configured in the LLMRails, validated the output, and filtered it accordingly. If, however, we prompt the LLM with a message that does not cause it to return PII, we should get the unaltered response.

```python
response = guard(
    messages=[{
        "role": "user",
        "content": "Hello!"
    }]
)

print(response.validated_output)
```

```
Hi there! It's nice to meet you. My name is AI Assistant. How can I help you today?
```

diff --git a/docusaurus/sidebars.js b/docusaurus/sidebars.js
index a737c415e..c93997ba9 100644
--- a/docusaurus/sidebars.js
+++ b/docusaurus/sidebars.js
@@ -104,6 +104,16 @@ const sidebars = {
   integrations: [
     // "integrations/azure_openai",
     "integrations/langchain",
+    {
+      type: "category",
+      label: "NeMo Guardrails",
+      collapsed: false,
+      items: [
+        "integrations/nemo_guardrails/index",
+        "integrations/nemo_guardrails/guard_as_action",
+        "integrations/nemo_guardrails/rails_as_guard",
+      ],
+    },
     {
       type: "category",
       label: "Telemetry",

From 0b9e6903cc19e568ff9ef5ecb75426de4a00b689 Mon Sep 17 00:00:00 2001
From: Caleb Courier
Date: Fri, 22 Nov 2024 17:00:07 -0600
Subject: [PATCH 13/13] fix doc, install for typing

---
 .github/workflows/ci.yml                   | 1 +
 docs/integrations/nemo_guardrails/index.md | 5 -----
 2 files changed, 1 insertion(+), 5 deletions(-)

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 881dda180..97ccd5067 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -66,6 +66,7 @@ jobs:
          python3 -m venv ./.venv
          source .venv/bin/activate
          make full
+         pip install nemoguardrails nest-asyncio

      - name: Static analysis with pyright
        run: |

diff --git a/docs/integrations/nemo_guardrails/index.md b/docs/integrations/nemo_guardrails/index.md
index 3754feaba..c8da61d8d 100644
--- a/docs/integrations/nemo_guardrails/index.md
+++ b/docs/integrations/nemo_guardrails/index.md
@@ -1,8 +1,3 @@
-:::
-note: This will exist in the NeMo Guardrails docs
-:::
-
-
 # Guardrails AI & NeMo Guardrails

 Integrating Guardrails AI with NeMo Guardrails combines the strengths of both frameworks: