docs: add contextualai documentation (#30050)
Thank you for contributing to LangChain! **Description:** Adds ContextualAI's `langchain-contextual` package's documentation. If no one reviews your PR within a few days, please @-mention one of baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17. --------- Co-authored-by: Chester Curme <[email protected]>
1 parent b9746a6 · commit 911accf · Showing 3 changed files with 370 additions and 0 deletions.
Original file line number | Diff line number | Diff line change |
---|---|---|
@@ -0,0 +1,253 @@ | ||
{ | ||
"cells": [ | ||
{ | ||
"cell_type": "raw", | ||
"id": "afaf8039", | ||
"metadata": { | ||
"vscode": { | ||
"languageId": "raw" | ||
} | ||
}, | ||
"source": [ | ||
"---\n", | ||
"sidebar_label: ContextualAI\n", | ||
"---" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"id": "e49f1e0d", | ||
"metadata": {}, | ||
"source": [ | ||
"# ChatContextual\n", | ||
"\n", | ||
"This will help you getting started with Contextual AI's Grounded Language Model [chat models](/docs/concepts/chat_models/).\n", | ||
"\n", | ||
"To learn more about Contextual AI, please visit our [documentation](https://docs.contextual.ai/).\n", | ||
"\n", | ||
"This integration requires the `contextual-client` Python SDK. Learn more about it [here](https://github.com/ContextualAI/contextual-client-python).\n", | ||
"\n", | ||
"## Overview\n", | ||
"\n", | ||
"This integration invokes Contextual AI's Grounded Language Model.\n", | ||
"\n", | ||
"### Integration details\n", | ||
"\n", | ||
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n", | ||
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n", | ||
"| [ChatContextual](https://github.com/ContextualAI//langchain-contextual) | [langchain-contextual](https://pypi.org/project/langchain-contextual/) | ❌ | beta | ❌ |  |  |\n", | ||
"\n", | ||
"### Model features\n", | ||
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n", | ||
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n", | ||
"| ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | \n", | ||
"\n", | ||
"## Setup\n", | ||
"\n", | ||
"To access Contextual models you'll need to create a Contextual AI account, get an API key, and install the `langchain-contextual` integration package.\n", | ||
"\n", | ||
"### Credentials\n", | ||
"\n", | ||
"Head to [app.contextual.ai](https://app.contextual.ai) to sign up to Contextual and generate an API key. Once you've done this set the CONTEXTUAL_AI_API_KEY environment variable:\n" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94", | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"import getpass\n", | ||
"import os\n", | ||
"\n", | ||
"if not os.getenv(\"CONTEXTUAL_AI_API_KEY\"):\n", | ||
" os.environ[\"CONTEXTUAL_AI_API_KEY\"] = getpass.getpass(\n", | ||
" \"Enter your Contextual API key: \"\n", | ||
" )" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"id": "72ee0c4b-9764-423a-9dbf-95129e185210", | ||
"metadata": {}, | ||
"source": [ | ||
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de", | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n", | ||
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"id": "0730d6a1-c893-4840-9817-5e5251676d5d", | ||
"metadata": {}, | ||
"source": [ | ||
"### Installation\n", | ||
"\n", | ||
"The LangChain Contextual integration lives in the `langchain-contextual` package:" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"id": "652d6238-1f87-422a-b135-f5abbb8652fc", | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"%pip install -qU langchain-contextual" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"id": "a38cde65-254d-4219-a441-068766c0d4b5", | ||
"metadata": {}, | ||
"source": [ | ||
"## Instantiation\n", | ||
"\n", | ||
"Now we can instantiate our model object and generate chat completions.\n", | ||
"\n", | ||
"The chat client can be instantiated with these following additional settings:\n", | ||
"\n", | ||
"| Parameter | Type | Description | Default |\n", | ||
"|-----------|------|-------------|---------|\n", | ||
"| temperature | Optional[float] | The sampling temperature, which affects the randomness in the response. Note that higher temperature values can reduce groundedness. | 0 |\n", | ||
"| top_p | Optional[float] | A parameter for nucleus sampling, an alternative to temperature which also affects the randomness of the response. Note that higher top_p values can reduce groundedness. | 0.9 |\n", | ||
"| max_new_tokens | Optional[int] | The maximum number of tokens that the model can generate in the response. Minimum is 1 and maximum is 2048. | 1024 |" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae", | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"from langchain_contextual import ChatContextual\n", | ||
"\n", | ||
"llm = ChatContextual(\n", | ||
" model=\"v1\", # defaults to `v1`\n", | ||
" api_key=\"\",\n", | ||
" temperature=0, # defaults to 0\n", | ||
" top_p=0.9, # defaults to 0.9\n", | ||
" max_new_tokens=1024, # defaults to 1024\n", | ||
")" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"id": "2b4f3e15", | ||
"metadata": {}, | ||
"source": [ | ||
"## Invocation\n", | ||
"\n", | ||
"The Contextual Grounded Language Model accepts additional `kwargs` when calling the `ChatContextual.invoke` method.\n", | ||
"\n", | ||
"These additional inputs are:\n", | ||
"\n", | ||
"| Parameter | Type | Description |\n", | ||
"|-----------|------|-------------|\n", | ||
"| knowledge | list[str] | Required: A list of strings of knowledge sources the grounded language model can use when generating a response. |\n", | ||
"| system_prompt | Optional[str] | Optional: Instructions the model should follow when generating responses. Note that we do not guarantee that the model follows these instructions exactly. |\n", | ||
"| avoid_commentary | Optional[bool] | Optional (Defaults to `False`): Flag to indicate whether the model should avoid providing additional commentary in responses. Commentary is conversational in nature and does not contain verifiable claims; therefore, commentary is not strictly grounded in available context. However, commentary may provide useful context which improves the helpfulness of responses. |" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"id": "62e0dbc3", | ||
"metadata": { | ||
"tags": [] | ||
}, | ||
"outputs": [], | ||
"source": [ | ||
"# include a system prompt (optional)\n", | ||
"system_prompt = \"You are a helpful assistant that uses all of the provided knowledge to answer the user's query to the best of your ability.\"\n", | ||
"\n", | ||
"# provide your own knowledge from your knowledge-base here in an array of string\n", | ||
"knowledge = [\n", | ||
" \"There are 2 types of dogs in the world: good dogs and best dogs.\",\n", | ||
" \"There are 2 types of cats in the world: good cats and best cats.\",\n", | ||
"]\n", | ||
"\n", | ||
"# create your message\n", | ||
"messages = [\n", | ||
" (\"human\", \"What type of cats are there in the world and what are the types?\"),\n", | ||
"]\n", | ||
"\n", | ||
"# invoke the GLM by providing the knowledge strings, optional system prompt\n", | ||
"# if you want to turn off the GLM's commentary, pass True to the `avoid_commentary` argument\n", | ||
"ai_msg = llm.invoke(\n", | ||
" messages, knowledge=knowledge, system_prompt=system_prompt, avoid_commentary=True\n", | ||
")\n", | ||
"\n", | ||
"print(ai_msg.content)" | ||
] | ||
}, | ||
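{ | ||
"cell_type": "markdown", | ||
"id": "4f6c2a18", | ||
"metadata": {}, | ||
"source": [ | ||
"The message list can also carry earlier turns of a conversation. The cell below is a minimal sketch, assuming `ChatContextual` accepts a standard alternating human/ai message list like other LangChain chat models (this page does not state that explicitly):" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"id": "8a1d3b55", | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# assumed multi-turn usage: alternating human/ai tuples, reusing the same knowledge\n", | ||
"followup_messages = [\n", | ||
"    (\"human\", \"What type of cats are there in the world and what are the types?\"),\n", | ||
"    (\"ai\", ai_msg.content),\n", | ||
"    (\"human\", \"And what about dogs?\"),\n", | ||
"]\n", | ||
"\n", | ||
"followup_msg = llm.invoke(\n", | ||
"    followup_messages, knowledge=knowledge, avoid_commentary=True\n", | ||
")\n", | ||
"\n", | ||
"print(followup_msg.content)" | ||
] | ||
}, | ||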
{ | ||
"cell_type": "markdown", | ||
"id": "2c35a9e0", | ||
"metadata": {}, | ||
"source": [ | ||
"## Chaining\n", | ||
"\n", | ||
"We can chain the Contextual Model with output parsers." | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"id": "545e1e16", | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"from langchain_core.output_parsers import StrOutputParser\n", | ||
"\n", | ||
"chain = llm | StrOutputParser\n", | ||
"\n", | ||
"chain.invoke(\n", | ||
" messages, knowledge=knowledge, systemp_prompt=system_prompt, avoid_commentary=True\n", | ||
")" | ||
] | ||
}, | ||
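{ | ||
"cell_type": "markdown", | ||
"id": "6b9e0c27", | ||
"metadata": {}, | ||
"source": [ | ||
"Because the grounding inputs are plain keyword arguments to `invoke`, they can also be bound once with LangChain's generic `Runnable.bind`. The cell below is a minimal sketch that relies on core LangChain behavior rather than any Contextual-specific API:" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"id": "7c2f1d38", | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# bind the knowledge strings and options once, then invoke with plain messages\n", | ||
"bound_chain = llm.bind(\n", | ||
"    knowledge=knowledge, system_prompt=system_prompt, avoid_commentary=True\n", | ||
") | StrOutputParser()\n", | ||
"\n", | ||
"bound_chain.invoke(messages)" | ||
] | ||
}, | ||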
{ | ||
"cell_type": "markdown", | ||
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3", | ||
"metadata": {}, | ||
"source": [ | ||
"## API reference\n", | ||
"\n", | ||
"For detailed documentation of all ChatContextual features and configurations head to the Github page: https://github.com/ContextualAI//langchain-contextual" | ||
] | ||
} | ||
], | ||
"metadata": { | ||
"kernelspec": { | ||
"display_name": ".venv", | ||
"language": "python", | ||
"name": "python3" | ||
}, | ||
"language_info": { | ||
"codemirror_mode": { | ||
"name": "ipython", | ||
"version": 3 | ||
}, | ||
"file_extension": ".py", | ||
"mimetype": "text/x-python", | ||
"name": "python", | ||
"nbconvert_exporter": "python", | ||
"pygments_lexer": "ipython3", | ||
"version": "3.12.7" | ||
} | ||
}, | ||
"nbformat": 4, | ||
"nbformat_minor": 5 | ||
} |
Original file line number | Diff line number | Diff line change |
---|---|---|
@@ -0,0 +1,110 @@ | ||
{ | ||
"cells": [ | ||
{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"# Contextual AI\n", | ||
"\n", | ||
"Contextual AI is a platform that offers state-of-the-art Retrieval-Augmented Generation (RAG) technology for enterprise applications. Our platformant models helps innovative teams build production-ready AI applications that can process millions of pages of documents with exceptional accuracy.\n", | ||
"\n", | ||
"## Grounded Language Model (GLM)\n", | ||
"\n", | ||
"The Grounded Language Model (GLM) is specifically engineered to minimize hallucinations in RAG and agentic applications. The GLM achieves:\n", | ||
"\n", | ||
"- State-of-the-art performance on the FACTS benchmark\n", | ||
"- Responses strictly grounded in provided knowledge sources\n", | ||
"\n", | ||
"## Using Contextual AI with LangChain\n", | ||
"\n", | ||
"See details [here](/docs/integrations/chat/contextual).\n", | ||
"\n", | ||
"This integration allows you to easily incorporate Contextual AI's GLM into your LangChain workflows. Whether you're building applications for regulated industries or security-conscious environments, Contextual AI provides the grounded and reliable responses your use cases demand.\n", | ||
"\n", | ||
"Get started with a free trial today and experience the most grounded language model for enterprise AI applications." | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": { | ||
"id": "y8ku6X96sebl" | ||
}, | ||
"outputs": [ | ||
{ | ||
"name": "stdout", | ||
"output_type": "stream", | ||
"text": [ | ||
"According to the information available, there are two types of cats in the world:\n", | ||
"\n", | ||
"1. Good cats\n", | ||
"2. Best cats\n" | ||
] | ||
} | ||
], | ||
"source": [ | ||
"import getpass\n", | ||
"import os\n", | ||
"\n", | ||
"from langchain_contextual import ChatContextual\n", | ||
"\n", | ||
"# Set credentials\n", | ||
"if not os.getenv(\"CONTEXTUAL_AI_API_KEY\"):\n", | ||
" os.environ[\"CONTEXTUAL_AI_API_KEY\"] = getpass.getpass(\n", | ||
" \"Enter your Contextual API key: \"\n", | ||
" )\n", | ||
"\n", | ||
"# intialize Contextual llm\n", | ||
"llm = ChatContextual(\n", | ||
" model=\"v1\",\n", | ||
" api_key=\"\",\n", | ||
")\n", | ||
"# include a system prompt (optional)\n", | ||
"system_prompt = \"You are a helpful assistant that uses all of the provided knowledge to answer the user's query to the best of your ability.\"\n", | ||
"\n", | ||
"# provide your own knowledge from your knowledge-base here in an array of string\n", | ||
"knowledge = [\n", | ||
" \"There are 2 types of dogs in the world: good dogs and best dogs.\",\n", | ||
" \"There are 2 types of cats in the world: good cats and best cats.\",\n", | ||
"]\n", | ||
"\n", | ||
"# create your message\n", | ||
"messages = [\n", | ||
" (\"human\", \"What type of cats are there in the world and what are the types?\"),\n", | ||
"]\n", | ||
"\n", | ||
"# invoke the GLM by providing the knowledge strings, optional system prompt\n", | ||
"# if you want to turn off the GLM's commentary, pass True to the `avoid_commentary` argument\n", | ||
"ai_msg = llm.invoke(\n", | ||
" messages, knowledge=knowledge, system_prompt=system_prompt, avoid_commentary=True\n", | ||
")\n", | ||
"\n", | ||
"print(ai_msg.content)" | ||
] | ||
} | ||
], | ||
"metadata": { | ||
"colab": { | ||
"provenance": [] | ||
}, | ||
"kernelspec": { | ||
"display_name": ".venv", | ||
"language": "python", | ||
"name": "python3" | ||
}, | ||
"language_info": { | ||
"codemirror_mode": { | ||
"name": "ipython", | ||
"version": 3 | ||
}, | ||
"file_extension": ".py", | ||
"mimetype": "text/x-python", | ||
"name": "python", | ||
"nbconvert_exporter": "python", | ||
"pygments_lexer": "ipython3", | ||
"version": "3.12.8" | ||
} | ||
}, | ||
"nbformat": 4, | ||
"nbformat_minor": 1 | ||
} |