
Add observability quick start #632

Merged · 13 commits · Jan 30, 2025
10 changes: 5 additions & 5 deletions docs/evaluation/index.mdx
@@ -52,7 +52,7 @@ export OPENAI_API_KEY="<your-openai-api-key>"`),
groupId="client-language"
/>

## 3. Import dependencies
## 4. Import dependencies

<CodeTabs
tabs={[
@@ -85,7 +85,7 @@ const openai = new OpenAI();`,
groupId="client-language"
/>

## 4. Create a dataset
## 5. Create a dataset

<CodeTabs
tabs={[
@@ -164,7 +164,7 @@ await client.createExamples({
groupId="client-language"
/>

## 5. Define what you're evaluating
## 6. Define what you're evaluating

<CodeTabs
tabs={[
@@ -204,7 +204,7 @@ async function target(inputs: string): Promise<{ response: string }> {
groupId="client-language"
/>

## 6. Define evaluator
## 7. Define evaluator

<CodeTabs
tabs={[
@@ -289,7 +289,7 @@ async function accuracy({
groupId="client-language"
/>

## 7. Run and view results
## 8. Run and view results

<CodeTabs tabs={[

305 changes: 305 additions & 0 deletions docs/observability/index.mdx
@@ -0,0 +1,305 @@
---
sidebar_label: Quick Start
sidebar_position: 0
table_of_contents: true
---

import {
CodeTabs,
python,
typescript,
ShellBlock,
} from "@site/src/components/InstructionsWithCode";
import { RegionalUrl } from "@site/src/components/RegionalUrls";

# Observability Quick Start

This tutorial will get you up and running with our observability SDK by showing you how to
trace your application to LangSmith.

> **Reviewer (Contributor):** We should have 3 top-level sections here:
>
> - **Get Started with LangSmith if you're using LangChain.** LangChain integrates seamlessly with LangSmith, with no extra instrumentation needed. Learn how to start tracing with LangChain.
> - **Get Started with LangSmith if you're using LangGraph.** LangGraph integrates seamlessly with LangSmith, with no extra instrumentation needed. Learn how to start tracing with LangGraph.
> - **Get Started instrumenting your application with LangSmith.**
>
> **Author (Contributor):** Having 3 headers with only a sentence under each one does not render well. I added a note instead to achieve the same purpose.
>
> **Reviewer (Contributor):** sg!

If you're already familiar with the observability SDK, or are interested in tracing more than just
LLM calls, you can skip to the [next steps section](#next-steps),
or check out the [how-to guides](../observability/how_to_guides).

:::tip Trace LangChain or LangGraph Applications
If you are using [LangChain](https://python.langchain.com/docs/introduction/) or [LangGraph](https://langchain-ai.github.io/langgraph/), which both integrate seamlessly with LangSmith,
you can get started by reading the guides for tracing with [LangChain](./observability/how_to_guides/tracing/trace_with_langchain) or tracing with [LangGraph](./observability/how_to_guides/tracing/trace_with_langgraph).
:::

## 1. Install Dependencies

<CodeTabs
tabs={[
{
value: "python",
label: "Python",
language: "bash",
content: `pip install -U langsmith openai`,
},
{
value: "typescript",
label: "TypeScript",
language: "bash",
content: `yarn add langsmith openai`,
},
]}
groupId="client-language"
/>

## 2. Create an API key

To create an API key, head to the <RegionalUrl text='LangSmith settings page' suffix='/settings' />, then click **Create API Key**.

## 3. Set up your environment

<CodeTabs
tabs={[
ShellBlock(`export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY="<your-langsmith-api-key>"
# This example uses OpenAI; you can skip this if your code uses another LLM provider
export OPENAI_API_KEY="<your-openai-api-key>"`),
]}
groupId="client-language"
/>
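
If you'd rather configure these variables from code (for example, in a notebook), here is a minimal Python sketch that sets the same values. Hard-coding keys this way is an assumption for local experimentation only; prefer your shell or a secrets manager in real projects.

```python
import os

# Equivalent to the shell exports above; set these before creating any clients.
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_API_KEY"] = "<your-langsmith-api-key>"
# Only needed if you call OpenAI, as this tutorial does.
os.environ["OPENAI_API_KEY"] = "<your-openai-api-key>"
```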

## 4. Define your application

We will instrument a simple [RAG](https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-retrieval-augmented-generation-rag)
application for this tutorial, but feel free to use your own code if you'd like; just make sure
it has an LLM call!

<details>
<summary>Application Code</summary>
<CodeTabs
groupId="client-language"
tabs={[
python({ label: "Python" })`
from openai import OpenAI

openai_client = OpenAI()

# This is the retriever we will use in RAG
# This is mocked out, but it could be anything we want
def retriever(query: str):
results = ["Harrison worked at Kensho"]
return results

# This is the end-to-end RAG chain.
# It does a retrieval step then calls OpenAI
def rag(question):
docs = retriever(question)
system_message = """Answer the user's question using only the provided information below:

{docs}""".format(docs="\\n".join(docs))

return openai_client.chat.completions.create(
messages=[
{"role": "system", "content": system_message},
{"role": "user", "content": question},
],
model="gpt-4o-mini",
)
`,
typescript({ label: "TypeScript" })`
import { OpenAI } from "openai";

const openAIClient = new OpenAI();

// This is the retriever we will use in RAG
// This is mocked out, but it could be anything we want
async function retriever(query: string) {
return ["This is a document"];
}

// This is the end-to-end RAG chain.
// It does a retrieval step then calls OpenAI
async function rag(question: string) {
const docs = await retriever(question);

const systemMessage =
"Answer the users question using only the provided information below:\\n\\n" +
docs.join("\\n");

return await openAIClient.chat.completions.create({
messages: [
{ role: "system", content: systemMessage },
{ role: "user", content: question },
],
model: "gpt-4o-mini",
});
}
`,
]}
/>
</details>
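
Before adding any tracing, it can help to sanity-check the application by calling it directly. A minimal usage sketch (the return value is a standard OpenAI chat completion object):

```python
response = rag("where did harrison work")
# The answer text lives on the first choice of the completion.
print(response.choices[0].message.content)
```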

## 5. Trace OpenAI calls

The first thing you might want to trace is all your OpenAI calls. LangSmith makes this easy with the [`wrap_openai`](https://docs.smith.langchain.com/reference/python/wrappers/langsmith.wrappers._openai.wrap_openai_) (Python) or [`wrapOpenAI`](https://docs.smith.langchain.com/reference/js/functions/wrappers_openai.wrapOpenAI) (TypeScript) wrappers.
All you have to do is modify your code to use the wrapped client instead of the `OpenAI` client directly.

<CodeTabs
groupId="client-language"
tabs={[
python({ label: "Python" })`
from openai import OpenAI
# highlight-next-line
from langsmith.wrappers import wrap_openai

# highlight-next-line
openai_client = wrap_openai(OpenAI())

# This is the retriever we will use in RAG
# This is mocked out, but it could be anything we want
def retriever(query: str):
results = ["Harrison worked at Kensho"]
return results

# This is the end-to-end RAG chain.
# It does a retrieval step then calls OpenAI
def rag(question):
docs = retriever(question)
system_message = """Answer the user's question using only the provided information below:

{docs}""".format(docs="\\n".join(docs))

return openai_client.chat.completions.create(
messages=[
{"role": "system", "content": system_message},
{"role": "user", "content": question},
],
model="gpt-4o-mini",
)
`,
typescript({ label: "TypeScript" })`
import { OpenAI } from "openai";
// highlight-next-line
import { wrapOpenAI } from "langsmith/wrappers";

// highlight-next-line
const openAIClient = wrapOpenAI(new OpenAI());

// This is the retriever we will use in RAG
// This is mocked out, but it could be anything we want
async function retriever(query: string) {
return ["This is a document"];
}

// This is the end-to-end RAG chain.
// It does a retrieval step then calls OpenAI
async function rag(question: string) {
const docs = await retriever(question);

const systemMessage =
"Answer the users question using only the provided information below:\\n\\n" +
docs.join("\\n");

return await openAIClient.chat.completions.create({
messages: [
{ role: "system", content: systemMessage },
{ role: "user", content: question },
],
model: "gpt-4o-mini",
});
}
`,
]}
/>

Now call your application as follows:

```python
rag("where did harrison work")
```

This will produce a trace of just the OpenAI call in LangSmith's default tracing project. It should look something like [this](https://smith.langchain.com/public/e7b7d256-10fe-4d49-a8d5-36ca8e5af0d2/r).

![](./tutorials/static/tracing_tutorial_openai.png)

## 6. Trace entire application

You can also use the `traceable` decorator ([Python](https://docs.smith.langchain.com/reference/python/run_helpers/langsmith.run_helpers.traceable) or [TypeScript](https://docs.smith.langchain.com/reference/js/functions/traceable.traceable)) to trace your entire application instead of just the LLM calls.

<CodeTabs
groupId="client-language"
tabs={[
python({ label: "Python" })`
from openai import OpenAI
# highlight-next-line
from langsmith import traceable
from langsmith.wrappers import wrap_openai

openai_client = wrap_openai(OpenAI())

def retriever(query: str):
results = ["Harrison worked at Kensho"]
return results

# highlight-next-line
@traceable
def rag(question):
docs = retriever(question)
system_message = """Answer the user's question using only the provided information below:

{docs}""".format(docs="\\n".join(docs))

return openai_client.chat.completions.create(
messages=[
{"role": "system", "content": system_message},
{"role": "user", "content": question},
],
model="gpt-4o-mini",
)
`,
typescript({ label: "TypeScript" })`
import { OpenAI } from "openai";
// highlight-next-line
import { traceable } from "langsmith/traceable";
import { wrapOpenAI } from "langsmith/wrappers";

const openAIClient = wrapOpenAI(new OpenAI());

async function retriever(query: string) {
return ["This is a document"];
}

// highlight-next-line
const rag = traceable(async function rag(question: string) {
const docs = await retriever(question);

const systemMessage =
"Answer the users question using only the provided information below:\\n\\n" +
docs.join("\\n");

return await openAIClient.chat.completions.create({
messages: [
{ role: "system", content: systemMessage },
{ role: "user", content: question },
],
model: "gpt-4o-mini",
});
});
`,
]}
/>

Now call your application the same way:

```python
rag("where did harrison work")
```

This will produce a trace of the entire pipeline (with the OpenAI call as a child run). It should look something like [this](https://smith.langchain.com/public/2174f4e9-48ab-4f9e-a8c4-470372d976f1/r).

![](./tutorials/static/tracing_tutorial_chain.png)
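
`traceable` also accepts optional arguments that shape how runs appear. As one possible refinement (a sketch assuming the `name` and `run_type` parameters of the Python SDK), you can decorate the retriever as well, so the retrieval step shows up as its own child run:

```python
from openai import OpenAI
from langsmith import traceable
from langsmith.wrappers import wrap_openai

openai_client = wrap_openai(OpenAI())

# run_type="retriever" records this step as a retriever run in the trace tree
@traceable(run_type="retriever")
def retriever(query: str):
    return ["Harrison worked at Kensho"]

# name= controls how the top-level run is labeled in the LangSmith UI
@traceable(name="RAG Pipeline")
def rag(question: str):
    docs = retriever(question)
    system_message = (
        "Answer the user's question using only the provided information below:\n\n"
        + "\n".join(docs)
    )
    return openai_client.chat.completions.create(
        messages=[
            {"role": "system", "content": system_message},
            {"role": "user", "content": question},
        ],
        model="gpt-4o-mini",
    )
```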

## Next steps

Congratulations! If you've made it this far, you're well on your way to being an expert in observability with LangSmith.
Here are some topics you might want to explore next:

- [Trace multiturn conversations](./observability/how_to_guides/monitoring/threads)
- [Send traces to a specific project](./observability/how_to_guides/tracing/log_traces_to_project) (a short sketch follows this list)
- [Filter traces in a project](./observability/how_to_guides/monitoring/filter_traces_in_application)
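
As a taste of the second item above, a minimal sketch assuming the `LANGSMITH_PROJECT` environment variable (older SDK versions read `LANGCHAIN_PROJECT` instead), which routes traces away from the default project:

```python
import os

# Traces recorded after this point land in "my-rag-app" instead of "default".
os.environ["LANGSMITH_PROJECT"] = "my-rag-app"

rag("where did harrison work")
```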

Or you can visit the [how-to guides page](./observability/how_to_guides) to find out about all the things you can do with LangSmith observability.
3 changes: 2 additions & 1 deletion sidebars.js
@@ -27,6 +27,7 @@ const sidebars = {
type: "category",
label: "Observability",
items: [
"observability/index",
{
type: "category",
label: "Tutorials",
@@ -67,7 +68,7 @@ const sidebars = {
link: { type: "doc", id: "observability/concepts/index" },
},
],
link: { type: "doc", id: "observability/tutorials/index" },
link: { type: "doc", id: "observability/index" },
},
{
type: "category",