diff --git a/docs/evaluation/index.mdx b/docs/evaluation/index.mdx
index fc129dd0..f9793010 100644
--- a/docs/evaluation/index.mdx
+++ b/docs/evaluation/index.mdx
@@ -52,7 +52,7 @@ export OPENAI_API_KEY=""`),
   groupId="client-language"
 />

-## 3. Import dependencies
+## 4. Import dependencies

-## 4. Create a dataset
+## 5. Create a dataset

-## 5. Define what you're evaluating
+## 6. Define what you're evaluating

 {
   groupId="client-language"
 />

-## 6. Define evaluator
+## 7. Define evaluator

-## 7. Run and view results
+## 8. Run and view results

diff --git a/docs/observability/index.mdx b/docs/observability/index.mdx
--- a/docs/observability/index.mdx
+++ b/docs/observability/index.mdx
+
+## 2. Create an API key
+
+To create an API key, head to the [Settings page](https://smith.langchain.com/settings). Then click **Create API Key**.
+
+## 3. Set up your environment
+
+Set `LANGCHAIN_TRACING_V2=true` and `LANGCHAIN_API_KEY=<your-api-key>` in your environment so that traces are sent to LangSmith.
+
+## 4. Define your application
+
+We will instrument a simple [RAG](https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-retrieval-augmented-generation-rag)
+application for this tutorial, but feel free to use your own code if you'd like - just make sure
+it has an LLM call!
+
+<details>
+  <summary>Application Code</summary>
+
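+A minimal sketch of the kind of app this tutorial assumes is below. The hard-coded `retriever` and the `gpt-4o-mini` model name are illustrative placeholders - swap in your own retrieval step and model:
+
+```python
+from openai import OpenAI
+
+openai_client = OpenAI()
+
+# Stand-in retriever: a real app would query a vector store or search index.
+def retriever(query: str) -> list[str]:
+    return ["Harrison worked at Kensho"]
+
+# End-to-end RAG pipeline: retrieve docs, then call the LLM with them.
+def rag(question: str):
+    docs = retriever(question)
+    system_message = (
+        "Answer the user's question using only the provided information below:\n\n"
+        + "\n".join(docs)
+    )
+    return openai_client.chat.completions.create(
+        model="gpt-4o-mini",
+        messages=[
+            {"role": "system", "content": system_message},
+            {"role": "user", "content": question},
+        ],
+    )
+```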
+
+</details>
+
+## 5. Trace OpenAI calls
+
+The first thing you might want to trace is all your OpenAI calls. LangSmith makes this easy with the [`wrap_openai`](https://docs.smith.langchain.com/reference/python/wrappers/langsmith.wrappers._openai.wrap_openai_) (Python) or [`wrapOpenAI`](https://docs.smith.langchain.com/reference/js/functions/wrappers_openai.wrapOpenAI) (TypeScript) wrappers.
+All you have to do is modify your code to use the wrapped client instead of using the `OpenAI` client directly.
+
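+For example, a minimal sketch (assuming the `openai_client` from the application code above; only the client construction changes, since the wrapped client keeps the normal OpenAI interface):
+
+```python
+from openai import OpenAI
+from langsmith.wrappers import wrap_openai
+
+# Every chat.completions call made through this client is now traced.
+openai_client = wrap_openai(OpenAI())
+```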
+Now when you call your application as follows:
+
+```python
+rag("where did harrison work")
+```
+
+This will produce a trace of just the OpenAI call in LangSmith's default tracing project. It should look something like [this](https://smith.langchain.com/public/e7b7d256-10fe-4d49-a8d5-36ca8e5af0d2/r).
+
+![](./tutorials/static/tracing_tutorial_openai.png)
+
+## 6. Trace entire application
+
+You can also use the `traceable` decorator ([Python](https://docs.smith.langchain.com/reference/python/run_helpers/langsmith.run_helpers.traceable) or [TypeScript](https://docs.smith.langchain.com/reference/js/functions/traceable.traceable)) to trace your entire application instead of just the LLM calls. Just add the decorator to the top-level function you want traced (here, `rag`).
+
+Now if you call your application as follows:
+
+```python
+rag("where did harrison work")
+```
+
+This will produce a trace of the entire pipeline (with the OpenAI call as a child run). It should look something like [this](https://smith.langchain.com/public/2174f4e9-48ab-4f9e-a8c4-470372d976f1/r).
+
+![](./tutorials/static/tracing_tutorial_chain.png)
+
+## Next steps
+
+Congratulations! If you've made it this far, you're well on your way to being an expert in observability with LangSmith.
+Here are some topics you might want to explore next:
+
+- [Trace multiturn conversations](./how_to_guides/monitoring/threads)
+- [Send traces to a specific project](./how_to_guides/tracing/log_traces_to_project)
+- [Filter traces in a project](./how_to_guides/monitoring/filter_traces_in_application)
+
+Or you can visit the [how-to guides page](./how_to_guides) to find out about all the things you can do with LangSmith observability.
diff --git a/sidebars.js b/sidebars.js
index 02f8f2ef..97d7fe80 100644
--- a/sidebars.js
+++ b/sidebars.js
@@ -27,6 +27,7 @@ const sidebars = {
       type: "category",
       label: "Observability",
      items: [
+        "observability/index",
        {
          type: "category",
          label: "Tutorials",
@@ -67,7 +68,7 @@
        link: { type: "doc", id: "observability/concepts/index" },
      },
    ],
-    link: { type: "doc", id: "observability/tutorials/index" },
+    link: { type: "doc", id: "observability/index" },
  },
  {
    type: "category",