Evals docs (langchain-ai#7460)
Still don't have good "how-tos", and the guides / examples section
could be further pruned and improved, but this PR adds a couple of examples
for each of the common evaluator interfaces.

- [x] Example docs for each implemented evaluator
- [x] "how to make a custom evalutor" notebook for each low level APIs
(comparison, string, agent)
- [x] Move docs to modules area
- [x] Link to reference docs for more information
- [X] Still need to finish the evaluation index page
- ~[ ] Don't have good data generation section~
- ~[ ] Don't have good how-to section for other common scenarios / FAQs
like regression testing, testing over similar inputs to measure
sensitivity, etc.~
hinthornw authored Jul 18, 2023
1 parent d875649 commit 3179ee3
Showing 35 changed files with 2,378 additions and 1,350 deletions.
9 changes: 9 additions & 0 deletions docs/api_reference/modules/evaluation.rst
@@ -0,0 +1,9 @@
Evaluation
=======================

LangChain has a number of convenient evaluation chains you can use off the shelf to grade your models' outputs.

.. automodule:: langchain.evaluation
:members:
:undoc-members:
:inherited-members:
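
For example, a correctness grader can be loaded and applied in a few lines. A minimal sketch, assuming the `load_evaluator` helper exported by `langchain.evaluation` and an OpenAI API key in the environment for the default grading model:

```python
from langchain.evaluation import load_evaluator

# "qa" loads an off-the-shelf chain that grades a prediction against a reference answer.
evaluator = load_evaluator("qa")
result = evaluator.evaluate_strings(
    input="What is the capital of France?",
    prediction="The capital of France is Paris.",
    reference="Paris",
)
print(result)  # a dict describing the grader's verdict; see the reference docs for the exact schema
```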
@@ -0,0 +1,8 @@
---
sidebar_position: 3
---
# Comparison Evaluators

import DocCardList from "@theme/DocCardList";

<DocCardList />
12 changes: 12 additions & 0 deletions docs/docs_skeleton/docs/modules/evaluation/examples/index.mdx
@@ -0,0 +1,12 @@
---
sidebar_position: 5
---
# Examples

🚧 _Docs under construction_ 🚧

Below are some examples for inspecting and checking different chains.

import DocCardList from "@theme/DocCardList";

<DocCardList />
28 changes: 28 additions & 0 deletions docs/docs_skeleton/docs/modules/evaluation/index.mdx
@@ -0,0 +1,28 @@
---
sidebar_position: 6
---

import DocCardList from "@theme/DocCardList";

# Evaluation

Language models can be unpredictable. This makes it challenging to ship reliable applications to production, where repeatable, useful outcomes across diverse inputs are a minimum requirement. Tests help demonstrate that each component in an LLM application produces the required or expected functionality. These tests also safeguard against regressions while you improve interconnected pieces of an integrated system. However, measuring the quality of generated text can be challenging. It can be hard to agree on the right set of metrics for your application, and it can be difficult to translate those metrics into better performance. Furthermore, when you're just getting started, it's common to lack sufficient evaluation data to adequately test the range of inputs and expected outputs for each component. The LangChain community is building open source tools and guides to help address these challenges.

LangChain exposes different evaluator types for common kinds of evaluation. Each type has off-the-shelf implementations you can use to get started, as well as an
extensible API so you can create your own or contribute improvements for everyone to use. The following sections have example notebooks to help you get started; a minimal usage sketch follows the list below.

- [String Evaluators](/docs/modules/evaluation/string/): Evaluate the predicted string for a given input, usually against a reference string
- [Trajectory Evaluators](/docs/modules/evaluation/trajectory/): Evaluate the whole trajectory of agent actions
- [Comparison Evaluators](/docs/modules/evaluation/comparison/): Compare predictions from two runs on a common input
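
As a quick orientation, here is a minimal sketch of how these interfaces might be exercised. It assumes the `load_evaluator` helper described in the reference docs and an OpenAI API key in the environment, since the default graders are OpenAI chat models; consult the reference docs for the exact result schemas.

```python
from langchain.evaluation import load_evaluator

# String evaluator: grade a single prediction for a given input,
# here against the built-in "conciseness" criterion.
string_evaluator = load_evaluator("criteria", criteria="conciseness")
print(
    string_evaluator.evaluate_strings(
        prediction="Madrid is the capital of Spain.",
        input="What is the capital of Spain?",
    )
)

# Comparison evaluator: state a preference between two predictions
# for the same input.
comparison_evaluator = load_evaluator("pairwise_string")
print(
    comparison_evaluator.evaluate_string_pairs(
        prediction="Madrid.",
        prediction_b="The capital of Spain is Madrid, home to about 3.3 million people.",
        input="What is the capital of Spain?",
    )
)

# Trajectory evaluators expose evaluate_agent_trajectory(...), which also takes
# the agent's intermediate (action, observation) steps for grading.
```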


This section also provides some additional examples of how you could use these evaluators for different scenarios or apply them to different chain implementations in the LangChain library. Some examples include:

- [Preference Scoring Chain Outputs](/docs/modules/evaluation/examples/comparisons): An example using a comparison evaluator on different models or prompts to surface statistically significant differences in aggregate preference scores (a brief sketch follows below)
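
A hypothetical sketch of that workflow (the dataset and the tallying logic below are illustrative, not part of LangChain; the pairwise grader assumes an OpenAI API key for its default model):

```python
from langchain.evaluation import load_evaluator

evaluator = load_evaluator("pairwise_string")

# Illustrative inputs plus outputs from two candidate models or prompts.
examples = [
    {"input": "What is 2 + 2?", "a": "4", "b": "2 + 2 equals 4."},
    {"input": "Name a primary color.", "a": "Blue.", "b": "One primary color is blue."},
]

wins = {"a": 0, "b": 0, "tie": 0}
for ex in examples:
    result = evaluator.evaluate_string_pairs(
        prediction=ex["a"],
        prediction_b=ex["b"],
        input=ex["input"],
    )
    # The grader reports which prediction it preferred (commonly "A" or "B";
    # see the reference docs for the exact schema). Tally preferences so
    # aggregate win rates can be compared for statistical significance.
    preferred = result.get("value")
    if preferred == "A":
        wins["a"] += 1
    elif preferred == "B":
        wins["b"] += 1
    else:
        wins["tie"] += 1

print(wins)
```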


## Reference Docs

For detailed information on the available evaluators, including how to instantiate, configure, and customize them, check out the [reference documentation](https://api.python.langchain.com/en/latest/api_reference.html#module-langchain.evaluation) directly.

<DocCardList />
8 changes: 8 additions & 0 deletions docs/docs_skeleton/docs/modules/evaluation/string/index.mdx
@@ -0,0 +1,8 @@
---
sidebar_position: 2
---
# String Evaluators

import DocCardList from "@theme/DocCardList";

<DocCardList />
@@ -0,0 +1,8 @@
---
sidebar_position: 4
---
# Trajectory Evaluators

import DocCardList from "@theme/DocCardList";

<DocCardList />
4 changes: 3 additions & 1 deletion docs/docs_skeleton/docs/modules/index.mdx
@@ -17,4 +17,6 @@ Let chains choose which tools to use given high-level directives
#### [Memory](/docs/modules/memory/)
Persist application state between runs of a chain
#### [Callbacks](/docs/modules/callbacks/)
Log and stream intermediate steps of any chain
#### [Evaluation](/docs/modules/evaluation/)
Evaluate the performance of a chain.
301 changes: 0 additions & 301 deletions docs/extras/guides/evaluation/agent_benchmarking.ipynb

This file was deleted.

