:image}`.
+We also set the `response_model` to `list[Item]` so that our LLM knows to extract each item.
+
+
+
+```python
+from mirascope.core import anthropic, openai, prompt_template
+
+
+@openai.call(model="gpt-4o", response_model=list[Item])
+@prompt_template(
+ """
+ SYSTEM:
+ Extract the receipt information from the image.
+
+ USER:
+ {url:image}
+ """
+)
+def extract_receipt_info_openai(url: str): ...
+
+
+@anthropic.call(
+ model="claude-3-5-sonnet-20240620", response_model=list[Item], json_mode=True
+)
+@prompt_template(
+ """
+ Extract the receipt information from the image.
+
+ {url:image}
+ """
+)
+def extract_receipt_info_anthropic(url: str): ...
+```
+
+Let's get the results:
+
+
+```python
+image_url = "https://www.receiptfont.com/wp-content/uploads/template-mcdonalds-1-screenshot-fit.webp"
+
+print(extract_receipt_info_openai(image_url))
+
+print(extract_receipt_info_anthropic(image_url))
+```
+
+ [Item(name='Happy Meal 6 Pc', quantity=1, price=4.89), Item(name='Snack Oreo McFlurry', quantity=1, price=2.69)]
+ [Item(name='Happy Meal 6 Pc', quantity=1, price=4.89), Item(name='Snack Oreo McFlurry', quantity=1, price=2.69)]
+
+
+We see that both LLMs return the same response, which gives us more confidence that the image was extracted accurately, though this is not a guarantee.
+
+
+
+- Split Your Bill: Building on our example, upload a receipt along with a query stating who ordered which dish and have the LLM split the bill for you.
+- Content Moderation: Classify user-generated images as appropriate, inappropriate, or requiring manual review.
+- Ecommerce Product Classification: Create descriptions and features from product images.
+
+
+
+When adapting this recipe to your specific use-case, consider the following:
+
+- Refine your prompts to provide clear instructions and relevant context for your image extraction. In our example, there were sub-items that were not extracted; depending on your situation, you may need to extract those as well.
+- Experiment with different model providers and versions to balance accuracy and speed.
+- Use multiple model providers to verify whether results are correct.
+- Use `async` for multiple images and run the calls in parallel, as in the sketch below.
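+
+For example, here is a minimal sketch of parallelizing extraction across several images. It assumes Mirascope's async call support; `extract_receipt_info_async` and `extract_many` are illustrative names, not part of the recipe above.
+
+
+```python
+import asyncio
+
+from mirascope.core import openai, prompt_template
+
+
+# A hypothetical async variant of the earlier OpenAI call; defining the function
+# with `async def` makes the decorated call awaitable.
+@openai.call(model="gpt-4o", response_model=list[Item])
+@prompt_template(
+    """
+    SYSTEM:
+    Extract the receipt information from the image.
+
+    USER:
+    {url:image}
+    """
+)
+async def extract_receipt_info_async(url: str): ...
+
+
+async def extract_many(urls: list[str]) -> list[list[Item]]:
+    # Fire off all extraction calls concurrently and gather the results
+    return await asyncio.gather(*(extract_receipt_info_async(url) for url in urls))
+
+
+# results = asyncio.run(extract_many([image_url]))
+```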
+
diff --git a/cloud/content/docs/v1/guides/more-advanced/generating-captions.mdx b/cloud/content/docs/v1/guides/more-advanced/generating-captions.mdx
new file mode 100644
index 0000000000..dd7d1e3e42
--- /dev/null
+++ b/cloud/content/docs/v1/guides/more-advanced/generating-captions.mdx
@@ -0,0 +1,83 @@
+---
+title: Generate Captions for an Image
+description: Use multimodal LLM capabilities to generate descriptive captions for images by leveraging OpenAI's vision capabilities.
+---
+
+# Generate Captions for an Image
+
+In this recipe, we go over how to use LLMs to generate a descriptive caption for an image with OpenAI’s `gpt-4o-mini`.
+
+
+
+
+
+
+Caption generation evolved from manual human effort to machine learning techniques like Conditional Random Fields (CRFs) and Support Vector Machines (SVMs), which were time-consuming and resource-intensive. Large Language Models (LLMs) have revolutionized this field, enabling efficient multi-modal tasks through API calls and prompt engineering, dramatically improving caption generation speed and accuracy.
+
+
+## Setup
+
+Let's start by installing Mirascope and its dependencies:
+
+
+```python
+!pip install "mirascope[openai]"
+```
+
+
+```python
+import os
+
+os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"
+# Set the appropriate API key for the provider you're using
+```
+
+## Generate Captions
+
+
+This recipe will only work for providers that support image inputs (OpenAI, Gemini, Anthropic) as of 8/13/2024. Be sure to check whether your provider has multimodal support.
+
+
+
+With OpenAI’s multimodal capabilities, image inputs are treated just like text inputs, which means we can use an image as context to ask questions or make requests. For the sake of reproducibility, we will get our image from a URL to save you the hassle of having to find and download an image. The image is [a public image from BBC Science of a wolf](https://c02.purpledshub.com/uploads/sites/41/2023/01/How-to-see-the-Wolf-Moon-in-2023--4bb6bb7.jpg?w=1880&webp=1) in front of the moon.
+
+Since we can treat the image like any other text context, we can simply ask the model to caption the image:
+
+
+```python
+from mirascope.core import openai, prompt_template
+
+url = "https://c02.purpledshub.com/uploads/sites/41/2023/01/How-to-see-the-Wolf-Moon-in-2023--4bb6bb7.jpg?w=940&webp=1"
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template("Generate a short, descriptive caption for this image: {url:image}")
+def generate_caption(url: str): ...
+
+
+response = generate_caption(url)
+print(response)
+```
+
+ A lone wolf howls into the night, silhouetted against a glowing full moon, creating a hauntingly beautiful scene that captures the wild spirit of nature.
+
+
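+If you also want structured output, such as a caption plus a handful of tags, you can pair the same image prompt with a `response_model`. Below is a minimal sketch; the `ImageCaption` schema and `generate_structured_caption` function are illustrative, not part of the recipe above.
+
+
+```python
+from pydantic import BaseModel, Field
+
+
+class ImageCaption(BaseModel):
+    caption: str = Field(description="A short, descriptive caption for the image")
+    tags: list[str] = Field(description="A few descriptive tags for the image")
+
+
+@openai.call(model="gpt-4o-mini", response_model=ImageCaption)
+@prompt_template("Generate a caption and descriptive tags for this image: {url:image}")
+def generate_structured_caption(url: str): ...
+
+
+# structured = generate_structured_caption(url)
+# print(structured.caption, structured.tags)
+```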
+
+
+- Content Moderation: Classify user-generated images as appropriate, inappropriate, or requiring manual review.
+- Ecommerce Product Classification: Create descriptions and features from product images.
+- AI Assistant for People with Vision Impairments: Convert images to text, then text-to-speech so people with vision impairments can be more independent.
+
+
+
+When adapting this recipe to your specific use-case, consider the following:
+
+- Refine your prompts to provide clear instructions and relevant context for your caption generation task.
+- Experiment with different model providers and versions to balance accuracy and speed.
+- Use multiple model providers to evaluate results for accuracy.
+- Use `async` for multiple images and run the calls in parallel.
+
diff --git a/cloud/content/docs/v1/guides/more-advanced/generating-synthetic-data.mdx b/cloud/content/docs/v1/guides/more-advanced/generating-synthetic-data.mdx
new file mode 100644
index 0000000000..8212b14075
--- /dev/null
+++ b/cloud/content/docs/v1/guides/more-advanced/generating-synthetic-data.mdx
@@ -0,0 +1,283 @@
+---
+title: Generate Synthetic Data
+description: Learn techniques for using LLMs to generate realistic synthetic data in various formats including CSV, structured objects, and pandas DataFrames
+---
+
+# Generate Synthetic Data
+
+In this tutorial, we go over how to generate synthetic data with LLMs, in this case OpenAI’s `gpt-4o-mini`. LLMs are most useful for generating non-numerical data that isn’t strictly dependent on a defined probability distribution; for data that is, it is far easier to define the distribution and generate the points directly from it.
+
+However, for:
+
+- data that needs general intelligence to be realistic
+- data that lists many items within a broad category
+- data which is language related
+
+and more, LLMs are far easier to use and yield better (or the only feasible) results.
+
+
+
+
+
+
+Large Language Models (LLMs) have emerged as powerful tools for generating synthetic data, particularly for text-based applications. Compared to traditional synthetic data generation methods, LLMs can produce more diverse, contextually rich, and human-like textual data, often with less need for domain-specific rules or statistical modeling.
+
+
+
+## Setup
+
+To set up our environment, first let's install all of the packages we will use:
+
+
+```python
+!pip install "mirascope[openai]" pandas
+```
+
+
+```python
+import os
+
+os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"
+# Set the appropriate API key for the provider you're using
+```
+
+## Generate Data as CSV
+
+You can generate realistic synthetic data as CSV in a single call by requesting CSV format in the prompt and describing the kind of data you would like generated.
+
+
+```python
+from mirascope.core import openai, prompt_template
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+ """
+ Generate {num_datapoints} random but realistic datapoints of items which could
+ be in a home appliances warehouse. Output the datapoints as a csv, and your
+ response should only be the CSV.
+
+ Format:
+ Name, Price, Inventory
+
+ Name - the name of the home appliance product
+ Price - the price of an individual product, in dollars (include cents)
+ Inventory - how many are left in stock
+ """
+)
+def generate_csv_data(num_datapoints: int): ...
+
+
+print(generate_csv_data(5))
+```
+
+ Name, Price, Inventory
+ "4-Slice Toaster", 29.99, 150
+ "Stainless Steel Blender", 49.99, 75
+ "Robot Vacuum Cleaner", 199.99, 30
+ "Microwave Oven 1000W", 89.99, 50
+ "Electric Kettle", 24.99, 200
+
+
+
+Note that the price and inventory of each item are somewhat realistic for that item, something which would otherwise be difficult to accomplish.
+
+
+## Generate Data with `response_model`
+
+Sometimes, it will be easier to integrate your datapoints into your code if they are defined as some schema, namely a Pydantic `BaseModel`. In this case, describe each column as the `description` of a `Field` in the `BaseModel` instead of the prompt, and set `response_model` to your defined schema:
+
+
+
+```python
+from pydantic import BaseModel, Field
+
+
+class HomeAppliance(BaseModel):
+ name: str = Field(description="The name of the home appliance product")
+ price: float = Field(
+ description="The price of an individual product, in dollars (include cents)"
+ )
+ inventory: int = Field(description="How many of the items are left in stock")
+
+
+@openai.call(model="gpt-4o-mini", response_model=list[HomeAppliance])
+@prompt_template(
+ """
+ Generate {num_datapoints} random but realistic datapoints of items which could
+ be in a home appliances warehouse. Output the datapoints as a list of instances
+ of HomeAppliance.
+ """
+)
+def generate_home_appliance_data(num_datapoints: int): ...
+
+
+print(generate_home_appliance_data(5))
+```
+
+ [HomeAppliance(name='Refrigerator', price=899.99, inventory=25), HomeAppliance(name='Microwave', price=129.99, inventory=50), HomeAppliance(name='Washing Machine', price=499.99, inventory=15), HomeAppliance(name='Dishwasher', price=749.99, inventory=10), HomeAppliance(name='Air Conditioner', price=349.99, inventory=30)]
+
+
+
+## Generate Data into a pandas `DataFrame`
+
+Since pandas is a common library for working with data, it’s also worth knowing how to directly create and append to a dataframe with LLMs.
+
+### Create a New `DataFrame`
+
+To create a new `DataFrame`, we define a `BaseModel` schema with a simple function that generates a `DataFrame` from a list of lists of data and the column names:
+
+
+
+```python
+from typing import Any, Literal
+
+import pandas as pd
+
+
+class DataFrameGenerator(BaseModel):
+ data: list[list[Any]] = Field(
+ description="the data to be inserted into the dataframe"
+ )
+ column_names: list[str] = Field(description="The names of the columns in data")
+
+ def append_dataframe(self, df: pd.DataFrame) -> pd.DataFrame:
+ return pd.concat([df, self.generate_dataframe()], ignore_index=True)
+
+    def generate_dataframe(self) -> pd.DataFrame:
+        return pd.DataFrame(self.data, columns=self.column_names)
+
+
+@openai.call(model="gpt-4o-mini", response_model=DataFrameGenerator)
+@prompt_template(
+ """
+ Generate {num_datapoints} random but realistic datapoints of items which could
+ be in a home appliances warehouse. Generate your response as `data` and
+ `column_names`, so that a pandas DataFrame may be generated with:
+ `pd.DataFrame(data, columns=column_names)`.
+
+ Format:
+ Name, Price, Inventory
+
+ Name - the name of the home appliance product
+ Price - the price of an individual product, in dollars (include cents)
+ Inventory - how many are left in stock
+ """
+)
+def generate_df_data(num_datapoints: int): ...
+
+
+df_data = generate_df_data(5)
+df = df_data.generate_dataframe()
+print(df)
+```
+
+                 Name   Price  Inventory
+    0  Microwave Oven   79.99         25
+    1    Refrigerator  899.99         10
+    2         Blender   49.99         40
+
+### Appending to a `DataFrame`
+
+To append to a `DataFrame`, we can modify the prompt so that instead of describing the data we want to generate, we ask the LLM to match the type of data it already sees. We also add an `append_dataframe()` method to append to an existing `DataFrame`. Finally, note that we use the generated `df` from above as the `DataFrame` to append to in the following example:
+
+
+
+
+```python
+@openai.call(model="gpt-4o-mini", response_model=DataFrameGenerator)
+@prompt_template(
+ """
+ Generate {num_datapoints} random but realistic datapoints of items which would
+ make sense to the following dataset:
+ {df}
+ Generate your response as `data` and
+ `column_names`, so that a pandas DataFrame may be generated with:
+ `pd.DataFrame(data, columns=column_names)` then appended to the existing data.
+ """
+)
+def generate_additional_df_data(num_datapoints: int, df: pd.DataFrame): ...
+
+
+df_data = generate_additional_df_data(5, df)
+df = df_data.append_dataframe(df)
+print(df)
+```
+
+                 Name   Price  Inventory
+    0  Microwave Oven   79.99         25
+    1    Refrigerator  899.99         10
+    2         Blender   49.99         40
+    3         Toaster   29.99        150
+    4     Slow Cooker   49.99         80
+    5    Coffee Maker   39.99        200
+
+
+
+## Adding Constraints
+
+While you cannot successfully add complex mathematical constraints to generated data (think statistics, such as distributions and covariances), asking LLMs to abide by basic constraints will (generally) prove successful, especially with newer models. Let’s look at an example where we generate TVs whose prices roughly linearly correlate with their size, and where QLEDs are roughly twice as expensive as OLEDs of the same size:
+
+
+
+
+```python
+class TV(BaseModel):
+ size: int = Field(description="The size of the TV")
+ price: float = Field(description="The price of the TV in dollars (include cents)")
+ tv_type: Literal["OLED", "QLED"]
+
+
+@openai.call(model="gpt-4o-mini", response_model=list[TV])
+@prompt_template(
+ """
+ Generate {num_datapoints} random but realistic datapoints of TVs.
+ Output the datapoints as a list of instances of TV.
+
+ Make sure to abide by the following constraints:
+ QLEDS should be roughly (not exactly) 2x the price of an OLED of the same size
+ for both OLEDs and QLEDS, price should increase roughly proportionately to size
+ """
+)
+def generate_tv_data(num_datapoints: int): ...
+
+
+for tv in generate_tv_data(10):
+ print(tv)
+```
+
+ size=32 price=299.99 tv_type='OLED'
+ size=32 price=549.99 tv_type='QLED'
+ size=43 price=399.99 tv_type='OLED'
+ size=43 price=749.99 tv_type='QLED'
+ size=55 price=699.99 tv_type='OLED'
+ size=55 price=1399.99 tv_type='QLED'
+ size=65 price=999.99 tv_type='OLED'
+ size=65 price=1999.99 tv_type='QLED'
+ size=75 price=1299.99 tv_type='OLED'
+ size=75 price=2499.99 tv_type='QLED'
+
+
+To verify that the constraints are being followed, you can graph the data using matplotlib, which shows the linear relationship between size and price, with QLEDs costing roughly twice as much as OLEDs of the same size.
+
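+A minimal sketch of such a plot, assuming matplotlib is installed and reusing `generate_tv_data` from above:
+
+
+```python
+import matplotlib.pyplot as plt
+
+tvs = generate_tv_data(10)
+
+# Plot price against size, one color per panel type
+for tv_type, color in [("OLED", "blue"), ("QLED", "red")]:
+    sizes = [tv.size for tv in tvs if tv.tv_type == tv_type]
+    prices = [tv.price for tv in tvs if tv.tv_type == tv_type]
+    plt.scatter(sizes, prices, color=color, label=tv_type)
+
+plt.xlabel("Size (inches)")
+plt.ylabel("Price ($)")
+plt.legend()
+plt.title("TV Price vs. Size by Panel Type")
+plt.show()
+```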
+
+
+
+
+- Healthcare and Medical Research: Generating synthetic patient records for training machine learning models without compromising patient privacy
+- Environmental Science: Generating synthetic climate data for modeling long-term environmental changes
+- Fraud Detection Systems: Generating synthetic data of fraudulent and legitimate transactions for training fraud detection models.
+
+
+
+When adapting this recipe to your specific use-case, consider the following:
+
+- Add Pydantic `AfterValidator`s to constrain your synthetic data generation (see the sketch below).
+- Verify that the synthetic data generated actually matches real-world data.
+- Make sure no biases are present in the generated data; this can be prompt engineered.
+- Experiment with different model providers and versions for quality.
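+
+As an example of the first point, here is a minimal sketch of constraining generated prices with an `AfterValidator`; the price bounds and the `ValidatedHomeAppliance` name are illustrative assumptions, not part of the recipe above.
+
+
+```python
+from typing import Annotated
+
+from pydantic import AfterValidator, BaseModel, Field
+
+
+def check_price_range(price: float) -> float:
+    # Reject obviously unrealistic prices for a home appliance
+    if not 5.0 <= price <= 10000.0:
+        raise ValueError(f"Price {price} is outside the realistic range")
+    return price
+
+
+class ValidatedHomeAppliance(BaseModel):
+    name: str = Field(description="The name of the home appliance product")
+    price: Annotated[float, AfterValidator(check_price_range)] = Field(
+        description="The price of an individual product, in dollars (include cents)"
+    )
+    inventory: int = Field(description="How many of the items are left in stock")
+```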
+
diff --git a/cloud/content/docs/v1/guides/more-advanced/knowledge-graph.mdx b/cloud/content/docs/v1/guides/more-advanced/knowledge-graph.mdx
new file mode 100644
index 0000000000..ef8432feb4
--- /dev/null
+++ b/cloud/content/docs/v1/guides/more-advanced/knowledge-graph.mdx
@@ -0,0 +1,255 @@
+---
+title: Knowledge Graph
+description: Build knowledge graphs from unstructured text. This guide demonstrates how to extract entities and relationships to create structured knowledge representations.
+---
+
+# Knowledge Graph
+
+Oftentimes, data is messy and not stored in a structured manner ready for use by an LLM. In this recipe, we show how to create a knowledge graph from an unstructured document using common Python libraries and Mirascope with OpenAI's `gpt-4o-mini`.
+
+
+
+
+
+
+While traditional Natural Language Processing (NLP) techniques have long been used in knowledge graphs to identify entities and relationships in unstructured text, Large Language Models (LLMs) have significantly improved this process. LLMs enhance the accuracy of entity identification and linking to knowledge graph entries, demonstrating superior ability to handle context and ambiguity compared to conventional NLP methods.
+
+
+## Setup
+
+To set up our environment, first let's install all of the packages we will use:
+
+
+```python
+!pip install "mirascope[openai]"
+# (Optional) For visualization
+!pip install matplotlib networkx
+# (Optional) For parsing HTML
+!pip install beautifulsoup4
+```
+
+
+```python
+import os
+
+os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"
+# Set the appropriate API key for the provider you're using
+```
+
+
+## Create the `KnowledgeGraph`
+
+The first step is to create a `KnowledgeGraph` with `Nodes` and `Edges` that represent our entities and relationships. For our simple recipe, we will use a Pydantic `BaseModel` to represent our `KnowledgeGraph`:
+
+
+
+
+```python
+from pydantic import BaseModel, Field
+
+
+class Edge(BaseModel):
+ source: str = Field(..., description="The source node of the edge")
+ target: str = Field(..., description="The target node of the edge")
+ relationship: str = Field(
+ ..., description="The relationship between the source and target nodes"
+ )
+
+
+class Node(BaseModel):
+ id: str = Field(..., description="The unique identifier of the node")
+ type: str = Field(..., description="The type or label of the node")
+ properties: dict | None = Field(
+ ..., description="Additional properties and metadata associated with the node"
+ )
+
+
+class KnowledgeGraph(BaseModel):
+ nodes: list[Node] = Field(..., description="List of nodes in the knowledge graph")
+ edges: list[Edge] = Field(..., description="List of edges in the knowledge graph")
+```
+
+
+Our `Edge` represents connections between nodes, with attributes for the source node, target node, and the relationship between them, while our `Node` defines nodes with an ID, type, and optional properties. Our `KnowledgeGraph` then aggregates these nodes and edges into a comprehensive knowledge graph.
+
+Now that we have our schema defined, it's time to create our knowledge graph.
+
+## Creating the knowledge graph
+
+We start off by engineering our prompt, prompting the LLM to create a knowledge graph based on the user query. Then we take a [Wikipedia](https://en.wikipedia.org/wiki/Large_language_model) article and convert the raw text into a structured knowledge graph.
+The command below will download the article to your local machine using `curl`. If you don't have `curl` installed, you can download the article manually from the link above and save it as `wikipedia.html`.
+
+
+```python
+!curl https://en.wikipedia.org/wiki/Large_language_model -o wikipedia.html
+```
+
+
+
+```python
+from bs4 import BeautifulSoup
+from mirascope.core import openai, prompt_template
+
+
+def get_text_from_html(file_path: str) -> str:
+ with open(file_path) as file:
+ html_text = file.read()
+ return BeautifulSoup(html_text, "html.parser").get_text()
+
+
+@openai.call(model="gpt-4o-mini", response_model=KnowledgeGraph)
+@prompt_template(
+ """
+ SYSTEM:
+ Your job is to create a knowledge graph based on the text and user question.
+
+ The article:
+ {text}
+
+ Example:
+ John and Jane Doe are siblings. Jane is 25 and 5 years younger than John.
+ Node(id="John Doe", type="Person", properties={{"age": 30}})
+ Node(id="Jane Doe", type="Person", properties={{"age": 25}})
+ Edge(source="John Doe", target="Jane Doe", relationship="Siblings")
+
+ USER:
+ {question}
+ """
+)
+def generate_knowledge_graph(
+ question: str, file_name: str
+) -> openai.OpenAIDynamicConfig:
+ text = get_text_from_html(file_name)
+ return {"computed_fields": {"text": text}}
+
+
+question = "What are the pitfalls of using LLMs?"
+
+kg = generate_knowledge_graph(question, "wikipedia.html")
+print(kg)
+```
+
+```
+nodes=[Node(id='Large Language Models', type='Large Language Model', properties=None), Node(id='Data Cleaning Issues', type='Pitfall', properties=None), Node(id='Bias Inheritance', type='Pitfall', properties=None), Node(id='Hallucinations', type='Pitfall', properties=None), Node(id='Limited Understanding', type='Pitfall', properties=None), Node(id='Dependence on Training Data', type='Pitfall', properties=None), Node(id='Security Risks', type='Pitfall', properties=None), Node(id='Stereotyping', type='Pitfall', properties=None), Node(id='Political Bias', type='Pitfall', properties=None)] edges=[Edge(source='Large Language Models', target='Data Cleaning Issues', relationship='has pitfall'), Edge(source='Large Language Models', target='Bias Inheritance', relationship='has pitfall'), Edge(source='Large Language Models', target='Hallucinations', relationship='has pitfall'), Edge(source='Large Language Models', target='Limited Understanding', relationship='has pitfall'), Edge(source='Large Language Models', target='Dependence on Training Data', relationship='has pitfall'), Edge(source='Large Language Models', target='Security Risks', relationship='has pitfall'), Edge(source='Large Language Models', target='Stereotyping', relationship='has pitfall'), Edge(source='Large Language Models', target='Political Bias', relationship='has pitfall')]
+```
+
+We engineer our prompt by giving examples of how the properties should be filled out, and we use Mirascope's `DynamicConfig` to pass in the article. While it seems silly in this context, there may be multiple documents that you want to conditionally pass in depending on the query, such as text chunks from a vector store or data from a database.
+
+After we have generated our knowledge graph, it is time to create our `run` function:
+
+
+
+```python
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+ """
+ SYSTEM:
+ Answer the following question based on the knowledge graph.
+
+ Knowledge Graph:
+ {knowledge_graph}
+
+ USER:
+ {question}
+ """
+)
+def run(question: str, knowledge_graph: KnowledgeGraph): ...
+```
+
+We define a simple `run` function that answers the user's query based on the knowledge graph. Combining knowledge graphs with semantic search will give the LLM better context for addressing complex questions.
+
+
+
+```python
+print(run(question, kg))
+```
+
+ The pitfalls of using Large Language Models (LLMs) include:
+
+ 1. Data Cleaning Issues
+ 2. Bias Inheritance
+ 3. Hallucinations
+ 4. Limited Understanding
+ 5. Dependence on Training Data
+ 6. Security Risks
+ 7. Stereotyping
+ 8. Political Bias
+
+
+## Render your graph
+
+Optionally, to visualize the knowledge graph, we use networkx and matplotlib to draw the edges and nodes.
+
+
+
+
+```python
+import matplotlib.pyplot as plt
+import networkx as nx
+
+
+def render_graph(kg: KnowledgeGraph):
+ G = nx.DiGraph()
+
+ for node in kg.nodes:
+ G.add_node(node.id, label=node.type, **(node.properties or {}))
+
+ for edge in kg.edges:
+ G.add_edge(edge.source, edge.target, label=edge.relationship)
+
+ plt.figure(figsize=(15, 10))
+ pos = nx.spring_layout(G)
+
+ nx.draw_networkx_nodes(G, pos, node_size=2000, node_color="lightblue")
+ nx.draw_networkx_edges(G, pos, arrowstyle="->", arrowsize=20)
+ nx.draw_networkx_labels(G, pos, font_size=12, font_weight="bold")
+
+ edge_labels = nx.get_edge_attributes(G, "label")
+ nx.draw_networkx_edge_labels(G, pos, edge_labels=edge_labels, font_color="red")
+
+ plt.title("Knowledge Graph Visualization", fontsize=15)
+ plt.show()
+
+
+question = "What are the pitfalls of using LLMs?"
+render_graph(kg)
+```
+
+```
+Matplotlib is building the font cache; this may take a moment.
+```
+
+
+
+
+
+
+
+
+
+Enhance your Q&A:
+A customer support system can use a knowledge graph containing product information to answer questions.
+Example: "Does the Mirascope phone support fast charging?" The knowledge graph has a node "Mirascope smartphone" and searches its "support" edges to find fast charging, returning results for the LLM to use.
+
+Supply Chain Optimization:
+A knowledge graph could represent complex relationships between suppliers, manufacturing plants, distribution centers, products, and transportation routes.
+Example: "How would a 20% increase in demand for a mirascope affect our inventory needs and shipping costs?" Use the knowledge graph to trace the mirascope toy, calculate inventory, and then estimate shipping costs.
+
+Healthcare Assistant:
+Assuming no PII or HIPAA violations, build a knowledge graph from patient remarks.
+Example: "Mary said help, I've fallen." Build up a knowledge graph from comments and use an LLM to scan the node "Mary" for any worrying activity, alerting healthcare employees that there may be an emergency.
+
+
+When adapting this recipe, consider:
+
+- Combining the knowledge graph with text embeddings for both structured search and semantic search, depending on your requirements.
+- Storing your knowledge graph in a database or cache for faster retrieval.
+- Experimenting with different LLM models; some may be better than others at generating the knowledge graph.
+- Turning the example into an agentic workflow with access to tools such as web search, so the LLM can call tools to update its own knowledge graph and answer any question.
+- Adding Pydantic `AfterValidator`s to prevent duplicate node IDs (see the sketch below).
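+
+A minimal sketch of such a validator, reusing the `Node`, `Edge`, `BaseModel`, and `Field` definitions from above; the `ValidatedKnowledgeGraph` name is illustrative:
+
+
+```python
+from typing import Annotated
+
+from pydantic import AfterValidator
+
+
+def ensure_unique_node_ids(nodes: list[Node]) -> list[Node]:
+    # Reject graphs where two nodes share the same ID
+    ids = [node.id for node in nodes]
+    if len(ids) != len(set(ids)):
+        raise ValueError("Duplicate node IDs found in the knowledge graph")
+    return nodes
+
+
+class ValidatedKnowledgeGraph(BaseModel):
+    nodes: Annotated[list[Node], AfterValidator(ensure_unique_node_ids)] = Field(
+        ..., description="List of nodes in the knowledge graph"
+    )
+    edges: list[Edge] = Field(..., description="List of edges in the knowledge graph")
+```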
+
+
diff --git a/cloud/content/docs/v1/guides/more-advanced/llm-validation-with-retries.mdx b/cloud/content/docs/v1/guides/more-advanced/llm-validation-with-retries.mdx
new file mode 100644
index 0000000000..1acc2ea372
--- /dev/null
+++ b/cloud/content/docs/v1/guides/more-advanced/llm-validation-with-retries.mdx
@@ -0,0 +1,227 @@
+---
+title: LLM Validation With Retries
+description: Use LLMs for complex validation tasks and automatically reinsert validation errors into subsequent calls to improve results with Mirascope's Tenacity integration.
+---
+
+# LLM Validation With Retries
+
+This recipe demonstrates how to leverage Large Language Models (LLMs) -- specifically Anthropic's Claude 3.5 Sonnet -- to perform automated validation on any value. We'll cover how to use **LLMs for complex validation tasks**, how to integrate this with Pydantic's validation system, and how to leverage [Tenacity](https://tenacity.readthedocs.io/en/latest/) to automatically **reinsert validation errors** back into an LLM call to **improve results**.
+
+
+
+
+
+
+While traditional validation tools like type checkers or Pydantic are limited to hardcoded rules (such as variable types or arithmetic), LLMs allow for much more nuanced and complex validation. This approach can be particularly useful for validating natural language inputs or complex data structures where traditional rule-based validation falls short.
+
+
+## Setup
+
+Let's start by installing Mirascope and its dependencies:
+
+
+```python
+!pip install "mirascope[anthropic,tenacity]"
+```
+
+
+```python
+import os
+
+os.environ["ANTHROPIC_API_KEY"] = "YOUR_API_KEY"
+# Set the appropriate API key for the provider you're using
+```
+
+## Basic LLM Validation
+
+Let's start with a simple example of using an LLM to check for spelling and grammatical errors in a text snippet:
+
+
+```python
+from mirascope.core import anthropic, prompt_template
+from pydantic import BaseModel, Field
+
+
+class SpellingAndGrammarCheck(BaseModel):
+ has_errors: bool = Field(
+ description="Whether the text has typos or grammatical errors"
+ )
+
+
+@anthropic.call(
+ model="claude-3-5-sonnet-20240620",
+ response_model=SpellingAndGrammarCheck,
+ json_mode=True,
+)
+@prompt_template(
+ """
+ Does the following text have any typos or grammatical errors? {text}
+ """
+)
+def check_for_errors(text: str): ...
+
+
+text = "Yestday I had a gret time!"
+response = check_for_errors(text)
+assert response.has_errors
+```
+
+
+## Pydantic's AfterValidator
+
+We can use Pydantic's [`AfterValidator`](https://docs.pydantic.dev/latest/api/functional_validators/#pydantic.functional_validators.AfterValidator) to integrate our LLM-based validation directly into a Pydantic model:
+
+
+
+```python
+from typing import Annotated
+
+from mirascope.core import anthropic, prompt_template
+from pydantic import AfterValidator, BaseModel, ValidationError
+
+
+@anthropic.call(
+ model="claude-3-5-sonnet-20240620",
+ response_model=SpellingAndGrammarCheck,
+ json_mode=True,
+)
+@prompt_template(
+ """
+ Does the following text have any typos or grammatical errors? {text}
+ """
+)
+def check_for_errors(text: str): ...
+
+
+class TextWithoutErrors(BaseModel):
+ text: Annotated[
+ str,
+ AfterValidator(
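+            # raise a ValueError from inside the lambda by throwing it on an empty generator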
+ lambda t: t
+ if not (check_for_errors(t)).has_errors
+ else (_ for _ in ()).throw(ValueError("Text contains errors"))
+ ),
+ ]
+
+
+valid = TextWithoutErrors(text="This is a perfectly written sentence.")
+
+try:
+ invalid = TextWithoutErrors(
+ text="I walkd to supermarket and i picked up some fish?"
+ )
+except ValidationError as e:
+ print(f"Validation error: {e}")
+```
+
+ Validation error: 1 validation error for TextWithoutErrors
+ text
+ Value error, Text contains errors [type=value_error, input_value='I walkd to supermarket a... i picked up some fish?', input_type=str]
+ For further information visit https://errors.pydantic.dev/2.8/v/value_error
+
+
+
+## Reinsert Validation Errors For Improved Performance
+
+One powerful technique for enhancing LLM generations is to automatically reinsert validation errors into subsequent calls. This approach allows the LLM to learn from its previous mistakes as few-shot examples and improve its output in real time. We can achieve this using Mirascope's integration with Tenacity, which collects `ValidationError` messages for easy insertion into the prompt.
+
+
+
+```python
+from typing import Annotated
+
+from mirascope.core import anthropic, prompt_template
+from mirascope.integrations.tenacity import collect_errors
+from pydantic import AfterValidator, BaseModel, Field, ValidationError
+from tenacity import retry, stop_after_attempt
+
+
+class SpellingAndGrammarCheck(BaseModel):
+ has_errors: bool = Field(
+ description="Whether the text has typos or grammatical errors"
+ )
+
+
+@anthropic.call(
+ model="claude-3-5-sonnet-20240620",
+ response_model=SpellingAndGrammarCheck,
+ json_mode=True,
+)
+@prompt_template(
+ """
+ Does the following text have any typos or grammatical errors? {text}
+ """
+)
+def check_for_errors(text: str): ...
+
+
+class GrammarCheck(BaseModel):
+ text: Annotated[
+ str,
+ AfterValidator(
+ lambda t: t
+ if not (check_for_errors(t)).has_errors
+ else (_ for _ in ()).throw(ValueError("Text still contains errors"))
+ ),
+ ] = Field(description="The corrected text with proper grammar")
+ explanation: str = Field(description="Explanation of the corrections made")
+
+
+@retry(stop=stop_after_attempt(3), after=collect_errors(ValidationError))
+@anthropic.call(
+ "claude-3-5-sonnet-20240620", response_model=GrammarCheck, json_mode=True
+)
+@prompt_template(
+ """
+ {previous_errors}
+
+ Correct the grammar in the following text.
+ If no corrections are needed, return the original text.
+ Provide an explanation of any corrections made.
+
+ Text: {text}
+ """
+)
+def correct_grammar(
+ text: str, *, errors: list[ValidationError] | None = None
+) -> anthropic.AnthropicDynamicConfig:
+ previous_errors = f"Previous Errors: {errors}" if errors else "No previous errors."
+ return {"computed_fields": {"previous_errors": previous_errors}}
+
+
+try:
+ text = "I has went to the store yesterday and buyed some milk."
+ result = correct_grammar(text)
+ print(f"Corrected text: {result.text}")
+ print(f"Explanation: {result.explanation}")
+except ValidationError:
+ print("Failed to correct grammar after 3 attempts")
+```
+
+ Corrected text: I went to the store yesterday and bought some milk.
+ Explanation: The sentence has been corrected as follows: 'has went' was changed to 'went' (simple past tense of 'go'), and 'buyed' was changed to 'bought' (past tense of 'buy'). The subject-verb agreement was also fixed by changing 'I has' to 'I'.
+
+
+
+
+- Code Review: Validate code snippets for best practices and potential bugs.
+- Data Quality Checks: Validate complex data structures for consistency and completeness.
+- Legal Document Validation: Check legal documents for compliance with specific regulations.
+- Medical Record Validation: Ensure medical records are complete and consistent.
+- Financial Report Validation: Verify financial reports for accuracy and compliance with accounting standards.
+
+
+
+
+When adapting this recipe to your specific use-case, consider the following:
+
+- Tailor the prompts to provide clear instructions and relevant context for your specific validation tasks.
+- Balance the trade-off between validation accuracy and performance, especially when implementing retries.
+- Implement proper error handling and logging for production use.
+- Consider caching validation results for frequently validated items to improve performance.
+
diff --git a/cloud/content/docs/v1/guides/more-advanced/named-entity-recognition.mdx b/cloud/content/docs/v1/guides/more-advanced/named-entity-recognition.mdx
new file mode 100644
index 0000000000..7e046ba61e
--- /dev/null
+++ b/cloud/content/docs/v1/guides/more-advanced/named-entity-recognition.mdx
@@ -0,0 +1,371 @@
+---
+title: Named Entity Recognition
+description: Implement Named Entity Recognition (NER) using LLMs with various levels of nested entity recognition to identify and classify named entities in text.
+---
+
+# Named Entity Recognition
+
+This guide demonstrates techniques to perform Named Entity Recognition (NER) using Large Language Models (LLMs) with various levels of nested entity recognition. We'll use Groq's llama-3.1-8b-instant model, but you can adapt this approach to other models with similar capabilities.
+
+
+
+
+
+
+Named Entity Recognition is a subtask of information extraction that seeks to locate and classify named entities in text into predefined categories such as person names, organizations, locations, etc. LLMs have revolutionized NER by enabling more context-aware and hierarchical entity recognition, going beyond traditional rule-based or statistical methods.
+
+
+
+It's worth noting that there are models trained specifically for NER (such as GLiNER). These models are often much smaller and cheaper and can often get better results for the right tasks. LLMs should generally be reserved for quick-and-dirty NER prototyping or for tasks that may require a more nuanced, open-ended, language-based approach. For example, an NER system that accepts user input to guide the system may be easier to build using LLMs than with a traditionally trained NER-specific model.
+
+
+## Setup
+
+To set up our environment, first let's install all of the packages we will use:
+
+
+```python
+!pip install "mirascope[groq]" pytest
+!pip install ipytest # For running pytest in Jupyter Notebooks
+```
+
+
+```python
+import os
+
+os.environ["GROQ_API_KEY"] = "YOUR_API_KEY"
+# Set the appropriate API key for the provider you're using
+```
+
+## Simple NER
+
+We'll implement NER with different levels of complexity: simple and nested entity recognition. Let's start with the simple version:
+
+
+
+```python
+from __future__ import annotations # noqa: F404
+
+import textwrap
+
+from mirascope.core import groq, prompt_template
+from pydantic import BaseModel, Field
+
+unstructured_text = """
+Apple Inc., the tech giant founded by Steve Jobs and Steve Wozniak, recently announced a partnership with OpenAI, the artificial intelligence research laboratory consisting of the for-profit corporation OpenAI LP and its parent company, the non-profit OpenAI Inc. This collaboration aims to enhance Siri, Apple's virtual assistant, which competes with Amazon's Alexa and Google Assistant, a product of Alphabet Inc.'s Google division. The joint project will be led by Apple's AI chief John Giannandrea, a former Google executive, and will take place at Apple Park, the company's headquarters in Cupertino, California.
+"""
+
+
+class SimpleEntity(BaseModel):
+ entity: str = Field(description="The entity found in the text")
+ label: str = Field(
+ description="The label of the entity (e.g., PERSON, ORGANIZATION, LOCATION)"
+ )
+
+
+@groq.call(
+ model="llama-3.1-8b-instant",
+ response_model=list[SimpleEntity],
+ json_mode=True,
+ call_params={"temperature": 0.0},
+)
+def simple_ner(text: str) -> str:
+ return f"Extract the entities from this text: {text}"
+
+
+print("Simple NER Results:")
+simple_result = simple_ner(unstructured_text)
+for entity in simple_result:
+ print(f"Entity: {entity.entity}, Label: {entity.label}")
+```
+
+ Simple NER Results:
+ Entity: Apple Inc., Label: ORGANIZATION
+ Entity: Steve Jobs, Label: PERSON
+ Entity: Steve Wozniak, Label: PERSON
+ Entity: OpenAI, Label: ORGANIZATION
+ Entity: OpenAI LP, Label: ORGANIZATION
+ Entity: OpenAI Inc., Label: ORGANIZATION
+ Entity: Amazon, Label: ORGANIZATION
+ Entity: Google, Label: ORGANIZATION
+ Entity: Alphabet Inc., Label: ORGANIZATION
+ Entity: John Giannandrea, Label: PERSON
+ Entity: Apple Park, Label: LOCATION
+ Entity: Cupertino, Label: LOCATION
+ Entity: California, Label: LOCATION
+
+
+
+In this example, we're extracting entities that have just the entity's text and label. However, entities often have relationships that are worth extracting and understanding.
+
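+Before moving on to nested entities, note that one advantage of LLM-based NER (as mentioned in the introduction) is that you can steer extraction with plain-language input. Here is a minimal sketch of that idea, reusing `SimpleEntity` from above; the `guided_ner` function and its prompt are illustrative, not part of the recipe:
+
+
+```python
+@groq.call(
+    model="llama-3.1-8b-instant",
+    response_model=list[SimpleEntity],
+    json_mode=True,
+    call_params={"temperature": 0.0},
+)
+def guided_ner(text: str, entity_types: list[str]) -> str:
+    # Let the caller restrict which entity types should be extracted
+    return (
+        f"Extract only entities of these types: {', '.join(entity_types)}.\n\n"
+        f"Text: {text}"
+    )
+
+
+# print(guided_ner(unstructured_text, ["PERSON"]))
+```
+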
+## Nested NER
+
+Now, let's implement a more sophisticated version that can handle nested entities:
+
+
+
+```python
+class NestedEntity(BaseModel):
+ entity: str = Field(description="The entity found in the text")
+ label: str = Field(
+ description="The label of the entity (e.g., PERSON, ORGANIZATION, LOCATION)"
+ )
+ parent: str | None = Field(
+ description="The parent entity if this entity is nested within another entity",
+ default=None,
+ )
+ children: list[NestedEntity] = Field(
+ default_factory=list, description="Nested entities within this entity"
+ )
+
+
+@groq.call(
+ model="llama-3.1-8b-instant",
+ response_model=list[NestedEntity],
+ json_mode=True,
+ call_params={"temperature": 0.0},
+)
+@prompt_template(
+ """
+ Identify all named entities in the following text, including deeply nested entities.
+ For each entity, provide its label and any nested entities within it.
+
+ Guidelines:
+ 1. Identify entities of types PERSON, ORGANIZATION, LOCATION, and any other relevant types.
+ 2. Capture hierarchical relationships between entities.
+ 3. Include all relevant information, even if it requires deep nesting.
+ 4. Be thorough and consider all possible entities and their relationships.
+
+ Example:
+ Text: "John Smith, the CEO of Tech Innovations Inc., a subsidiary of Global Corp, announced a new product at their headquarters in Silicon Valley."
+ Entities:
+ - Entity: "John Smith", Label: "PERSON", Parent: None
+ Children:
+ - Entity: "Tech Innovations Inc.", Label: "ORGANIZATION", Parent: "John Smith"
+ Children:
+ - Entity: "Global Corp", Label: "ORGANIZATION", Parent: "Tech Innovations Inc."
+ - Entity: "Silicon Valley", Label: "LOCATION", Parent: None
+
+ Now, analyze the following text: {text}
+ """
+)
+def nested_ner(text: str): ...
+
+
+print("\nNested NER Results:")
+improved_result = nested_ner(unstructured_text)
+
+
+def print_nested_entities(entities, level=0):
+ for entity in entities:
+ indent = " " * level
+ entity_info = (
+ f"Entity: {entity.entity}, Label: {entity.label}, Parent: {entity.parent}"
+ )
+ print(textwrap.indent(entity_info, indent))
+ if entity.children:
+ print_nested_entities(entity.children, level + 1)
+
+
+print_nested_entities(improved_result)
+```
+
+
+ Nested NER Results:
+ Entity: Steve Jobs, Label: PERSON, Parent: None
+ Entity: Apple Inc., Label: ORGANIZATION, Parent: Steve Jobs
+ Entity: Steve Wozniak, Label: PERSON, Parent: Apple Inc.
+ Entity: Apple Park, Label: LOCATION, Parent: Apple Inc.
+ Entity: Cupertino, Label: LOCATION, Parent: Apple Park
+ Entity: California, Label: LOCATION, Parent: Cupertino
+ Entity: Steve Wozniak, Label: PERSON, Parent: None
+ Entity: Apple Inc., Label: ORGANIZATION, Parent: Steve Wozniak
+ Entity: Apple Inc., Label: ORGANIZATION, Parent: None
+ Entity: John Giannandrea, Label: PERSON, Parent: Apple Inc.
+ Entity: Apple Park, Label: LOCATION, Parent: Apple Inc.
+ Entity: Cupertino, Label: LOCATION, Parent: Apple Park
+ Entity: California, Label: LOCATION, Parent: Cupertino
+ Entity: OpenAI, Label: ORGANIZATION, Parent: Apple Inc.
+ Entity: OpenAI LP, Label: ORGANIZATION, Parent: OpenAI
+ Entity: OpenAI Inc., Label: ORGANIZATION, Parent: OpenAI
+ Entity: John Giannandrea, Label: PERSON, Parent: None
+ Entity: Apple Inc., Label: ORGANIZATION, Parent: John Giannandrea
+ Entity: Apple Park, Label: LOCATION, Parent: None
+ Entity: Cupertino, Label: LOCATION, Parent: Apple Park
+ Entity: California, Label: LOCATION, Parent: Cupertino
+ Entity: Cupertino, Label: LOCATION, Parent: None
+ Entity: California, Label: LOCATION, Parent: Cupertino
+ Entity: California, Label: LOCATION, Parent: None
+ Entity: OpenAI, Label: ORGANIZATION, Parent: None
+ Entity: OpenAI LP, Label: ORGANIZATION, Parent: OpenAI
+ Entity: OpenAI Inc., Label: ORGANIZATION, Parent: OpenAI
+ Entity: OpenAI LP, Label: ORGANIZATION, Parent: None
+ Entity: OpenAI Inc., Label: ORGANIZATION, Parent: None
+ Entity: Amazon, Label: ORGANIZATION, Parent: None
+ Entity: Alexa, Label: PRODUCT, Parent: Amazon
+ Entity: Alexa, Label: PRODUCT, Parent: None
+ Entity: Google, Label: ORGANIZATION, Parent: None
+ Entity: Google Assistant, Label: PRODUCT, Parent: Google
+ Entity: Alphabet Inc., Label: ORGANIZATION, Parent: Google
+ Entity: Google Assistant, Label: PRODUCT, Parent: None
+ Entity: Alphabet Inc., Label: ORGANIZATION, Parent: None
+
+
+
+## Testing
+
+To ensure robustness, it's crucial to test the NER system with diverse scenarios. Here's a function to run multiple test cases:
+
+
+
+```python
+import ipytest # noqa: E402
+import pytest # noqa: E402
+
+ipytest.autoconfig()
+
+
+test_cases = [
+ (
+ """
+ The multinational conglomerate Alphabet Inc., parent company of Google, has acquired
+ DeepMind, a leading AI research laboratory based in London. DeepMind's founder,
+ Demis Hassabis, will join Google Brain, a division of Google AI, as Chief AI Scientist.
+ This move strengthens Alphabet's position in the AI field, challenging competitors like
+ OpenAI, which is backed by Microsoft, and Facebook AI Research, a part of Meta Platforms Inc.
+ """,
+ [
+ NestedEntity(
+ entity="Alphabet Inc.",
+ label="ORGANIZATION",
+ parent=None,
+ children=[
+ NestedEntity(
+ entity="Google",
+ label="ORGANIZATION",
+ parent="Alphabet Inc.",
+ children=[
+ NestedEntity(
+ entity="Google Brain",
+ label="ORGANIZATION",
+ parent="Google",
+ children=[],
+ ),
+ NestedEntity(
+ entity="Google AI",
+ label="ORGANIZATION",
+ parent="Google",
+ children=[
+ NestedEntity(
+ entity="Google Brain",
+ label="ORGANIZATION",
+ parent="Google AI",
+ children=[],
+ )
+ ],
+ ),
+ ],
+ ),
+ NestedEntity(
+ entity="DeepMind",
+ label="ORGANIZATION",
+ parent="Alphabet Inc.",
+ children=[
+ NestedEntity(
+ entity="Demis Hassabis",
+ label="PERSON",
+ parent="DeepMind",
+ children=[],
+ )
+ ],
+ ),
+ ],
+ ),
+ NestedEntity(entity="London", label="LOCATION", parent=None, children=[]),
+ NestedEntity(
+ entity="Demis Hassabis", label="PERSON", parent=None, children=[]
+ ),
+ NestedEntity(
+ entity="OpenAI",
+ label="ORGANIZATION",
+ parent=None,
+ children=[
+ NestedEntity(
+ entity="Microsoft",
+ label="ORGANIZATION",
+ parent="OpenAI",
+ children=[],
+ )
+ ],
+ ),
+ NestedEntity(
+ entity="Facebook AI Research",
+ label="ORGANIZATION",
+ parent=None,
+ children=[
+ NestedEntity(
+ entity="Meta Platforms Inc.",
+ label="ORGANIZATION",
+ parent="Facebook AI Research",
+ children=[],
+ )
+ ],
+ ),
+ NestedEntity(
+ entity="Meta Platforms Inc.",
+ label="ORGANIZATION",
+ parent=None,
+ children=[],
+ ),
+ NestedEntity(
+ entity="Microsoft", label="ORGANIZATION", parent=None, children=[]
+ ),
+ ],
+ ),
+]
+
+
+@pytest.mark.parametrize("text,expected_output", test_cases)
+def test_nested_ner(text: str, expected_output: list[NestedEntity]):
+ output = nested_ner(text)
+ assert len(output) == len(expected_output)
+ for entity, expected_entity in zip(output, expected_output, strict=False):
+ assert entity.model_dump() == expected_entity.model_dump()
+
+
+ipytest.run() # Run the tests in Jupyter Notebook
+```
+
+
+It's important to heavily test any system before you put it into practice. The above example demonstrates how to test such a method (`nested_ner` in this case), but it only shows a single input/output pair for brevity.
+
+We strongly encourage you to write far more robust tests in your applications with many more test cases. This is why our example uses `@pytest.mark.parametrize` to easily include additional test cases.
+
+## Further Improvements
+
+This Named Entity Recognition (NER) system leverages the power of LLMs to perform context-aware, hierarchical entity extraction with various levels of nesting. It can identify complex relationships between entities, making it suitable for a wide range of applications.
+
+
+
+- Information Extraction: Extracting structured information from unstructured text data.
+- Question Answering: Identifying entities relevant to a given question.
+- Document Summarization: Summarizing documents by extracting key entities and relationships.
+- Sentiment Analysis: Analyzing sentiment towards specific entities or topics.
+
+
+
+When adapting this recipe to your specific use-case, consider the following:
+
+- Prompt customization to guide the model towards specific entity types or relationships.
+- Fine-tuning the model on domain-specific data for better accuracy in particular fields.
+- Implementing a confidence score for each identified entity.
+- Integrating with a knowledge base to enhance entity disambiguation.
+- Developing a post-processing step to refine and validate the LLM's output.
+- Exploring ways to optimize performance for real-time applications.
+
+By leveraging the power of LLMs and the flexibility of the Mirascope library, you can create sophisticated NER systems that go beyond traditional approaches, enabling more nuanced and context-aware entity recognition for various applications.
+
diff --git a/cloud/content/docs/v1/guides/more-advanced/o1-style-thinking.mdx b/cloud/content/docs/v1/guides/more-advanced/o1-style-thinking.mdx
new file mode 100644
index 0000000000..96fc246a27
--- /dev/null
+++ b/cloud/content/docs/v1/guides/more-advanced/o1-style-thinking.mdx
@@ -0,0 +1,253 @@
+---
+title: o1 Style Thinking
+description: Learn how to implement Chain-of-Thought reasoning with LLMs to solve complex tasks by breaking them down into logical steps.
+---
+
+# o1 Style Thinking
+
+In this recipe, we will show how to achieve Chain-of-Thought reasoning.
+This guides LLMs to break the task down into multiple steps and generate a coherent output, allowing them to solve complex tasks in logical steps.
+
+
+
+
+
+
+ Large Language Models (LLMs) are known to generate text that is coherent and fluent. However, they often struggle with tasks that require multi-step reasoning or logical thinking. In this recipe, we will show how to use Mirascope to guide the LLM to break down the task into multiple steps and generate a coherent output.
+
+
+
+
+## Setup
+
+To set up our environment, first let's install all of the packages we will use:
+
+
+
+
+
+
+```python
+!pip install "mirascope[groq]"
+```
+
+
+```python
+import os
+
+# Set the appropriate API key for the provider you're using
+# Here we are using GROQ_API_KEY
+os.environ["GROQ_API_KEY"] = "YOUR_API_KEY"
+```
+
+## Without Chain-of-Thought Reasoning
+
+We will begin by showing how a typical LLM performs on a task that requires multi-step reasoning. In this example, we will ask the model to count the number of `s`'s in the word `mississssippi` (yes, it has 7 `s`'s). We will use `llama-3.1-8b-instant` for this example.
+
+
+```python
+from mirascope.core import groq
+
+history: list[dict[str, str]] = []
+
+
+@groq.call("llama-3.1-8b-instant")
+def generate_answer(question: str) -> str:
+ return f"Generate an answer to this question: {question}"
+
+
+def run() -> None:
+ question: str = "how many s's in the word mississssippi"
+ response: str = generate_answer(question)
+ print(f"(User): {question}")
+ print(f"(Assistant): {response}")
+ history.append({"role": "user", "content": question})
+ history.append({"role": "assistant", "content": response})
+
+
+run()
+```
+
+ (User): how many s's in the word mississssippi
+ (Assistant): There are 5 s's in the word "Mississippi".
+
+
+In this example, the zero-shot method is used to generate the output. The model is not provided with any additional information or context to help it generate the output. The model is only given the input prompt and asked to generate the output.
+
+This is not so effective when there is a logical task to be performed.
+
+Now let's see how the model performs on this task when it can reason using Chain-of-Thought Reasoning.
+
+## With Chain-of-Thought Reasoning
+
+
+```python
+from typing import Literal
+
+from mirascope.core import groq
+from pydantic import BaseModel, Field
+
+
+history: list[dict] = []
+
+
+class COTResult(BaseModel):
+    title: str = Field(..., description="The title of the step")
+ content: str = Field(..., description="The output content of the step")
+ next_action: Literal["continue", "final_answer"] = Field(
+ ..., description="The next action to take"
+ )
+
+
+@groq.call("llama-3.3-70b-versatile", json_mode=True, response_model=COTResult)
+def cot_step(prompt: str, step_number: int, previous_steps: str) -> str:
+ return f"""
+ You are an expert AI assistant that explains your reasoning step by step.
+ For this step, provide a title that describes what you're doing, along with the content.
+ Decide if you need another step or if you're ready to give the final answer.
+
+ Guidelines:
+ - Use AT MOST 5 steps to derive the answer.
+ - Be aware of your limitations as an LLM and what you can and cannot do.
+ - In your reasoning, include exploration of alternative answers.
+ - Consider you may be wrong, and if you are wrong in your reasoning, where it would be.
+ - Fully test all other possibilities.
+ - YOU ARE ALLOWED TO BE WRONG. When you say you are re-examining
+ - Actually re-examine, and use another approach to do so.
+ - Do not just say you are re-examining.
+
+ IMPORTANT: Do not use code blocks or programming examples in your reasoning. Explain your process in plain language.
+
+ This is step number {step_number}.
+
+ Question: {prompt}
+
+ Previous steps:
+ {previous_steps}
+ """
+
+
+@groq.call("llama-3.3-70b-versatile")
+def final_answer(prompt: str, reasoning: str) -> str:
+ return f"""
+ Based on the following chain of reasoning, provide a final answer to the question.
+ Only provide the text response without any titles or preambles.
+ Retain any formatting as instructed by the original prompt, such as exact formatting for free response or multiple choice.
+
+ Question: {prompt}
+
+ Reasoning:
+ {reasoning}
+
+ Final Answer:
+ """
+
+
+def generate_cot_response(
+ user_query: str,
+) -> tuple[list[tuple[str, str, float]], float]:
+ steps: list[tuple[str, str, float]] = []
+ total_thinking_time: float = 0.0
+ step_count: int = 1
+ reasoning: str = ""
+ previous_steps: str = ""
+
+ while True:
+ start_time: datetime = datetime.now()
+ cot_result = cot_step(user_query, step_count, previous_steps)
+ end_time: datetime = datetime.now()
+ thinking_time: float = (end_time - start_time).total_seconds()
+
+ steps.append(
+ (
+ f"Step {step_count}: {cot_result.title}",
+ cot_result.content,
+ thinking_time,
+ )
+ )
+ total_thinking_time += thinking_time
+
+ reasoning += f"\n{cot_result.content}\n"
+ previous_steps += f"\n{cot_result.content}\n"
+
+ if cot_result.next_action == "final_answer" or step_count >= 5:
+ break
+
+ step_count += 1
+
+ # Generate final answer
+ start_time = datetime.now()
+ final_result: str = final_answer(user_query, reasoning).content
+ end_time = datetime.now()
+ thinking_time = (end_time - start_time).total_seconds()
+ total_thinking_time += thinking_time
+
+ steps.append(("Final Answer", final_result, thinking_time))
+
+ return steps, total_thinking_time
+
+
+def display_cot_response(
+ steps: list[tuple[str, str, float]], total_thinking_time: float
+) -> None:
+ for title, content, thinking_time in steps:
+ print(f"{title}:")
+ print(content.strip())
+ print(f"**Thinking time: {thinking_time:.2f} seconds**\n")
+
+ print(f"**Total thinking time: {total_thinking_time:.2f} seconds**")
+
+
+def run() -> None:
+ question: str = "How many s's are in the word 'mississssippi'?"
+ print("(User):", question)
+ # Generate COT response
+ steps, total_thinking_time = generate_cot_response(question)
+ display_cot_response(steps, total_thinking_time)
+
+ # Add the interaction to the history
+ history.append({"role": "user", "content": question})
+ history.append(
+ {"role": "assistant", "content": steps[-1][1]}
+ ) # Add only the final answer to the history
+
+
+# Run the function
+
+run()
+```
+
+ (User): How many s's are in the word 'mississssippi'?
+ Step 1: Initial Assessment and Counting:
+ To count the number of 's's in the word 'mississssippi', I will first notice that the 's's appear together in two groups. This makes it easier to count. The first group contains 1 's', and the second group contains 4 's's. Additionally, there is 1 more 's' separate from these groups. By combining the counts from these groups and the additional 's', I arrive at a preliminary total of 6 's's. However, upon reviewing the options, I realize that I must consider the possibility of an error in my initial assessment.
+ **Thinking time: 1.13 seconds**
+
+ Step 2: Re-examining the Count of 's's in the Word 'mississssippi':
+ I will recount the 's's in the word 'mississssippi'. Upon re-examination, I notice the groups of 's's are actually 'm-i-s-s-i-s-sss-ss-i-pp-i'. Recounting the groups, I still find two 's's separate from the groups and 4 in the last group. I still find 1 's' in a separate group at the start. Combining them I get 7 's's in the word. This seems correct, however, I must explore any possible alternative count. In considering my count, I consider the alternative the individual 's's in 'mississssippi' are not as grouped, but separate. I manually recount: the first 's', then 4 's's, plus 2 's's at the end of the groups gives me 7. Upon consideration, this approach still indicates there are 7 's's.
+ **Thinking time: 1.21 seconds**
+
+ Step 3: Validating the Count of 's's in the Word 'mississssippi':
+ Considering the count I arrived at in the previous steps, I notice that I must further ensure the count is correct. There are no other apparent alternative methods to consider. Upon reflection on my approach, my confidence in my methods is high enough to proceed. However, this confidence does not exclude the possibility of an error.
+ **Thinking time: 0.74 seconds**
+
+ Final Answer:
+ There are 7 's's in the word 'mississssippi'.
+ **Thinking time: 0.42 seconds**
+
+ **Total thinking time: 3.50 seconds**
+
+
+As demonstrated in the CoT reasoning example, we can guide the model to break the task down into multiple steps and generate a coherent output. This allows the model to solve complex tasks in logical steps.
+However, this requires multiple calls to the model, which may be expensive in terms of cost and time.
+Also, the model may not always identify the correct steps to solve the task, so the approach is not deterministic.
+
+## Conclusion
+
+Chain of Thought Reasoning is a powerful technique that allows LLMs to solve complex tasks in logical steps. However, it requires multiple calls to the model and may not always identify the correct steps to solve the task. This technique can be useful when the task requires multi-step reasoning or logical thinking.
+
+Care should be taken to ensure that the model is guided correctly and that the output is coherent and accurate.
diff --git a/cloud/content/docs/v1/guides/more-advanced/pii-scrubbing.mdx b/cloud/content/docs/v1/guides/more-advanced/pii-scrubbing.mdx
new file mode 100644
index 0000000000..54ce3fe3bd
--- /dev/null
+++ b/cloud/content/docs/v1/guides/more-advanced/pii-scrubbing.mdx
@@ -0,0 +1,166 @@
+---
+title: PII Scrubbing
+description: Create a system to detect and redact personally identifiable information (PII) from text documents using a locally-hosted LLM for enhanced privacy.
+---
+
+# PII Scrubbing
+
+In this recipe, we go over how to detect Personally Identifiable Information (PII) and redact it from your source. Whether your source is a database, a document, or a spreadsheet, it is important to prevent PII from leaving your system. We will be using Ollama for data privacy.
+
+
+
+
+
+
+Prior to Natural Language Processing (NLP) and Named Entity Recognition (NER) techniques, scrubbing or redacting sensitive information was a time-consuming manual task. LLMs have improved on this by being able to understand context surrounding sensitive information.
+
+
+## Setup
+
+Let's start by installing Mirascope and its dependencies:
+
+
+```python
+!pip install "mirascope[openai]"
+```
+
+
+```python
+import os
+
+os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"
+# Set the appropriate API key for the provider you're using
+```
+
+## Create your prompt
+
+The first step is to grab the definition of PII for our prompt to use. Note that in this example we will be using US Labor Laws, so be sure to use your country's definition. We can access the definition [here](https://www.dol.gov/general/ppii).
+
+
+```python
+from mirascope.core import openai, prompt_template
+from openai import OpenAI
+
+PII_DEFINITION = """
+Any representation of information that permits the identity of an individual to whom
+the information applies to be reasonably inferred by either direct or indirect means.
+Further, PII is defined as information: (i) that directly identifies an
+individual (e.g., name, address, social security number or other identifying
+number or code, telephone number, email address, etc.) or (ii) by which an agency
+intends to identify specific individuals in conjunction with other data elements,
+i.e., indirect identification. (These data elements may include a combination of gender,
+race, birth date, geographic indicator, and other descriptors). Additionally,
+information permitting the physical or online contacting of a specific individual is
+the same as personally identifiable information. This information can be maintained
+in either paper, electronic or other media.
+"""
+
+
+@openai.call(
+ model="llama3.1",
+ client=OpenAI(base_url="http://localhost:11434/v1", api_key="ollama"),
+ json_mode=True,
+ response_model=bool,
+)
+@prompt_template(
+ """
+ SYSTEM:
+ You are an expert at identifying personally identifiable information (PII).
+ Using the following definition of PII,
+ determine if the article contains PII with True or False?
+
+ Definition of PII: {PII_DEFINITION}
+
+ USER: {article}
+ """
+)
+def check_if_pii_exists(article: str) -> openai.OpenAIDynamicConfig:
+ return {"computed_fields": {"PII_DEFINITION": PII_DEFINITION}}
+```
+
+
+Using Mirascope's `response_model`, we first detect whether PII exists and return a `bool`, which determines our next steps.
+
+## Verify the prompt quality
+
+We will be using a fake document to test the accuracy of our prompt:
+
+
+
+```python
+PII_ARTICLE = """
+John Doe, born on 12/07/1985, resides at 123 Ruecker Harbor in Goodwinshire, WI.
+His Social Security number is 325-21-4386 and he can be reached at (123) 456-7890.
+"""
+
+does_pii_exist = check_if_pii_exists(PII_ARTICLE)
+print(does_pii_exist)
+```
+
+ True
+
+
+
+## Redact PII
+
+For articles that are flagged as containing PII, we now need to redact that information if we still plan on sending the document. We create another prompt specific to redacting data by providing an example for the LLM to use:
+
+
+
+```python
+@openai.call(
+ model="llama3.1",
+ client=OpenAI(base_url="http://localhost:11434/v1", api_key="ollama"),
+)
+@prompt_template(
+ """
+ SYSTEM:
+ You are an expert at redacting personally identifiable information (PII).
+ Replace the PII in the following article with context words.
+
+ If PII exists in the article, replace it with context words. For example, if the
+ phone number is 123-456-7890, replace it with [PHONE_NUMBER].
+
+ USER: {article}
+ """
+)
+def scrub_pii(article: str): ...
+
+
+def run():
+ does_pii_exist = check_if_pii_exists(PII_ARTICLE)
+ print(does_pii_exist)
+ # Output:
+ # True
+ if does_pii_exist:
+ return scrub_pii(PII_ARTICLE)
+ else:
+ return "No PII found in the article."
+
+
+print(run())
+```
+
+ True
+ [NAME], born on [BIRTH_DATE], resides at [ADDRESS] in [CITY], [STATE]. His [IDENTIFICATION_NUMBER] is [SOCIAL_SECURITY_NUMBER] and he can be reached at [PHONE_NUMBER].
+
+
+
+
+- Medical Records: Iterate on the above recipe and scrub any PII when sharing patient data for research.
+- Legal Documents: Court documents and legal filings frequently contain sensitive information that needs to be scrubbed before public release.
+- Corporate Financial Reports: Companies may need to scrub proprietary financial data or trade secrets when sharing reports with external auditors or regulators.
+- Social Media Content Moderation: Automatically scrub or blur out personal information like phone numbers or addresses posted in public comments.
+
+
+
+When adapting this recipe to your specific use-case, consider the following:
+
+ - Use a larger model hosted on-premises or in a private location to prevent data leaks.
+ - Refine the prompts for the specific types of information you want scrubbed.
+ - Run the `check_if_pii_exists` call after scrubbing PII to check that all PII has been scrubbed, as shown in the sketch below.
+
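+One way to implement the last consideration is to chain the two calls in a simple loop. Below is a minimal sketch, assuming the `check_if_pii_exists` and `scrub_pii` calls defined above; the `scrub_and_verify` helper and its `max_attempts` parameter are illustrative, not part of the recipe:
+
+
+```python
+def scrub_and_verify(article: str, max_attempts: int = 3) -> str:
+    """Scrub an article, re-checking it until no PII is detected."""
+    scrubbed = article
+    for _ in range(max_attempts):
+        if not check_if_pii_exists(scrubbed):
+            return scrubbed
+        scrubbed = scrub_pii(scrubbed).content
+    raise ValueError("PII still detected after scrubbing attempts")
+
+
+print(scrub_and_verify(PII_ARTICLE))
+```
+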
diff --git a/cloud/content/docs/v1/guides/more-advanced/query-plan.mdx b/cloud/content/docs/v1/guides/more-advanced/query-plan.mdx
new file mode 100644
index 0000000000..cf34446202
--- /dev/null
+++ b/cloud/content/docs/v1/guides/more-advanced/query-plan.mdx
@@ -0,0 +1,265 @@
+---
+title: Query Plan
+description: Learn how to create and execute query plans with LLMs by breaking down complex questions into smaller, more manageable queries for more accurate results.
+---
+
+# Query Plan
+
+This recipe shows how to use LLMs — in this case, Anthropic’s Claude 3.5 Sonnet — to create a query plan. Using a query plan is a great way to get more accurate results by breaking down a complex question into multiple smaller questions.
+
+
+
+
+
+## Setup
+
+Let's start by installing Mirascope and its dependencies:
+
+
+```python
+!pip install "mirascope[anthropic]"
+```
+
+
+```python
+import os
+
+os.environ["ANTHROPIC_API_KEY"] = "YOUR_API_KEY"
+# Set the appropriate API key for the provider you're using
+```
+
+## Create your Query
+
+To construct our Query Plan, we first need to define the individual queries that will comprise it using a Pydantic BaseModel:
+
+
+```python
+from pydantic import BaseModel, Field
+
+
+class Query(BaseModel):
+ id: int = Field(..., description="ID of the query, this is auto-incremented")
+ question: str = Field(
+ ...,
+ description="The broken down question to be answered to answer the main question",
+ )
+ dependencies: list[int] = Field(
+ description="List of sub questions that need to be answered before asking this question",
+ )
+ tools: list[str] = Field(
+ description="List of tools that should be used to answer the question"
+ )
+```
+
+
+Each query is assigned a unique ID, which can reference other queries for dependencies. We also provide necessary tools and the relevant portion of the broken-down question to each query.
+
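+To make the dependency mechanism concrete, here is a hand-written (purely illustrative) plan of the kind the LLM will generate later in this recipe. Query 2 depends on query 1, so query 1's answer will be added to query 2's message history at execution time:
+
+
+```python
+example_plan = [
+    Query(
+        id=1,
+        question="What was the weather like in Tokyo in 2020?",
+        dependencies=[],
+        tools=["get_weather_by_year"],
+    ),
+    Query(
+        id=2,
+        question="Summarize the 2020 Tokyo weather data",
+        dependencies=[1],
+        tools=[],
+    ),
+]
+```
+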
+## Create our tool
+
+For the purposes of this recipe, we will define some dummy data. This tool should be replaced by a web search, a database query, or another form of pulling data.
+
+
+
+```python
+import json
+
+
+def get_weather_by_year(year: int):
+ """Made up data to get Tokyo weather by year"""
+ if year == 2020:
+ data = {
+ "jan": 42,
+ "feb": 43,
+ "mar": 49,
+ "apr": 58,
+ "may": 66,
+ "jun": 72,
+ "jul": 78,
+ "aug": 81,
+ "sep": 75,
+ "oct": 65,
+ "nov": 55,
+ "dec": 47,
+ }
+ elif year == 2021:
+ data = {
+ "jan": 45,
+ "feb": 48,
+ "mar": 52,
+ "apr": 60,
+ "may": 68,
+ "jun": 74,
+ "jul": 80,
+ "aug": 83,
+ "sep": 77,
+ "oct": 67,
+ "nov": 57,
+ "dec": 49,
+ }
+ else:
+ data = {
+ "jan": 48,
+ "feb": 52,
+ "mar": 56,
+ "apr": 64,
+ "may": 72,
+ "jun": 78,
+ "jul": 84,
+ "aug": 87,
+ "sep": 81,
+ "oct": 71,
+ "nov": 61,
+ "dec": 53,
+ }
+ return json.dumps(data)
+```
+
+
+## Define our Query Planner
+
+Let us prompt our LLM call to create a query plan for a particular question:
+
+
+
+```python
+from mirascope.core import anthropic, prompt_template
+
+
+@anthropic.call(
+ model="claude-3-5-sonnet-20240620", response_model=list[Query], json_mode=True
+)
+@prompt_template(
+ """
+ SYSTEM:
+ You are an expert at creating a query plan for a question.
+ You are given a question and you need to create a query plan for it.
+ You need to create a list of queries that can be used to answer the question.
+
+ You have access to the following tool:
+ - get_weather_by_year
+ USER:
+ {question}
+ """
+)
+def create_query_plan(question: str): ...
+```
+
+
+We set the `response_model` to `list[Query]` using the `Query` object we just defined. We also prompt the call to add tools as necessary to the individual `Query`. Now we make a call to the LLM:
+
+
+
+```python
+query_plan = create_query_plan("Compare the weather in Tokyo from 2020 to 2022")
+print(query_plan)
+```
+
+ [Query(id=1, question='What was the weather like in Tokyo in 2020?', dependencies=[], tools=['get_weather_by_year']), Query(id=2, question='What was the weather like in Tokyo in 2021?', dependencies=[], tools=['get_weather_by_year']), Query(id=3, question='What was the weather like in Tokyo in 2022?', dependencies=[], tools=['get_weather_by_year']), Query(id=4, question='Compare the weather data for Tokyo from 2020 to 2022', dependencies=[1, 2, 3], tools=[])]
+
+
+
+We can see our `list[Query]` and their respective subquestions and tools needed to answer the main question. We can also see that the final question depends on the answers from the previous queries.
+
+## Executing our Query Plan
+
+Now that we have our list of queries, we can iterate on each of the subqueries to answer our main question:
+
+
+
+```python
+from anthropic.types import MessageParam
+
+
+@anthropic.call(model="claude-3-5-sonnet-20240620")
+@prompt_template(
+ """
+ MESSAGES:
+ {history}
+ USER:
+ {question}
+ """
+)
+def run(
+ question: str, history: list[MessageParam], tools: list[str]
+) -> anthropic.AnthropicDynamicConfig:
+ tools_fn = [eval(tool) for tool in tools]
+ return {"tools": tools_fn}
+
+
+def execute_query_plan(query_plan: list[Query]):
+ results = {}
+ for query in query_plan:
+ history = []
+ for dependency in query.dependencies:
+ result = results[dependency]
+ history.append({"role": "user", "content": result["question"]})
+ history.append({"role": "assistant", "content": result["content"]})
+ result = run(query.question, history, query.tools)
+ if tool := result.tool:
+ output = tool.call()
+ results[query.id] = {"question": query.question, "content": output}
+ else:
+ return result.content
+ return results
+```
+
+
+Using Mirascope's `DynamicConfig`, we can pass in the tools from the query plan into our LLM call. We also add history to the calls that have dependencies.
+
+Now we run `execute_query_plan`:
+
+
+
+```python
+result = execute_query_plan(query_plan)
+print(result)
+```
+
+ Comparing the weather data for Tokyo from 2020 to 2022, we can observe the following trends:
+
+ 1. Overall warming trend:
+ - There's a consistent increase in temperatures across all months from 2020 to 2022.
+ - The average annual temperature has risen each year.
+
+ 2. Monthly comparisons:
+ - January: 42°F (2020) → 45°F (2021) → 48°F (2022)
+ - July: 78°F (2020) → 80°F (2021) → 84°F (2022)
+ - December: 47°F (2020) → 49°F (2021) → 53°F (2022)
+
+ 3. Seasonal patterns:
+ - Winters (Dec-Feb) have become milder each year.
+ - Summers (Jun-Aug) have become hotter each year.
+ - Spring and autumn months also show warming trends.
+
+ 4. Extreme temperatures:
+ - The hottest month has consistently been August, with temperatures increasing from 81°F (2020) to 87°F (2022).
+ - The coldest month has consistently been January, with temperatures rising from 42°F (2020) to 48°F (2022).
+
+ 5. Year-to-year changes:
+ - The temperature increase from 2020 to 2021 was generally smaller than the increase from 2021 to 2022.
+ - 2022 shows the most significant warming compared to previous years.
+
+ In summary, the data indicates a clear warming trend in Tokyo from 2020 to 2022, with each year being warmer than the last across all seasons.
+
+
+
+
+- Enhanced ChatBot: Provide higher quality and more accurate answers by using a query plan to answer complex questions.
+- Database Administrator: Translate layperson requests into a query plan, then execute SQL commands to efficiently retrieve or manipulate data, fulfilling the user's requirements.
+- Customer support: Take a user request and turn it into a query plan that yields simple, easy-to-follow troubleshooting instructions.
+
+
+
+When adapting this recipe to your specific use-case, consider the following:
+
+ - Agentic: Turn this example into a more flexible agent that has access to a query plan tool.
+ - Multiple providers: Use multiple LLM providers to verify whether the extracted information is accurate and not hallucinated.
+ - Implement Pydantic `ValidationError` and Tenacity `retry` to improve reliability and accuracy, as shown in the sketch below.
+
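+Below is a minimal sketch of the retry consideration, assuming the `create_query_plan` call defined above and that Mirascope surfaces failed `response_model` validation as a Pydantic `ValidationError`; the wrapper name and retry settings are illustrative:
+
+
+```python
+from pydantic import ValidationError
+from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential
+
+
+@retry(
+    retry=retry_if_exception_type(ValidationError),
+    stop=stop_after_attempt(3),
+    wait=wait_exponential(min=1, max=8),
+)
+def create_query_plan_with_retries(question: str) -> list[Query]:
+    # Re-invokes the call if the response fails `list[Query]` validation.
+    return create_query_plan(question)
+
+
+query_plan = create_query_plan_with_retries("Compare the weather in Tokyo from 2020 to 2022")
+```
+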
diff --git a/cloud/content/docs/v1/guides/more-advanced/removing-semantic-duplicates.mdx b/cloud/content/docs/v1/guides/more-advanced/removing-semantic-duplicates.mdx
new file mode 100644
index 0000000000..364c0db1eb
--- /dev/null
+++ b/cloud/content/docs/v1/guides/more-advanced/removing-semantic-duplicates.mdx
@@ -0,0 +1,178 @@
+---
+title: Removing Semantic Duplicates
+description: Learn how to use LLMs to identify and remove semantically equivalent items from lists and objects, solving a longstanding problem in natural language processing.
+---
+
+# Removing Semantic Duplicates
+
+In this recipe, we show how to use LLMs — in this case, OpenAI's `gpt-4o-mini` — to remove semantic duplicates from lists and objects.
+
+
+
+
+
+
+Semantic deduplication, or the removal of duplicates that are equivalent in meaning but not in data, has been a longstanding problem in NLP. LLMs, which can comprehend the context, semantics, and implications within text, trivialize this problem.
+
+
+## Setup
+
+Let's start by installing Mirascope and its dependencies:
+
+
+```python
+!pip install "mirascope[openai]"
+```
+
+
+```python
+import os
+
+os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"
+# Set the appropriate API key for the provider you're using
+```
+
+## Deduplicating a List
+
+To start, assume we have some entries of movie genres with semantic duplicates:
+
+
+```python
+movie_genres = [
+ "sci-fi",
+ "romance",
+ "love story",
+ "action",
+ "horror",
+ "heist",
+ "crime",
+ "science fiction",
+ "fantasy",
+ "scary",
+]
+```
+
+To deduplicate this list, we’ll extract a schema containing `genres`, the deduplicated list, and `duplicates`, a list of all duplicate items. The reason for having `duplicates` in our schema is that LLM extractions can be inconsistent, even with the most recent models - forcing it to list the duplicate items helps it reason through the call and produce a more accurate answer.
+
+
+
+```python
+from pydantic import BaseModel, Field
+
+
+class DeduplicatedGenres(BaseModel):
+ duplicates: list[list[str]] = Field(
+ ..., description="A list containing lists of semantically equivalent items"
+ )
+ genres: list[str] = Field(
+ ..., description="The list of genres with semantic duplicates removed"
+ )
+```
+
+
+We can now set this schema as our response model in a Mirascope call:
+
+
+
+```python
+from mirascope.core import openai, prompt_template
+
+
+@openai.call(model="gpt-4o-mini", response_model=DeduplicatedGenres)
+@prompt_template(
+ """
+ SYSTEM:
+ Your job is to take a list of movie genres and clean it up by removing items
+ which are semantically equivalent to one another. When coming across multiple items
+ which refer to the same genre, keep the genre name which is most commonly used.
+ For example, "sci-fi" and "science fiction" are the same genre.
+
+ USER:
+ {genres}
+ """
+)
+def deduplicate_genres(genres: list[str]): ...
+
+
+response = deduplicate_genres(movie_genres)
+assert isinstance(response, DeduplicatedGenres)
+print(response.genres)
+print(response.duplicates)
+```
+
+ ['action', 'horror', 'heist', 'crime', 'fantasy', 'scary']
+ [['sci-fi', 'science fiction'], ['love story', 'romance']]
+
+
+
+Just like with a list of strings, we can deduplicate more complex objects. Here we define a `Book` schema and set `response_model=list[Book]`, with a modified prompt to account for the different kinds of differences we see:
+
+
+
+```python
+class Book(BaseModel):
+ title: str
+ author: str
+ genre: str
+
+
+duplicate_books = [
+ Book(title="The War of the Worlds", author="H. G. Wells", genre="scifi"),
+ Book(title="War of the Worlds", author="H.G. Wells", genre="science fiction"),
+ Book(title="The Sorcerer's stone", author="J. K. Rowling", genre="fantasy"),
+ Book(
+ title="Harry Potter and The Sorcerer's stone",
+ author="J. K. Rowling",
+ genre="fantasy",
+ ),
+ Book(title="The Name of the Wind", author="Patrick Rothfuss", genre="fantasy"),
+ Book(title="'The Name of the Wind'", author="Patrick Rofuss", genre="fiction"),
+]
+
+
+@openai.call(model="gpt-4o", response_model=list[Book])
+@prompt_template(
+ """
+ SYSTEM:
+ Your job is to take a database of books and clean it up by removing items which are
+ semantic duplicates. Look out for typos, formatting differences, and categorizations.
+ For example, "Mistborn" and "Mistborn: The Final Empire" are the same book
+ but "Mistborn: Shadows of Self" is not.
+ Then return all the unique books.
+
+ USER:
+ {books}
+ """
+)
+def deduplicate_books(books: list[Book]): ...
+
+
+books = deduplicate_books(duplicate_books)
+assert isinstance(books, list)
+for book in books:
+ print(book)
+```
+
+ title='The War of the Worlds' author='H. G. Wells' genre='scifi'
+ title='War of the Worlds' author='H.G. Wells' genre='science fiction'
+
+
+
+
+- Customer Relationship Management (CRM): Maintaining a single, accurate view of each customer.
+- Database Management: Removing duplicate records to maintain data integrity and improve query performance.
+- Email: Clean up digital assets by removing duplicate attachments and emails.
+
+
+
+When adapting this recipe to your specific use-case, consider the following:
+
+- Refine your prompts to provide clear instructions and examples tailored to your requirements.
+- Experiment with different model providers and versions to balance accuracy and speed.
+- Use multiple model providers to evaluate whether all duplicates have been removed, as shown in the sketch below.
+- Add more information if possible to get better accuracy, e.g. some books might have similar names but were released in different years.
+
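+Below is a minimal sketch of the multi-provider consideration: run the same extraction through a second provider and compare. Anthropic is assumed here purely as an example and requires `mirascope[anthropic]` plus an `ANTHROPIC_API_KEY`:
+
+
+```python
+from mirascope.core import anthropic
+
+
+@anthropic.call(
+    model="claude-3-5-sonnet-20240620",
+    response_model=DeduplicatedGenres,
+    json_mode=True,
+)
+@prompt_template(
+    """
+    SYSTEM:
+    Your job is to take a list of movie genres and clean it up by removing items
+    which are semantically equivalent to one another. When coming across multiple items
+    which refer to the same genre, keep the genre name which is most commonly used.
+    For example, "sci-fi" and "science fiction" are the same genre.
+
+    USER:
+    {genres}
+    """
+)
+def deduplicate_genres_anthropic(genres: list[str]): ...
+
+
+openai_genres = set(deduplicate_genres(movie_genres).genres)
+anthropic_genres = set(deduplicate_genres_anthropic(movie_genres).genres)
+if openai_genres != anthropic_genres:
+    print("Providers disagree on the deduplicated list; flag it for manual review.")
+```
+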
diff --git a/cloud/content/docs/v1/guides/more-advanced/search-with-sources.mdx b/cloud/content/docs/v1/guides/more-advanced/search-with-sources.mdx
new file mode 100644
index 0000000000..9715e4fb74
--- /dev/null
+++ b/cloud/content/docs/v1/guides/more-advanced/search-with-sources.mdx
@@ -0,0 +1,195 @@
+---
+title: Search with Sources
+description: Learn how to use LLMs to answer questions using web search while providing source citations to verify accuracy and reduce hallucinations.
+---
+
+# Search with Sources
+
+This recipe shows how to use LLMs — in this case, GPT-4o mini — to answer questions using the web. Since LLMs often hallucinate answers, it is important to fact check and verify the accuracy of the answer.
+
+
+
+
+
+
+Users of Large Language Models (LLMs) often struggle to distinguish between factual content and potential hallucinations, leading to time-consuming fact-checking. By implementing source citation requirements, LLMs need to rely on verified information, thereby enhancing the accuracy of their responses and reducing the need for manual verification.
+
+
+## Setup
+
+To set up our environment, first let's install all of the packages we will use:
+
+
+```python
+!pip install "mirascope[openai]" beautifulsoup4
+```
+
+
+```python
+import os
+
+os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"
+# Set the appropriate API key for the provider you're using
+```
+
+
+We will need an API key for search:
+
+- [Nimble API Key](https://nimbleway.com/) or alternatively directly from Google [Custom Search](https://developers.google.com/custom-search/v1/introduction/) API.
+
+## Creating Google Search tool
+
+We use [Nimble](https://nimbleway.com/) since they provide an easy-to-use API for searching, but an alternative you can use is Google's Custom Search API. We first want to grab all the URLs that are relevant to answering our question, and then we take the contents of those URLs, like so:
+
+
+```python
+import requests
+from bs4 import BeautifulSoup
+
+NIMBLE_TOKEN = "YOUR_NIMBLE_API_KEY"
+
+
+def nimble_google_search(query: str):
+ """
+ Use Nimble to get information about the query using Google Search.
+ """
+ url = "https://api.webit.live/api/v1/realtime/serp"
+ headers = {
+ "Authorization": f"Basic {NIMBLE_TOKEN}",
+ "Content-Type": "application/json",
+ }
+ search_data = {
+ "parse": True,
+ "query": query,
+ "search_engine": "google_search",
+ "format": "json",
+ "render": True,
+ "country": "US",
+ "locale": "en",
+ }
+ response = requests.get(url, json=search_data, headers=headers)
+ data = response.json()
+ results = data["parsing"]["entities"]["OrganicResult"]
+ urls = [result.get("url", "") for result in results]
+ search_results = {}
+ for url in urls:
+ content = get_content(url)
+ search_results[url] = content
+ return search_results
+
+
+def get_content(url: str):
+ data = []
+ response = requests.get(url)
+ content = response.content
+ soup = BeautifulSoup(content, "html.parser")
+ paragraphs = soup.find_all("p")
+ for paragraph in paragraphs:
+ data.append(paragraph.text)
+ return "\n".join(data)
+```
+
+
+Now that we have created our tool, it’s time to create our LLM call.
+
+## Creating the first call
+
+For this call, we force the LLM to always use its tool, which we will later chain.
+
+
+
+```python
+from mirascope.core import openai, prompt_template
+
+
+@openai.call(
+ model="gpt-4o-mini",
+ tools=[nimble_google_search],
+ call_params={"tool_choice": "required"},
+)
+@prompt_template(
+ """
+ SYSTEM:
+    You are an expert at finding information on the web.
+ Use the `nimble_google_search` function to find information on the web.
+ Rewrite the question as needed to better find information on the web.
+
+ USER:
+ {question}
+ """
+)
+def search(question: str): ...
+```
+
+
+We ask the LLM to rewrite the question to make it more suitable for search.
+
+Now that we have the necessary data to answer the user's query along with its sources, it's time to extract all that information into a structured format using `response_model`.
+
+## Extracting Search Results with Sources
+
+As mentioned earlier, it is important to fact check all answers in case of hallucination, and the first step is to ask the LLM to cite its sources:
+
+
+
+```python
+from pydantic import BaseModel, Field
+
+
+class SearchResponse(BaseModel):
+ sources: list[str] = Field(description="The sources of the results")
+ answer: str = Field(description="The answer to the question")
+
+
+@openai.call(model="gpt-4o-mini", response_model=list[SearchResponse])
+@prompt_template(
+ """
+ SYSTEM:
+ Extract the question, results, and sources to answer the question based on the results.
+
+ Results:
+ {results}
+
+ USER:
+ {question}
+ """
+)
+def extract(question: str, results: str): ...
+```
+
+
+and finally we create our `run` function to execute our chain:
+
+
+
+```python
+def run(question: str):
+ response = search(question)
+ if tool := response.tool:
+ output = tool.call()
+ result = extract(question, output)
+ return result
+
+
+print(run("What is the average price of a house in the United States?"))
+```
+
+
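+Since the whole point of this recipe is verifiability, a cheap programmatic check is to confirm that every cited source actually came back from the search tool. Below is a minimal sketch built on the same logic as `run` above; it catches fabricated citations but not unsupported claims, and the `run_with_source_check` name is illustrative:
+
+
+```python
+def run_with_source_check(question: str):
+    response = search(question)
+    if tool := response.tool:
+        search_results = tool.call()  # dict of {url: page text} from nimble_google_search
+        extracted = extract(question, search_results)
+        cited = {source for item in extracted for source in item.sources}
+        uncited = cited - set(search_results.keys())
+        if uncited:
+            print(f"Warning: cited sources not found in search results: {uncited}")
+        return extracted
+
+
+print(run_with_source_check("What is the average price of a house in the United States?"))
+```
+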
+
+- Journalism Assistant: Have the LLM do some research to quickly pull verifiable sources for blog posts and news articles.
+- Education: Find and cite articles to help write academic papers.
+- Technical Documentation: Generate code snippets and docs referencing official documentation.
+
+
+
+When adapting this recipe, consider:
+ - Adding [Tenacity](https://tenacity.readthedocs.io/en/latest/) `retry` for a more consistent extraction.
+ - Using an LLM with a web search tool to evaluate whether the answer produced is supported by the source.
+ - Experimenting with different model providers and versions for quality and accuracy of results.
+
diff --git a/cloud/content/docs/v1/guides/more-advanced/speech-transcription.mdx b/cloud/content/docs/v1/guides/more-advanced/speech-transcription.mdx
new file mode 100644
index 0000000000..c7320c897c
--- /dev/null
+++ b/cloud/content/docs/v1/guides/more-advanced/speech-transcription.mdx
@@ -0,0 +1,181 @@
+---
+title: Transcribing Speech
+description: Use multimodal LLMs like Google's Gemini to transcribe speech from audio files and extract additional metadata
+---
+
+# Transcribing Speech
+
+In this recipe, we go over how to transcribe the speech from an audio file using Gemini 1.5 Flash’s audio capabilities.
+
+
+
+
+
+
+LLMs have significantly advanced speech transcription beyond traditional machine learning techniques, offering improved handling of diverse accents and languages and the ability to incorporate context for more precise transcriptions. Additionally, LLMs can leverage feedback loops to continuously improve their performance and correct errors through simple prompting.
+
+
+## Setup
+
+Let's start by installing Mirascope and its dependencies:
+
+
+```python
+!pip install "mirascope[gemini]"
+```
+
+
+```python
+import os
+
+os.environ["GOOGLE_API_KEY"] = "YOUR_API_KEY"
+# Set the appropriate API key for the provider you're using
+```
+
+## Transcribing Speech using Gemini
+
+With Gemini’s multimodal capabilities, audio input is treated just like text input, which means we can use it as context to ask questions. We will use an audio clip provided by Google of [a countdown of the Apollo Launch](https://storage.googleapis.com/generativeai-downloads/data/Apollo-11_Day-01-Highlights-10s.mp3). Note that if you use your own URL, Gemini currently has a byte limit of `20971520` when not using their file system.
+
+Since we can treat the audio like any other text context, we can create a transcription simply by inserting the audio into the prompt and asking for a transcription:
+
+
+```python
+import os
+
+from google.generativeai import configure
+from mirascope.core import gemini, prompt_template
+
+configure(api_key=os.environ["GOOGLE_API_KEY"])
+
+apollo_url = "https://storage.googleapis.com/generativeai-downloads/data/Apollo-11_Day-01-Highlights-10s.mp3"
+
+
+@gemini.call(model="gemini-1.5-flash")
+@prompt_template(
+ """
+ Transcribe the content of this speech:
+ {url:audio}
+ """
+)
+def transcribe_speech_from_url(url: str): ...
+
+
+response = transcribe_speech_from_url(apollo_url)
+
+print(response)
+```
+
+ 10 9 8 We have a goal for main engine start. We have a main engine start.
+
+
+
+## Tagging audio
+
+We can start by creating a Pydantic Model with the content we want to analyze:
+
+
+
+```python
+from typing import Literal
+
+from pydantic import BaseModel, Field
+
+
+class AudioTag(BaseModel):
+ audio_quality: Literal["Low", "Medium", "High"] = Field(
+ ...,
+ description="""The quality of the audio file.
+ Low - unlistenable due to severe static, distortion, or other imperfections
+ Medium - Audible but noticeable imperfections
+ High - crystal clear sound""",
+ )
+ imperfections: list[str] = Field(
+ ...,
+ description="""A list of the imperfections affecting audio quality, if any.
+ Common imperfections are static, distortion, background noise, echo, but include
+ all that apply, even if not listed here""",
+ )
+ description: str = Field(
+ ..., description="A one sentence description of the audio content"
+ )
+ primary_sound: str = Field(
+ ...,
+ description="""A quick description of the main sound in the audio,
+ e.g. `Male Voice`, `Cymbals`, `Rainfall`""",
+ )
+```
+
+Now we make our call, passing our `AudioTag` into the `response_model` field:
+
+
+
+```python
+@gemini.call(model="gemini-1.5-flash", response_model=AudioTag, json_mode=True)
+@prompt_template(
+ """
+ Analyze this audio file
+ {url:audio}
+
+ Give me its audio quality (low, medium, high), a list of its audio flaws (if any),
+ a quick description of the content of the audio, and the primary sound in the audio.
+ Use the tool call passed into the API call to fill it out.
+ """
+)
+def analyze_audio(url: str): ...
+
+
+response = analyze_audio(apollo_url)
+print(response)
+```
+
+ audio_quality='Medium' imperfections=['Background noise'] description='A countdown from ten with a male voice announcing "We have a go for main engine start"' primary_sound='Male Voice'
+
+
+
+## Speaker Diarization
+
+Now let's look at an audio file with multiple people talking. For the purposes of this recipe, we grabbed a snippet from [this Creative Commons video](https://www.youtube.com/watch?v=v0l-u0ZUOSI), around 1:15 into the video, and gave Gemini the audio file.
+
+
+
+```python
+with open("YOUR_MP3_HERE", "rb") as file:
+ data = file.read()
+
+ @gemini.call(model="gemini-1.5-flash")
+ @prompt_template(
+ """
+ Transcribe the content of this speech adding speaker tags
+ for example:
+ Person 1: hello
+ Person 2: good morning
+
+
+ {data:audio}
+ """
+ )
+ def transcribe_speech_from_file(data: bytes): ...
+
+ response = transcribe_speech_from_file(data)
+ print(response)
+```
+
+
+
+
+- Subtitles and Closed Captions: Automatically generate subtitles in the same or different languages for accessibility.
+- Meetings: Transcribe meetings for future reference or summarization.
+- Voice Assistant: Transcription is the first step to answering voice requests.
+
+
+
+When adapting this recipe to your specific use-case, consider the following:
+
+- Split your audio file into multiple chunks and run the transcriptions in parallel, as shown in the sketch below.
+- Compare results with traditional machine learning techniques.
+- Experiment with the prompt by giving it some context before asking it to transcribe the audio.
+
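+Below is a minimal sketch of the parallel-transcription consideration. It assumes Mirascope's `gemini.call` decorator also works on `async` functions and that you have already split your audio into a list of chunk URLs (`chunk_urls` is a placeholder):
+
+
+```python
+import asyncio
+
+
+@gemini.call(model="gemini-1.5-flash")
+@prompt_template(
+    """
+    Transcribe the content of this speech:
+    {url:audio}
+    """
+)
+async def transcribe_chunk(url: str): ...
+
+
+async def transcribe_chunks(chunk_urls: list[str]) -> str:
+    # Transcribe every chunk concurrently, then stitch the text back together.
+    responses = await asyncio.gather(*(transcribe_chunk(url) for url in chunk_urls))
+    return "\n".join(response.content for response in responses)
+```
+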
diff --git a/cloud/content/docs/v1/guides/more-advanced/support-ticket-routing.mdx b/cloud/content/docs/v1/guides/more-advanced/support-ticket-routing.mdx
new file mode 100644
index 0000000000..fc071255e8
--- /dev/null
+++ b/cloud/content/docs/v1/guides/more-advanced/support-ticket-routing.mdx
@@ -0,0 +1,286 @@
+---
+title: Support Ticket Routing
+description: Build a system that analyzes support tickets, categorizes issues, and routes them to the appropriate department with relevant context for efficient resolution.
+---
+
+# Support Ticket Routing
+
+This recipe shows how to take an incoming support ticket/call transcript then use an LLM to summarize the issue and route it to the correct person.
+
+
+
+
+
+
+Traditional machine learning techniques like text classification were previously used to solve routing. LLMs have enhanced routing by better interpreting the nuances of inquiries, as well as by using client history and knowledge of the product to make more informed decisions.
+
+
+## Setup
+
+Let's start by installing Mirascope and its dependencies:
+
+
+```python
+!pip install "mirascope[openai]"
+```
+
+
+```python
+import os
+
+os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"
+# Set the appropriate API key for the provider you're using
+```
+
+## Imitating a Company's Database/Functionality
+
+
+For both privacy and functionality purposes, these data types and functions in no way represent what a company's API should actually look like. However, extrapolate from these gross oversimplifications to see how the LLM would interact with the company's API.
+
+
+### User
+
+Let’s create a `User` class to represent a customer as well as the function `get_user_by_email()` to imitate how one might search for the user in the database with some identifying information:
+
+
+```python
+from pydantic import BaseModel, Field
+
+
+class User(BaseModel):
+ name: str
+ email: str
+ past_purchases: list[str]
+ past_charges: list[float]
+ payment_method: str
+ password: str
+ security_question: str
+ security_answer: str
+
+
+def get_user_by_email(email: str):
+ if email == "johndoe@gmail.com":
+ return User(
+ name="John Doe",
+ email="johndoe@gmail.com",
+ past_purchases=["TV", "Microwave", "Chair"],
+ past_charges=[349.99, 349.99, 99.99, 44.99],
+ payment_method="AMEX 1234 1234 1234 1234",
+ password="password1!",
+ security_question="Childhood Pet Name",
+ security_answer="Piddles",
+ )
+ else:
+ return None
+```
+
+### Data Pulling Functions
+
+Let’s also define some basic functions that one might expect a company to have for specific situations. `get_sale_items()` gets the items currently on sale, `get_rewards()` gets the rewards currently available to a user, `get_billing_details()` returns user data related to billing, and `get_account_details()` returns user data related to their account.
+
+
+
+
+```python
+def get_sale_items():
+ return "Sale items: we have a monitor at half off for $80!"
+
+
+def get_rewards(user: User):
+ if sum(user.past_charges) > 300:
+ return "Rewards: for your loyalty, you get 10% off your next purchase!"
+ else:
+ return "Rewards: you have no rewards available right now."
+
+
+def get_billing_details(user: User):
+ return {
+ "user_email": user.email,
+ "user_name": user.name,
+ "past_purchases": user.past_purchases,
+ "past_charges": user.past_charges,
+ }
+
+
+def get_account_details(user: User):
+ return {
+ "user_email": user.email,
+ "user_name": user.name,
+ "password": user.password,
+ "security_question": user.security_question,
+ "security_answer": user.security_answer,
+ }
+```
+
+
+### Routing to Agent
+
+Since we don’t have an actual endpoint to route to a live agent, let’s use this function `route_to_agent()` as a placeholder:
+
+
+
+
+```python
+from typing import Literal
+
+
+def route_to_agent(
+ agent_type: Literal["billing", "sale", "support"], summary: str
+) -> None:
+ """Routes the call to an appropriate agent with a summary of the issue."""
+ print(f"Routed to: {agent_type}\nSummary:\n{summary}")
+```
+
+
+## Handling the Ticket
+
+To handle the ticket, we will classify the issue of the ticket in one call, then use the classification to gather the corresponding context for a second call.
+
+### Classify the Transcript
+
+Assume we have a basic transcript from the customer’s initial interactions with a support bot where they give some identifying information and their issue. We define a Pydantic `BaseModel` schema to classify the issue as well as grab the identifying information. `calltype` classifies the transcript into one of the three categories `billing`, `sale`, and `support`, and `user_email` will grab their email, assuming that’s what the bot asks for. The `reasoning` field will not be used, but forcing the LLM to give a reasoning for its classification choice aids in extraction accuracy, which can be shaky:
+
+
+
+
+```python
+class CallClassification(BaseModel):
+ calltype: Literal["billing", "sale", "support"] = Field(
+ ...,
+ description="""The classification of the customer's issue into one of the 3:
+ 'billing' for an inquiry about charges or payment methods,
+ 'sale' for making a purchase,
+ 'support' for general FAQ or account-related questions""",
+ )
+ reasoning: str = Field(
+ ...,
+ description="""A brief description of why the customer's issue fits into the\
+ chosen category""",
+ )
+ user_email: str = Field(..., description="email of the user in the chat")
+```
+
+
+And we can extract information into this schema with the call `classify_transcript()`:
+
+
+
+```python
+from mirascope.core import openai, prompt_template
+
+
+@openai.call(model="gpt-4o-mini", response_model=CallClassification)
+@prompt_template(
+ """
+ Classify the following transcript between a customer and the service bot:
+ {transcript}
+ """
+)
+def classify_transcript(transcript: str): ...
+```
+
+### Provide Ticket-Specific Context
+
+Now, depending on the output of `classify_transcript()`, we want to provide different context to the next call - namely, a `billing` ticket would necessitate the details from `get_billing_details()`, a `sale` ticket would want the output of `get_sale_items()` and `get_rewards()`, and a `support` ticket would require `get_account_details()`. We define a second call `handle_ticket()` which calls `classify_transcript()` and calls the correct functions for the scenario via dynamic configuration:
+
+
+
+```python
+@openai.call(model="gpt-4o-mini", tools=[route_to_agent])
+@prompt_template(
+ """
+ SYSTEM:
+ You are an intermediary between a customer's interaction with a support chatbot and
+ a real life support agent. Organize the context so that the agent can best
+ facilitate the customer, but leave in details or raw data that the agent would need
+ to verify a person's identity or purchase. Then, route to the appropriate agent.
+
+ USER:
+ {context}
+ """
+)
+def handle_ticket(transcript: str) -> openai.OpenAIDynamicConfig:
+ context = transcript
+ call_classification = classify_transcript(transcript)
+ user = get_user_by_email(call_classification.user_email)
+ if isinstance(user, User):
+ if call_classification.calltype == "billing":
+ context += str(get_billing_details(user))
+ elif call_classification.calltype == "sale":
+ context += get_sale_items()
+ context += get_rewards(user)
+ elif call_classification.calltype == "support":
+ context += str(get_account_details(user))
+ else:
+ context = "This person cannot be found in our system."
+
+ return {"computed_fields": {"context": context}}
+```
+
+
+And there you have it! Let’s see how `handle_ticket` deals with each of the following transcripts:
+
+
+
+```python
+billing_transcript = """
+BOT: Please enter your email.
+CUSTOMER: johndoe@gmail.com
+BOT: What brings you here today?
+CUSTOMER: I purchased a TV a week ago but the charge is showing up twice on my bank \
+statement. Can I get a refund?
+"""
+
+sale_transcript = """
+BOT: Please enter your email.
+CUSTOMER: johndoe@gmail.com
+BOT: What brings you here today?
+CUSTOMER: I'm looking to buy a new monitor. Any discounts available?
+"""
+
+support_transcript = """
+BOT: Please enter your email.
+CUSTOMER: johndoe@gmail.com
+BOT: What brings you here today?
+CUSTOMER: I forgot my site password and I'm also locked out of my email, how else can I
+verify my identity?
+"""
+
+for transcript in [billing_transcript, sale_transcript, support_transcript]:
+ response = handle_ticket(transcript)
+ if tool := response.tool:
+ tool.call()
+```
+
+ Routed to: billing
+ Summary:
+ Customer John Doe (email: johndoe@gmail.com) is requesting a refund for a double charge for a TV purchase made a week ago. The customer shows two charges of $349.99 on their bank statement.
+ Routed to: sale
+ Summary:
+ Customer johndoe@gmail.com is interested in purchasing a new monitor and wants to know about discounts. There is a monitor available at half off for $80 and the customer is eligible for an additional 10% off for loyalty rewards.
+ Routed to: support
+ Summary:
+ Customer John Doe (johndoe@gmail.com) forgot their site password and is locked out of their email. They are asking for alternative ways to verify their identity. Security question: Childhood Pet Name, Answer: Piddles.
+
+
+
+
+
+- IT Help Desk: Have the LLM determine whether the user request is level 1, 2, or 3 support and route accordingly.
+- Software-as-a-Service (SaaS) Companies: A question about how to use a specific feature might be routed to the product support team, while an issue with account access could be sent to the account management team.
+
+
+
+When adapting this recipe to your specific use-case, consider the following:
+
+ - Update the `response_model` to more accurately reflect your use-case.
+ - Implement Pydantic `ValidationError` and Tenacity `retry` to improve reliability and accuracy.
+ - Evaluate the quality of extraction by using another LLM to verify classification accuracy.
+ - Use a local model like Ollama to protect company or other sensitive data, as shown in the sketch below.
+
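+Below is a minimal sketch of the local-model consideration, reusing the OpenAI-compatible Ollama pattern from the PII scrubbing recipe so transcripts never leave your infrastructure; the model name and endpoint are assumptions about your local setup:
+
+
+```python
+from openai import OpenAI
+
+
+@openai.call(
+    model="llama3.1",
+    client=OpenAI(base_url="http://localhost:11434/v1", api_key="ollama"),
+    response_model=CallClassification,
+    json_mode=True,
+)
+@prompt_template(
+    """
+    Classify the following transcript between a customer and the service bot:
+    {transcript}
+    """
+)
+def classify_transcript_local(transcript: str): ...
+```
+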
diff --git a/cloud/content/docs/v1/guides/more-advanced/text-classification.mdx b/cloud/content/docs/v1/guides/more-advanced/text-classification.mdx
new file mode 100644
index 0000000000..bf46bba148
--- /dev/null
+++ b/cloud/content/docs/v1/guides/more-advanced/text-classification.mdx
@@ -0,0 +1,241 @@
+---
+title: Text Classification
+description: Implement binary and multi-class text classification using LLMs, with examples for spam detection and sentiment analysis that outperform traditional machine learning methods.
+---
+
+# Text Classification
+
+In this recipe we'll explore using Mirascope to implement binary classification, multi-class classification, and various other extensions of these classification techniques — specifically using Python and the OpenAI API. We will also compare these solutions with more traditional machine learning and Natural Language Processing (NLP) techniques.
+
+
+
+
+
+
+
+ Text classification is a fundamental classification problem and NLP task that involves categorizing text documents into predefined classes or categories. Historically this has required training text classifiers through more traditional machine learning methods. Large Language Models (LLMs) have revolutionized this field, making sophisticated classification tasks accessible through simple API calls and thoughtful prompt engineering.
+
+
+## Setup
+
+Let's start by installing Mirascope and its dependencies:
+
+
+```python
+!pip install "mirascope[openai]"
+```
+
+
+```python
+import os
+
+os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"
+# Set the appropriate API key for the provider you're using
+```
+
+## Binary Classification: Spam Detection
+
+Binary classification involves categorizing text into one of two classes. We'll demonstrate this by creating a spam detector that classifies text as either spam or not spam.
+
+For binary classification, we can extract a boolean value by setting `response_model=bool` and prompting the model to classify the text:
+
+
+
+```python
+from mirascope.core import openai, prompt_template
+
+
+@openai.call("gpt-4o-mini", response_model=bool)
+def classify_spam(text: str) -> str:
+ return f"Classify the following text as spam or not spam: {text}"
+
+
+text = "Would you like to buy some cheap viagra?"
+label = classify_spam(text)
+assert label is True # This text is classified as spam
+
+text = "Hi! It was great meeting you today. Let's stay in touch!"
+label = classify_spam(text)
+assert label is False # This text is classified as not spam
+```
+
+## Multi-Class Classification: Sentiment Analysis
+
+Multi-class classification extends the concept to scenarios where we need to categorize text into one of several classes. We'll demonstrate this with a sentiment analysis task.
+
+First, we define an `Enum` to represent our sentiment labels:
+
+
+
+```python
+from enum import Enum
+
+
+class Sentiment(Enum):
+ NEGATIVE = "negative"
+ NEUTRAL = "neutral"
+ POSITIVE = "positive"
+```
+
+Then, we set `response_model=Sentiment` to analyze the sentiment of the given text:
+
+
+```python
+from mirascope.core import openai
+
+
+@openai.call("gpt-4o-mini", response_model=Sentiment)
+def classify_sentiment(text: str) -> str:
+ return f"Classify the sentiment of the following text: {text}"
+
+
+text = "I hate this product. It's terrible."
+label = classify_sentiment(text)
+assert label == Sentiment.NEGATIVE
+
+text = "I don't feel strongly about this product."
+label = classify_sentiment(text)
+assert label == Sentiment.NEUTRAL
+
+text = "I love this product. It's amazing!"
+label = classify_sentiment(text)
+assert label == Sentiment.POSITIVE
+```
+
+## Classification with Reasoning
+
+So far we've demonstrated using simple types like `bool` and `Enum` for classification, but we can extend this approach using Pydantic's `BaseModel` class to extract additional information beyond just the classification label.
+
+For example, we can gain insight into the LLM's reasoning for the classified label simply by including a reasoning field in our response model and updating the prompt:
+
+
+
+```python
+from enum import Enum
+
+from mirascope.core import openai, prompt_template
+from pydantic import BaseModel
+
+
+class Sentiment(Enum):
+ NEGATIVE = "negative"
+ NEUTRAL = "neutral"
+ POSITIVE = "positive"
+
+
+class SentimentWithReasoning(BaseModel):
+ reasoning: str
+ sentiment: Sentiment
+
+
+@openai.call("gpt-4o-mini", response_model=SentimentWithReasoning)
+@prompt_template(
+ """
+ Classify the sentiment of the following text: {text}.
+ Explain your reasoning for the classified sentiment.
+ """
+)
+def classify_sentiment_with_reasoning(text: str): ...
+
+
+text = "I would recommend this product if it were cheaper..."
+response = classify_sentiment_with_reasoning(text)
+print(f"Sentiment: {response.sentiment}")
+print(f"Reasoning: {response.reasoning}")
+```
+
+ Sentiment: Sentiment.NEUTRAL
+ Reasoning: The text expresses a positive sentiment towards the product because the speaker is willing to recommend it. However, the mention of 'if it were cheaper' introduces a condition that makes the overall sentiment appear somewhat negative, as it suggests dissatisfaction with the current price. Therefore, the sentiment can be classified as neutral, as it acknowledges both a positive recommendation but also a negative aspect regarding pricing.
+
+
+## Handling Uncertainty
+
+When dealing with LLMs for classification tasks, it's important to account for cases where the model might be uncertain about its prediction. We can modify our approach to include a certainty score and handle cases where the model's confidence is below a certain threshold.
+
+
+
+```python
+from pydantic import BaseModel, Field
+
+
+class SentimentWithCertainty(BaseModel):
+    sentiment: Sentiment
+    reasoning: str
+    certainty: float = Field(..., ge=0, le=1)
+
+
+@openai.call("gpt-4o-mini", response_model=SentimentWithCertainty)
+@prompt_template(
+ """
+ Classify the sentiment of the following text: {text}
+ Explain your reasoning for the classified sentiment.
+ Also provide a certainty score between 0 and 1, where 1 is absolute certainty.
+ """
+)
+def classify_sentiment_with_certainty(text: str): ...
+
+
+text = "This is the best product ever. And the worst."
+response = classify_sentiment_with_certainty(text)
+if response.certainty > 0.8:
+ print(f"Sentiment: {response.sentiment}")
+ print(f"Reasoning: {response.reasoning}")
+ print(f"Certainty: {response.certainty}")
+else:
+ print("The model is not certain enough about the classification.")
+```
+
+ The model is not certain enough about the classification.
+
+
+
+
+- Content Moderation: Classify user-generated content as appropriate, inappropriate, or requiring manual review.
+- Customer Support Triage: Categorize incoming support tickets by urgency or department.
+- News Article Categorization: Classify news articles into topics (e.g. politics, sports, technology, etc.)
+- Intent Recognition: Identify user intent in chatbot interactions (e.g. make a purchase, ask for help, etc.)
+- Email Classification: Sort emails into categories like personal, work-related, promotional, or urgent.
+
+
+
+When adapting this recipe to your specific use-case, consider the following:
+
+- Refine your prompts to provide clear instructions and relevant context for your specific classification task.
+- Experiment with different model providers and versions to balance accuracy and speed.
+- Implement error handling and fallback mechanisms for cases where the model's classification is uncertain.
+- Consider using a combination of classifiers for more complex categorization tasks, as shown in the sketch below.
+
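+As a minimal sketch of combining classifiers, you could first filter spam with `classify_spam` and only run `classify_sentiment` on messages that pass; the `triage_message` helper is illustrative:
+
+
+```python
+def triage_message(text: str) -> Sentiment | None:
+    # Drop spam before spending a second call on sentiment analysis.
+    if classify_spam(text):
+        return None
+    return classify_sentiment(text)
+
+
+print(triage_message("Would you like to buy some cheap viagra?"))  # None
+print(triage_message("I love this product. It's amazing!"))  # Sentiment.POSITIVE
+```
+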
+## Comparison with Traditional Machine Learning Models
+
+Training text classification models requires a much more involved workflow:
+
+- Preprocessing:
+ - Read in data, clean and standardize it, and split it into training, validation, and test datasets
+- Feature Extraction:
+ - Basic: bag of words, TF-IDF
+ - Advanced: word embeddings, contextual embeddings
+- Classification Algorithm / Machine Learning Algorithm:
+ - Basic: Naive Bayes, logistic regression, linear classifiers
+ - Advanced: Neural networks, transformers (e.g. BERT)
+- Model Training:
+ - Train on training data and validate on validation data, adjusting batch size and epochs.
+ - Things like activation layers and optimizers can greatly impact the quality of the final trained model
+- Model Evaluation:
+ - Evaluate model quality on the test dataset using metrics such as F1-score, recall, precision, accuracy — whichever metric best suits your use-case
+
+Many frameworks such as TensorFlow and PyTorch make implementing such workflows easier, but it is still far more involved than the approach we showed at the beginning using Mirascope.
+
+If you’re interested in taking a deeper dive into this more traditional approach, the [TensorFlow IMDB Text Classification](https://www.tensorflow.org/tutorials/keras/text_classification_with_hub) tutorial is a great place to start.
+
+
+
diff --git a/cloud/content/docs/v1/guides/more-advanced/text-summarization.mdx b/cloud/content/docs/v1/guides/more-advanced/text-summarization.mdx
new file mode 100644
index 0000000000..9523d3ff96
--- /dev/null
+++ b/cloud/content/docs/v1/guides/more-advanced/text-summarization.mdx
@@ -0,0 +1,308 @@
+---
+title: Text Summarization
+description: Learn techniques for text summarization with LLMs, from simple prompts to more advanced approaches using outlines and segmentation for more coherent and comprehensive summaries.
+---
+
+# Text Summarization
+
+In this recipe, we show some techniques to improve an LLM’s ability to summarize a long text from simple (e.g. `"Summarize this text: {text}..."`) to more complex prompting and chaining techniques. We will use OpenAI’s GPT-4o-mini model (128k input token limit), but you can use any model you’d like to implement these summarization techniques, as long as they have a large context window.
+
+
+
+
+
+
+
+ Large Language Models (LLMs) have revolutionized text summarization by enabling more coherent and contextually aware abstractive summaries. Unlike earlier models that primarily extracted or rearranged existing sentences, LLMs can generate novel text that captures the essence of longer documents while maintaining readability and factual accuracy.
+
+
+
+## Setup
+
+Let's start by installing Mirascope and its dependencies:
+
+
+```python
+!pip install "mirascope[openai]"
+```
+
+
+```python
+import os
+
+os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"
+# Set the appropriate API key for the provider you're using
+```
+
+## Simple Call
+
+For our examples, we'll use the [Wikipedia article on Python](https://en.wikipedia.org/wiki/Python_(programming_language)). We will be referring to this article as `wikipedia-python.html`.
+
+The command below will download the article to your local machine using the `curl` command. If you don't have `curl` installed, you can download the article manually from the link above and save it as `wikipedia-python.html`.
+
+
+```python
+!curl "https://en.wikipedia.org/wiki/Python_(programming_language)" -o wikipedia-python.html
+```
+
+ % Total % Received % Xferd Average Speed Time Time Time Current
+ Dload Upload Total Spent Left Speed
+ 100 651k 100 651k 0 0 506k 0 0:00:01 0:00:01 --:--:-- 506k
+
+
+Install beautifulsoup4 to parse the HTML file.
+
+
+```python
+!pip install beautifulsoup4
+```
+
+We will be using a simple call as our baseline:
+
+
+```python
+from bs4 import BeautifulSoup
+from mirascope.core import openai, prompt_template
+
+
+def get_text_from_html(file_path: str) -> str:
+ with open(file_path) as file:
+ html_text = file.read()
+ return BeautifulSoup(html_text, "html.parser").get_text()
+
+
+text = get_text_from_html("wikipedia-python.html")
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+ """
+ Summarize the following text:
+ {text}
+ """
+)
+def simple_summarize_text(text: str): ...
+
+
+print(simple_summarize_text(text))
+```
+
+ Python is a high-level, general-purpose programming language designed for code readability and simplicity. Created by Guido van Rossum and first released in 1991, Python supports multiple programming paradigms, including procedural, object-oriented, and functional programming. Its design emphasizes dynamic typing, garbage collection, and a comprehensive standard library, often referred to as having a "batteries included" philosophy.
+
+ Key milestones in Python's history include the release of Python 2.0 in 2000 and Python 3.0 in 2008, which introduced major changes and was not fully backwards compatible with Python 2.x. The latter version aimed to improve language simplicity and efficiency, while the support for Python 2.7 officially ended in 2020.
+
+ Python's unique syntax utilizes significant whitespace for block delimiters, avoids complex punctuation, and provides an intuitive style suited for beginner and expert programmers alike.
+
+ Python boasts a vast ecosystem with numerous libraries and frameworks that extend its capabilities, particularly in areas like web development, data analysis, scientific computing, and machine learning, making it a popular choice among developers globally. It is consistently ranked among the top programming languages due to its versatility and community support. The language is influenced by numerous predecessors and has significantly impacted the development of many new languages.
+
+ Recent versions have focused on performance boosts, improved error reporting, and maintaining compatibility with prior code while moving forward with enhancements. Overall, Python's design philosophy, rich features, and strong community have contributed to its widespread adoption across various domains.
+
+
+LLMs excel at summarizing shorter texts, but they often struggle with longer documents, failing to capture the overall structure while sometimes including minor, irrelevant details that detract from the summary's coherence and relevance.
+
+One simple update we can make is to improve our prompt by providing an initial outline of the text then adhere to this outline to create its summary.
+
+## Simple Call with Outline
+
+This prompt engineering technique is an example of [Chain of Thought](https://www.promptingguide.ai/techniques/cot) (CoT), forcing the model to write out its thinking process. It also involves little work and can be done by modifying the text of the single call. With an outline, the summary is less likely to lose the general structure of the text.
+
+
+
+```python
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+ """
+ Summarize the following text by first creating an outline with a nested structure,
+ listing all major topics in the text with subpoints for each of the major points.
+ The number of subpoints for each topic should correspond to the length and
+ importance of the major point within the text. Then create the actual summary using
+ the outline.
+ {text}
+ """
+)
+def summarize_text_with_outline(text: str): ...
+
+
+print(summarize_text_with_outline(text))
+```
+
+ ### Outline
+
+ 1. **Introduction**
+ - Overview of Python
+ - Characteristics (high-level, general-purpose, dynamic typing)
+ - Popularity and usage
+
+ 2. **History**
+ - Development by Guido van Rossum
+ - Key milestones (Python 0.9.0, 2.0, 3.0)
+ - Transition from Python 2 to 3
+ - Release management and versions
+
+ 3. **Design Philosophy and Features**
+ - Multi-paradigm programming language
+ - Emphasis on readability and simplicity
+ - The Zen of Python
+ - Extensibility and modularity
+ - Language clarity vs. functionality
+
+ 4. **Syntax and Semantics**
+ - Code readability features
+ - Usage of indentation for block delimitation
+ - Types of statements and control flow constructs
+ - Expressions and operator usage
+
+ 5. **Programming Examples**
+ - Simple example programs demonstrating Python features
+
+ 6. **Libraries**
+ - Overview of Python’s standard library
+ - Third-party packages and Python Package Index (PyPI)
+ - Use cases in various domains (web, data analysis, etc.)
+
+ 7. **Development Environments**
+ - Integrated Development Environment (IDE) options
+ - Other programming shells
+
+ 8. **Implementations**
+ - Overview of CPython as the reference implementation
+ - Other adaptations like PyPy, MicroPython, etc.
+ - Cross-compilation to other languages
+
+ 9. **Development Process**
+ - Python Enhancement Proposal (PEP) process
+ - Community input and version management
+
+ 10. **Popularity**
+ - Rankings in programming language communities
+ - Major organizations using Python
+
+ 11. **Uses of Python**
+ - Application in web development, data science, machine learning, etc.
+ - Adoption in various industries and problem domains
+
+ 12. **Languages Influenced by Python**
+ - Overview of languages that took inspiration from Python’s design
+
+ ### Summary
+
+ Python is a high-level, multi-paradigm programming language renowned for its readability and extensive standard library. Developed by Guido van Rossum, Python was first released in 1991, evolving significantly with the introduction of versions 2.0 and 3.0, transitioning away from Python 2's legacy features.
+
+ The design philosophy of Python emphasizes clarity and simplicity, captured in the guiding principles known as the Zen of Python, which advocate for beautiful, explicit, and straightforward code while also allowing for extensibility through modules. This modularity has made Python popular for adding programmable interfaces to applications.
+
+ Python’s syntax is intentionally designed to enhance readability, utilizing indentation to define code blocks, a practice that differentiates it from many other languages which rely on braces or keywords. The language supports various programming constructs including statements for control flow, exception handling, and function definitions.
+
+ Moreover, Python offers a robust standard library known as "batteries included," and hosts a thriving ecosystem of third-party packages on PyPI, catering to diverse applications ranging from web development to data analytics and machine learning.
+
+ Various Integrated Development Environments (IDEs) and shells facilitate Python development, while CPython serves as the primary reference implementation, with alternatives like PyPy enhancing performance through just-in-time compilation.
+
+ Development of Python is community-driven through the Python Enhancement Proposal (PEP) process, which encourages input on new features and code standards. Python consistently ranks among the most popular programming languages and is widely adopted in major industries, influencing numerous other programming languages with its design principles.
+
+
+By providing an outline, we enable the LLM to better adhere to the original article's structure, resulting in a more coherent and representative summary.
+
+For our next iteration, we'll explore segmenting the document by topic, requesting summaries for each section, and then composing a comprehensive summary using both the outline and these individual segment summaries.
+
+## Segment then Summarize
+
+This more comprehensive approach not only ensures that the model adheres to the original text's structure but also naturally produces a summary whose length is proportional to the source document, as we combine summaries from each subtopic.
+
+To apply this technique, we create a `SegmentedSummary` Pydantic `BaseModel` to contain the outline and section summaries, and extract it in a chained call from the original `summarize_text()` call:
+
+
+
+```python
+from mirascope.core import openai, prompt_template
+from pydantic import BaseModel, Field
+
+
+class SegmentedSummary(BaseModel):
+ outline: str = Field(
+ ...,
+ description="A high level outline of major sections by topic in the text",
+ )
+ section_summaries: list[str] = Field(
+ ..., description="A list of detailed summaries for each section in the outline"
+ )
+
+
+@openai.call(model="gpt-4o", response_model=SegmentedSummary)
+@prompt_template(
+ """
+ Extract a high level outline and summary for each section of the following text:
+ {text}
+ """
+)
+def summarize_by_section(text: str): ...
+
+
+@openai.call(model="gpt-4o")
+@prompt_template(
+ """
+ The following contains a high level outline of a text along with summaries of a
+ text that has been segmented by topic. Create a composite, larger summary by putting
+ together the summaries according to the outline.
+ Outline:
+ {outline}
+
+ Summaries:
+ {summaries}
+ """
+)
+def summarize_text_chaining(text: str) -> openai.OpenAIDynamicConfig:
+ segmented_summary = summarize_by_section(text)
+ return {
+ "computed_fields": {
+ "outline": segmented_summary.outline,
+ "summaries": segmented_summary.section_summaries,
+ }
+ }
+
+
+print(summarize_text_chaining(text))
+```
+
+ Python was created in the late 1980s by Guido van Rossum as a successor to the ABC programming language. The first version, Python 0.9.0, was released in 1991. Major versions like Python 2.0 in 2000 and Python 3.0 in 2008 introduced significant changes. Python 2.7.18 was the last release of Python 2, while Python 3.x continues to evolve.
+
+ Python is designed to emphasize code readability, using significant indentation. It supports multiple paradigms, including procedural, object-oriented, and functional programming. Its comprehensive standard library and dynamic typing are notable features.
+
+ Python syntax uses indentation to define blocks, rather than curly braces or keywords. It includes various statements and control flows like assignment, if, for, while, try, except, and more. The language allows dynamic typing and has a robust set of built-in methods and operators for handling different data types.
+
+ A Hello, World! program and a factorial calculation program demonstrate Python's straightforward syntax and readability. The language is known for its simplicity and ease of use, making it accessible for beginners and powerful for experienced programmers.
+
+ The standard library in Python is vast, offering modules for internet protocols, string operations, web services tools, and operating system interfaces. The Python Package Index (PyPI) hosts a large collection of third-party modules for various tasks.
+
+ Python supports several integrated development environments (IDEs) like PyCharm and Jupyter Notebook. It also offers basic tools like IDLE for beginners. Many advanced IDEs provide additional features like auto-completion, debugging, and syntax highlighting.
+
+ CPython is the reference implementation, but there are others like PyPy, Jython, and IronPython. Variants such as MicroPython are designed for microcontrollers. Some older implementations like Unladen Swallow are no longer actively maintained.
+
+ Python's development is guided by a community-driven process, primarily through Python Enhancement Proposals (PEPs). The language's development is led by the Python Software Foundation and a steering council elected by core developers.
+
+ Python offers tools like Sphinx and pydoc for generating API documentation. These tools help in creating comprehensive and accessible documentation for various Python projects.
+
+ Named after the British comedy group Monty Python, the language includes various playful references to the group in its documentation and culture. The term Pythonic is often used to describe idiomatic Python code.
+
+ Python is highly popular, consistently ranking among the top programming languages. It is widely used by major organizations like Google, NASA, and Facebook, and has a strong presence in the scientific and data analysis communities.
+
+ Python is versatile, used for web development, scientific computing, data analysis, artificial intelligence, and more. Frameworks like Django and Flask support web development, while libraries like NumPy and SciPy enable scientific research.
+
+ Python has influenced many programming languages, including Julia, Swift, and Go, borrowing its design philosophy and syntax to various extents.
+
+
+
+
+- Meeting Notes: Convert meeting recordings from speech to text, then summarize the transcript for reference.
+- Education: Create study guides or slides from textbook material using summaries.
+- Productivity: Summarize email chains, Slack threads, and Word documents for your day-to-day work.
+
+
+
+When adapting this recipe to your specific use-case, consider the following:
+ - Refine your prompts to provide clear instructions and relevant context for text summarization.
+ - Experiment with different model providers and versions to balance quality and speed.
+ - Provide a feedback loop: use an LLM to evaluate the quality of the summary against defined criteria and feed that evaluation back into the prompt for refinement, as sketched below.
+
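+Below is a minimal sketch of such a feedback loop, assuming the `summarize_text_chaining` call and `text` defined above; the `SummaryEvaluation` model, the coverage threshold, and the prompts are illustrative rather than prescriptive.
+
+```python
+from mirascope.core import openai, prompt_template
+from pydantic import BaseModel, Field
+
+
+class SummaryEvaluation(BaseModel):
+    coverage: float = Field(
+        ..., description="How well the summary covers the source text, from 0 to 10."
+    )
+    feedback: str = Field(
+        ..., description="Concrete suggestions for improving the summary."
+    )
+
+
+@openai.call(model="gpt-4o-mini", response_model=SummaryEvaluation)
+@prompt_template(
+    """
+    Evaluate the following summary of a text and provide concrete feedback.
+    Text: {text}
+    Summary: {summary}
+    """
+)
+def evaluate_summary(text: str, summary: str): ...
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+    """
+    Rewrite the summary of the following text, addressing the feedback.
+    Text: {text}
+    Summary: {summary}
+    Feedback: {feedback}
+    """
+)
+def refine_summary(text: str, summary: str, feedback: str): ...
+
+
+# Single refinement pass; loop until `coverage` clears the threshold if desired.
+summary = summarize_text_chaining(text).content
+evaluation = evaluate_summary(text, summary)
+if evaluation.coverage < 8:
+    summary = refine_summary(text, summary, evaluation.feedback).content
+print(summary)
+```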
+
diff --git a/cloud/content/docs/v1/guides/more-advanced/text-translation.mdx b/cloud/content/docs/v1/guides/more-advanced/text-translation.mdx
new file mode 100644
index 0000000000..2358098907
--- /dev/null
+++ b/cloud/content/docs/v1/guides/more-advanced/text-translation.mdx
@@ -0,0 +1,630 @@
+---
+title: Text Translation
+description: Build advanced language translation systems. This guide demonstrates techniques for improving translation quality through parametrization and multi-provider approaches.
+---
+
+# Text Translation
+
+This guide introduces advanced techniques for translating from English to Japanese. Due to significant differences in their origins, grammatical structures, and cultural backgrounds, generating natural Japanese through mechanical translation is extremely challenging. The methods presented here are applicable not only to English-Japanese translation but also to translation between any structurally different languages.
+
+We will explore innovative approaches to improve translation quality using Large Language Models (LLMs). Specifically, we will introduce three techniques:
+
+1. Parametrized Translation
+2. Multi-Pass Translation
+3. Multi-Provider Translation
+
+These techniques can be applied to various LLMs, including OpenAI's GPT-4, Anthropic's Claude, and Google's Gemini.
+
+
+
+
+
+
+Translation between English and Japanese presents several unique challenges:
+
+- Context Dependency: Japanese is a high-context language where much information is conveyed implicitly.
+- Grammatical Structure Differences: Japanese follows an SOV (subject-object-verb) structure, while English follows an SVO (subject-verb-object) structure.
+- Subject Handling: In Japanese, it's common to omit the subject when it's clear from context.
+- Honorifics and Polite Expressions: Japanese has multiple levels of honorifics that need to be appropriately chosen based on social context.
+- Idiomatic Expressions and Cultural Nuances: Both languages have unique idioms and culturally rooted expressions.
+
+
+
+
+## Setup
+
+Let's start by installing Mirascope and its dependencies:
+
+
+```python
+!pip install "mirascope[openai, anthropic, gemini]"
+```
+
+
+```python
+import os
+
+os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"
+os.environ["ANTHROPIC_API_KEY"] = "YOUR_API_KEY"
+os.environ["GOOGLE_API_KEY"] = "YOUR_API_KEY"
+# Set the appropriate API key for the provider you're using
+```
+
+## Translation Techniques Leveraging LLMs
+
+### Parametrized Translation
+
+Parametrized translation introduces parameters such as tone and target audience into the translation process to generate more appropriate and context-aware translations.
+
+
+```python
+import asyncio
+from collections.abc import Callable
+from enum import StrEnum
+
+from mirascope.core import BasePrompt, openai, prompt_template
+
+
+class Audience(StrEnum):
+ general = "general"
+ professional = "professional"
+ academic = "academic"
+ friendly = "friendly"
+ formal = "formal"
+
+
+class Tone(StrEnum):
+ neutral = "neutral"
+ positive = "positive"
+ negative = "negative"
+ professional = "professional"
+ casual = "casual"
+
+
+@prompt_template(
+ """
+ SYSTEM:
+ You are a professional translator who is a native Japanese speaker.
+ Translate the English text into natural Japanese without changing the content.
+ Please make sure that the text is easy to read for everyone by using appropriate paragraphs so that it does not become difficult to read.
+
+ Please consider the following parameters in your translation:
+
+ tone: {tone}
+ The tone can be one of the following:
+ - neutral: Maintain a balanced and impartial tone
+ - positive: Use upbeat and optimistic language
+ - negative: Convey a more critical or pessimistic view
+ - professional: Use formal and business-appropriate language
+ - casual: Use informal, conversational language
+
+ audience: {audience}
+ The audience can be one of the following:
+ - general: For the general public, use common terms and clear explanations
+ - professional: For industry professionals, use appropriate jargon and technical terms
+ - academic: For scholarly contexts, use formal language and cite sources if necessary
+ - friendly: For casual, familiar contexts, use warm and approachable language
+ - formal: For official or ceremonial contexts, use polite and respectful language
+
+ Adjust your translation style based on the specified tone and audience to ensure the message is conveyed appropriately.
+
+ USER:
+ text: {text}
+ """
+)
+class ParametrizedTranslatePrompt(BasePrompt):
+ tone: Tone
+ audience: Audience
+ text: str
+
+ async def translate(self, call: Callable, model: str) -> str:
+ response = await self.run_async(call(model))
+ return response.content
+
+
+text = """
+The old codger, a real bootstrapper, had been burning the candle at both ends trying to make his pie-in-the-sky business idea fly. He'd been spinning his wheels for months, barking up the wrong tree with his half-baked marketing schemes.
+"""
+prompt = ParametrizedTranslatePrompt(
+ text=text, tone=Tone.negative, audience=Audience.general
+)
+prompt_translation = prompt.translate(call=openai.call, model="gpt-4o-mini")
+
+# In Jupyter Notebook, use the following code to run async functions:
+parametrized_translation = await prompt_translation
+
+# In Python script, use the following code:
+# parametrized_translation = asyncio.run(prompt_translation)
+
+print(f"Parametrized with translation: {parametrized_translation}")
+```
+
+ Parametrized with translation: その老いぼれは、本当に根性のある人だったが、自分の夢のビジネスアイデアを実現させようと、無理をして頑張り続けていた。何ヶ月も無駄に過ごし、出来の悪いマーケティング計画に手をこまねいて、全く的外れな努力をしていた。
+
+
+
+This technique allows translators to adjust translations according to the target audience and required tone.
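+
+For example, here is a quick sketch of rerunning the same passage with a more formal parameter combination (assuming the definitions from the cell above and notebook-style top-level `await`):
+
+```python
+formal_prompt = ParametrizedTranslatePrompt(
+    text=text, tone=Tone.professional, audience=Audience.formal
+)
+formal_translation = await formal_prompt.translate(call=openai.call, model="gpt-4o-mini")
+print(f"Professional/formal translation: {formal_translation}")
+```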
+
+### Multi-Pass Translation
+
+Multi-Pass translation involves repeating the same translation process multiple times, evaluating and improving the translation at each pass.
+
+
+
+```python
+from collections.abc import Callable
+from contextlib import contextmanager
+from enum import StrEnum
+
+from mirascope.core import BasePrompt, openai, prompt_template
+from pydantic import BaseModel, Field
+
+
+class Audience(StrEnum):
+ general = "general"
+ professional = "professional"
+ academic = "academic"
+ friendly = "friendly"
+ formal = "formal"
+
+
+class Tone(StrEnum):
+ neutral = "neutral"
+ positive = "positive"
+ negative = "negative"
+ professional = "professional"
+ casual = "casual"
+
+
+@prompt_template(
+ """
+ SYSTEM:
+ You are a professional translator who is a native Japanese speaker.
+ Translate the English text into natural Japanese without changing the content.
+ Please make sure that the text is easy to read for everyone by using appropriate paragraphs so that it does not become difficult to read.
+
+ Please consider the following parameters in your translation:
+
+ tone: {tone}
+ The tone can be one of the following:
+ - neutral: Maintain a balanced and impartial tone
+ - positive: Use upbeat and optimistic language
+ - negative: Convey a more critical or pessimistic view
+ - professional: Use formal and business-appropriate language
+ - casual: Use informal, conversational language
+
+ audience: {audience}
+ The audience can be one of the following:
+ - general: For the general public, use common terms and clear explanations
+ - professional: For industry professionals, use appropriate jargon and technical terms
+ - academic: For scholarly contexts, use formal language and cite sources if necessary
+ - friendly: For casual, familiar contexts, use warm and approachable language
+ - formal: For official or ceremonial contexts, use polite and respectful language
+
+ Adjust your translation style based on the specified tone and audience to ensure the message is conveyed appropriately.
+
+ USER:
+ text: {text}
+ """
+)
+class ParametrizedTranslatePrompt(BasePrompt):
+ tone: Tone
+ audience: Audience
+ text: str
+
+ async def translate(self, call: Callable, model: str) -> str:
+ response = await self.run_async(call(model))
+ return response.content
+
+
+class Evaluation(BaseModel):
+ clarity: float = Field(
+ ..., description="The clarity of the translation, ranging from 0 to 10."
+ )
+ naturalness: float = Field(
+ ..., description="The naturalness of the translation, ranging from 0 to 10."
+ )
+ consistency: float = Field(
+ ..., description="The consistency of the translation, ranging from 0 to 10."
+ )
+ grammatical_correctness: float = Field(
+ ...,
+ description="The grammatical correctness of the translation, ranging from 0 to 10.",
+ )
+ lexical_appropriateness: float = Field(
+ ...,
+ description="The lexical appropriateness of the translation, ranging from 0 to 10.",
+ )
+ subject_clarity: float = Field(
+ ...,
+ description="The clarity of the subject in the translation, ranging from 0 to 10.",
+ )
+ word_order_naturalness: float = Field(
+ ...,
+ description="Maintenance of natural Japanese word order (0 to 10). Evaluates whether the English word order is not directly applied.",
+ )
+ subject_handling: float = Field(
+ ...,
+ description="Appropriate handling of subjects (0 to 10). Evaluates whether unnecessary subjects are avoided and appropriately omitted.",
+ )
+ modifier_placement: float = Field(
+ ...,
+ description="Appropriate placement of modifiers (0 to 10). Evaluates whether natural Japanese modification relationships are maintained.",
+ )
+ sentence_length_appropriateness: float = Field(
+ ...,
+ description="Appropriateness of sentence length (0 to 10). Evaluates whether long sentences are properly divided when necessary.",
+ )
+ context_dependent_expression: float = Field(
+ ...,
+ description="Appropriateness of context-dependent expressions (0 to 10). Evaluates whether the level of honorifics and politeness is suitable for the target audience and situation.",
+ )
+ implicit_meaning_preservation: float = Field(
+ ...,
+ description="Preservation of implicit meanings (0 to 10). Evaluates whether implicit meanings and cultural nuances from the original English text are appropriately conveyed.",
+ )
+
+
+@prompt_template(
+ """
+ SYSTEM:
+ You are a professional translator who is a native Japanese speaker.
+ Please evaluate the following translation and provide feedback on how it can be improved.
+
+ USER:
+ original_text: {original_text}
+ translation_text: {translation_text}
+ """
+)
+class EvaluateTranslationPrompt(BasePrompt):
+ original_text: str
+ translation_text: str
+
+ async def evaluate(self, call: Callable, model: str) -> Evaluation:
+ response = await self.run_async(call(model, response_model=Evaluation))
+ return response
+
+
+@prompt_template("""
+ SYSTEM:
+ Your task is to improve the quality of a translation from English into Japanese.
+ You will be provided with the original text, the translated text, and an evaluation of the translation.
+ All evaluation criteria will be scores between 0 and 10.
+
+ The translation you are improving was intended to adhere to the desired tone and audience:
+ tone: {tone}
+ audience: {audience}
+
+    Your improved translation MUST also adhere to this desired tone and audience.
+
+ Output ONLY the improved translation.
+
+ USER:
+ original text: {original_text}
+ translation: {translation_text}
+ evaluation: {evaluation}
+""")
+class ImproveTranslationPrompt(BasePrompt):
+ tone: Tone
+ audience: Audience
+ original_text: str
+ translation_text: str
+ evaluation: Evaluation
+
+ async def improve_translation(self, call: Callable, model: str) -> str:
+ response = await self.run_async(call(model))
+ return response.content
+
+
+text = """
+The old codger, a real bootstrapper, had been burning the candle at both ends trying to make his pie-in-the-sky business idea fly. He'd been spinning his wheels for months, barking up the wrong tree with his half-baked marketing schemes.
+"""
+
+
+@contextmanager
+def print_progress_message(model: str, count: int):
+ print(f"{model=} Multi-pass translation start times: {count}")
+ yield
+ print(f"{model=} Multi-pass translation end times: {count}")
+
+
+async def multi_pass_translation(
+ original_text: str,
+ tone: Tone,
+ audience: Audience,
+ pass_count: int,
+ call: Callable,
+ model: str,
+) -> str:
+ with print_progress_message(model, 1):
+ parametrized_translate_prompt = ParametrizedTranslatePrompt(
+ text=original_text, tone=tone, audience=audience
+ )
+ translation_text = await parametrized_translate_prompt.translate(call, model)
+
+ for current_count in range(2, pass_count + 1):
+ with print_progress_message(model, current_count):
+ evaluate_translation_prompt = EvaluateTranslationPrompt(
+ original_text=original_text, translation_text=translation_text
+ )
+ evaluation = await evaluate_translation_prompt.evaluate(call, model)
+ improve_translation_prompt = ImproveTranslationPrompt(
+ original_text=original_text,
+ translation_text=translation_text,
+ tone=tone,
+ audience=audience,
+ evaluation=evaluation,
+ )
+ translation_text = await improve_translation_prompt.improve_translation(
+ call, model
+ )
+ return translation_text
+
+
+multi_pass_translation_task = multi_pass_translation(
+ text,
+ tone=Tone.casual,
+ audience=Audience.general,
+ pass_count=3,
+ call=openai.call,
+ model="gpt-4o-mini",
+)
+
+# In Jupyter Notebook, use the following code to run async functions:
+multi_pass_result = await multi_pass_translation_task
+
+# In Python script, use the following code:
+# multi_pass_result = asyncio.run(multi_pass_translation_task)
+
+print(f"Multi-pass translation result: {multi_pass_result}")
+```
+
+ model='gpt-4o-mini' Multi-pass translation start times: 1
+ model='gpt-4o-mini' Multi-pass translation end times: 1
+ model='gpt-4o-mini' Multi-pass translation start times: 2
+ model='gpt-4o-mini' Multi-pass translation end times: 2
+ model='gpt-4o-mini' Multi-pass translation start times: 3
+ model='gpt-4o-mini' Multi-pass translation end times: 3
+ Multi-pass translation result: そのおじいさんは、ほんとうの自力で成功を目指して、昼も夜も頑張っていたんだ。夢のようなビジネスアイデアを実現しようと奮闘していたけど、何ヶ月も無駄に時間を使っていただけだったんだ。ちょっと変なマーケティングプランに手を出しちゃって、完全に間違った道に進んでいたんだよ。
+
+
+
+This technique allows for gradual improvement in various aspects such as grammar, vocabulary, and style, resulting in more natural and accurate translations.
+
+
+### Multi-Provider Translation
+
+Multi-Provider translation involves using multiple LLM providers in parallel and comparing their results.
+
+If you're running in a Jupyter Notebook, install the additional dependency using the following command:
+
+
+```python
+!pip install ipywidgets # for Jupyter Notebook
+```
+
+
+```python
+import asyncio
+from collections.abc import Callable
+from contextlib import contextmanager
+from enum import StrEnum
+
+from mirascope.core import BasePrompt, anthropic, gemini, openai, prompt_template
+from pydantic import BaseModel, Field
+
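+# Note: this cell reuses `print_progress_message`, `EvaluateTranslationPrompt`, and
+# `ImproveTranslationPrompt` defined in the Multi-Pass Translation section above.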
+
+class Audience(StrEnum):
+ general = "general"
+ professional = "professional"
+ academic = "academic"
+ friendly = "friendly"
+ formal = "formal"
+
+
+class Tone(StrEnum):
+ neutral = "neutral"
+ positive = "positive"
+ negative = "negative"
+ professional = "professional"
+ casual = "casual"
+
+
+@prompt_template(
+ """
+ SYSTEM:
+ You are a professional translator who is a native Japanese speaker.
+ Translate the English text into natural Japanese without changing the content.
+ Please make sure that the text is easy to read for everyone by using appropriate paragraphs so that it does not become difficult to read.
+
+ Please consider the following parameters in your translation:
+
+ tone: {tone}
+ The tone can be one of the following:
+ - neutral: Maintain a balanced and impartial tone
+ - positive: Use upbeat and optimistic language
+ - negative: Convey a more critical or pessimistic view
+ - professional: Use formal and business-appropriate language
+ - casual: Use informal, conversational language
+
+ audience: {audience}
+ The audience can be one of the following:
+ - general: For the general public, use common terms and clear explanations
+ - professional: For industry professionals, use appropriate jargon and technical terms
+ - academic: For scholarly contexts, use formal language and cite sources if necessary
+ - friendly: For casual, familiar contexts, use warm and approachable language
+ - formal: For official or ceremonial contexts, use polite and respectful language
+
+ Adjust your translation style based on the specified tone and audience to ensure the message is conveyed appropriately.
+
+ USER:
+ text: {text}
+ """
+)
+class ParametrizedTranslatePrompt(BasePrompt):
+ tone: Tone
+ audience: Audience
+ text: str
+
+ async def translate(self, call: Callable, model: str) -> str:
+ response = await self.run_async(call(model))
+ return response.content
+
+
+class Evaluation(BaseModel):
+ clarity: float = Field(
+ ..., description="The clarity of the translation, ranging from 0 to 10."
+ )
+ naturalness: float = Field(
+ ..., description="The naturalness of the translation, ranging from 0 to 10."
+ )
+ consistency: float = Field(
+ ..., description="The consistency of the translation, ranging from 0 to 10."
+ )
+ grammatical_correctness: float = Field(
+ ...,
+ description="The grammatical correctness of the translation, ranging from 0 to 10.",
+ )
+ lexical_appropriateness: float = Field(
+ ...,
+ description="The lexical appropriateness of the translation, ranging from 0 to 10.",
+ )
+ subject_clarity: float = Field(
+ ...,
+ description="The clarity of the subject in the translation, ranging from 0 to 10.",
+ )
+ word_order_naturalness: float = Field(
+ ...,
+ description="Maintenance of natural Japanese word order (0 to 10). Evaluates whether the English word order is not directly applied.",
+ )
+ subject_handling: float = Field(
+ ...,
+ description="Appropriate handling of subjects (0 to 10). Evaluates whether unnecessary subjects are avoided and appropriately omitted.",
+ )
+ modifier_placement: float = Field(
+ ...,
+ description="Appropriate placement of modifiers (0 to 10). Evaluates whether natural Japanese modification relationships are maintained.",
+ )
+ sentence_length_appropriateness: float = Field(
+ ...,
+ description="Appropriateness of sentence length (0 to 10). Evaluates whether long sentences are properly divided when necessary.",
+ )
+ context_dependent_expression: float = Field(
+ ...,
+ description="Appropriateness of context-dependent expressions (0 to 10). Evaluates whether the level of honorifics and politeness is suitable for the target audience and situation.",
+ )
+ implicit_meaning_preservation: float = Field(
+ ...,
+ description="Preservation of implicit meanings (0 to 10). Evaluates whether implicit meanings and cultural nuances from the original English text are appropriately conveyed.",
+ )
+
+
+text = """
+The old codger, a real bootstrapper, had been burning the candle at both ends trying to make his pie-in-the-sky business idea fly. He'd been spinning his wheels for months, barking up the wrong tree with his half-baked marketing schemes.
+"""
+
+
+async def multi_pass_translation(
+ original_text: str,
+ tone: Tone,
+ audience: Audience,
+ pass_count: int,
+ call: Callable,
+ model: str,
+) -> str:
+ with print_progress_message(model, 1):
+ parametrized_translate_prompt = ParametrizedTranslatePrompt(
+ text=original_text, tone=tone, audience=audience
+ )
+ translation_text = await parametrized_translate_prompt.translate(call, model)
+
+ for current_count in range(2, pass_count + 1):
+ with print_progress_message(model, current_count):
+ evaluate_translation_prompt = EvaluateTranslationPrompt(
+ original_text=original_text, translation_text=translation_text
+ )
+ evaluation = await evaluate_translation_prompt.evaluate(call, model)
+ improve_translation_prompt = ImproveTranslationPrompt(
+ original_text=original_text,
+ translation_text=translation_text,
+ tone=tone,
+ audience=audience,
+ evaluation=evaluation,
+ )
+ translation_text = await improve_translation_prompt.improve_translation(
+ call, model
+ )
+ return translation_text
+
+
+async def multi_provider_translation(
+ original_text: str,
+ tone: Tone,
+ audience: Audience,
+ pass_count: int,
+ call_models: list[tuple[Callable, str]],
+) -> None:
+ results = []
+ for call, model in call_models:
+ results.append(
+ multi_pass_translation(
+ original_text,
+ tone,
+ audience,
+ pass_count,
+ call,
+ model,
+ )
+ )
+ translations = await asyncio.gather(*results)
+ print("Translations:")
+ for (_, model), translation_text in zip(call_models, translations, strict=True):
+ print(f"Model: {model}, Translation: {translation_text}")
+
+
+multi_provider_translation_task = multi_provider_translation(
+ text,
+ tone=Tone.professional,
+ audience=Audience.academic,
+ pass_count=2,
+ call_models=[
+ (openai.call, "gpt-4o-mini"),
+ (anthropic.call, "claude-3-5-sonnet-20240620"),
+ (gemini.call, "gemini-1.5-flash"),
+ ],
+)
+
+# In Jupyter Notebook, use the following code to run async functions:
+await multi_provider_translation_task
+
+# In Python script, use the following code:
+# asyncio.run(multi_provider_translation_task)
+```
+
+ model='gpt-4o-mini' Multi-pass translation start times: 1
+ model='claude-3-5-sonnet-20240620' Multi-pass translation start times: 1
+ model='gemini-1.5-flash' Multi-pass translation start times: 1
+ model='gemini-1.5-flash' Multi-pass translation end times: 1
+ model='gemini-1.5-flash' Multi-pass translation start times: 2
+ model='gpt-4o-mini' Multi-pass translation end times: 1
+ model='gpt-4o-mini' Multi-pass translation start times: 2
+ model='gemini-1.5-flash' Multi-pass translation end times: 2
+ model='claude-3-5-sonnet-20240620' Multi-pass translation end times: 1
+ model='claude-3-5-sonnet-20240620' Multi-pass translation start times: 2
+ model='gpt-4o-mini' Multi-pass translation end times: 2
+ model='claude-3-5-sonnet-20240620' Multi-pass translation end times: 2
+ Translations:
+ Model: gpt-4o-mini, Translation: その年配の男性は、本当の自力で成り上がった人物であり、夢のビジネスアイデアを実現するために徹夜で努力していました。しかし、彼は何ヶ月も無駄にし、未熟なマーケティング戦略で誤った方向に進んでいました。
+ Model: claude-3-5-sonnet-20240620, Translation: 高齢の実業家は、真の独立自営者として、非現実的なビジネス構想の実現に向けて昼夜を問わず懸命に努力していた。数ヶ月にわたり、彼は効果的な進展を見せることなく時間を費やし、不十分な準備状態のマーケティング戦略を用いて誤った方向性に労力を注いでいた。この状況は、彼の努力が必ずしも適切な方向に向けられていなかったことを示唆している。
+ Model: gemini-1.5-flash, Translation: その老人は、自力で成功しようと、実現不可能な夢のようなビジネスアイデアに固執し、夜遅くまで働き詰めでした。彼は、行き当たりばったりのマーケティング戦略で何ヶ月も無駄な努力を重ねてきました。
+
+
+
+This method allows for comparison of translation results from different models, enabling the selection of the most appropriate translation.
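+
+As a minimal sketch of automating that selection, you could score each candidate with the `EvaluateTranslationPrompt` from the Multi-Pass section and keep the highest-scoring one. This assumes you collect the per-model translations into a dict (the function above only prints them), and the plain average of the evaluation scores is an illustrative heuristic rather than a recommended metric.
+
+```python
+async def select_best_translation(
+    original_text: str, translations: dict[str, str]
+) -> tuple[str, str]:
+    best_model, best_score, best_text = "", float("-inf"), ""
+    for model, translation_text in translations.items():
+        prompt = EvaluateTranslationPrompt(
+            original_text=original_text, translation_text=translation_text
+        )
+        evaluation = await prompt.evaluate(openai.call, "gpt-4o-mini")
+        # Average all Evaluation fields into a single score for comparison.
+        scores = evaluation.model_dump().values()
+        average_score = sum(scores) / len(scores)
+        if average_score > best_score:
+            best_model, best_score, best_text = model, average_score, translation_text
+    return best_model, best_text
+```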
+
+## Conclusion
+
+The techniques demonstrated in this recipe can significantly improve the quality and efficiency of English-Japanese translation through parametrization, multi-pass refinement, and multi-provider comparison. By combining these methods, you can handle complex translation tasks effectively and flexibly address diverse translation needs.
diff --git a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/chain-of-verification.mdx b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/chain-of-verification.mdx
new file mode 100644
index 0000000000..791d5c7080
--- /dev/null
+++ b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/chain-of-verification.mdx
@@ -0,0 +1,156 @@
+---
+title: "Chain of Verification: Enhancing LLM Accuracy through Self-Verification"
+description: Implement Chain of Verification to improve LLM accuracy by generating verification questions about initial responses and answering them systematically
+---
+
+# Chain of Verification: Enhancing LLM Accuracy through Self-Verification
+
+This recipe demonstrates how to implement the Chain of Verification technique using Large Language Models (LLMs) with Mirascope. Chain of Verification is a prompt engineering method that enhances an LLM's accuracy by generating and answering verification questions based on its initial response.
+
+
+
+
+
+
+Chain of Verification is a prompt engineering technique where one takes a prompt and its initial LLM response, then generates a checklist of questions that can be used to verify the initial answer. Each of these questions is then answered individually with a separate LLM call, and the results of each verification question are used to edit the final answer. LLMs are often more truthful when asked to verify a particular fact than when asked to use it in their own answer, so this technique is effective in reducing hallucinations.
+
+
+
+## Implementation
+
+Let's implement the Chain of Verification technique using Mirascope:
+
+
+
+
+
+
+
+```python
+import asyncio
+
+from mirascope.core import openai, prompt_template
+from pydantic import BaseModel, Field
+
+
+@openai.call("gpt-4o-mini")
+def call(query: str) -> str:
+ return query
+
+
+class VerificationQuestions(BaseModel):
+ questions: list[str] = Field(
+ ...,
+ description="""A list of questions that verifies if the response
+ answers the original query correctly.""",
+ )
+
+
+@openai.call("gpt-4o-mini", response_model=VerificationQuestions)
+@prompt_template(
+ """
+ SYSTEM:
+ You will be given a query and a response to the query.
+ Take the relevant statements in the response and rephrase them into questions so
+ that they can be used to verify that they satisfy the original query.
+ USER:
+ Query:
+ {query}
+
+ Response:
+ {response}
+ """
+)
+def get_verification_questions(query: str, response: str): ...
+
+
+@openai.call("gpt-4o-mini")
+async def answer(query: str) -> str:
+ return f"Concisely answer the following question: {query}"
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+ """
+ Here is the original query:
+ {query}
+
+ Here is an initial response to the query:
+ {response}
+
+ Here is some fact checking on the response:
+ {verification_q_and_a:list}
+
+ Using the knowledge you learned from verification, re-answer the original query.
+ """
+)
+async def cov_call(query: str) -> openai.OpenAIDynamicConfig:
+ response = call(query).content
+ verification_questions = get_verification_questions(query, response).questions
+ tasks = [answer(question) for question in verification_questions]
+ responses = await asyncio.gather(*tasks)
+ verification_answers = [response.content for response in responses]
+ verification_q_and_a = [
+ [f"Q:{q}", f"A:{a}"]
+ for q, a in zip(verification_questions, verification_answers)
+ ]
+ return {
+ "computed_fields": {
+ "response": response,
+ "verification_q_and_a": verification_q_and_a,
+ }
+ }
+
+
+async def chain_of_verification(query: str):
+ response = await cov_call(query=query)
+ # Uncomment to see intermediate responses
+ # print(response.user_message_param["content"])
+ return response
+
+
+query = "Name 5 politicians born in New York."
+
+print(await chain_of_verification(query=query))
+```
+
+ Here are five politicians who were born in New York:
+
+ 1. **Theodore Roosevelt** - 26th President of the United States, born in New York City, New York.
+ 2. **Al Smith** - Governor of New York and Democratic presidential candidate, born in New York City, New York.
+ 3. **Andrew Cuomo** - Former Governor of New York, born in Queens, New York City.
+ 4. **Franklin D. Roosevelt** - 32nd President of the United States, born in Hyde Park, New York.
+ 5. **Donald Trump** - 45th President of the United States, born in Queens, New York City.
+
+ These individuals have all made significant contributions to American politics and governance. Note that Hillary Clinton, while a prominent politician, was actually born in Chicago, Illinois.
+
+
+As we can see, the Chain of Verification process has successfully identified and corrected an error in the initial response (Hillary Clinton's birthplace), demonstrating its effectiveness in improving accuracy.
+
+## Benefits and Considerations
+
+The Chain of Verification implementation offers several advantages:
+
+1. Improved accuracy by systematically verifying initial responses.
+2. Reduction of hallucinations and factual errors in LLM outputs.
+3. Transparent fact-checking process that can be easily audited.
+
+When implementing this technique, consider:
+
+- Balancing the number of verification questions with response time and computational cost (see the sketch after this list).
+- Tailoring the verification question generation process to your specific use case.
+- Implementing error handling for cases where verification reveals significant discrepancies.
+
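+A minimal sketch of the first point (assumption: `MAX_VERIFICATION_QUESTIONS` is a hypothetical cap you would apply to the checklist inside `cov_call` before answering it):
+
+```python
+MAX_VERIFICATION_QUESTIONS = 3
+
+
+def cap_questions(
+    questions: list[str], limit: int = MAX_VERIFICATION_QUESTIONS
+) -> list[str]:
+    """Keep only the first `limit` verification questions to bound latency and cost."""
+    return questions[:limit]
+```
+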
+When adapting this recipe to your specific use-case, consider:
+
+- Customizing the verification question generation process for your domain.
+- Implementing a feedback loop to continuously improve the verification process based on user feedback or expert review.
+- Combining Chain of Verification with other techniques like Chain of Thought for even more robust reasoning capabilities.
+- Experimenting with different LLM models for various stages of the verification process to optimize for accuracy and efficiency.
+
+By leveraging Mirascope's `call` decorator, response models, and dynamic configuration, you can easily implement and customize the Chain of Verification technique to enhance your LLM's accuracy across a wide range of applications.
+
diff --git a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/decomposed-prompting.mdx b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/decomposed-prompting.mdx
new file mode 100644
index 0000000000..b8a287f543
--- /dev/null
+++ b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/decomposed-prompting.mdx
@@ -0,0 +1,164 @@
+---
+title: "Decomposed Prompting: Enhancing LLM Problem-Solving with Tool-Based Subproblems"
+description: Explore Decomposed Prompting (DECOMP), a technique that breaks down complex problems into smaller subproblems and utilizes tools to solve each step, enhancing an LLM's problem-solving capabilities.
+---
+
+# Decomposed Prompting: Enhancing LLM Problem-Solving with Tool-Based Subproblems
+
+This recipe demonstrates how to implement the Decomposed Prompting (DECOMP) technique using Large Language Models (LLMs) with Mirascope. DECOMP is a prompt engineering method that enhances an LLM's problem-solving capabilities by breaking down complex problems into subproblems and utilizing tools to solve each step.
+
+
+
+
+
+
+Decomposed Prompting (DECOMP) is an extension of least-to-most prompting whereby tools are used to execute each subproblem in the problem-solving process. A pre-trained call (in our case, a one-shot prompt) demonstrates how to break a problem down into subproblems within the context of its given tool calls, and the output of each tool call is added to the chat's history until the problem is solved. Like least-to-most prompting, DECOMP shows improvements on mathematical reasoning and symbolic manipulation tasks, and it achieves even better results.
+
+
+## Implementation
+
+Let's implement the Decomposed Prompting technique using Mirascope:
+
+
+
+
+```python
+import json
+
+from mirascope.core import openai, prompt_template
+from openai.types.chat import ChatCompletionMessageParam
+from pydantic import BaseModel, Field
+
+
+class Problem(BaseModel):
+ subproblems: list[str] = Field(
+ ..., description="The subproblems that the original problem breaks down into"
+ )
+
+
+@openai.call(model="gpt-4o", response_model=Problem)
+@prompt_template(
+ """
+ Your job is to break a problem into subproblems so that it may be solved
+ step by step, using at most one function call at each step.
+
+ You have access to the following functions which you can use to solve a
+ problem:
+ split: split a string into individual words
+ substring: get the ith character of a single string.
+ concat: concatenate some number of strings.
+
+ Here is an example of how it would be done for the problem: Get the first two
+ letters of the phrase 'Round Robin' with a period and space in between them.
+ Steps:
+ split 'Round Robin' into individual words
+ substring the 0th char of 'Round'
+ substring the 0th char of 'Robin'
+ concat ['R', '.', ' ', 'R']
+
+ Now, turn this problem into subtasks:
+ {query}
+ """
+)
+def break_into_subproblems(query: str): ...
+
+
+def split_string_to_words(string: str) -> str:
+ """Splits a string into words."""
+ return json.dumps(string.split())
+
+
+def substring(index: int, string: str) -> str:
+ """Gets the character at the index of a string."""
+ return string[index]
+
+
+def concat(strings: list[str]) -> str:
+ """Concatenates some number of strings."""
+ return "".join(strings)
+
+
+@openai.call(model="gpt-4o-mini", tools=[split_string_to_words, substring, concat])
+@prompt_template(
+ """
+ SYSTEM: You are being fed subproblems to solve the actual problem: {query}
+ MESSAGES: {history}
+ """
+)
+def solve_next_step(history: list[ChatCompletionMessageParam], query: str): ...
+
+
+def decomposed_prompting(query: str):
+ problem = break_into_subproblems(query=query)
+ response = None
+ history: list[ChatCompletionMessageParam] = []
+ for subproblem in problem.subproblems:
+ history.append({"role": "user", "content": subproblem})
+ response = solve_next_step(history, query)
+ history.append(response.message_param)
+ if tool := response.tool:
+ output = tool.call()
+ history += response.tool_message_params([(tool, output)])
+ response = solve_next_step(history, query)
+
+ # This should never return another tool call in DECOMP so don't recurse
+ history.append(response.message_param)
+ return response
+
+
+query = """Take the last letters of the words in "Augusta Ada King" and concatenate them using a space."""
+
+
+print(decomposed_prompting(query))
+```
+
+ The concatenated characters are "aag".
+
+
+This implementation consists of several key components:
+
+1. `Problem` class: Defines the structure for breaking down a problem into subproblems.
+2. `break_into_subproblems`: Uses `gpt-4o` to break the main problem into subproblems.
+3. Tool functions: `split_string_to_words`, `substring`, and `concat` for manipulating strings.
+4. `solve_next_step`: Uses `gpt-4o-mini` to solve each subproblem, utilizing the available tools.
+5. `decomposed_prompting`: Orchestrates the entire process, solving subproblems sequentially and maintaining conversation history.
+
+## Benefits and Considerations
+
+The Decomposed Prompting implementation offers several advantages:
+
+1. Improved problem-solving capabilities for complex tasks.
+2. Better handling of multi-step problems that require different operations.
+3. Increased transparency in the problem-solving process.
+4. Potential for solving problems that are beyond the scope of a single LLM call.
+
+When implementing this technique, consider:
+
+- Carefully designing the set of available tools to cover a wide range of problem-solving needs.
+- Balancing the complexity of subproblems with the capabilities of the chosen LLM.
+- Implementing error handling and recovery mechanisms for cases where a subproblem solution fails (see the sketch after this list).
+- Optimizing the prompt for breaking down problems to ensure effective decomposition.
+
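+A minimal sketch of recovering from a failed tool call (assumption: this wraps the tool-calling step of `decomposed_prompting` above, attempts a single retry, and the retry message is illustrative):
+
+```python
+def solve_subproblem_with_recovery(
+    history: list[ChatCompletionMessageParam], query: str
+):
+    response = solve_next_step(history, query)
+    history.append(response.message_param)
+    if tool := response.tool:
+        try:
+            output = tool.call()
+        except Exception as e:
+            # Surface the failure to the model so it can adjust and retry once.
+            history.append(
+                {"role": "user", "content": f"The tool call failed with: {e}. Try again."}
+            )
+            response = solve_next_step(history, query)
+            history.append(response.message_param)
+            return response
+        history += response.tool_message_params([(tool, output)])
+        response = solve_next_step(history, query)
+        history.append(response.message_param)
+    return response
+```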
+
+
+- Code Generation: Break down complex programming tasks into smaller, manageable steps.
+- Data Analysis: Decompose complex data analysis queries into a series of data manipulation and calculation steps.
+- Natural Language Processing: Break down complex NLP tasks like sentiment analysis or named entity recognition into subtasks.
+- Automated Reasoning: Solve complex logical or mathematical problems by breaking them into simpler, solvable steps.
+- Task Planning: Create detailed, step-by-step plans for complex projects or processes.
+
+
+
+When adapting this recipe to your specific use-case, consider:
+
+- Tailoring the available tools to your domain for better performance.
+- Implementing a feedback loop to refine the problem decomposition process based on solution accuracy.
+- Combining Decomposed Prompting with other techniques like Chain of Thought for even more powerful problem-solving capabilities.
+- Developing a mechanism to handle interdependencies between subproblems.
+
+By leveraging Mirascope's `call` decorator, response models, and dynamic configuration, you can easily implement and customize the Decomposed Prompting technique to enhance your LLM's problem-solving capabilities across a wide range of applications.
diff --git a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/demonstration-ensembling.mdx b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/demonstration-ensembling.mdx
new file mode 100644
index 0000000000..0f3f0e371e
--- /dev/null
+++ b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/demonstration-ensembling.mdx
@@ -0,0 +1,190 @@
+---
+title: "Demonstration Ensembling: Enhancing LLM Responses with Aggregated Examples"
+description: Improve LLM responses by implementing Demonstration Ensembling to combine outputs from multiple random subsets of examples
+---
+
+# Demonstration Ensembling: Enhancing LLM Responses with Aggregated Examples
+
+Demonstration Ensembling is a prompt engineering technique that aggregates multiple responses, each generated using a random subset of examples from the example pool. This recipe demonstrates how to implement Demonstration Ensembling using Large Language Models (LLMs) with Mirascope.
+
+
+
+
+
+
+## Implementation
+
+Let's implement the Demonstration Ensembling technique using Mirascope:
+
+
+
+```python
+import asyncio
+import random
+from typing import TypedDict
+
+from mirascope.core import openai, prompt_template
+
+
+class QA(TypedDict):
+ question: str
+ answer: str
+
+
+qa_examples: list[QA] = [
+ QA(
+ question="What are your company's core values?",
+ answer="Our company's core values are integrity, innovation, customer-centricity, and teamwork. We believe that maintaining these values is crucial to achieving our mission and vision.",
+ ),
+ QA(
+ question="How do you handle conflicts in the workplace?",
+ answer="We handle conflicts by promoting open communication, understanding different perspectives, and seeking mutually beneficial solutions. We have clear policies and trained mediators to assist in resolving conflicts effectively.",
+ ),
+ QA(
+ question="Can you describe a time when you exceeded a client's expectations?",
+ answer="Certainly. Recently, we completed a project ahead of schedule and under budget. We also provided additional insights and recommendations that significantly benefited the client, earning their gratitude and loyalty.",
+ ),
+ QA(
+ question="How do you ensure continuous improvement within the team?",
+ answer="We ensure continuous improvement by encouraging regular training, fostering a culture of feedback, and implementing agile methodologies. We also review our processes regularly to identify areas for enhancement.",
+ ),
+ QA(
+ question="What strategies do you use to stay ahead of industry trends?",
+ answer="We stay ahead of industry trends by investing in research and development, attending industry conferences, and maintaining strong relationships with thought leaders. We also encourage our team to engage in continuous learning and innovation.",
+ ),
+ QA(
+ question="How do you measure the success of a project?",
+ answer="We measure the success of a project by evaluating key performance indicators such as client satisfaction, budget adherence, timeline compliance, and the quality of the deliverables. Post-project reviews help us to identify successes and areas for improvement.",
+ ),
+ QA(
+ question="What is your approach to diversity and inclusion?",
+ answer="Our approach to diversity and inclusion involves creating a welcoming environment for all employees, offering diversity training, and implementing policies that promote equality. We value diverse perspectives as they drive innovation and growth.",
+ ),
+ QA(
+ question="How do you manage remote teams effectively?",
+ answer="We manage remote teams effectively by leveraging technology for communication and collaboration, setting clear goals, and maintaining regular check-ins. We also ensure that remote employees feel included and supported.",
+ ),
+ QA(
+ question="What are your short-term and long-term goals for the company?",
+ answer="In the short term, our goals include expanding our market reach and enhancing our product offerings. In the long term, we aim to become industry leaders by driving innovation and achieving sustainable growth.",
+ ),
+ QA(
+ question="How do you handle feedback from clients or customers?",
+ answer="We handle feedback by listening actively, responding promptly, and taking necessary actions to address concerns. We view feedback as an opportunity for improvement and strive to exceed our clients' expectations continuously.",
+ ),
+]
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+ """
+ Here are some examples that demonstrate the voice to use in a corporate setting.
+ {examples:lists}
+
+ With these examples, answer the following question:
+ {query}
+ """
+)
+async def answer(query: str) -> openai.OpenAIDynamicConfig:
+ random_indices = random.sample(range(len(qa_examples)), 3)
+ examples = [
+ [
+ f"Question: {qa_examples[i]['question']}",
+ f"Answer: {qa_examples[i]['answer']}",
+ ]
+ for i in random_indices
+ ]
+ return {"computed_fields": {"examples": examples}}
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+ """
+ Take the following responses from an LLM and aggregate/average them into
+ one answer.
+ {responses}
+ """
+)
+async def aggregate_answers(
+ query: str, num_responses: int
+) -> openai.OpenAIDynamicConfig:
+ tasks = [answer(query) for _ in range(num_responses)]
+ responses = await asyncio.gather(*tasks)
+ return {"computed_fields": {"responses": responses}}
+
+
+async def demonstration_ensembling(query: str, ensemble_size: int):
+ response = await aggregate_answers(query=query, num_responses=ensemble_size)
+ print(response.content)
+
+
+query = """Help me write a notice that highlights the importance of attending \
+the company's social events. Give me just the notice, no further explanation."""
+
+await demonstration_ensembling(query=query, ensemble_size=5)
+```
+
+ Here is an aggregated notice highlighting the importance of attending the company's social events:
+
+ ---
+
+ **Notice: Importance of Attending Company Social Events**
+
+ Dear Team,
+
+ We would like to emphasize the significance of participating in our upcoming company social events. These gatherings provide a valuable opportunity to connect with colleagues, foster teamwork, and build a positive workplace culture. Engaging in social activities enhances collaboration and strengthens relationships across departments.
+
+ We encourage everyone to attend and contribute to our vibrant community. Your presence is vital to making these events successful and enjoyable for all, and it enriches both your experience and that of your coworkers.
+
+ Thank you for your continued commitment to making our workplace more connected and enjoyable.
+
+ Best regards,
+ [Your Name]
+ [Your Position]
+
+ ---
+
+ This notice combines essential elements from all responses to create a cohesive message.
+
+
+This implementation consists of three main functions:
+
+1. `answer`: This function takes a query and returns a response based on a random subset of examples.
+2. `aggregate_answers`: This function generates multiple responses using `answer` and then aggregates them.
+3. `demonstration_ensembling`: This function orchestrates the entire process, calling `aggregate_answers` with the specified ensemble size.
+
+## Benefits and Considerations
+
+The Demonstration Ensembling technique offers several advantages:
+
+1. Improved consistency and quality of responses by leveraging multiple examples.
+2. Reduced impact of potential biases in individual examples.
+3. Enhanced ability to generate responses that capture diverse perspectives.
+
+When implementing this technique, consider:
+
+- Balancing the ensemble size with computational cost and time constraints.
+- Carefully curating the example pool to ensure diversity and relevance.
+- Experimenting with different aggregation methods for the final response.
+
+
+
+- Content Generation: Use Demonstration Ensembling to create more diverse and comprehensive marketing materials.
+- Customer Support: Generate more robust and consistent responses to customer inquiries.
+- Data Analysis: Produce more reliable insights by aggregating multiple interpretations of data.
+- Educational Content: Create well-rounded explanations of complex topics by combining multiple teaching approaches.
+
+
+
+When adapting this recipe to your specific use-case, consider:
+
+- Tailoring the example pool to your domain for better performance.
+- Implementing different methods of selecting examples (e.g., weighted sampling based on relevance; see the sketch after this list).
+- Combining Demonstration Ensembling with other techniques like Chain of Thought for even more nuanced responses.
+- Developing a feedback mechanism to continuously improve the quality of the example pool.
+
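+For instance, here is a minimal sketch of weighted example sampling (assumptions: it reuses `qa_examples` and `QA` from above, `relevance_weights` is a hypothetical per-example relevance score you would compute yourself, and it would replace the uniform `random.sample` used in `answer`):
+
+```python
+import random
+
+# Hypothetical relevance scores, e.g. precomputed similarity between each example and the query.
+relevance_weights = [1.0] * len(qa_examples)
+
+
+def sample_examples(k: int = 3) -> list[QA]:
+    """Weighted sampling without replacement: draw one index at a time."""
+    indices = list(range(len(qa_examples)))
+    weights = list(relevance_weights)
+    chosen: list[QA] = []
+    for _ in range(k):
+        i = random.choices(range(len(indices)), weights=weights, k=1)[0]
+        chosen.append(qa_examples[indices.pop(i)])
+        weights.pop(i)
+    return chosen
+```
+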
diff --git a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/diverse.mdx b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/diverse.mdx
new file mode 100644
index 0000000000..93983baacd
--- /dev/null
+++ b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/diverse.mdx
@@ -0,0 +1,173 @@
+---
+title: "DiVeRSe: Enhancing LLM Reasoning with Prompt Variations"
+description: Implement the DiVeRSe technique to improve LLM reasoning by generating multiple prompt variations of the same question for better accuracy
+---
+
+# DiVeRSe: Enhancing LLM Reasoning with Prompt Variations
+
+This recipe demonstrates how to implement the DiVeRSe (Diverse Verifier on Reasoning Steps) technique using Large Language Models (LLMs) with Mirascope. DiVeRSe is a prompt engineering method that enhances an LLM's reasoning capabilities by generating multiple reasoning chains from variations of the original prompt.
+
+
+
+
+
+
+DiVeRSe is a variant of the self-consistency prompt engineering technique. Instead of generating multiple chains from the same prompt, DiVeRSe creates variations of the original prompt to generate different reasoning chains. This approach can significantly improve the LLM's ability to reason about complex problems by considering multiple perspectives and phrasings of the same question.
+
+
+## Implementation
+
+Let's implement the DiVeRSe technique using Mirascope:
+
+
+
+
+```python
+import asyncio
+
+from mirascope.core import openai, prompt_template
+from pydantic import BaseModel, Field
+
+
+class PromptVariations(BaseModel):
+ variations: list[str] = Field(..., description="Variations of the original prompt")
+
+
+@openai.call(model="gpt-4o-mini", response_model=PromptVariations)
+@prompt_template(
+ """
+ Return the {num_prompts} alternate variations of the prompt which retain the
+ full meaning but uses different phrasing.
+ Prompt: {prompt}
+ """
+)
+def get_prompt_variations(prompt: str, num_prompts: int): ...
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+ """
+ Answer the following question going step by step:
+ {query}
+ """
+)
+async def zero_shot_cot(query: str): ...
+
+
+class ResponseDetails(BaseModel):
+ solution_number: int = Field(
+ ..., description="The actual number given as the answer in a solution."
+ )
+ correctness_probability: float = Field(
+ ...,
+ ge=0,
+ le=1,
+ description="An estimated probability that the given solution is correct from 0.0 to 1.0",
+ )
+
+
+@openai.call(model="gpt-4o-mini", response_model=ResponseDetails)
+@prompt_template(
+ """
+ Here is a query and a response which attempts to answer the query.
+ Prompt: {query}
+ Response: {response}
+
+ Extract the raw numerical value of the answer given by the response, and also
+ give an estimate between 0.0 and 1.0 of the probability that this solution
+ is correct.
+ """
+)
+async def evaluate_response(query: str, response: str): ...
+
+
+async def diverse(query: str, num_variations: int) -> int:
+ # Gather the variations of the prompt
+ alternate_variations = get_prompt_variations(query, num_variations - 1)
+ all_variations = alternate_variations.variations + [query]
+
+ # Generate a unique reasoning chain for each prompt variation with CoT
+ cot_tasks = [zero_shot_cot(prompt) for prompt in all_variations]
+ cot_responses = [response.content for response in await asyncio.gather(*cot_tasks)]
+
+ # Evaluate each reasoning chain
+ eval_tasks = [
+ evaluate_response(query, cot_response) for cot_response in cot_responses
+ ]
+ eval_responses = await asyncio.gather(*eval_tasks)
+
+ response_scores = {}
+ for eval_response in eval_responses:
+ if eval_response.solution_number not in response_scores:
+ response_scores[eval_response.solution_number] = 0
+ response_scores[eval_response.solution_number] += (
+ eval_response.correctness_probability
+ )
+ best_response = max(response_scores.keys(), key=lambda k: response_scores[k])
+ return best_response
+
+
+async def run_diverse(prompt: str, num_variations: int = 3) -> int:
+ response = await diverse(prompt, num_variations)
+ return response
+
+
+query = """
+A committee of 3 people must be formed from a pool of 6 people, but Amy and Bob do not
+get along and should not be on the committee at the same time. How many viable
+combinations are there?
+"""
+
+print(await run_diverse(query))
+```
+
+ 16
+
+
+
+
+This implementation consists of several key components:
+
+1. `get_prompt_variations`: Generates variations of the original prompt.
+2. `zero_shot_cot`: Implements zero-shot chain-of-thought reasoning for each prompt variation.
+3. `evaluate_response`: Evaluates each reasoning chain and assigns a probability of correctness.
+4. `diverse`: Orchestrates the DiVeRSe technique by generating prompt variations, creating reasoning chains, and selecting the best response.
+
+## Benefits and Considerations
+
+The DiVeRSe implementation offers several advantages:
+
+1. Improved reasoning by considering multiple phrasings of the same problem.
+2. Enhanced robustness against potential misinterpretations of the original prompt.
+3. Potential for more accurate responses in complex reasoning tasks.
+
+When implementing this technique, consider:
+
+- Balancing the number of prompt variations with computational cost and time constraints.
+- Adjusting the evaluation criteria for different types of problems (e.g., numerical vs. categorical answers), as sketched after this list.
+- Fine-tuning the prompt variation generation to ensure meaningful diversity while maintaining the original question's intent.
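+
+For the categorical case mentioned above, here is a minimal sketch (not part of the original implementation) that swaps the numerical `ResponseDetails` model for one that scores a text label; `CategoricalResponseDetails` and `evaluate_categorical_response` are assumed names. The voting loop in `diverse` would then key on the extracted label rather than `solution_number`.
+
+```python
+from mirascope.core import openai, prompt_template
+from pydantic import BaseModel, Field
+
+
+class CategoricalResponseDetails(BaseModel):
+    solution_label: str = Field(
+        ..., description="The final answer given by a solution, as a short label."
+    )
+    correctness_probability: float = Field(
+        ...,
+        ge=0,
+        le=1,
+        description="An estimated probability that the given solution is correct from 0.0 to 1.0",
+    )
+
+
+@openai.call(model="gpt-4o-mini", response_model=CategoricalResponseDetails)
+@prompt_template(
+    """
+    Here is a query and a response which attempts to answer the query.
+    Prompt: {query}
+    Response: {response}
+
+    Extract the final answer as a short label (e.g. 'yes', 'no', or a category
+    name), and also give an estimate between 0.0 and 1.0 of the probability
+    that this solution is correct.
+    """
+)
+async def evaluate_categorical_response(query: str, response: str): ...
+```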
+
+
+
+- Complex Problem Solving: Use DiVeRSe for multi-step problems in fields like mathematics, physics, or engineering.
+- Legal Document Analysis: Apply the technique to interpret complex legal scenarios from multiple perspectives.
+- Market Research: Generate diverse interpretations of consumer feedback or survey responses.
+- Educational Assessment: Create and evaluate multiple versions of exam questions to ensure fairness and comprehension.
+- Scientific Hypothesis Generation: Use DiVeRSe to approach research questions from various angles, potentially uncovering novel insights.
+
+
+
+When adapting this recipe to your specific use-case, consider:
+
+- Tailoring the prompt variation generation to your domain for better performance.
+- Experimenting with different evaluation methods for the reasoning chains.
+- Implementing a feedback loop to refine the prompt variation process based on the accuracy of final answers.
+- Combining DiVeRSe with other techniques like Self-Ask or Sim to M for even more nuanced reasoning capabilities.
+
+By leveraging Mirascope's `call` decorator, response models, and dynamic configuration, you can easily implement and customize the DiVeRSe technique to enhance your LLM's ability to reason about complex problems across a wide range of applications.
+
+
diff --git a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/least-to-most.mdx b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/least-to-most.mdx
new file mode 100644
index 0000000000..46f098e24a
--- /dev/null
+++ b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/least-to-most.mdx
@@ -0,0 +1,161 @@
+---
+title: "Least to Most: Enhancing LLM Reasoning with Subproblem Decomposition"
+description: Learn to implement the Least to Most technique, which enhances LLM reasoning by breaking down complex problems into smaller, more manageable subproblems for improved accuracy.
+---
+
+# Least to Most: Enhancing LLM Reasoning with Subproblem Decomposition
+
+This recipe demonstrates how to implement the Least to Most technique using Large Language Models (LLMs) with Mirascope. Least to Most is a prompt engineering method that enhances an LLM's reasoning capabilities by breaking down complex problems into smaller, more manageable subproblems.
+
+
+
+
+
+
+Least to Most is a more extensive version of Chain of Thought, where separate calls are used to break down the original problem into subproblems as well as solve each individual step/subproblem. After solving each subproblem, the result is appended to the chat's history until the original problem is solved. Least to Most is an effective technique for symbolic and arithmetic reasoning tasks.
+
+
+## Implementing Least to Most
+
+Let's implement the Least to Most technique using Mirascope:
+
+
+
+```python
+from mirascope.core import openai, prompt_template
+from mirascope.core.openai import OpenAICallResponse
+from openai.types.chat import ChatCompletionMessageParam
+from pydantic import BaseModel, Field
+
+few_shot_examples = [
+ {
+ "question": "The median age in the city was 22.1 years. 10.1% of residents were under the age of 18; 56.2% were between the ages of 18 and 24; 16.1% were from 25 to 44; 10.5% were from 45 to 64; and 7% were 65 years of age or older. Which age group is larger: under the age of 18 or 18 and 24?",
+ "answer": 'To answer the question "Which age group is larger: under the age of 18 or 18 and 24?", we need to know: "How many percent were under the age of 18?", "How many percent were between the ages of 18 and 24?".',
+ },
+ {
+ "question": "Old age pensions were raised by 300 francs per month to 1,700 francs for a single person and to 3,700 francs for a couple, while health insurance benefits were made more widely available to unemployed persons and part-time employees. How many francs were the old age pensions for a single person before they were raised?",
+ "answer": 'To answer the question "How many francs were the old age pensions for a single person before they were raised?", we need to know: "How many francs were the old age pensions for a single person?", "How many francs were old age pensions raised for a single person?".',
+ },
+ {
+ "question": "In April 2011, the ECB raised interest rates for the first time since 2008 from 1% to 1.25%, with a further increase to 1.50% in July 2011. However, in 2012-2013 the ECB lowered interest rates to encourage economic growth, reaching the historically low 0.25% in November 2013. Soon after the rates were cut to 0.15%, then on 4 September 2014 the central bank reduced the rates from 0.15% to 0.05%, the lowest rates on record. How many percentage points did interest rates drop between April 2011 and September 2014?",
+ "answer": 'To answer the question "How many percentage points did interest rates drop between April 2011 and September 2014?", we need to know: "What was the interest rate in April 2011?", "What was the interest rate in September 2014?".',
+ },
+ {
+ "question": "Non-nationals make up more than half of the population of Bahrain. According to government statistics dated between 2005-2009 roughly 290,000 Indians, 125,000 Bangladeshis, 45,000 Pakistanis, 45,000 Filipinos, and 8,000 Indonesians. How many Pakistanis and Indonesians are in Bahrain?",
+ "answer": 'To answer the question "How many Pakistanis and Indonesians are in Bahrain?", we need to know: "How many Pakistanis are in Bahrain?", "How many Indonesians are in Bahrain?".',
+ },
+]
+
+
+class Problem(BaseModel):
+ subproblems: list[str] = Field(
+ ..., description="The subproblems that the original problem breaks down into"
+ )
+
+
+@openai.call(model="gpt-4o-mini", response_model=Problem)
+@prompt_template(
+ """
+ Examples to guide your answer:
+ {examples:lists}
+ Break the following query into subproblems:
+ {query}
+ """
+)
+def break_into_subproblems(
+ query: str, few_shot_examples: list[dict[str, str]]
+) -> openai.OpenAIDynamicConfig:
+ examples = [
+ [f"Q:{example['question']}", f"A:{example['answer']}"]
+ for example in few_shot_examples
+ ]
+ return {"computed_fields": {"examples": examples}}
+
+
+@openai.call(model="gpt-4o-mini")
+def call(history: list[ChatCompletionMessageParam]) -> str:
+ return f"MESSAGES: {history}"
+
+
+def least_to_most(query_context: str, query_question: str) -> OpenAICallResponse:
+ problem = break_into_subproblems(
+ query=query_context + query_question, few_shot_examples=few_shot_examples
+ )
+ history: list[ChatCompletionMessageParam] = [
+ {"role": "user", "content": query_context + problem.subproblems[0]}
+ ]
+ response = call(history=history)
+ history.append(response.message_param)
+ if len(problem.subproblems) == 1:
+ return response
+ else:
+ for i in range(1, len(problem.subproblems)):
+ history.append({"role": "user", "content": problem.subproblems[i]})
+ response = call(history=history)
+ history.append(response.message_param)
+ return response
+
+
+query_context = """The Census Bureaus 2006-2010 American Community Survey showed that \
+(in 2010 inflation adjustment dollars) median household income was $52,056 and the \
+median family income was $58,942."""
+
+query_question = "How many years did the Census Bureaus American Community Survey last?"
+
+print(least_to_most(query_context=query_context, query_question=query_question))
+```
+
+```
+ To calculate the duration between two years, you subtract the earlier year from the later year. The formula is:
+
+ \[ \text{Duration} = \text{Later Year} - \text{Earlier Year} \]
+
+ For example, if you want to calculate the duration between 2005 and 2010:
+
+ \[ \text{Duration} = 2010 - 2005 = 5 \]
+
+ So, the duration between 2005 and 2010 is 5 years.
+```
+
+This implementation consists of three main components:
+
+1. `break_into_subproblems`: This function takes a query and breaks it down into subproblems using few-shot examples.
+2. `call`: A simple function that makes a call to the LLM with the given message history.
+3. `least_to_most`: This function orchestrates the Least to Most technique. It first breaks the problem into subproblems, then solves each subproblem sequentially, appending the results to the message history.
+
+## Benefits and Considerations
+
+The Least to Most implementation offers several advantages:
+
+1. Improved reasoning for complex problems by breaking them down into manageable steps.
+2. Enhanced ability to handle multi-step arithmetic and symbolic reasoning tasks.
+3. Potential for more accurate responses by solving subproblems individually.
+
+When implementing this technique, consider:
+
+- Carefully crafting few-shot examples to guide the problem decomposition process.
+- Balancing the number of subproblems to avoid oversimplification or overcomplexity (see the sketch after this list).
+- Ensuring that the query context and question are clear and contain all necessary information.
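+
+One simple way to enforce the balance mentioned above is to constrain the decomposition directly in the response model. This is a minimal sketch assuming Pydantic v2 (where `max_length` applies to list fields); `BoundedProblem` and the cap of 5 are assumptions, not values from the original recipe.
+
+```python
+from pydantic import BaseModel, Field
+
+
+class BoundedProblem(BaseModel):
+    subproblems: list[str] = Field(
+        ...,
+        max_length=5,  # arbitrary cap to keep the decomposition manageable
+        description="The subproblems that the original problem breaks down into",
+    )
+```
+
+Using `BoundedProblem` as the `response_model` in `break_into_subproblems` would make validation fail whenever the model produces an overly fine-grained decomposition, which you could then handle with retries or a follow-up prompt.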
+
+
+
+- Complex Mathematical Problem Solving: Use Least to Most for multi-step mathematical proofs or calculations.
+- Project Planning: Break down large projects into smaller, manageable tasks.
+- Algorithmic Design: Decompose complex algorithms into simpler steps for implementation.
+- Legal Case Analysis: Break down complex legal cases into individual points of law to be addressed.
+- Medical Diagnosis: Analyze symptoms and test results step-by-step to reach a diagnosis.
+
+
+
+When adapting this recipe to your specific use-case, consider:
+
+- Tailoring the few-shot examples to your domain for better problem decomposition.
+- Implementing a mechanism to handle interdependent subproblems.
+- Combining Least to Most with other techniques like Self-Consistency for even more robust reasoning.
+- Developing a feedback loop to refine the problem decomposition process based on the accuracy of final answers.
+
+By leveraging Mirascope's `call` decorator, response models, and dynamic configuration, you can easily implement and customize the Least to Most technique to enhance your LLM's ability to reason about complex problems across a wide range of applications.
diff --git a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/mixture-of-reasoning.mdx b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/mixture-of-reasoning.mdx
new file mode 100644
index 0000000000..3b2bbdf82b
--- /dev/null
+++ b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/mixture-of-reasoning.mdx
@@ -0,0 +1,160 @@
+---
+title: "Mixture of Reasoning: Enhancing LLM Performance with Multiple Techniques"
+description: Learn how to implement Mixture of Reasoning, a technique that combines multiple prompt engineering approaches to improve LLM performance across a variety of tasks.
+---
+
+# Mixture of Reasoning: Enhancing LLM Performance with Multiple Techniques
+
+Mixture of Reasoning is a prompt engineering technique where you set up multiple calls, each using a different prompt engineering technique. This approach works best when you need to handle a wide variety of queries, or when you have several techniques that have proven successful for a particular type of prompt.
+
+
+
+
+
+
+In the original paper, a trained classifier is used to determine the best answer, but since we do not have access to one, we will use an LLM to select the best answer instead. To get a clean separation between the reasoning and the actual output, we'll use `response_model` in our final evaluation call.
+
+
+## Implementation
+
+Let's implement the Mixture of Reasoning technique using Mirascope:
+
+
+
+```python
+from mirascope.core import openai, prompt_template
+from pydantic import BaseModel, Field
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+ """
+ Answer this question, thinking step by step.
+ {query}
+ """
+)
+def cot_call(query: str): ...
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+ """
+ {query}
+ It's very important to my career.
+ """
+)
+def emotion_prompting_call(query: str): ...
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+ """
+ {query}
+ Rephrase and expand the question, and respond.
+ """
+)
+def rar_call(query: str): ...
+
+
+class BestResponse(BaseModel):
+ best_response: str = Field(
+ ..., description="The best response of the options given, verbatim"
+ )
+ reasoning: str = Field(
+ ...,
+ description="""A short description of why this is the best response to
+ the query, along with reasons why the other answers were worse.""",
+ )
+
+
+@openai.call(model="gpt-4o-mini", response_model=BestResponse)
+@prompt_template(
+ """
+ Here is a query: {query}
+
+ Evaluate the following responses from LLMs and decide which one
+ is the best based on correctness, fulfillment of the query, and clarity:
+
+ Response 1:
+ {cot_response}
+
+ Response 2:
+ {emotion_prompting_response}
+
+ Response 3:
+ {rar_response}
+ """
+)
+def mixture_of_reasoning(query: str) -> openai.OpenAIDynamicConfig:
+ cot_response = cot_call(query=query)
+ emotion_prompting_response = emotion_prompting_call(query=query)
+ rar_response = rar_call(query=query)
+
+ return {
+ "computed_fields": {
+ "cot_response": cot_response,
+ "emotion_prompting_response": emotion_prompting_response,
+ "rar_response": rar_response,
+ }
+ }
+
+
+prompt = "What are the side lengths of a rectangle with area 8 and perimeter 12?"
+
+best_response = mixture_of_reasoning(prompt)
+print(best_response.best_response)
+print(best_response.reasoning)
+```
+
+This implementation consists of several key components:
+
+1. Three different prompt engineering techniques:
+ - `cot_call`: [Chain of Thought reasoning](/docs/mirascope/guides/prompt-engineering/text-based/chain-of-thought)
+ - `emotion_prompting_call`: [Emotion prompting](/docs/mirascope/guides/prompt-engineering/text-based/emotion-prompting)
+ - `rar_call`: [Rephrase and Respond](/docs/mirascope/guides/prompt-engineering/text-based/rephrase-and-respond)
+
+2. A `BestResponse` model to structure the output of the final evaluation.
+
+3. The `mixture_of_reasoning` function, which:
+ - Calls each of the three prompt engineering techniques
+ - Uses dynamic configuration to pass the responses to the final evaluation
+ - Returns the best response and reasoning using the `BestResponse` model
+
+## Benefits and Considerations
+
+The Mixture of Reasoning implementation offers several advantages:
+
+1. Improved ability to handle a wide variety of queries by leveraging multiple prompt engineering techniques.
+2. Enhanced performance by selecting the best response from multiple approaches.
+3. Flexibility to add or modify prompt engineering techniques as needed.
+
+When implementing this technique, consider:
+
+- Carefully selecting the prompt engineering techniques to include based on your specific use case; a configurable sketch follows this list.
+- Balancing the number of techniques with computational cost and time constraints.
+- Fine-tuning the evaluation criteria in the final step to best suit your needs.
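+
+To make the set of techniques easy to swap, one option is to keep the calls in a list and build the evaluation prompt dynamically. This is a minimal sketch rather than the recipe's implementation: `technique_calls` and `flexible_mixture_of_reasoning` are assumed names, and it reuses the `cot_call`, `emotion_prompting_call`, `rar_call`, and `BestResponse` definitions from above together with Mirascope's `:list` format spec.
+
+```python
+technique_calls = [cot_call, emotion_prompting_call, rar_call]  # add or remove techniques here
+
+
+@openai.call(model="gpt-4o-mini", response_model=BestResponse)
+@prompt_template(
+    """
+    Here is a query: {query}
+
+    Evaluate the following responses from LLMs and decide which one
+    is the best based on correctness, fulfillment of the query, and clarity:
+
+    {numbered_responses:list}
+    """
+)
+def flexible_mixture_of_reasoning(query: str) -> openai.OpenAIDynamicConfig:
+    # Run each technique and label its response so the evaluator can refer to it.
+    numbered_responses = [
+        f"Response {i + 1}:\n{technique(query=query).content}"
+        for i, technique in enumerate(technique_calls)
+    ]
+    return {"computed_fields": {"numbered_responses": numbered_responses}}
+```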
+
+
+
+- Complex Problem Solving: Use Mixture of Reasoning for multi-step problems in fields like mathematics, physics, or engineering.
+- Customer Support: Implement different response strategies to handle various types of customer queries effectively.
+- Content Generation: Generate diverse content ideas by applying multiple creative thinking techniques.
+- Decision Making: Analyze complex scenarios from different perspectives to make more informed decisions.
+- Educational Tutoring: Provide explanations using various teaching methods to cater to different learning styles.
+
+
+
+When adapting this recipe to your specific use-case, consider:
+
+- Experimenting with different combinations of prompt engineering techniques.
+- Implementing a feedback loop to continuously improve the selection of the best response.
+- Tailoring the evaluation criteria to your specific domain or task requirements.
+- Combining Mixture of Reasoning with other techniques like Self-Consistency for even more robust reasoning capabilities.
+
+By leveraging Mirascope's `call` decorator, response models, and dynamic configuration, you can easily implement and customize the Mixture of Reasoning technique to enhance your LLM's performance across a wide range of applications.
+
+
diff --git a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/prompt-paraphrasing.mdx b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/prompt-paraphrasing.mdx
new file mode 100644
index 0000000000..609e0f3686
--- /dev/null
+++ b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/prompt-paraphrasing.mdx
@@ -0,0 +1,114 @@
+---
+title: "Prompt Paraphrasing: Generating Diverse Prompts for LLM Testing and Evaluation"
+description: Explore Prompt Paraphrasing, a technique for creating ensembles of prompt variations through translation and back-translation to test and evaluate LLM performance across different phrasings.
+---
+
+# Prompt Paraphrasing: Generating Diverse Prompts for LLM Testing and Evaluation
+
+[Prompt Paraphrasing](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00324/96460/How-Can-We-Know-What-Language-Models-Know) is not a prompt engineering technique, but rather a prompt generation technique used to create ensembles of prompts for testing or other prompt engineering techniques. In this example, we cover a specific method of generating prompts mentioned in the paper, whereby a prompt is translated into $B$ versions in another language and each of those is back-translated into $B$ English versions, yielding $B^2$ paraphrases in total.
+
+
+
+
+
+## Implementation
+
+Let's implement the Prompt Paraphrasing technique using Mirascope:
+
+
+
+
+
+```python
+from mirascope.core import openai, prompt_template
+from pydantic import BaseModel, Field
+
+
+class Translations(BaseModel):
+ translations: list[str] = Field(
+ ..., description="The list of translations into the requested language"
+ )
+
+
+@openai.call(model="gpt-4o-mini", response_model=Translations)
+@prompt_template(
+ """
+ For this phrase: {phrase}
+
+
+ Give {num_translations} translations in {language}
+ """
+)
+def translate(phrase: str, language: str, num_translations: int): ...
+
+
+def prompt_paraphrasing(query: str, num_translations: int = 3) -> set[str]:
+ spanish_translations = translate(
+ phrase=query,
+ language="Spanish",
+ num_translations=num_translations,
+ )
+ # Avoid Duplicates
+ prompt_variations = set()
+ for spanish_phrase in spanish_translations.translations:
+ back_translations = translate(
+            spanish_phrase, language="English", num_translations=num_translations
+ )
+ prompt_variations.update(back_translations.translations)
+ return prompt_variations
+
+
+print(
+ prompt_paraphrasing(
+ query="What are some manageable ways to improve my focus and productivity?"
+ )
+)
+```
+
+ {'What are some ways to boost my focus and achieve greater productivity in a manageable fashion?', 'How can I improve my focus and productivity?', 'What methods are effective for enhancing my concentration and productivity?', 'What are some practical strategies to boost my focus and productivity?', 'What are some feasible methods to enhance my concentration and productivity?', 'What are some manageable ways to improve my focus and productivity?', 'What are useful ways to increase my concentration and productivity?', 'How can I improve my focus and be more productive in a manageable way?', 'How can I enhance my concentration and increase my productivity in a sustainable manner?'}
+
+
+
+This implementation consists of two main functions:
+
+1. `translate`: This function takes a phrase, target language, and number of translations as input, and returns multiple translations of the phrase in the specified language.
+2. `prompt_paraphrasing`: This function orchestrates the Prompt Paraphrasing technique. It first translates the input query into Spanish, then back-translates each Spanish translation into English, creating a set of diverse prompt variations.
+
+## Benefits and Considerations
+
+The Prompt Paraphrasing implementation offers several advantages:
+
+1. Generation of diverse prompt variations for more robust LLM testing and evaluation.
+2. Potential discovery of more effective phrasings for specific tasks or queries.
+3. Improved understanding of LLM behavior across different linguistic formulations.
+
+When implementing this technique, consider:
+
+- Balancing the number of translations and languages with computational cost and time constraints.
+- Selecting appropriate languages for translation based on your specific use case or target audience.
+- Implementing a filtering mechanism to remove nonsensical or overly divergent paraphrases.
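+
+As a starting point for the filtering consideration above, here is a purely heuristic sketch (not from the original recipe). The length bounds are arbitrary assumptions; a more robust filter could instead score each paraphrase with an LLM call and a response model.
+
+```python
+def filter_paraphrases(original: str, paraphrases: set[str]) -> set[str]:
+    """Drop duplicate (ignoring case/whitespace) and extreme-length paraphrases."""
+    seen: set[str] = set()
+    kept: set[str] = set()
+    for phrase in paraphrases:
+        normalized = " ".join(phrase.lower().split())
+        if normalized in seen:
+            continue  # skip trivial duplicates
+        if not 0.5 * len(original) <= len(phrase) <= 2.0 * len(original):
+            continue  # skip paraphrases that are suspiciously short or long
+        seen.add(normalized)
+        kept.add(phrase)
+    return kept
+```
+
+You could then pass the output of `prompt_paraphrasing` through `filter_paraphrases` before using the variations for testing.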
+
+
+
+- Robustness Testing: Use prompt paraphrasing to test LLM performance across various phrasings of the same query.
+- Data Augmentation: Generate additional training data by paraphrasing existing prompts or questions.
+- Chatbot Improvement: Enhance chatbot understanding by training on paraphrased versions of common queries.
+- Cross-lingual Information Retrieval: Improve search results by querying with multiple paraphrased versions of the search term.
+- Writing Assistance: Offer users alternative phrasings for their writing to improve clarity or style.
+
+
+
+When adapting this recipe to your specific use-case, consider:
+
+- Experimenting with different source and target languages for translation.
+- Implementing a scoring mechanism to rank paraphrases based on relevance or quality.
+- Combining Prompt Paraphrasing with other techniques like Chain of Thought or Self-Consistency for more comprehensive LLM evaluation.
+- Developing a feedback loop to refine the paraphrasing process based on LLM performance on different prompt variations.
+
+By leveraging Mirascope calls and response models, you can easily implement and customize the Prompt Paraphrasing technique to generate diverse prompts for LLM testing, evaluation, and improvement across a wide range of applications.
+
+
diff --git a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/reverse-chain-of-thought.mdx b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/reverse-chain-of-thought.mdx
new file mode 100644
index 0000000000..68eaea333d
--- /dev/null
+++ b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/reverse-chain-of-thought.mdx
@@ -0,0 +1,349 @@
+---
+title: "Reverse Chain of Thought: Enhancing LLM Reasoning with Self-Reflection"
+description: Implement Reverse Chain of Thought, a technique that enhances LLM reasoning by encouraging the model to reflect on and correct its own thought process through query reconstruction and comparison.
+---
+
+# Reverse Chain of Thought: Enhancing LLM Reasoning with Self-Reflection
+
+This recipe demonstrates how to implement the Reverse Chain of Thought technique using Large Language Models (LLMs) with Mirascope. Reverse Chain of Thought is a prompt engineering method that enhances an LLM's reasoning capabilities by encouraging it to reflect on and correct its own thought process.
+
+
+
+
+
+
+
+Reverse chain of thought is a prompt engineering technique where a chain of thought call is made for a query, and we then attempt to reconstruct the query from the attempted solution. Both the original and reconstructed queries are broken down into their individual conditions, and each condition is cross-referenced against the full set of conditions from the other query to detect overlooked facts or hallucinations. The questions themselves are also compared to ensure that the two queries not only share the same context but also ask the same thing. This fine-grained comparison is then used in a final prompt.
+
+
+In the original paper, no prompt was given for the case where no mistakes exist, so we took the liberty of asking the model to repeat its original solution verbatim in that case.
+
+
+Reverse chain of thought is a technique that works for any prompt which has shown signs of susceptibility to hallucinations or misinterpretations in its initial attempts to answer the question.
+
+
+
+## Implementation
+
+Let's implement the Reverse Chain of Thought technique using Mirascope:
+
+
+```python
+import asyncio
+
+from mirascope.core import openai, prompt_template
+from mirascope.core.openai import OpenAICallResponse
+from openai.types.chat import ChatCompletionMessageParam
+from pydantic import BaseModel, Field
+
+
+@openai.call(model="gpt-4o-mini")
+def zero_shot_cot(query: str) -> str:
+ return f"{query} Let's think step by step."
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+ """
+ USER:
+ Give the concrete prompt (problem) that can generate this answer.
+ The problem should contain all basic and necessary information and correspond to the answer.
+ The problem can only ask for one result.
+
+ {response}
+ """
+)
+def reconstruct_query(response: str): ...
+
+
+class Decomposition(BaseModel):
+ conditions: list[str] = Field(
+ ..., description="A list of conditions of the problem."
+ )
+
+
+@openai.call(
+ model="gpt-4o-mini",
+ response_model=Decomposition,
+ call_params={"tool_choice": "required"},
+)
+@prompt_template(
+ """
+ Please list the conditions of the problem. There may be multiple conditions.
+ Do not list conditions not related to calculations, but list all necessary conditions.
+ The format should be a list of conditions with one condition per item.
+
+ {query}
+ """
+)
+async def decompose_query(query: str): ...
+
+
+class Comparison(BaseModel):
+ condition: str = Field(
+ ..., description="The original condition the comparison was made with, verbatim"
+ )
+ deducible: bool = Field(
+ ...,
+ description="Whether the condition is deducible from the list of other conditions.",
+ )
+ illustration: str = Field(
+ ...,
+ description="A quick illustration of the reason the condition is/isn't deducible from the list of other conditions.",
+ )
+
+
+@openai.call(
+ model="gpt-4o-mini",
+ response_model=Comparison,
+ call_params={"tool_choice": "required"},
+)
+@prompt_template(
+ """
+ Given a candidate condition: '{condition}'
+
+ Here is a condition list: '{condition_list}'
+
+ From a mathematical point of view, can this candidate condition be deduced from the condition list?
+ Please illustrate your reason and answer True or False.
+ """
+)
+async def compare_conditions(condition: str, condition_list: list[str]): ...
+
+
+@openai.call(
+ model="gpt-4o-mini", response_model=bool, call_params={"tool_choice": "required"}
+)
+@prompt_template(
+ """
+ Q1: {original_problem}
+ Q2: {reconstructed_problem}
+
+ From a mathematical point of view, are these two problems asking the same thing at the end?
+ """
+)
+def compare_questions(original_problem: str, reconstructed_problem: str): ...
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+ """
+ MESSAGES: {history}
+ USER:
+ {mistakes_prompt}
+ {overlooked_prompt}
+ {hallucination_prompt}
+ {misinterpretation_prompt}
+ """
+)
+async def fine_grained_comparison(
+ history: list[ChatCompletionMessageParam], query: str, reconstructed_query: str
+) -> openai.OpenAIDynamicConfig:
+ # Decompose both queries into conditions
+ original_conditions, reconstructed_conditions = [
+ response.conditions
+ for response in await asyncio.gather(
+ decompose_query(query), decompose_query(reconstructed_query)
+ )
+ ]
+
+ # Identify overlooked/hallucinated conditions and misinterpretation of question
+ overlooking_tasks = [
+ compare_conditions(original_condition, reconstructed_conditions)
+ for original_condition in original_conditions
+ ]
+ hallucination_tasks = [
+ compare_conditions(reconstructed_condition, original_conditions)
+ for reconstructed_condition in reconstructed_conditions
+ ]
+ full_comparison = await asyncio.gather(*(overlooking_tasks + hallucination_tasks))
+
+ question_misinterpretation = compare_questions(query, reconstructed_query)
+
+ overlooked_comparisons = [
+ comparison
+ for comparison in full_comparison[: len(original_conditions)]
+ if not comparison.deducible
+ ]
+ hallucination_comparisons = [
+ comparison
+ for comparison in full_comparison[len(original_conditions) :]
+ if not comparison.deducible
+ ]
+
+ # Fill out prompt depending on the comparisons
+ if (
+ not question_misinterpretation
+ and not overlooked_comparisons
+ and not hallucination_comparisons
+ ):
+ mistakes_prompt = """There are no mistakes in your interpretation of the prompt.
+ Repeat your original solution verbatim."""
+ overlooked_prompt = ""
+ hallucination_prompt = ""
+ misinterpretation_prompt = ""
+ else:
+ mistakes_prompt = (
+ "Here are the mistakes and reasons in your answer to the problem.\n"
+ )
+
+ if overlooked_comparisons:
+ conditions = [comparison.condition for comparison in overlooked_comparisons]
+ illustrations = [
+ comparison.illustration for comparison in overlooked_comparisons
+ ]
+ overlooked_prompt = f"""
+ Overlooked Conditions:
+ You have ignored some real conditions:
+ {conditions}
+ The real problem has the conditions:
+ {original_conditions}
+ You should consider all real conditions in the problem.
+ Here are the detailed reasons:
+ {illustrations}"""
+ else:
+ overlooked_prompt = ""
+
+ if hallucination_comparisons:
+ conditions = [
+ comparison.condition for comparison in hallucination_comparisons
+ ]
+ illustrations = [
+                comparison.illustration for comparison in hallucination_comparisons
+ ]
+ hallucination_prompt = f"""
+ Hallucinated Conditions
+ You use some wrong candidate conditions:
+ {conditions}
+ They all can not be deduced from the true condition list.
+ The real problem has the conditions:
+ {original_conditions}
+ You should consider all real conditions in the problem.
+ Here are the detailed reasons:
+ {illustrations}"""
+ else:
+ hallucination_prompt = ""
+
+ if question_misinterpretation:
+ misinterpretation_prompt = f"""
+ You misunderstood the question.
+ You think the question is: {reconstructed_query}.
+ But the real question is: {query}
+ They are different. You should consider the original question."""
+ else:
+ misinterpretation_prompt = ""
+ return {
+ "computed_fields": {
+ "mistakes_prompt": mistakes_prompt,
+ "overlooked_prompt": overlooked_prompt,
+ "hallucination_prompt": hallucination_prompt,
+ "misinterpretation_prompt": misinterpretation_prompt,
+ }
+ }
+
+
+async def reverse_cot(query: str) -> OpenAICallResponse:
+ cot_response = zero_shot_cot(query=query)
+ reconstructed_query_response = reconstruct_query(cot_response.content)
+ history = cot_response.messages + reconstructed_query_response.messages
+ response = await fine_grained_comparison(
+ history=history,
+ query=query,
+ reconstructed_query=reconstructed_query_response.content,
+ )
+ return response
+
+
+query = """At the trip to the county level scavenger hunt competition 90 people \
+were required to split into groups for the competition to begin. To break \
+people up into smaller groups with different leaders 9-person groups were \
+formed. If 3/5 of the number of groups each had members bring back 2 seashells each \
+how many seashells did they bring?"""
+
+print(await reverse_cot(query=query))
+```
+
+```
+Thank you for your feedback! Based on your clarifications, here’s a revised problem prompt that corresponds to your original question while considering all the necessary conditions:
+
+
+**Problem Prompt:**
+
+At the trip to the county-level scavenger hunt competition, there are 90 participants who need to be divided into groups for the event to start. Each group consists of 9 people. If \( \frac{3}{5} \) of the total number of groups formed contributed by bringing back seashells, and each member of those groups brought back 2 seashells each, how many seashells were brought back in total?
+
+
+This prompt encapsulates all essential components of the problem. It clarifies the number of participants, how they are grouped, the fraction of groups contributing, and the number of seashells collected by each member, leading to the final calculation. Let's go through the reasoning for clarity:
+
+1. **Total Participants**: 90 people.
+2. **Groups Formed**: Each group has 9 members, so:
+ \[
+ \text{Number of groups} = \frac{90}{9} = 10
+ \]
+3. **Groups Contributing Shells**: \( \frac{3}{5} \) of the total groups contributed:
+ \[
+ \text{Groups contributing} = 10 \times \frac{3}{5} = 6
+ \]
+4. **Total Members in Contributing Groups**: Each contributing group has 9 members:
+ \[
+ \text{Total members contributing} = 6 \times 9 = 54
+ \]
+5. **Total Seashells Collected**: Each member brought back 2 seashells:
+ \[
+ \text{Total seashells} = 54 \times 2 = 108
+ \]
+
+Thus, the total number of seashells brought back is:
+
+\[
+\boxed{108}
+\]
+
+This approach correctly follows the sequence of logical deductions based on the problem's premise. Thank you for your guidance!
+```
+
+This implementation consists of several key functions:
+
+1. `zero_shot_cot`: Generates an initial chain of thought response.
+2. `reconstruct_query`: Attempts to reconstruct the original query from the chain of thought response.
+3. `decompose_query`: Breaks down a query into its individual conditions.
+4. `compare_conditions`: Compares individual conditions to determine if they are deducible from a list of other conditions.
+5. `compare_questions`: Checks if two questions are asking the same thing.
+6. `fine_grained_comparison`: Performs a detailed comparison between the original and reconstructed queries, identifying overlooked conditions, hallucinations, and misinterpretations.
+7. `reverse_cot`: Orchestrates the entire Reverse Chain of Thought process.
+
+## Benefits and Considerations
+
+The Reverse Chain of Thought implementation offers several advantages:
+
+1. Improved accuracy by identifying and correcting overlooked conditions, hallucinations, and misinterpretations.
+2. Enhanced reasoning capabilities through self-reflection and error correction.
+3. More robust problem-solving, especially for complex or ambiguous queries.
+
+When implementing this technique, consider:
+
+- Balancing the computational cost of multiple LLM calls with the improved accuracy.
+- Fine-tuning the prompts for each step to optimize the reflection and correction process.
+- Adapting the technique for different types of problems or domains.
+
+
+
+- Complex Problem Solving: Use Reverse Chain of Thought for multi-step problems in fields like physics or engineering.
+- Legal Analysis: Apply the technique to enhance the accuracy of legal interpretations and argumentation.
+- Medical Diagnosis: Implement Reverse Chain of Thought to improve the reliability of symptom analysis and potential diagnoses.
+- Financial Modeling: Enhance the accuracy of financial predictions and risk assessments by identifying overlooked factors.
+- Educational Assessment: Use the technique to generate and validate complex exam questions and their solutions.
+
+
+
+When adapting this recipe to your specific use-case, consider:
+
+- Tailoring the decomposition and comparison steps to your domain for better performance.
+- Implementing a feedback loop to continuously improve the quality of the Reverse Chain of Thought responses; one possible shape of such a loop is sketched below.
+- Combining Reverse Chain of Thought with other techniques like Self-Ask or Self-Consistency for even more powerful reasoning capabilities.
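+
+One possible shape of such a feedback loop is sketched below, under the assumption that re-running the reconstruction and comparison on each revised answer is worth the extra calls. `reverse_cot_rounds` is a hypothetical helper name; it simply reuses `reverse_cot`, `reconstruct_query`, and `fine_grained_comparison` from the implementation above.
+
+```python
+async def reverse_cot_rounds(query: str, rounds: int = 2) -> OpenAICallResponse:
+    """Apply the reverse chain of thought check repeatedly, refining each round."""
+    response = await reverse_cot(query=query)
+    for _ in range(rounds - 1):
+        # Reconstruct the query from the latest (revised) answer and compare again.
+        reconstructed = reconstruct_query(response.content)
+        response = await fine_grained_comparison(
+            history=response.messages + reconstructed.messages,
+            query=query,
+            reconstructed_query=reconstructed.content,
+        )
+    return response
+```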
+
+By leveraging Mirascope's `call` decorator, response models, and dynamic configuration, you can easily implement and customize the Reverse Chain of Thought technique to enhance your LLM's reasoning capabilities across a wide range of applications.
+
diff --git a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/self-consistency.mdx b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/self-consistency.mdx
new file mode 100644
index 0000000000..a8b8f340bb
--- /dev/null
+++ b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/self-consistency.mdx
@@ -0,0 +1,209 @@
+---
+title: "Self-Consistency: Enhancing LLM Reasoning with Multiple Outputs"
+description: Implement the Self-Consistency technique, which enhances LLM reasoning by generating multiple Chain of Thought responses and selecting the most common answer for improved accuracy.
+---
+
+# Self-Consistency: Enhancing LLM Reasoning with Multiple Outputs
+
+This recipe demonstrates how to implement the Self-Consistency technique using Large Language Models (LLMs) with Mirascope. Self-Consistency is a prompt engineering method that enhances an LLM's reasoning capabilities by generating multiple Chain of Thought (CoT) responses and selecting the most common answer. We'll explore both a basic implementation and an enhanced version with automated answer extraction.
+
+
+
+
+
+
+Self-consistency is a prompt engineering technique where multiple calls are made with Chain of Thought prompting, producing a variety of answers, and the most common answer is selected. Self-consistency has been shown to be highly effective on mathematical and symbolic reasoning tasks, and it has also been shown to help in niche scenarios where CoT actually reduces the quality of LLM output.
+
+In the original paper, users manually pick the most frequent response, but we have integrated response models to automate that process once all responses have been generated.
+
+
+## Basic Self-Consistency Implementation
+
+Let's start with a basic implementation of Self-Consistency using Chain of Thought reasoning:
+
+
+
+```python
+import asyncio
+from collections import Counter
+
+from mirascope.core import openai, prompt_template
+
+few_shot_examples = [
+ {
+ "question": "There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?",
+ "answer": "We start with 15 trees. Later we have 21 trees. The difference must be the number of trees they planted. So, they must have planted 21 - 15 = 6 trees. The answer is 6.",
+ },
+ {
+ "question": "If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?",
+ "answer": "There are 3 cars in the parking lot already. 2 more arrive. Now there are 3 + 2 = 5 cars. The answer is 5.",
+ },
+ {
+ "question": "Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?",
+ "answer": "Leah had 32 chocolates and Leah’s sister had 42. That means there were originally 32 + 42 = 74 chocolates. 35 have been eaten. So in total they still have 74 - 35 = 39 chocolates. The answer is 39.",
+ },
+ {
+ "question": "Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?",
+ "answer": "Jason had 20 lollipops. Since he only has 12 now, he must have given the rest to Denny. The number of lollipops he has given to Denny must have been 20 - 12 = 8 lollipops. The answer is 8.",
+ },
+ {
+ "question": "Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?",
+ "answer": "He has 5 toys. He got 2 from mom, so after that he has 5 + 2 = 7 toys. Then he got 2 more from dad, so in total he has 7 + 2 = 9 toys. The answer is 9.",
+ },
+ {
+ "question": "There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?",
+ "answer": "There are 4 days from monday to thursday. 5 computers were added each day. That means in total 4 * 5 = 20 computers were added. There were 9 computers in the beginning, so now there are 9 + 20 = 29 computers. The answer is 29.",
+ },
+ {
+ "question": "Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?",
+ "answer": "Michael initially had 58 balls. He lost 23 on Tuesday, so after that he has 58 - 23 = 35 balls. On Wednesday he lost 2 more so now he has 35 - 2 = 33 balls. The answer is 33.",
+ },
+]
+
+
+@openai.call(model="gpt-4o-mini", call_params={"temperature": 0.5})
+@prompt_template(
+ """
+ Some examples on how to think step by step:
+ {examples:lists}
+
+ Answer the following question, thinking step by step:
+ {query}
+ """
+)
+async def chain_of_thought(
+ query: str, few_shot_examples: list[dict[str, str]]
+) -> openai.OpenAIDynamicConfig:
+ examples = [
+ [f"Q:{example['question']}", f"A:{example['answer']}"]
+ for example in few_shot_examples
+ ]
+ return {"computed_fields": {"examples": examples}}
+
+
+def most_frequent(lst):
+ """Returns the most frequent element in a list."""
+ counter = Counter(lst)
+ most_common = counter.most_common(1)
+ return most_common[0][0] if most_common else None
+
+
+async def self_consistency(
+ query: str, num_samples: int, few_shot_examples: list[dict[str, str]]
+):
+ cot_tasks = [chain_of_thought(query, few_shot_examples) for _ in range(num_samples)]
+ cot_responses = [response.content for response in await asyncio.gather(*cot_tasks)]
+ # Extract final answers manually (simplified for this example)
+ final_answers = [
+ response.split("The answer is ")[-1].strip(".") for response in cot_responses
+ ]
+ return most_frequent(final_answers)
+
+
+query = "Olivia has $23. She bought five bagels for $3 each. How much money does she have left?"
+result = await self_consistency(
+ query=query, num_samples=5, few_shot_examples=few_shot_examples
+)
+print(f"The most consistent answer is: {result}")
+```
+
+ The most consistent answer is: $8
+
+
+This basic implementation demonstrates how to use Self-Consistency with Chain of Thought reasoning. The `self_consistency` function generates multiple CoT responses and selects the most frequent final answer.
+
+## Enhanced Self-Consistency with Automated Answer Extraction
+
+Now, let's improve our implementation by adding automated answer extraction:
+
+
+```python
+from pydantic import BaseModel, Field
+
+
+class Solution(BaseModel):
+ solution_value: int = Field(
+ ..., description="The actual number of a solution to a math problem."
+ )
+
+
+@openai.call(model="gpt-4o-mini", response_model=Solution)
+@prompt_template(
+ """
+ Extract just the number of a solution to a math problem.
+ For example, for the solution:
+ Michael initially had 58 balls. He lost 23 on Tuesday, so after that he has
+ 58 - 23 = 35 balls. On Wednesday he lost 2 more so now he has 35 - 2 = 33 balls.
+ The answer is 33.
+
+ You would extract 33.
+
+ Solution to extract from:
+ {response}
+ """
+)
+async def extract_number(response: str): ...
+
+
+async def enhanced_self_consistency(
+ query: str, num_samples: int, few_shot_examples: list[dict[str, str]]
+) -> int:
+ cot_tasks = [chain_of_thought(query, few_shot_examples) for _ in range(num_samples)]
+ cot_responses = [response.content for response in await asyncio.gather(*cot_tasks)]
+ extract_number_tasks = [extract_number(response) for response in cot_responses]
+ response_numbers = [
+ response.solution_value
+ for response in await asyncio.gather(*extract_number_tasks)
+ ]
+ return most_frequent(response_numbers)
+
+
+result = await enhanced_self_consistency(
+ query=query, num_samples=5, few_shot_examples=few_shot_examples
+)
+print(f"The most consistent answer is: {result}")
+```
+
+ The most consistent answer is: 8
+
+
+
+This enhanced version introduces the `extract_number` function, which uses a response model to automatically extract the numerical answer from each CoT response. The `enhanced_self_consistency` function then uses this extracted number to determine the most consistent answer.
+
+## Benefits and Considerations
+
+The Self-Consistency implementation offers several advantages:
+
+1. Improved accuracy on mathematical and symbolic reasoning tasks.
+2. Mitigation of occasional errors or inconsistencies in LLM outputs.
+3. Potential for better performance in scenarios where standard CoT might struggle.
+
+When implementing this technique, consider:
+
+- Balancing the number of samples with computational cost and time constraints.
+- Adjusting the temperature parameter to control the diversity of generated responses, as sketched after this list.
+- Fine-tuning the answer extraction process for different types of problems (e.g., numerical vs. categorical answers).
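+
+If you want to tune diversity at call time rather than hard-coding it in the decorator, one option (a sketch, not the recipe's implementation) is to pass the temperature through Mirascope's dynamic configuration. `chain_of_thought_with_temperature` is an assumed name; it otherwise mirrors `chain_of_thought` above.
+
+```python
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+    """
+    Some examples on how to think step by step:
+    {examples:lists}
+
+    Answer the following question, thinking step by step:
+    {query}
+    """
+)
+async def chain_of_thought_with_temperature(
+    query: str, few_shot_examples: list[dict[str, str]], temperature: float
+) -> openai.OpenAIDynamicConfig:
+    examples = [
+        [f"Q:{example['question']}", f"A:{example['answer']}"]
+        for example in few_shot_examples
+    ]
+    return {
+        "computed_fields": {"examples": examples},
+        # Override the call's temperature per invocation to control diversity.
+        "call_params": {"temperature": temperature},
+    }
+```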
+
+
+
+- Complex Problem Solving: Use Self-Consistency for multi-step problems in fields like physics or engineering.
+- Medical Diagnosis: Apply Self-Consistency to improve the accuracy of symptom analysis and potential diagnoses.
+- Financial Modeling: Implement Self-Consistency for more reliable financial predictions and risk assessments.
+- Natural Language Understanding: Enhance text classification or sentiment analysis tasks with Self-Consistency.
+- Educational Assessment: Use Self-Consistency to generate and validate multiple-choice questions and answers.
+
+
+
+When adapting this recipe to your specific use-case, consider:
+
+- Tailoring the few-shot examples to your domain for better performance.
+- Experimenting with different prompt formats and Chain of Thought structures.
+- Implementing a feedback loop to continuously improve the quality of the Self-Consistency responses.
+- Combining Self-Consistency with other techniques like Self-Ask for even more powerful reasoning capabilities.
+
+By leveraging Mirascope's `call` decorator, response models, and dynamic configuration, you can easily implement and customize the Self-Consistency technique to enhance your LLM's reasoning capabilities across a wide range of applications.
+
diff --git a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/self-refine.mdx b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/self-refine.mdx
new file mode 100644
index 0000000000..f2bc8aa141
--- /dev/null
+++ b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/self-refine.mdx
@@ -0,0 +1,185 @@
+---
+title: "Self-Refine: Enhancing LLM Outputs Through Iterative Self-Improvement"
+description: Learn how to implement Self-Refine to improve LLM outputs by using iterative feedback and refinement cycles to enhance responses
+---
+
+# Self-Refine: Enhancing LLM Outputs Through Iterative Self-Improvement
+
+This recipe demonstrates how to implement the Self-Refine technique using Large Language Models (LLMs) with Mirascope. Self-Refine is a prompt engineering method that enhances an LLM's output by iteratively generating feedback and improving its responses.
+
+
+
+
+
+
+Self-refine is a prompt engineering technique where a model gives feedback about its own answer and then uses that feedback to generate a new answer. This self-refinement can take place multiple times before producing the final answer. Self-refine is helpful for reasoning, coding, and generation tasks.
+
+
+## Basic Self-Refine Implementation
+
+Let's start with a basic implementation of Self-Refine using Mirascope:
+
+
+
+
+```python
+from mirascope.core import openai, prompt_template
+from mirascope.core.openai import OpenAICallResponse
+
+
+@openai.call(model="gpt-4o-mini")
+def call(query: str) -> str:
+ return query
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+ """
+ Here is a query and a response to the query. Give feedback about the answer,
+ noting what was correct and incorrect.
+ Query:
+ {query}
+ Response:
+ {response}
+ """
+)
+def evaluate_response(query: str, response: OpenAICallResponse): ...
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+ """
+ For this query:
+ {query}
+ The following response was given:
+ {response}
+ Here is some feedback about the response:
+ {feedback}
+
+ Consider the feedback to generate a new response to the query.
+ """
+)
+def generate_new_response(
+ query: str, response: OpenAICallResponse
+) -> openai.OpenAIDynamicConfig:
+ feedback = evaluate_response(query, response)
+ return {"computed_fields": {"feedback": feedback}}
+
+
+def self_refine(query: str, depth: int) -> str:
+ response = call(query)
+ for _ in range(depth):
+ response = generate_new_response(query, response)
+ return response.content
+
+
+query = """Olivia has $23. She bought five bagels for $3 each.
+How much money does she have left?"""
+print(self_refine(query, 1))
+```
+```
+ To determine how much money Olivia has left after her purchase, let's break it down step by step:
+
+ 1. **Starting Amount**: Olivia has $23 initially.
+ 2. **Cost of Bagels**: She bought 5 bagels at $3 each. The total spent on bagels is calculated as:
+ \[
+ 5 \times 3 = 15 \text{ dollars}
+ \]
+ 3. **Amount Left**: Now, we subtract the total amount spent on the bagels from Olivia's starting amount:
+ \[
+ 23 - 15 = 8 \text{ dollars}
+ \]
+
+ Therefore, after buying the bagels, Olivia has **$8 remaining**.
+```
+
+## Enhanced Self-Refine with Response Model
+
+Now, let's improve our implementation by adding a response model to structure the output:
+
+
+
+```python
+from pydantic import BaseModel, Field
+
+
+class MathSolution(BaseModel):
+ steps: list[str] = Field(..., description="The steps taken to solve the problem")
+ final_answer: float = Field(..., description="The final numerical answer")
+
+
+@openai.call(model="gpt-4o-mini", response_model=MathSolution)
+@prompt_template(
+ """
+ For this query:
+ {query}
+ The following response was given:
+ {response}
+ Here is some feedback about the response:
+ {feedback}
+
+ Consider the feedback to generate a new response to the query.
+ Provide the solution steps and the final numerical answer.
+ """
+)
+def enhanced_generate_new_response(
+ query: str, response: OpenAICallResponse
+) -> openai.OpenAIDynamicConfig:
+ feedback = evaluate_response(query, response)
+ return {"computed_fields": {"feedback": feedback}}
+
+
+def enhanced_self_refine(query: str, depth: int) -> MathSolution:
+ response = call(query)
+ for _ in range(depth):
+ solution = enhanced_generate_new_response(query, response)
+ response = f"Steps: {solution.steps}\nFinal Answer: {solution.final_answer}"
+ return solution
+
+
+# Example usage
+result = enhanced_self_refine(query, 1)
+print(result)
+```
+
+```
+ steps=['Olivia has $23.', 'She bought five bagels for $3 each.', 'Calculate the total cost for the bagels: 5 bagels * $3/bagel = $15.', 'Subtract the total cost of the bagels from the amount of money she had: $23 - $15 = $8.'] final_answer=8.0
+```
+
+This enhanced version introduces a MathSolution response model to structure the output, providing a clearer separation between solution steps and the final answer.
+
+## Benefits and Considerations
+
+The Self-Refine implementation offers several advantages:
+
+1. Improved accuracy through iterative refinement of responses.
+2. Enhanced reasoning capabilities, especially for complex problems.
+3. Potential for generating more detailed and step-by-step solutions.
+
+When implementing this technique, consider:
+
+- Balancing the number of refinement iterations with computational cost and time constraints.
+- Tailoring the feedback prompts to focus on specific aspects of improvement relevant to your use case.
+- Experimenting with different model parameters (e.g., temperature) for initial responses vs. refinement steps.
+
+
+
+- Essay Writing: Use Self-Refine to iteratively improve essay drafts, focusing on structure, argument coherence, and style.
+- Code Generation: Apply the technique to generate, evaluate, and refine code snippets or entire functions.
+- Data Analysis Reports: Enhance the quality and depth of data analysis reports through iterative self-improvement.
+- Product Descriptions: Refine product descriptions to be more engaging, accurate, and tailored to target audiences.
+- Legal Document Drafting: Improve the precision and comprehensiveness of legal documents through self-refinement.
+
+
+
+When adapting this recipe to your specific use-case, consider:
+
+- Customizing the feedback prompts to focus on domain-specific criteria.
+- Implementing different types of response models for various tasks (e.g., text generation, problem-solving).
+- Combining Self-Refine with other techniques like Chain of Thought for more complex reasoning tasks.
+- Developing a mechanism to halt refinement when improvements become marginal.
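+
+A minimal sketch of such a halting mechanism is shown below. It is not from the original recipe: `self_refine_until_stable` is a hypothetical helper that reuses `call` and `generate_new_response` from the basic implementation, and it uses exact-match stability as a crude stand-in for "improvements became marginal" (an LLM judge or a similarity score would be a more robust check).
+
+```python
+def self_refine_until_stable(query: str, max_depth: int = 3) -> str:
+    """Refine until the answer stops changing or max_depth is reached."""
+    response = call(query)
+    previous_content = response.content
+    for _ in range(max_depth):
+        response = generate_new_response(query, response)
+        if response.content == previous_content:
+            break  # no change between iterations; stop refining early
+        previous_content = response.content
+    return response.content
+```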
+
+By leveraging Mirascope's call decorator, response models, and dynamic configuration, you can easily implement and customize the Self-Refine technique to enhance your LLM's output quality across a wide range of applications.
diff --git a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/sim-to-m.mdx b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/sim-to-m.mdx
new file mode 100644
index 0000000000..d408e4a616
--- /dev/null
+++ b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/sim-to-m.mdx
@@ -0,0 +1,105 @@
+---
+title: "Sim to M: Enhancing LLM Reasoning with Perspective-Taking"
+description: Master the Sim to M technique to improve LLM reasoning by simulating different perspectives in complex scenarios involving multiple viewpoints
+---
+
+# Sim to M: Enhancing LLM Reasoning with Perspective-Taking
+
+This recipe demonstrates how to implement the Sim to M (Simulation Theory of Mind) technique using Large Language Models (LLMs) with Mirascope. Sim to M is a prompt engineering method that enhances an LLM's ability to reason about complex situations involving multiple perspectives.
+
+
+
+
+
+
+Sim to M is a prompt engineering technique for dealing with complex situations that involve multiple perspectives. First, the LLM is asked to establish the facts from one person's perspective; it then answers the question based only on that perspective. This approach can significantly improve the LLM's ability to reason about situations involving different viewpoints or limited information.
+
+
+## Implementation
+
+Let's implement the Sim to M technique using Mirascope:
+
+
+
+```python
+from mirascope.core import openai
+from mirascope.core.base.prompt import prompt_template
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+ """
+ The following is a sequence of events:
+ {story}
+ What events does {name} know about?
+ """
+)
+def get_one_perspective(story: str, name: str):
+ """Gets one person's perspective of a story."""
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+ """
+ {story_from_perspective}
+ Based on the above information, answer the following question:
+ {query}
+ """
+)
+def sim_to_m(story: str, query: str, name: str) -> openai.OpenAIDynamicConfig:
+ """Executes the flow of the Sim to M technique."""
+ story_from_perspective = get_one_perspective(story=story, name=name)
+ return {"computed_fields": {"story_from_perspective": story_from_perspective}}
+
+
+story = """Jim put the ball in the box. While Jim wasn't looking, Avi moved the \
+ball to the basket."""
+query = "Where does Jim think the ball is?"
+
+print(sim_to_m(story=story, query=query, name="Jim"))
+```
+
+ Based on the information provided, Jim believes the ball is in the box, as he is only aware of his own action of putting the ball there. He is unaware of Avi's action of moving the ball to the basket. Therefore, Jim thinks the ball is still in the box.
+
+
+This implementation consists of two main functions:
+
+1. `get_one_perspective`: This function takes a story and a person's name as input, and returns the events known to that person.
+2. `sim_to_m`: This function orchestrates the Sim to M technique. It first calls `get_one_perspective` to establish the facts from one person's viewpoint, then uses this perspective to answer the given query.
+
+
+## Benefits and Considerations
+
+The Sim to M implementation offers several advantages:
+
+1. Improved reasoning about situations involving multiple perspectives or limited information.
+2. Enhanced ability to model and simulate different viewpoints in complex scenarios.
+3. Potential for more accurate responses in tasks involving theory of mind or perspective-taking.
+
+When implementing this technique, consider:
+
+- Carefully crafting the initial story to include relevant information about different perspectives.
+- Ensuring that the query is specific to a particular perspective or viewpoint.
+- Experimenting with different prompts for the `get_one_perspective` function to optimize perspective extraction.
+
+
+
+- Character Analysis in Literature: Use Sim to M to analyze characters' motivations and beliefs in complex narratives.
+- Conflict Resolution: Apply the technique to understand different stakeholders' viewpoints in disputes.
+- User Experience Design: Simulate how different user groups might perceive and interact with a product or service.
+- Historical Analysis: Model historical figures' decision-making based on their known information at the time.
+- Psychological Assessments: Enhance AI-assisted psychological evaluations by better modeling individual perspectives.
+
+
+
+When adapting this recipe to your specific use-case, consider:
+
+- Tailoring the story and query formats to your domain for better performance.
+- Implementing a mechanism to handle multiple perspectives in more complex scenarios (see the sketch below).
+- Combining Sim to M with other techniques like Chain of Thought for even more nuanced reasoning.
+- Developing a feedback loop to refine the perspective extraction process based on the accuracy of final answers.
+
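+As a rough sketch of handling multiple perspectives (reusing `get_one_perspective` from above; the combined prompt wording is just one possible choice), you could gather each person's view before answering:
+
+```python
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+    """
+    {all_perspectives}
+    Based on the above information, answer the following question:
+    {query}
+    """
+)
+def sim_to_m_multi(story: str, query: str, names: list[str]) -> openai.OpenAIDynamicConfig:
+    """Combines several simulated perspectives before answering."""
+    all_perspectives = "\n\n".join(
+        f"{name}'s perspective:\n{get_one_perspective(story=story, name=name).content}"
+        for name in names
+    )
+    return {"computed_fields": {"all_perspectives": all_perspectives}}
+
+
+print(sim_to_m_multi(story=story, query=query, names=["Jim", "Avi"]))
+```
+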
+By leveraging Mirascope's `call` decorator and dynamic configuration, you can easily implement and customize the Sim to M technique to enhance your LLM's ability to reason about complex, multi-perspective situations across a wide range of applications.
diff --git a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/skeleton-of-thought.mdx b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/skeleton-of-thought.mdx
new file mode 100644
index 0000000000..84cc7ce651
--- /dev/null
+++ b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/skeleton-of-thought.mdx
@@ -0,0 +1,145 @@
+---
+title: "Skeleton of Thought: Enhancing LLM Response Speed"
+description: Speed up LLM responses with the Skeleton of Thought technique by creating initial outlines and expanding individual points in parallel
+---
+
+# Skeleton of Thought: Enhancing LLM Response Speed
+
+This recipe demonstrates how to implement Skeleton of Thought, a speed-oriented prompt engineering technique, using Large Language Models (LLMs) with Mirascope.
+
+
+
+
+
+
+Skeleton of Thought is a prompt-engineering technique oriented toward response speed rather than response quality. To expedite the response from a model, make an initial call to create a "skeleton" of the problem that outlines its solution in bullet points (without further explanation), then expand each subpoint with individual calls made in parallel before reconstructing the full answer at the end.
+
+
+## Basic Skeleton of Thought Implementation
+
+Let's start with a basic implementation of Skeleton of Thought:
+
+
+
+
+```python
+import asyncio
+
+from mirascope.core import openai, prompt_template
+from pydantic import BaseModel, Field
+
+
+class Skeleton(BaseModel):
+ subpoints: list[str] = Field(
+ ...,
+ description="""The subpoints of the skeleton of the original query.
+ Each is 3-5 words and starts with its point index, e.g.
+ 1. Some subpoint...""",
+ )
+
+
+@openai.call(model="gpt-3.5-turbo", response_model=Skeleton)
+@prompt_template(
+ """
+ You're an organizer responsible for only giving the skeleton (not the full content) for answering the question.
+ Provide the skeleton in a list of points (numbered 1., 2., 3., etc.) to answer the question.
+ Instead of writing a full sentence, each skeleton point should be very short with only 3∼5 words.
+ Generally, the skeleton should have 3∼10 points.
+ Now, please provide the skeleton for the following question.
+ {query}
+ Skeleton:
+ """
+)
+def break_into_subpoints(query: str): ...
+
+
+@openai.call(model="gpt-3.5-turbo")
+@prompt_template(
+ """
+ You're responsible for continuing the writing of one and only one point in the overall answer to the following question:
+
+ {query}
+
+ The skeleton of the answer is:
+
+ {skeleton}
+
+ Continue and only continue the writing of point {point_index}. Write it very shortly in 1-2 sentences and do not continue with other points!
+ """
+)
+async def expand_subpoint(query: str, skeleton: list[str], point_index: int): ...
+
+
+query = "How can I improve my focus?"
+
+
+async def skeleton_of_thought(query):
+ skeleton = break_into_subpoints(query)
+ tasks = [
+ expand_subpoint(query, skeleton.subpoints, i + 1)
+ for i, subpoint in enumerate(skeleton.subpoints)
+ ]
+ results = await asyncio.gather(*tasks)
+ return "\n".join([result.content for result in results])
+
+
+# Top-level `await` works in a notebook; in a script, wrap the call with `asyncio.run(...)`.
+print(await skeleton_of_thought(query))
+```
+
+ Identify distractions by making a list of the things that tend to pull your attention away from the task at hand. Once you know what they are, you can take steps to minimize their impact on your focus.
+ Establishing a routine can help improve focus by creating structure and consistency in your daily tasks and priorities. By sticking to a set schedule, you can reduce the likelihood of getting off track and better manage your time and energy.
+ Set specific goals by breaking down your tasks into smaller, manageable steps with clear deadlines. This will help you stay on track and maintain focus on what needs to be accomplished.
+ 4. Practice mindfulness by staying present in the moment and focusing on your breathing to help quiet the mind and improve concentration.
+ Take regular breaks to give your mind time to rest and recharge, allowing you to come back to your tasks with renewed focus and energy.
+
+
+This implementation demonstrates how to use Skeleton of Thought with Mirascope. The `break_into_subpoints` function creates the initial skeleton, and `expand_subpoint` expands each subpoint in parallel. The `skeleton_of_thought` function orchestrates the entire process.
+
+Intermediate Response:
+
+
+
+```python
+print(break_into_subpoints(query))
+```
+
+ subpoints=['Identify distractions', 'Implement time management techniques', 'Practice mindfulness', 'Get enough sleep', 'Stay hydrated', 'Exercise regularly', 'Set clear goals', 'Take short breaks', 'Limit multitasking']
+
+
+## Benefits and Considerations
+
+The Skeleton of Thought implementation offers several advantages:
+
+1. Improved response speed by parallelizing the expansion of subpoints.
+2. Enhanced structure in responses, making them easier to read and understand.
+3. Potential for better performance on complex queries that benefit from a structured approach.
+
+When implementing this technique, consider:
+
+- Balancing the number of subpoints with the desired response length and complexity.
+- Adjusting the prompt for subpoint expansion based on the specific use case or domain.
+- Implementing error handling and retries to ensure robustness in production environments (see the sketch below).
+
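+As a sketch of the last point (assuming transient API errors are the main failure mode; the retry count and fallback are arbitrary), you could wrap each expansion in a simple retry and swap it into the `asyncio.gather` call in `skeleton_of_thought`:
+
+```python
+async def expand_subpoint_with_retry(
+    query: str, skeleton: list[str], point_index: int, max_retries: int = 3
+) -> str:
+    """Retries a subpoint expansion, falling back to the raw subpoint text."""
+    for attempt in range(max_retries):
+        try:
+            response = await expand_subpoint(query, skeleton, point_index)
+            return response.content
+        except Exception:  # sketch only; catch provider-specific errors in practice
+            if attempt == max_retries - 1:
+                return skeleton[point_index - 1]  # fall back to the unexpanded subpoint
+    return ""
+```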
+
+
+- Content Creation: Use Skeleton of Thought to quickly generate outlines for articles or blog posts.
+- Project Planning: Rapidly break down complex projects into manageable tasks and subtasks.
+- Educational Materials: Create structured lesson plans or study guides efficiently.
+- Technical Documentation: Generate quick, well-structured documentation outlines for software or products.
+- Problem-Solving: Break down complex problems into smaller, more manageable components for analysis.
+
+
+
+When adapting this recipe to your specific use-case, consider:
+
+- Customizing the skeleton generation prompt to fit your domain-specific needs.
+- Experimenting with different LLM models for skeleton generation and subpoint expansion to optimize for speed and quality.
+- Implementing a feedback loop to refine the skeleton based on the quality of expanded subpoints.
+
+By leveraging Mirascope's `call` decorator, response models, and dynamic configuration, you can easily implement and customize the Skeleton of Thought technique to enhance your LLM's response speed and structure across a wide range of applications.
diff --git a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/step-back.mdx b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/step-back.mdx
new file mode 100644
index 0000000000..c6369b0c36
--- /dev/null
+++ b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/step-back.mdx
@@ -0,0 +1,184 @@
+---
+title: "Step-back Prompting: Enhancing LLM Reasoning with High-Level Questions"
+description: Learn to implement Step-back prompting, a technique that improves LLM reasoning by asking a high-level question about relevant concepts before addressing the original query.
+---
+
+# Step-back Prompting: Enhancing LLM Reasoning with High-Level Questions
+
+This recipe demonstrates how to implement the Step-back prompting technique using Large Language Models (LLMs) with Mirascope. Step-back prompting is a method that enhances an LLM's reasoning capabilities by asking a high-level question about relevant concepts or facts before addressing the original query.
+
+
+
+
+
+
+Step-back prompting is an alternative to the Chain of Thought technique, where one asks the LLM a high-level question about relevant concepts or facts before asking it the actual question. This technique is derived from the fact that humans often take a step back and use abstractions to arrive at an answer, and it can yield correct answers at times when Chain of Thought fails.
+
+
+## Implementation
+
+Let's implement the Step-back prompting technique using Mirascope:
+
+
+
+
+
+```python
+from mirascope.core import openai
+from mirascope.core.base.prompt import prompt_template
+
+few_shot_examples = [
+ {
+ "original_question": "Which position did Knox Cunningham hold from May 1955 to Apr 1956?",
+ "stepback_question": "Which positions have Knox Cunningham held in his career?",
+ },
+ {
+ "original_question": "Who was the spouse of Anna Karina from 1968 to 1974?",
+ "stepback_question": "Who were the spouses of Anna Karina?",
+ },
+ {
+ "original_question": "Which team did Thierry Audel play for from 2007 to 2008?",
+ "stepback_question": "Which teams did Thierry Audel play for in his career?",
+ },
+ {
+ "original_question": "What was the operator of GCR Class 11E from 1913 to Dec 1922?",
+ "stepback_question": "What were the operators of GCR Class 11E in history?",
+ },
+ {
+ "original_question": "Which country did Sokolovsko belong to from 1392 to 1525?",
+ "stepback_question": "Which countries did Sokolovsko belong to in history?",
+ },
+ {
+ "original_question": "when was the last time a team from canada won the stanley cup as of 2002",
+ "stepback_question": "which years did a team from canada won the stanley cup as of 2002",
+ },
+ {
+ "original_question": "when did england last get to the semi final in a world cup as of 2019",
+ "stepback_question": "which years did england get to the semi final in a world cup as of 2019?",
+ },
+ {
+ "original_question": "what is the biggest hotel in las vegas nv as of November 28, 1993",
+ "stepback_question": "what is the size of the hotels in las vegas nv as of November 28, 1993",
+ },
+ {
+ "original_question": "who has scored most runs in t20 matches as of 2017",
+ "stepback_question": "What are the runs of players in t20 matches as of 2017",
+ },
+]
+
+stepback_prompt = """You are an expert at world knowledge. Your task is to step \
+back and paraphrase a question to a more generic step-back question, which is \
+easier to answer. Here are a few examples:"""
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+ """
+ SYSTEM: {stepback_prompt_and_examples}
+ USER: {query}
+ """
+)
+def get_stepback_question(
+ query: str, num_examples: int = 0
+) -> openai.OpenAIDynamicConfig:
+ """Gets the generic, step-back version of a query."""
+ if num_examples < 0 or num_examples > len(few_shot_examples):
+ raise ValueError(
+ "num_examples cannot be negative or greater than number of available examples."
+ )
+ example_prompts = ""
+ for i in range(num_examples):
+ example_prompts += (
+ f"Original Question: {few_shot_examples[i]['original_question']}\n"
+ )
+ example_prompts += (
+ f"Stepback Question: {few_shot_examples[i]['stepback_question']}\n"
+ )
+ return {
+ "computed_fields": {
+ "stepback_prompt_and_examples": f"{stepback_prompt}\n{example_prompts}"
+ if num_examples
+ else None
+ }
+ }
+
+
+@openai.call(model="gpt-4o-mini")
+def call(query: str) -> str:
+ """A standard call to OpenAI."""
+ return query
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+ """
+ You are an expert of world knowledge. I am going to ask you a question.
+ Your response should be comprehensive and not contradicted with the
+ following context if they are relevant. Otherwise, ignore them if they are
+ not relevant.
+
+ {stepback_response}
+
+ Original Question: {query}
+ Answer:
+ """
+)
+def stepback(query: str, num_examples: int) -> openai.OpenAIDynamicConfig:
+ """Executes the flow of the Step-Back technique."""
+ stepback_question = get_stepback_question(
+ query=query, num_examples=num_examples
+ ).content
+ stepback_response = call(query=stepback_question).content
+ return {"computed_fields": {"stepback_response": stepback_response}}
+
+
+# Example usage
+query = """Who is the highest paid player in the nba this season as of 2017"""
+
+print(stepback(query=query, num_examples=len(few_shot_examples)))
+```
+
+ As of the 2017 NBA season, the highest-paid player was Stephen Curry. He signed a four-year, $215 million contract extension with the Golden State Warriors, which was the largest contract in NBA history at that time. This contract significantly boosted his earnings, making him the top earner in the league for that season. Other players like LeBron James and Kevin Durant were also among the highest-paid, but Curry's contract set a new benchmark in player salaries at that time.
+
+
+This implementation consists of three main functions:
+
+1. `get_stepback_question`: This function takes a query and generates a more generic, step-back version of the question.
+2. `call`: A standard call to OpenAI that processes the step-back question.
+3. `stepback`: This function orchestrates the Step-back prompting technique. It first calls `get_stepback_question` to generate a high-level question, then uses `call` to get a response to this question, and finally combines this information to answer the original query.
+
+## Benefits and Considerations
+
+The Step-back prompting implementation offers several advantages:
+
+1. Improved reasoning about complex queries by considering higher-level concepts first.
+2. Potential for more accurate responses in tasks that benefit from broader context.
+3. Ability to overcome limitations of other techniques like Chain of Thought in certain scenarios.
+
+When implementing this technique, consider:
+
+- Balancing the generality of the step-back question with its relevance to the original query.
+- Experimenting with different numbers of few-shot examples to optimize performance (see the sketch below).
+- Adjusting the prompt for generating step-back questions based on your specific use case.
+
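+As a quick sketch of the second point, you can rerun the `stepback` function above with different example counts and compare the answers (the counts chosen here are arbitrary):
+
+```python
+for num_examples in [1, 5, len(few_shot_examples)]:
+    print(f"--- {num_examples} few-shot example(s) ---")
+    print(stepback(query=query, num_examples=num_examples))
+```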
+
+
+- Complex Problem Solving: Use Step-back prompting for multi-step problems in fields like mathematics or engineering.
+- Medical Diagnosis: Apply the technique to consider general symptoms before focusing on specific conditions.
+- Legal Analysis: Implement Step-back prompting to first consider broader legal principles before addressing specific cases.
+- Historical Analysis: Use the method to first consider broader historical context before analyzing specific events.
+- Product Development: Apply Step-back prompting to consider general market trends before focusing on specific product features.
+
+
+
+When adapting this recipe to your specific use-case, consider:
+
+- Tailoring the few-shot examples to your domain for better performance.
+- Implementing a feedback loop to continuously improve the quality of step-back questions generated.
+- Combining Step-back prompting with other techniques like Self-consistency for even more robust reasoning capabilities.
+- Experimenting with different LLM models to find the best balance between performance and efficiency for your use case.
+
+By leveraging Mirascope's `call` decorator and dynamic configuration, you can easily implement and customize the Step-back prompting technique to enhance your LLM's reasoning capabilities across a wide range of applications.
diff --git a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/system-to-attention.mdx b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/system-to-attention.mdx
new file mode 100644
index 0000000000..f950b7e1ff
--- /dev/null
+++ b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/system-to-attention.mdx
@@ -0,0 +1,133 @@
+---
+title: "System to Attention (S2A): Enhancing LLM Focus with Query Filtering"
+description: Implement System to Attention (S2A) technique to filter irrelevant information from queries for more accurate and focused LLM responses
+---
+
+# System to Attention (S2A): Enhancing LLM Focus with Query Filtering
+
+This recipe demonstrates how to implement the System to Attention (S2A) technique using Large Language Models (LLMs) with Mirascope. S2A is a prompt engineering method that enhances an LLM's ability to focus on relevant information by filtering out irrelevant context from the initial query.
+
+
+
+
+
+
+System to Attention (S2A) is a prompt engineering technique whereby the prompt is first filtered to remove all irrelevant information from the query. This approach helps LLMs focus on the most pertinent information, potentially improving the accuracy and relevance of their responses, especially for queries containing extraneous or potentially biasing information.
+
+
+## Implementation
+
+Let's implement the S2A technique using Mirascope:
+
+
+```python
+from mirascope.core import openai, prompt_template
+from pydantic import BaseModel, Field
+
+
+class RelevantContext(BaseModel):
+ context_text: str = Field(
+ description="Context text related to the question (includes all content except unrelated sentences)"
+ )
+ detailed_question: str = Field(description="Detailed question:")
+
+
+@openai.call(model="gpt-4o-mini", response_model=RelevantContext)
+@prompt_template(
+ """
+ Given the following text by a user, extract the part that is related and useful, so that using that text alone would be good context for providing an accurate and correct answer to the question portion of the text.
+ Please include the actual question or query that the user is asking.
+ Separate this into two categories labeled with ”Context text related to the question (includes all content except unrelated sentences):” and ”Detailed question:”.
+ Do not use list.
+ Text by User: {query}
+ """
+)
+def remove_irrelevant_info(query: str):
+ """Reduces a query down to its relevant context and question"""
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+ """
+ Original user query (possibly biased): {query}
+ Unbiased context: {context_text}
+ Given the above unbiased context, answer the following: {detailed_question}
+ """
+)
+def s2a(query: str) -> openai.OpenAIDynamicConfig:
+ """Executes the flow of the System to Attention technique."""
+ relevant_context = remove_irrelevant_info(query=query)
+ context_text = relevant_context.context_text
+ detailed_question = relevant_context.detailed_question
+ return {
+ "computed_fields": {
+ "context_text": context_text,
+ "detailed_question": detailed_question,
+ }
+ }
+
+
+# Example usage
+query = """Sunnyvale is a city in California. \
+Sunnyvale has many parks. Sunnyvale city is \
+close to the mountains. Many notable people \
+are born in Sunnyvale. \
+In which city was San Jose's mayor Sam \
+Liccardo born?"""
+
+print(s2a(query=query))
+```
+
+ Sam Liccardo, the mayor of San Jose, was born in San Jose, California.
+
+
+This implementation consists of two main functions:
+
+1. `remove_irrelevant_info`: This function takes the original query and extracts the relevant context and the detailed question. It uses a `RelevantContext` response model to structure the output.
+
+2. `s2a`: This is the main function that orchestrates the S2A technique. It first calls `remove_irrelevant_info` to filter the query, then uses the filtered information to generate a response.
+
+## How It Works
+
+1. **Query Filtering**: The `remove_irrelevant_info` function analyzes the input query and separates it into relevant context and the actual question. This step helps remove any irrelevant or potentially biasing information.
+
+2. **Context Separation**: The filtered information is structured into two parts: the context text and the detailed question. This separation allows for more focused processing in the next step.
+
+3. **Unbiased Response Generation**: The `s2a` function uses the filtered context and question to generate a response. By providing the original query alongside the filtered information, it allows the model to be aware of potential biases while focusing on the relevant information.
+
+## Benefits and Considerations
+
+The S2A technique offers several advantages:
+
+1. Improved focus on relevant information, potentially leading to more accurate responses.
+2. Reduction of bias from irrelevant context in the original query.
+3. Clear separation of context and question, allowing for more structured reasoning.
+
+When implementing this technique, consider:
+
+- Balancing between removing irrelevant information and retaining important context.
+- Adjusting the filtering prompt based on the specific domain or type of queries you're dealing with.
+- Monitoring performance to ensure that important information isn't being filtered out unintentionally (see the sketch below).
+
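+As a lightweight sketch of the last point, you can inspect the intermediate `RelevantContext` directly and compare it against the original query to spot anything important that was dropped:
+
+```python
+relevant_context = remove_irrelevant_info(query=query)
+print("Context kept:", relevant_context.context_text)
+print("Question extracted:", relevant_context.detailed_question)
+```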
+
+
+- Customer Support: Filter out emotional language or irrelevant details from customer queries to focus on the core issue.
+- Legal Document Analysis: Extract relevant facts and questions from lengthy legal documents for more efficient processing.
+- Medical Diagnosis Assistance: Focus on key symptoms and patient history while filtering out irrelevant personal information.
+- Educational Q&A Systems: Improve the relevance of answers by focusing on the core educational content of student questions.
+- Research Query Processing: Enhance literature review processes by focusing on the most relevant aspects of research questions.
+
+
+
+When adapting this recipe to your specific use-case, consider:
+
+- Fine-tuning the filtering process for your specific domain or types of queries.
+- Experimenting with different prompt formats for both the filtering and answering stages.
+- Implementing a feedback loop to continuously improve the quality of the filtering process.
+- Combining S2A with other techniques like Chain of Thought or Self-Consistency for even more robust reasoning capabilities.
+
+By leveraging Mirascope's `call` decorator, response models, and dynamic configuration, you can easily implement and customize the System to Attention technique to enhance your LLM's ability to focus on relevant information and provide more accurate responses across a wide range of applications.
diff --git a/cloud/content/docs/v1/guides/prompt-engineering/text-based/chain-of-thought.mdx b/cloud/content/docs/v1/guides/prompt-engineering/text-based/chain-of-thought.mdx
new file mode 100644
index 0000000000..2e27d9582b
--- /dev/null
+++ b/cloud/content/docs/v1/guides/prompt-engineering/text-based/chain-of-thought.mdx
@@ -0,0 +1,147 @@
+---
+title: Chain of Thought
+description: Implement chain-of-thought prompting to improve LLM reasoning. This guide demonstrates how to structure prompts that encourage step-by-step problem solving.
+---
+
+# Chain of Thought
+
+[Chain of Thought](https://arxiv.org/pdf/2201.11903) (CoT) is a common prompt engineering technique which asks the LLM to step through its reasoning and thinking process to answer a question. In its simplest form, it can be implemented by asking the LLM to work through a problem step by step, but it is more effective when you leverage examples and patterns of reasoning similar to your query in a few-shot prompt. Chain of Thought is most effective for mathematical and reasoning tasks.
+
+
+
+
+
+## Zero Shot CoT
+
+
+Recent models will automatically explain their reasoning (to a degree) for most reasoning tasks, but explicitly asking for a step by step solution can sometimes produce better solutions and explanations.
+
+
+
+
+```python
+from mirascope.core import openai, prompt_template
+
+cot_augment = "\nLet's think step by step."
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template("{query} {cot_augment}")
+def call(query: str, cot_prompt: bool = False) -> openai.OpenAIDynamicConfig:
+ return {
+ "computed_fields": {
+ "cot_augment": cot_augment if cot_prompt else "",
+ }
+ }
+
+
+prompt = """Olivia has $23. She bought five bagels for $3 each.
+How much money does she have left?"""
+
+print(call(query=prompt, cot_prompt=True))
+```
+
+ First, let's determine how much money Olivia spent on the bagels.
+
+ 1. **Calculate the total cost of the bagels:**
+ - Price of one bagel = $3
+ - Number of bagels = 5
+ - Total cost = Price of one bagel × Number of bagels = $3 × 5 = $15
+
+ 2. **Subtract the total cost from Olivia's initial amount:**
+ - Initial amount = $23
+ - Amount spent = $15
+ - Amount left = Initial amount - Amount spent = $23 - $15 = $8
+
+ So, Olivia has $8 left after buying the bagels.
+
+
+## Few Shot CoT
+
+
+```python
+from mirascope.core import openai
+from openai.types.chat import ChatCompletionMessageParam
+
+few_shot_examples = [
+ {
+ "question": "There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?",
+ "answer": """There are 15 trees originally. Then there were 21 trees after some more were planted. So there must have been 21 - 15 = 6. The answer is 6.""",
+ },
+ {
+ "question": "If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?",
+ "answer": """There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. The answer is 5.""",
+ },
+ {
+ "question": "Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?",
+ "answer": """Originally, Leah had 32 chocolates. Her sister had 42. So in total they had 32 + 42 = 74. After eating 35, they had 74 - 35 = 39. The answer is 39.""",
+ },
+ {
+ "question": "Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?",
+ "answer": """Jason started with 20 lollipops. Then he had 12 after giving some to Denny. So he gave Denny 20 - 12 = 8. The answer is 8.""",
+ },
+ {
+ "question": "Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?",
+ "answer": """Shawn started with 5 toys. If he got 2 toys each from his mom and dad, then that is 4 more toys. 5 + 4 = 9. The answer is 9.""",
+ },
+ {
+ "question": "There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?",
+ "answer": """There were originally 9 computers. For each of 4 days, 5 more computers were added. So 5 * 4 = 20 computers were added. 9 + 20 is 29. The answer is 29.""",
+ },
+ {
+ "question": "Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?",
+ "answer": """Michael started with 58 golf balls. After losing 23 on tuesday, he had 58 - 23 = 35. After losing 2 more, he had 35 - 2 = 33 golf balls. The answer is 33.""",
+ },
+]
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+ """
+ MESSAGES: {example_prompts}
+ USER: {query}
+ """
+)
+def call(query: str, num_examples: int = 0) -> openai.OpenAIDynamicConfig:
+ if num_examples < 0 or num_examples > len(few_shot_examples):
+ raise ValueError(
+ "num_examples cannot be negative or greater than number of available examples."
+ )
+ example_prompts: list[ChatCompletionMessageParam] = []
+ for i in range(num_examples):
+ example_prompts.append(
+ {"role": "user", "content": few_shot_examples[i]["question"]}
+ )
+ example_prompts.append(
+ {"role": "assistant", "content": few_shot_examples[i]["answer"]}
+ )
+ return {"computed_fields": {"example_prompts": example_prompts}}
+
+
+prompt = """Olivia has $23. She bought five bagels for $3 each.
+How much money does she have left?"""
+
+print(call(query=prompt, num_examples=len(few_shot_examples)))
+```
+
+ Olivia bought 5 bagels for $3 each, which costs her a total of \(5 \times 3 = 15\) dollars.
+
+ She started with $23, so after the purchase, she has \(23 - 15 = 8\) dollars left.
+
+ The answer is $8.
+
+
+
+
+- Encourage Step-by-Step Thinking: Explicitly instruct the LLM to break down the problem into small steps.
+- Provide Relevant Examples: In few-shot learning, use examples that are similar to the problem you want to solve.
+- Ask for Clear Explanations: Prompt the LLM to explain its reasoning clearly at each step.
+- Apply to Complex Problems: Chain of Thought is particularly effective for problems that require multiple steps or complex reasoning.
+- Validate Results: Review the LLM's reasoning process and verify that each step is logical.
+
+
+
+By leveraging the Chain of Thought technique, you can make the LLM's reasoning process more transparent and obtain more accurate and explainable answers to complex problems. This technique is particularly useful for mathematical problems and tasks that require multi-step reasoning.
diff --git a/cloud/content/docs/v1/guides/prompt-engineering/text-based/common-phrases.mdx b/cloud/content/docs/v1/guides/prompt-engineering/text-based/common-phrases.mdx
new file mode 100644
index 0000000000..d626c6b00e
--- /dev/null
+++ b/cloud/content/docs/v1/guides/prompt-engineering/text-based/common-phrases.mdx
@@ -0,0 +1,71 @@
+---
+title: Common Phrases (Prompt Mining)
+description: Discover how using domain-specific terminology and common phrases can significantly improve LLM responses through Prompt Mining techniques.
+---
+
+# Common Phrases (Prompt Mining)
+
+Sometimes, an LLM can appear to know more or less about a topic depending on the phrasing you use because a specific bit of information shows up far more frequently in its training data with a specific phrasing. Prompt Mining, explained in [this paper](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00324/96460/How-Can-We-Know-What-Language-Models-Know), involves various methods of searching for the phrase which gives the best response regarding a topic.
+
+
+
+
+
+
+Prompt Mining itself is an extensive endeavor, but the takeaway is that LLMs are likely to give better answers when the question uses phrases that the LLM has been trained on. Here is an example where using common jargon regarding a video game produces a real answer, and a generic prompt doesn’t:
+
+
+
+```python
+from mirascope.core import openai
+
+
+@openai.call(model="gpt-4o-mini")
+def call(query: str):
+ return query
+
+
+generic_response = call(
+ """Does the roy in smash bros ultimate have a reliable way to knock out an\
+ opponent that starts with the A button?"""
+)
+engineered_response = call(
+ """In smash bros ultimate, what moves comprise Roy's kill confirm combo that\
+ starts with jab?"""
+)
+
+print(generic_response)
+print(engineered_response)
+```
+
+ Yes, in Super Smash Bros. Ultimate, Roy has a reliable way to knock out an opponent using an attack that starts with the A button. His forward tilt (dtilt), also known as the "F tilt," is a strong move that can lead to knockouts if used correctly, especially at higher percentages. Additionally, his neutral attack (AAA combo) can set up for follow-up attacks or build damage, although it's not typically a kill move on its own.
+
+ If you are looking specifically for a move that can knock out and starts with the A button, Roy's forward aerial (aerial attack) is also a strong option, particularly towards the edge of the stage when opponents are at higher percentages.
+
+ Keep in mind that spacing and timing are key to landing these moves effectively!
+ In Super Smash Bros. Ultimate, Roy's jab kill confirm combo typically involves starting with his jab, specifically the rapid jabs. After hitting the opponent with the jab, players can follow up with:
+
+ 1. **Jab (rapid jabs)** - Connects with the first few hits.
+ 2. **F-tilt (Forward Tilt)** - After the jab, quickly input F-tilt to catch the opponent off-guard. The jab pushes the opponent slightly away, and if done correctly, the F-tilt can connect reliably.
+
+ The timing and spacing are crucial for this combo to work effectively, and it often relies on the opponent being at a higher percentage for the F-tilt to secure the KO. Additionally, if performed correctly, this can work as an effective kill confirm at kill percentages.
+
+
+As you can see, using common phrases and jargon specific to the topic (in this case, Super Smash Bros. Ultimate) can lead to more accurate and detailed responses from the LLM. This technique can be particularly useful when dealing with specialized or technical subjects.
+
+## Tips for Using Common Phrases
+
+1. **Research the topic**: Familiarize yourself with the common terminology and phrases used in the field you're querying about.
+
+2. **Use specific jargon**: Incorporate field-specific terms that are likely to appear in the LLM's training data.
+
+3. **Experiment with different phrasings**: Try multiple versions of your query using different common phrases to see which yields the best results (see the sketch after this list).
+
+4. **Be aware of potential biases**: Remember that using common phrases might reinforce existing biases in the training data.
+
+5. **Combine with other techniques**: Use this approach in conjunction with other prompt engineering techniques for even better results.
+
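+As a minimal sketch of tip 3, you can loop over several candidate phrasings with the `call` function above and compare the responses side by side (the phrasings below are arbitrary examples):
+
+```python
+phrasings = [
+    "How do I KO with Roy's jab in Smash Ultimate?",
+    "What is Roy's jab kill confirm in Smash Bros Ultimate?",
+    "Can Roy finish an opponent starting from his A-button jab?",
+]
+
+for phrasing in phrasings:
+    print(f"--- {phrasing} ---")
+    print(call(phrasing))
+```
+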
+By leveraging common phrases and domain-specific language, you can often elicit more accurate and detailed responses from LLMs, especially when dealing with specialized topics or technical subjects.
diff --git a/cloud/content/docs/v1/guides/prompt-engineering/text-based/contrastive-chain-of-thought.mdx b/cloud/content/docs/v1/guides/prompt-engineering/text-based/contrastive-chain-of-thought.mdx
new file mode 100644
index 0000000000..9c3e00167c
--- /dev/null
+++ b/cloud/content/docs/v1/guides/prompt-engineering/text-based/contrastive-chain-of-thought.mdx
@@ -0,0 +1,81 @@
+---
+title: Contrastive Chain of Thought
+description: Learn to apply Contrastive Chain of Thought by providing both correct and incorrect reasoning examples to improve LLM problem-solving
+---
+
+# Contrastive Chain of Thought
+
+[Contrastive Chain of Thought](https://arxiv.org/pdf/2311.09277) is an extension of [Chain of Thought](https://arxiv.org/abs/2201.11903) which involves adding both correct and incorrect examples to help the LLM reason. Contrastive Chain of Thought is applicable anywhere CoT is, such as mathematical and reasoning tasks, but is additionally helpful for scenarios where the LLM might be prone to common errors or misunderstandings.
+
+
+
+
+
+
+```python
+from mirascope.core import openai, prompt_template
+
+example = """
+Example Question: If you roll two 6 sided dice (1~6) and a 12 sided die (1~12),
+how many possible outcomes are there?
+
+Correct Reasoning: The smallest possible sum is 3 and the largest possible sum is 24.
+We know two six sided die can roll anywhere from 2 to 12 from their standalone sums,
+so it stands to reason that by adding a value from (1~12) to one of those possible
+sums from 2~12, we can hit any number from 3~24 without any gaps in coverage.
+So, there are (24-3)+1 = 22 possible outcomes.
+
+Incorrect Reasoning: 6x6x12 = 2592 outcomes
+"""
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+ """
+ {example}
+ {query}
+ """
+)
+def call(query: str, ccot_prompt: bool = False) -> openai.OpenAIDynamicConfig:
+ return {"computed_fields": {"example": example if ccot_prompt else ""}}
+
+
+prompt = """
+If you roll two 8 sided dice (1~8) and a 10 sided die (1~10), how many possible
+outcomes are there?
+"""
+
+print(call(query=prompt, ccot_prompt=True))
+```
+
+ To find the total number of possible outcomes when rolling two 8-sided dice and one 10-sided die, we can use the counting principle.
+
+ 1. **Two 8-sided dice:** Each die has 8 possible outcomes. Therefore, the number of outcomes for two dice is:
+ \[
+ 8 \times 8 = 64
+ \]
+
+ 2. **One 10-sided die:** This die has 10 possible outcomes.
+
+ 3. **Total Outcomes:** Since the rolls are independent, the total number of outcomes when rolling two 8-sided dice and one 10-sided die is the product of the outcomes from each die:
+ \[
+ 64 \times 10 = 640
+ \]
+
+ Thus, the total number of possible outcomes when rolling two 8-sided dice and one 10-sided die is **640**.
+
+
+
+
+- Provide Clear Examples: Include both correct and incorrect reasoning examples to guide the LLM's thought process.
+- Highlight Common Mistakes: Use incorrect examples that demonstrate typical errors or misconceptions related to the problem.
+- Explain the Contrast: Clearly explain why the correct reasoning is right and why the incorrect reasoning is wrong.
+- Apply to Complex Problems: Use Contrastive Chain of Thought for problems where there are multiple potential approaches, some of which may lead to incorrect conclusions.
+- Customize Examples: Tailor the examples to be relevant to the specific type of problem or domain you're working with (see the sketch below).
+
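+As a small sketch of the last point (the helper below is not part of the recipe above), you can keep contrastive examples as structured pairs and format them into the prompt string:
+
+```python
+contrastive_examples = [
+    {
+        "question": "If you roll two 6 sided dice (1~6) and a 12 sided die (1~12), how many possible outcomes are there?",
+        "correct": "The possible sums range from 3 to 24 with no gaps, so there are (24 - 3) + 1 = 22 outcomes.",
+        "incorrect": "Multiply the faces: 6 x 6 x 12 outcomes.",
+    },
+]
+
+
+def build_contrastive_block(examples: list[dict]) -> str:
+    """Formats correct/incorrect reasoning pairs into a single example block."""
+    return "\n\n".join(
+        f"Example Question: {ex['question']}\n"
+        f"Correct Reasoning: {ex['correct']}\n"
+        f"Incorrect Reasoning: {ex['incorrect']}"
+        for ex in examples
+    )
+
+
+example = build_contrastive_block(contrastive_examples)  # drop-in for the `example` string above
+```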
+
+
+Contrastive Chain of Thought enhances the standard Chain of Thought approach by explicitly showing both correct and incorrect reasoning paths. This technique can be particularly effective in helping the LLM avoid common pitfalls and misconceptions, leading to more accurate and robust problem-solving across a variety of tasks, especially those prone to subtle errors or misunderstandings.
diff --git a/cloud/content/docs/v1/guides/prompt-engineering/text-based/emotion-prompting.mdx b/cloud/content/docs/v1/guides/prompt-engineering/text-based/emotion-prompting.mdx
new file mode 100644
index 0000000000..fb6f4f9990
--- /dev/null
+++ b/cloud/content/docs/v1/guides/prompt-engineering/text-based/emotion-prompting.mdx
@@ -0,0 +1,76 @@
+---
+title: Emotion Prompting
+description: Enhance LLM responses by adding emotionally significant phrases to prompts to increase engagement and improve output quality
+---
+
+# Emotion Prompting
+
+[Emotion Prompting](https://arxiv.org/pdf/2307.11760) is a prompt engineering technique where you end your original prompt with a phrase of psychological importance. It is most helpful for open-ended tasks, but can still improve some analytical prompts:
+
+
+
+
+
+
+```python
+from mirascope.core import openai, prompt_template
+
+emotion_augment = "This is very important to my career."
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template("{query} {emotion_augment}")
+def call(query: str, emotion_prompt: bool = False) -> openai.OpenAIDynamicConfig:
+ return {
+ "computed_fields": {
+ "emotion_augment": emotion_augment if emotion_prompt else "",
+ }
+ }
+
+
+prompt = """Write me an email I can send to my boss about how I need to
+take a day off for mental health reasons."""
+
+print(call(query=prompt, emotion_prompt=True))
+```
+
+ Subject: Request for a Day Off
+
+ Dear [Boss's Name],
+
+ I hope this message finds you well. I am writing to formally request a day off for mental health reasons on [specific date]. I believe taking this time will allow me to recharge and return to work with renewed focus and energy.
+
+ I understand the importance of maintaining productivity and teamwork, and I will ensure that any pressing tasks are managed before my absence. I will also make sure to communicate with the team so that there are no disruptions.
+
+ Thank you for your understanding and support regarding my request. I’m committed to maintaining my well-being, which ultimately contributes to my overall performance and our team's success.
+
+ Best regards,
+
+ [Your Name]
+ [Your Position]
+ [Your Contact Information]
+
+
+This example demonstrates how to implement emotion prompting using Mirascope. The `emotion_augment` variable contains the emotional phrase that will be added to the end of the prompt when `emotion_prompt` is set to `True`.
+
+## Benefits of Emotion Prompting
+
+1. **Increased Engagement**: Adding emotional context can make the LLM's responses more empathetic and engaging.
+2. **Improved Relevance**: Emotional prompts can help guide the LLM to provide responses that are more relevant to the user's emotional state or needs.
+3. **Enhanced Creativity**: For open-ended tasks, emotion prompting can lead to more creative and nuanced responses.
+4. **Better Problem Solving**: In some cases, emotion prompting can help the LLM focus on more critical aspects of a problem or question.
+
+
+
+- Choose Appropriate Emotions: Select emotional phrases that are relevant to the context of your query.
+- Be Authentic: Use emotional prompts that genuinely reflect the importance or emotional weight of the task.
+- Experiment: Try different emotional phrases to see which produces the best results for your specific use case (see the sketch below).
+- Balance: Be careful not to overuse emotional prompting, as it may not be appropriate for all types of queries.
+- Combine with Other Techniques: Emotion prompting can be used in conjunction with other prompt engineering techniques for even better results.
+
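+As a sketch of the experimentation tip (following the same Mirascope pattern as above; the phrases listed are arbitrary examples), you can make the emotional phrase a parameter and compare outputs:
+
+```python
+@openai.call(model="gpt-4o-mini")
+@prompt_template("{query} {emotion_augment}")
+def call_with_emotion(query: str, emotion_augment: str): ...
+
+
+for phrase in [
+    "This is very important to my career.",
+    "I would be very grateful for a thoughtful, careful answer.",
+]:
+    print(f"--- {phrase} ---")
+    print(call_with_emotion(query=prompt, emotion_augment=phrase))
+```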
+
+
+By leveraging emotion prompting, you can guide the LLM to provide responses that are more emotionally attuned and potentially more helpful for tasks that benefit from emotional context.
diff --git a/cloud/content/docs/v1/guides/prompt-engineering/text-based/plan-and-solve.mdx b/cloud/content/docs/v1/guides/prompt-engineering/text-based/plan-and-solve.mdx
new file mode 100644
index 0000000000..134a2ecaa5
--- /dev/null
+++ b/cloud/content/docs/v1/guides/prompt-engineering/text-based/plan-and-solve.mdx
@@ -0,0 +1,71 @@
+---
+title: Plan and Solve
+description: Implement the Plan and Solve technique, a variation of Chain of Thought prompting that improves reasoning by first understanding the problem, devising a plan, and then solving step by step.
+---
+
+# Plan and Solve
+
+[Plan and Solve](https://arxiv.org/pdf/2305.04091) is another variation of zero-shot [Chain of Thought](https://arxiv.org/abs/2201.11903) whereby the LLM is asked to reason with the improved prompt `"Q: {prompt} A: Let's first understand the problem and devise a plan to solve it. Then, let's carry out the plan and solve the problem step by step"`. Plan-and-solve has shown improvements compared to standard CoT in reasoning and mathematical tasks.
+
+
+
+
+
+
+```python
+from mirascope.core import openai, prompt_template
+
+pas_augment = """Let's first understand the problem and devise a plan to solve it.
+Then, let's carry out the plan and solve the problem step by step."""
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template("{modifiable_query}")
+def call(query: str, pas_prompt: bool = False) -> openai.OpenAIDynamicConfig:
+ if pas_prompt:
+ modifiable_query = f"Q: {query}\nA: {pas_augment}"
+ else:
+ modifiable_query = query
+ return {"computed_fields": {"modifiable_query": modifiable_query}}
+
+
+prompt = """The school cafeteria ordered 42 red apples and 7 green apples for
+students lunches. But, if only 9 students wanted fruit, how many extra did the
+cafeteria end up with?"""
+
+print(call(query=prompt, pas_prompt=True))
+```
+
+```
+To find out how many extra apples the cafeteria ended up with, we can follow these steps:
+
+1. **Calculate the total number of apples ordered:**
+ - Red apples: 42
+ - Green apples: 7
+ - Total apples = Red apples + Green apples = 42 + 7 = 49 apples
+
+2. **Identify how many apples were taken by the students:**
+ - Number of students who wanted fruit = 9 apples (since each student is presumably taking one apple)
+
+3. **Calculate the number of extra apples:**
+ - Extra apples = Total apples - Apples taken by students
+ - Extra apples = 49 - 9 = 40
+
+Therefore, the cafeteria ended up with **40 extra apples** after the students took their fruit.
+```
+
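+To gauge how much the augmentation changes the answer, you can rerun the same query without it (outputs will vary from run to run):
+
+```python
+# Baseline: the same query without the plan-and-solve augmentation.
+print(call(query=prompt, pas_prompt=False))
+```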
+
+
+- Encourage Structured Thinking: The Plan and Solve approach promotes a more organized problem-solving process, starting with understanding and planning before execution.
+- Break Down Complex Problems: Use this technique for problems that benefit from being broken down into smaller, manageable steps.
+- Improve Problem Comprehension: By asking the LLM to first understand the problem, it can lead to better overall comprehension and more accurate solutions.
+- Enhance Step-by-Step Reasoning: The explicit instruction to solve the problem step by step can result in clearer, more detailed explanations.
+- Apply to Various Domains: While particularly effective for mathematical and reasoning tasks, Plan and Solve can be adapted for a wide range of problem types.
+
+
+
+Plan and Solve enhances the standard Chain of Thought approach by explicitly structuring the problem-solving process into distinct phases: understanding, planning, and execution. This structured approach can lead to more comprehensive and accurate solutions, especially for complex problems that benefit from careful planning before execution. By encouraging the LLM to first grasp the problem and outline a strategy, Plan and Solve can result in more thoughtful and well-organized responses across various types of reasoning and mathematical tasks.
+
diff --git a/cloud/content/docs/v1/guides/prompt-engineering/text-based/rephrase-and-respond.mdx b/cloud/content/docs/v1/guides/prompt-engineering/text-based/rephrase-and-respond.mdx
new file mode 100644
index 0000000000..6f10c40acb
--- /dev/null
+++ b/cloud/content/docs/v1/guides/prompt-engineering/text-based/rephrase-and-respond.mdx
@@ -0,0 +1,80 @@
+---
+title: Rephrase and Respond
+description: Learn to implement the Rephrase and Respond technique to improve LLM comprehension by having models rephrase questions before answering them
+---
+
+# Rephrase and Respond
+
+[Rephrase and respond](https://arxiv.org/pdf/2311.04205) (RaR) is a prompt engineering technique which involves asking the LLM to rephrase and expand upon the question before responding. RaR has shown improvements across all types of prompts, but we have personally found that RaR is most effective for shorter and vaguer prompts.
+
+
+
+
+
+
+
+```python
+from mirascope.core import openai, prompt_template
+
+rar_augment = "\nRephrase and expand the question, and respond."
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template("{query} {rar_augment}")
+def call(query: str, rar_prompt: bool = False) -> openai.OpenAIDynamicConfig:
+ return {
+ "computed_fields": {
+ "rar_augment": rar_augment if rar_prompt else "",
+ }
+ }
+
+
+prompt = """A coin is heads up. aluino flips the coin. arthor flips the coin.
+Is the coin still heads up? Flip means reverse."""
+
+print(call(query=prompt, rar_prompt=True))
+```
+
+ ### Rephrased and Expanded Question:
+
+ A coin starts with the heads side facing up. If Aluino flips the coin, it will land with the tails side facing up. Then Arthur flips the coin again. After these two sequences of flips, can we say that the coin is still heads up?
+
+ ### Response:
+
+ To analyze the scenario, let's break down the actions step by step:
+
+ 1. **Initial State**: The coin starts with the heads side facing up.
+
+ 2. **Aluino Flips the Coin**: When Aluino flips the coin, it reverses its position. Since the coin initially was heads up, after Aluino's flip, the coin will now be tails up.
+
+ 3. **Arthur Flips the Coin**: Next, Arthur takes his turn to flip the coin. Given that the current state of the coin is tails up, flipping it will reverse it again, resulting in the coin now being heads up.
+
+ At the end of these actions, after both Aluino and Arthur have flipped the coin, the final state of the coin is heads up once more. Thus, the answer to the question is:
+
+ **No, after Aluino flips it, the coin is tails up; however, after Arthur flips it again, the coin is heads up once more.**
+
+
+This example demonstrates how to implement the Rephrase and Respond technique using Mirascope. The `rar_augment` variable contains the instruction for the LLM to rephrase and expand the question before responding. This instruction is added to the end of the prompt when `rar_prompt` is set to `True`.
+
+## Benefits of Rephrase and Respond
+
+1. **Improved Understanding**: By rephrasing the question, the LLM demonstrates and often improves its understanding of the query.
+2. **Clarity**: The rephrasing can help clarify ambiguous or vague queries.
+3. **Context Expansion**: The expansion part of RaR allows the LLM to consider additional relevant context.
+4. **Better Responses**: The combination of rephrasing and expanding often leads to more comprehensive and accurate responses.
+
+
+
+- Use with Shorter Prompts: RaR is particularly effective with shorter or vaguer prompts that benefit from expansion.
+- Allow for Flexibility: The rephrasing may interpret the question slightly differently, which can lead to new insights.
+- Review the Rephrasing: Pay attention to how the LLM rephrases the question, as it can provide insights into the model's understanding.
+- Iterative Refinement: If the rephrasing misses key points, consider refining your original prompt.
+- Combine with Other Techniques: RaR can be used in conjunction with other prompt engineering techniques for even better results (see the sketch below).
+
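+As a minimal sketch of combining techniques, you could stack the RaR instruction with a zero-shot Chain of Thought cue (the combined wording below is just one possible choice):
+
+```python
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+    "{query}\nRephrase and expand the question, then think through it step by step before responding."
+)
+def rar_cot_call(query: str): ...
+
+
+print(rar_cot_call(query=prompt))
+```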
+
+
+By leveraging the Rephrase and Respond technique, you can often obtain more thorough and accurate responses from the LLM, especially for queries that benefit from additional context or clarification.
+
diff --git a/cloud/content/docs/v1/guides/prompt-engineering/text-based/rereading.mdx b/cloud/content/docs/v1/guides/prompt-engineering/text-based/rereading.mdx
new file mode 100644
index 0000000000..cda26d174e
--- /dev/null
+++ b/cloud/content/docs/v1/guides/prompt-engineering/text-based/rereading.mdx
@@ -0,0 +1,71 @@
+---
+title: Rereading
+description: Explore the Rereading technique that improves LLM performance by asking the model to reread a question, particularly effective for older models across various reasoning tasks.
+---
+
+# Rereading
+
+
+Our experience indicates that re-reading is not as effective for newer, more powerful models such as Anthropic's Claude 3.5 Sonnet or OpenAI's GPT-4o, although it remains effective with older models.
+
+
+[Rereading](https://arxiv.org/pdf/2309.06275) is a prompt engineering technique that simply repeats the question in the prompt and asks the LLM to read it again. When working with older, less capable LLM models, rereading has shown improvements for all types of reasoning tasks (arithmetic, symbolic, commonsense).
+
+
+
+
+
+
+
+```python
+from mirascope.core import openai, prompt_template
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template("{query} {reread}")
+def call(query: str, reread_prompt: bool = False) -> openai.OpenAIDynamicConfig:
+ return {
+ "computed_fields": {
+ "reread": f"Read the question again: {query}" if reread_prompt else "",
+ }
+ }
+
+
+prompt = """A coin is heads up. aluino flips the coin. arthor flips the coin.
+Is the coin still heads up? Flip means reverse."""
+
+print(call(query=prompt, reread_prompt=True))
+```
+
+ To analyze the situation:
+
+ 1. The coin starts heads up.
+ 2. Aluino flips the coin, which reverses it to tails up.
+ 3. Arthor then flips the coin again, which reverses it back to heads up.
+
+ So, after both flips, the coin is heads up again. The final answer is yes, the coin is still heads up.
+
+
+This example demonstrates how to implement the Rereading technique using Mirascope. The `reread` computed field is added to the prompt when `reread_prompt` is set to `True`, instructing the LLM to read the question again.
+
+## Benefits of Rereading
+
+1. **Improved Comprehension**: Rereading can help the LLM better understand complex or nuanced questions.
+2. **Enhanced Accuracy**: For older models, rereading has shown to improve accuracy across various reasoning tasks.
+3. **Reinforcement**: Repeating the question can reinforce key details that might be overlooked in a single pass.
+4. **Reduced Errors**: Rereading can help minimize errors that might occur due to misreading or misinterpreting the initial question.
+
+
+
+- Use with Older Models: Rereading is most effective with older, less capable LLM models.
+- Apply to Complex Questions: Consider using rereading for questions that involve multiple steps or complex reasoning.
+- Combine with Other Techniques: Rereading can be used in conjunction with other prompt engineering techniques for potentially better results.
+- Monitor Performance: Keep track of how rereading affects your model's performance, as its effectiveness may vary depending on the specific task and model used.
+- Consider Model Capabilities: For newer, more advanced models, rereading might not provide significant benefits and could potentially be redundant.
+
+
+
+By leveraging the Rereading technique, particularly with older LLM models, you may be able to improve the model's understanding and accuracy across various types of reasoning tasks. However, always consider the capabilities of your specific model when deciding whether to apply this technique.
diff --git a/cloud/content/docs/v1/guides/prompt-engineering/text-based/role-prompting.mdx b/cloud/content/docs/v1/guides/prompt-engineering/text-based/role-prompting.mdx
new file mode 100644
index 0000000000..fd8f2562e5
--- /dev/null
+++ b/cloud/content/docs/v1/guides/prompt-engineering/text-based/role-prompting.mdx
@@ -0,0 +1,94 @@
+---
+title: Role Prompting
+description: Implement role-based prompting to improve LLM responses. This guide demonstrates how assigning roles affects response style and quality.
+---
+
+# Role Prompting
+
+[Role prompting](https://arxiv.org/pdf/2311.10054) is a commonly used prompt engineering technique where responses can be improved by setting the roles of the LLM or the audience within the conversation. The paper linked above presents an analysis of which roles perform best for specific tasks. Role prompting can improve response quality in both accuracy-based and open-ended tasks.
+
+
+
+
+
+
+
+```python
+from mirascope.core import openai, prompt_template
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template("""
+ SYSTEM: {llm_role} {audience}
+ USER: {query}
+ """)
+def call(
+ query: str, llm_role: str | None = None, audience: str | None = None
+) -> openai.OpenAIDynamicConfig:
+ return {
+ "computed_fields": {
+ "llm_role": f"You are {llm_role}." if llm_role else "",
+ "audience": f"You are talking to {audience}." if audience else "",
+ }
+ }
+
+
+response = call(
+ query="What's the square root of x^2 + 2x + 1?",
+ llm_role="a math teacher",
+ audience="your student",
+)
+print(response.content)
+```
+
+```
+To find the square root of the expression \( x^2 + 2x + 1 \), we can first recognize that this expression can be factored.
+
+The expression \( x^2 + 2x + 1 \) is a perfect square trinomial, and it can be factored as:
+
+\[
+(x + 1)^2
+\]
+
+Now, we can take the square root of this expression:
+
+\[
+\sqrt{x^2 + 2x + 1} = \sqrt{(x + 1)^2}
+\]
+
+Taking the square root of a square gives us the absolute value:
+
+\[
+\sqrt{(x + 1)^2} = |x + 1|
+\]
+
+So, the final result is:
+
+\[
+\sqrt{x^2 + 2x + 1} = |x + 1|
+\]
+```
+
+In this example, we're using role prompting to set the LLM's role as a math teacher and the audience as a student. This context can help the LLM tailor its response to be more educational and easier to understand, as a teacher would explain to a student.
+
+## Benefits of Role Prompting
+
+1. **Contextual Responses**: By setting roles, the LLM can provide responses that are more appropriate for the given context.
+2. **Improved Accuracy**: For certain tasks, setting the right role can lead to more accurate or relevant information.
+3. **Tailored Language**: The LLM can adjust its language and explanation style based on the roles, making responses more suitable for the intended audience.
+4. **Enhanced Creativity**: For open-ended tasks, role prompting can lead to more diverse and creative responses.
+
+
+
+- Choose Relevant Roles: Select roles that are appropriate for the task or query at hand.
+- Be Specific: The more specific you are about the roles, the better the LLM can tailor its response.
+- Experiment: Try different role combinations to see which produces the best results for your specific use case.
+- Consider the Audience: Setting an audience role can be just as important as setting the LLM's role.
+- Combine with Other Techniques: Role prompting can be used in conjunction with other prompt engineering techniques for even better results; see the sketch below for one such combination.
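+
+As one example of combining techniques, the following sketch layers a zero-shot chain-of-thought instruction on top of the same role-prompted call from above. This is an illustrative variation rather than an official recipe, and the exact wording of the added instruction is an assumption to tune for your task:
+
+```python
+from mirascope.core import openai, prompt_template
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template("""
+    SYSTEM: {llm_role} {audience} Think through the problem step by step before giving your final answer.
+    USER: {query}
+    """)
+def role_cot_call(
+    query: str, llm_role: str | None = None, audience: str | None = None
+) -> openai.OpenAIDynamicConfig:
+    return {
+        "computed_fields": {
+            "llm_role": f"You are {llm_role}." if llm_role else "",
+            "audience": f"You are talking to {audience}." if audience else "",
+        }
+    }
+
+
+response = role_cot_call(
+    query="What's the square root of x^2 + 2x + 1?",
+    llm_role="a math teacher",
+    audience="your student",
+)
+print(response.content)
+```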
+
+
+
+By leveraging role prompting, you can guide the LLM to provide responses that are more aligned with your specific needs and context.
diff --git a/cloud/content/docs/v1/guides/prompt-engineering/text-based/self-ask.mdx b/cloud/content/docs/v1/guides/prompt-engineering/text-based/self-ask.mdx
new file mode 100644
index 0000000000..8352c7c539
--- /dev/null
+++ b/cloud/content/docs/v1/guides/prompt-engineering/text-based/self-ask.mdx
@@ -0,0 +1,339 @@
+---
+title: Self-Ask
+description: Learn how to implement the Self-Ask technique with LLMs to enhance reasoning capabilities by teaching models to ask and answer their own follow-up questions
+---
+
+# Self-Ask
+
+This recipe demonstrates how to implement the Self-Ask technique using Large Language Models (LLMs) with Mirascope. Self-Ask is a prompt engineering method that enhances an LLM's reasoning capabilities by encouraging it to ask and answer follow-up questions before providing a final answer. We'll explore both a basic implementation and an enhanced version with dynamic example selection.
+
+
+
+- Automated Code Generation: Generating boilerplate or unit tests for greater productivity.
+- Code Completion: Give the LLM access to the web to fetch the latest docs and generate code completion suggestions.
+- Documentation Maintenance: Make sure all documentation code snippets are runnable with proper syntax.
+- Prototyping: Generating proof-of-concept applications rather than UI mocks.
+
+
+
+## Setup
+
+To set up our environment, first let's install all of the packages we will use:
+
+
+```python
+!pip install "mirascope[openai]" numpy scikit-learn
+```
+
+
+```python
+import os
+
+os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"
+# Set the appropriate API key for the provider you're using
+```
+
+
+## Basic Self-Ask Implementation
+
+Let's start with a basic implementation of Self-Ask using few-shot learning examples:
+
+
+```python
+import inspect
+
+from mirascope.core import openai, prompt_template
+from typing_extensions import TypedDict
+
+
+class FewShotExample(TypedDict):
+ question: str
+ answer: str
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+ """
+ Examples:
+ {examples:lists}
+
+ Query: {query}
+ """
+)
+def self_ask(query: str, examples: list[FewShotExample]) -> openai.OpenAIDynamicConfig:
+ return {
+ "computed_fields": {
+ "examples": [
+ [example["question"], example["answer"]] for example in examples
+ ]
+ }
+ }
+
+
+few_shot_examples = [
+ FewShotExample(
+ question="When does monsoon season end in the state the area code 575 is located?",
+ answer=inspect.cleandoc(
+ """
+ Are follow up questions needed here: Yes.
+ Follow up: Which state is the area code 575 located in?
+ Intermediate answer: The area code 575 is located in New Mexico.
+ Follow up: When does monsoon season end in New Mexico?
+ Intermediate answer: Monsoon season in New Mexico typically ends in mid-September.
+ So the final answer is: mid-September.
+ """
+ ),
+ ),
+ FewShotExample(
+ question="What is the current official currency in the country where Ineabelle Diaz is a citizen?",
+ answer=inspect.cleandoc(
+ """
+ Are follow up questions needed here: Yes.
+ Follow up: Which country is Ineabelle Diaz a citizen of?
+            Intermediate answer: Ineabelle Diaz is from Puerto Rico, which is in the United States of America.
+ Follow up: What is the current official currency in the United States of America?
+ Intermediate answer: The current official currency in the United States is the United States dollar.
+ So the final answer is: United States dollar.
+ """
+ ),
+ ),
+ FewShotExample(
+ question="Where was the person who founded the American Institute of Public Opinion in 1935 born?",
+ answer=inspect.cleandoc(
+ """
+ Are follow up questions needed here: Yes.
+ Follow up: Who founded the American Institute of Public Opinion in 1935?
+ Intermediate answer: George Gallup.
+ Follow up: Where was George Gallup born?
+ Intermediate answer: George Gallup was born in Jefferson, Iowa.
+ So the final answer is: Jefferson.
+ """
+ ),
+ ),
+ FewShotExample(
+ question="What language is used by the director of Tiffany Memorandum?",
+ answer=inspect.cleandoc(
+ """
+ Are follow up questions needed here: Yes.
+ Follow up: Who directed the movie called Tiffany Memorandum?
+ Intermediate answer: Sergio Grieco.
+ Follow up: What language is used by Sergio Grieco?
+ Intermediate answer: Sergio Grieco speaks Italian.
+ So the final answer is: Italian.
+ """
+ ),
+ ),
+ FewShotExample(
+ question="What is the sports team the person played for who scored the first touchdown in Superbowl 1?",
+ answer=inspect.cleandoc(
+ """
+ Are follow up questions needed here: Yes.
+ Follow up: Which player scored the first touchdown in Superbowl 1?
+ Intermediate answer: Max McGee.
+ Follow up: Which sports team did Max McGee play for?
+ Intermediate answer: Max McGee played for the Green Bay Packers.
+ So the final answer is: Green Bay Packers.
+ """
+ ),
+ ),
+]
+
+query = "The birth country of Jayantha Ketagoda left the British Empire when?"
+response = self_ask(query=query, examples=few_shot_examples)
+print(response.content)
+
+response = self_ask(query=query, examples=[])
+print(response.content)
+```
+
+```txt
+Are follow up questions needed here: Yes.
+Follow up: Which country is Jayantha Ketagoda from?
+Intermediate answer: Jayantha Ketagoda is from Sri Lanka.
+Follow up: When did Sri Lanka leave the British Empire?
+Intermediate answer: Sri Lanka gained independence from the British Empire on February 4, 1948.
+So the final answer is: February 4, 1948.
+Jayantha Ketagoda was born in Sri Lanka, which was formerly known as Ceylon. Sri Lanka gained independence from the British Empire on February 4, 1948.
+```
+
+
+This basic implementation demonstrates how to use few-shot learning with Self-Ask. The `self_ask` function takes a query and a list of examples, then uses Mirascope's `OpenAIDynamicConfig` to inject the examples into the prompt.
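+
+Because the few-shot examples push the model to finish with a line of the form "So the final answer is: ...", it is straightforward to post-process responses with a small helper. The sketch below is illustrative and assumes the model sticks to that format, which is not guaranteed:
+
+```python
+def extract_final_answer(content: str) -> str | None:
+    """Pull the final answer out of a Self-Ask style response, if present."""
+    prefix = "so the final answer is:"
+    for line in content.splitlines():
+        stripped = line.strip()
+        if stripped.lower().startswith(prefix):
+            return stripped[len(prefix):].strip().rstrip(".")
+    return None
+
+
+# Sample content mirroring the output shown above
+sample = """Are follow up questions needed here: Yes.
+Follow up: When did Sri Lanka leave the British Empire?
+Intermediate answer: Sri Lanka gained independence from the British Empire on February 4, 1948.
+So the final answer is: February 4, 1948."""
+
+print(extract_final_answer(sample))  # -> 'February 4, 1948'
+```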
+
+## Enhanced Self-Ask with Dynamic Example Selection
+
+Now, let's improve our implementation by adding dynamic example selection:
+
+
+
+```python
+import inspect
+
+import numpy as np
+from mirascope.core import openai, prompt_template
+from sklearn.feature_extraction.text import TfidfVectorizer
+from sklearn.metrics.pairwise import cosine_similarity
+from typing_extensions import TypedDict
+
+
+class FewShotExample(TypedDict):
+ question: str
+ answer: str
+
+
+def select_relevant_examples(
+ query: str, examples: list[FewShotExample], n: int = 3
+) -> list[FewShotExample]:
+ """Select the most relevant examples based on cosine similarity."""
+ vectorizer = TfidfVectorizer().fit([ex["question"] for ex in examples] + [query])
+ example_vectors = vectorizer.transform([ex["question"] for ex in examples])
+ query_vector = vectorizer.transform([query])
+
+ similarities = cosine_similarity(query_vector, example_vectors)[0]
+ most_similar_indices = np.argsort(similarities)[-n:][::-1]
+
+ return [examples[i] for i in most_similar_indices]
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+ """
+ Examples:
+ {examples:lists}
+
+ Query: {query}
+ """
+)
+def dynamic_self_ask(
+ query: str, examples: list[FewShotExample], n: int = 3
+) -> openai.OpenAIDynamicConfig:
+ relevant_examples = select_relevant_examples(query, examples, n)
+ return {
+ "computed_fields": {
+ "examples": [
+ [example["question"], example["answer"]]
+ for example in relevant_examples
+ ]
+ }
+ }
+
+
+few_shot_examples = [
+ FewShotExample(
+ question="When does monsoon season end in the state the area code 575 is located?",
+ answer=inspect.cleandoc(
+ """
+ Are follow up questions needed here: Yes.
+ Follow up: Which state is the area code 575 located in?
+ Intermediate answer: The area code 575 is located in New Mexico.
+ Follow up: When does monsoon season end in New Mexico?
+ Intermediate answer: Monsoon season in New Mexico typically ends in mid-September.
+ So the final answer is: mid-September.
+ """
+ ),
+ ),
+ FewShotExample(
+ question="What is the current official currency in the country where Ineabelle Diaz is a citizen?",
+ answer=inspect.cleandoc(
+ """
+ Are follow up questions needed here: Yes.
+ Follow up: Which country is Ineabelle Diaz a citizen of?
+            Intermediate answer: Ineabelle Diaz is from Puerto Rico, which is in the United States of America.
+ Follow up: What is the current official currency in the United States of America?
+ Intermediate answer: The current official currency in the United States is the United States dollar.
+ So the final answer is: United States dollar.
+ """
+ ),
+ ),
+ FewShotExample(
+ question="Where was the person who founded the American Institute of Public Opinion in 1935 born?",
+ answer=inspect.cleandoc(
+ """
+ Are follow up questions needed here: Yes.
+ Follow up: Who founded the American Institute of Public Opinion in 1935?
+ Intermediate answer: George Gallup.
+ Follow up: Where was George Gallup born?
+ Intermediate answer: George Gallup was born in Jefferson, Iowa.
+ So the final answer is: Jefferson.
+ """
+ ),
+ ),
+ FewShotExample(
+ question="What language is used by the director of Tiffany Memorandum?",
+ answer=inspect.cleandoc(
+ """
+ Are follow up questions needed here: Yes.
+ Follow up: Who directed the movie called Tiffany Memorandum?
+ Intermediate answer: Sergio Grieco.
+ Follow up: What language is used by Sergio Grieco?
+ Intermediate answer: Sergio Grieco speaks Italian.
+ So the final answer is: Italian.
+ """
+ ),
+ ),
+ FewShotExample(
+ question="What is the sports team the person played for who scored the first touchdown in Superbowl 1?",
+ answer=inspect.cleandoc(
+ """
+ Are follow up questions needed here: Yes.
+ Follow up: Which player scored the first touchdown in Superbowl 1?
+ Intermediate answer: Max McGee.
+ Follow up: Which sports team did Max McGee play for?
+ Intermediate answer: Max McGee played for the Green Bay Packers.
+ So the final answer is: Green Bay Packers.
+ """
+ ),
+ ),
+]
+
+
+query = "What was the primary language spoken by the inventor of the phonograph?"
+response = dynamic_self_ask(query=query, examples=few_shot_examples, n=2)
+print(response.content)
+```
+
+```txt
+Are follow up questions needed here: Yes.
+Follow up: Who invented the phonograph?
+Intermediate answer: Thomas Edison.
+Follow up: What language did Thomas Edison primarily speak?
+Intermediate answer: Thomas Edison primarily spoke English.
+So the final answer is: English.
+```
+
+
+This enhanced version introduces the `select_relevant_examples` function, which uses TF-IDF vectorization and cosine similarity to find the most relevant examples for a given query. The `dynamic_self_ask` function then selects these relevant examples before including them in the prompt.
+
+## Benefits and Considerations
+
+The enhanced Self-Ask implementation offers several advantages:
+
+1. Reduced prompt size by including only the most relevant examples.
+2. Potentially improved response quality by focusing on the most applicable few-shot examples.
+3. Ability to maintain a larger pool of examples without always including all of them in every query.
+
+When implementing this technique, consider:
+
+- Balancing the number of selected examples with the desired prompt length and model context window.
+- Experimenting with different similarity metrics or embedding techniques for example selection (see the sketch after this list).
+- Regularly updating your example pool to cover a wide range of query types and topics.
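+
+For instance, you could swap the TF-IDF vectorizer for dense embeddings. The sketch below reuses the `FewShotExample` type defined above and calls OpenAI's embeddings endpoint; the embedding model name is an assumption, and any embedding provider could be substituted:
+
+```python
+import numpy as np
+from openai import OpenAI
+
+client = OpenAI()
+
+
+def embed(texts: list[str]) -> np.ndarray:
+    """Embed a batch of texts; the embedding model name here is an assumption."""
+    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
+    return np.array([item.embedding for item in response.data])
+
+
+def select_relevant_examples_embeddings(
+    query: str, examples: list[FewShotExample], n: int = 3
+) -> list[FewShotExample]:
+    """Select examples by cosine similarity of dense embeddings instead of TF-IDF."""
+    example_vectors = embed([ex["question"] for ex in examples])
+    query_vector = embed([query])[0]
+    # Cosine similarity between the query and each example question
+    similarities = example_vectors @ query_vector / (
+        np.linalg.norm(example_vectors, axis=1) * np.linalg.norm(query_vector)
+    )
+    most_similar_indices = np.argsort(similarities)[-n:][::-1]
+    return [examples[i] for i in most_similar_indices]
+```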
+
+
+
+- Complex Problem Solving: Use Self-Ask for multi-step problems in fields like mathematics or engineering.
+- Research Assistance: Implement Self-Ask to help researchers explore complex topics and formulate hypotheses.
+- Legal Analysis: Apply Self-Ask to break down complex legal questions and explore relevant precedents.
+- Medical Diagnosis: Use Self-Ask to guide through differential diagnosis processes.
+- Customer Support: Implement Self-Ask to handle complex customer queries that require multiple pieces of information.
+
+
+
+
+When adapting this recipe to your specific use-case, consider:
+
+- Tailoring the few-shot examples to your domain for better performance.
+- Experimenting with different prompts and example formats to optimize the Self-Ask process.
+- Implementing a feedback loop to continuously improve the quality of the Self-Ask responses.
+- Combining Self-Ask with other techniques like chain-of-thought for even more powerful reasoning capabilities.
+
+By leveraging Mirascope's `call` decorator and `prompt_template`, you can easily implement and customize the Self-Ask technique to enhance your LLM's reasoning capabilities across a wide range of applications.
+
diff --git a/cloud/content/docs/v1/guides/prompt-engineering/text-based/tabular-chain-of-thought.mdx b/cloud/content/docs/v1/guides/prompt-engineering/text-based/tabular-chain-of-thought.mdx
new file mode 100644
index 0000000000..31bdc1e0e4
--- /dev/null
+++ b/cloud/content/docs/v1/guides/prompt-engineering/text-based/tabular-chain-of-thought.mdx
@@ -0,0 +1,132 @@
+---
+title: Tabular Chain of Thought
+description: Implement the Tabular Chain of Thought technique to structure LLM reasoning in a table format for improved problem-solving accuracy
+---
+
+# Tabular Chain of Thought
+
+[Tabular Chain of Thought](https://arxiv.org/pdf/2305.17812) (Tab-CoT) is an extension of zero-shot [Chain of Thought](https://arxiv.org/abs/2201.11903), with the caveat that the LLM is given a Markdown table header so that it structures each step of its response as an individual row of a Markdown table. The added structure can aid the LLM's reasoning process and improve accuracy in arithmetic and reasoning tasks.
+
+
+
+
+
+
+```python
+from mirascope.core import openai, prompt_template
+
+tab_cot_augment = "|step|subquestion|process|result|"
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+ """
+ {query}
+ {tab_cot_augment}
+ """
+)
+def call(query: str, tab_cot_prompt: bool = False) -> openai.OpenAIDynamicConfig:
+ return {
+ "computed_fields": {
+ "tab_cot_augment": tab_cot_augment if tab_cot_prompt else "",
+ }
+ }
+
+
+prompt = """A pet store had 102 puppies. In one day they sold 21 of them and put
+the rest into cages with 9 in each cage. How many cages did they use?"""
+
+print(call(query=prompt, tab_cot_prompt=True))
+```
+
+```
+| Step | Subquestion | Process | Result |
+|------|----------------------------|--------------------------------------------------------------------|------------|
+| 1 | How many puppies are left? | Start with 102 puppies and subtract the 21 sold: 102 - 21 = 81 | 81 puppies |
+| 2 | How many cages are needed? | Divide the remaining puppies by the number in each cage: 81 ÷ 9 = 9 | 9 cages |
+| 3 | Check for remainder? | Calculate the remainder when dividing 81 by 9: 81 mod 9 = 0 (no remainder, so no extra cage needed)| N/A |
+
+Final Result: The pet store used **9 cages**.
+```
+
+Tabular Chain of Thought is an extension of [Chain of Thought](https://arxiv.org/abs/2201.11903), with the caveat that the LLM is asked to put each step of its reasoning process in a row of a Markdown table. The added structure can guide the LLM's reasoning and make it more likely to give a correct answer.
+
+
+
+```python
+from mirascope.core import openai
+
+
+@openai.call(model="gpt-3.5-turbo")
+def call(query: str) -> str:
+ return query
+
+
+prompt = """
+A circle with radius 1 circumscribes (perfectly surrounds) an equilateral triangle.
+What's the area of the triangle?
+"""
+generic_response = call(prompt)
+engineered_response = call(f"""{prompt}. Explain your reasoning step by step,
+with each step in a row in a markdown table.""")
+
+print(generic_response)
+print("\n\n\n")
+print(engineered_response)
+```
+
+```
+To find the area of the equilateral triangle that is circumscribed by the circle with radius 1, first we need to find the side length of the triangle.
+
+In an equilateral triangle, all sides are equal. Let's label the side length as "s".
+
+The radius of the circle will be the distance from the center of the circle to the midpoint of a side of the equilateral triangle. This forms a right triangle with the side of the equilateral triangle and half of the side of the equilateral triangle. Using the Pythagorean theorem, we have:
+
+s^2 = (s/2)^2 + 1^2
+s^2 = s^2/4 + 1
+3s^2/4 = 1
+s^2 = 4/3
+s = sqrt(4/3)
+s = 2/sqrt(3)
+
+Now that we have the side length of the equilateral triangle, we can find the area of the triangle using the formula:
+
+Area = (sqrt(3)/4)(s^2)
+Area = (sqrt(3)/4)(4/3)
+Area = sqrt(3)/3
+
+Therefore, the area of the equilateral triangle is sqrt(3)/3 units squared.
+
+
+
+| Step | Calculation | Reasoning |
+|------|----------------------------------------------------|--------------------------------------------------------------------------|
+| 1 | Equilateral triangle has all equal sides | By definition, all sides of an equilateral triangle are equal |
+| 2 | The radius of the circle is 1 | Given in the problem statement |
+| 3 | The radius of the circle is the distance | From the center of the circle to any vertex of the triangle, which is also the altitude |
+| 4 | The altitude of an equilateral triangle | Is also the perpendicular bisector of any side of the triangle |
+| 5 | Dividing the angle at a vertex of the triangle | into two equal parts gives two 30-60-90 right triangles |
+| 6 | The side opposite the 60 degree angle in | a 30-60-90 triangle is $\sqrt{3}$ times the side opposite the 30 degree angle |
+| 7 | The side opposite the 30 degree angle in a | 30-60-90 triangle is 1 times the radius of the circle |
+| 8 | Therefore, the side opposite the 60 degree | angle in our equilateral triangle is $\sqrt{3}$ times the radius of the circle, which is 1 |
+| 9 | Using the formula for the area of an equilateral | triangle with side length "s": $Area = \frac{\sqrt{3}}{4} * s^2$ |
+| 10 | Substituting $s = \sqrt{3}$ | into the formula gives $Area = \frac{\sqrt{3}}{4} * (\sqrt{3})^2 = \frac{3\sqrt{3}}{4}$ |
+| 11 | Thus, the area of the equilateral triangle | circumscribed by a circle with radius 1 is $\frac{3\sqrt{3}}{4}$ units squared |
+```
+
+For reference, the `engineered_response` answer is correct.
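+
+If you want to consume the table programmatically, for example to grab the final result row, a small parser over the returned Markdown is usually enough. The sketch below is illustrative and assumes the model keeps the tabular format it was prompted with:
+
+```python
+def parse_tab_cot(content: str) -> list[dict[str, str]]:
+    """Parse a Markdown table from a Tab-CoT response into a list of row dicts."""
+    rows = [line.strip() for line in content.splitlines() if line.strip().startswith("|")]
+    if len(rows) < 3:  # need at least a header, a separator, and one data row
+        return []
+    header = [cell.strip().lower() for cell in rows[0].strip("|").split("|")]
+    parsed = []
+    for row in rows[2:]:  # skip the header and separator rows
+        cells = [cell.strip() for cell in row.strip("|").split("|")]
+        if len(cells) == len(header):
+            parsed.append(dict(zip(header, cells)))
+    return parsed
+
+
+# Sample table in the format produced above
+sample = """| Step | Subquestion | Process | Result |
+|------|-------------|---------|--------|
+| 1 | How many puppies are left? | 102 - 21 = 81 | 81 puppies |
+| 2 | How many cages are needed? | 81 / 9 = 9 | 9 cages |"""
+
+print(parse_tab_cot(sample)[-1]["result"])  # -> '9 cages'
+```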
+
+
+
+- Structured Reasoning: Use Tab-CoT to encourage the LLM to break down complex problems into clear, discrete steps.
+- Improved Accuracy: The tabular format can lead to improved accuracy, especially in arithmetic and multi-step reasoning tasks.
+- Easy Verification: The step-by-step tabular format makes it easier to verify the LLM's reasoning process.
+- Consistency: Tab-CoT can help maintain consistency in the problem-solving approach across different queries.
+- Visual Clarity: The table format provides a clear visual representation of the problem-solving process, which can be beneficial for understanding and presentation.
+
+
+
+Tabular Chain of Thought provides a structured approach to problem-solving that can enhance the LLM's reasoning capabilities. By organizing thoughts into a table format, it allows for clearer step-by-step analysis, which can lead to more accurate results, especially in complex arithmetic or logical reasoning tasks.
diff --git a/cloud/content/docs/v1/guides/prompt-engineering/text-based/thread-of-thought.mdx b/cloud/content/docs/v1/guides/prompt-engineering/text-based/thread-of-thought.mdx
new file mode 100644
index 0000000000..96b5acc8d8
--- /dev/null
+++ b/cloud/content/docs/v1/guides/prompt-engineering/text-based/thread-of-thought.mdx
@@ -0,0 +1,127 @@
+---
+title: Thread of Thought
+description: Explore Thread of Thought (THoT), an extension of Chain of Thought that provides a structured approach for LLMs to analyze information step-by-step, particularly effective for tasks with large context.
+---
+
+# Thread of Thought
+
+[Thread of Thought](https://arxiv.org/pdf/2311.08734) (THoT) is an extension of zero-shot [Chain of Thought](/docs/mirascope/guides/prompt-engineering/text-based/chain-of-thought) where the request to walk through the reasoning steps is improved. The paper tests various phrasings and finds the most effective to be "Walk me through this context in manageable parts step by step, summarizing and analyzing as we go." Like CoT, it is applicable to reasoning and mathematical tasks, but it is most useful for tasks involving retrieval or large amounts of context and question answering over that context.
+
+
+
+
+
+
+
+```python
+from mirascope.core import openai, prompt_template
+
+rag_output = [
+ """Apple Inc. was founded on April 1, 1976, by Steve Jobs, Steve Wozniak, and
+Ronald Wayne. The company started in the garage of Jobs' childhood home in
+Los Altos, California.""",
+ """Steve Jobs was a visionary entrepreneur and the co-founder of Apple Inc.
+He played a key role in the development of the Macintosh, iPod, iPhone, and iPad.""",
+ """Apple's headquarters, known as Apple Park, is located in Cupertino, California.
+The campus, designed by Norman Foster, opened to employees in April 2017.""",
+ """In 1977, Apple Computer, Inc. was incorporated. The Apple II, one of the first
+highly successful mass-produced microcomputer products, was introduced that year.""",
+ """Apple's first product, the Apple I, was sold as a fully assembled circuit board.
+The idea for the company came from Steve Wozniak's interest in building a computer
+kit.""",
+ """Steve Wozniak and Steve Jobs were high school friends before they founded Apple
+together. They were both members of the Homebrew Computer Club, where they exchanged
+ideas with other computer enthusiasts.""",
+ """The first Apple Store opened in Tysons Corner, Virginia, in May 2001.
+Apple Stores have since become iconic retail spaces around the world.""",
+ """Apple has a strong commitment to environmental sustainability. The company
+aims to have its entire supply chain carbon neutral by 2030.""",
+ """Ronald Wayne, the lesser-known third co-founder of Apple, sold his shares
+in the company just 12 days after it was founded. He believed the venture was too
+risky and wanted to avoid potential financial loss.""",
+ """In 1984, Apple launched the Macintosh, the first personal computer to feature
+a graphical user interface and a mouse. This product revolutionized the computer
+industry and set new standards for user-friendly design.""",
+]
+
+
+def retrieve_passages(query: str):
+ """Simulates RAG retrieval."""
+ return rag_output
+
+
+thot_augment = """Walk me through this context in manageable parts step by step,
+summarizing and analyzing as we go"""
+
+
+@openai.call(model="gpt-4o-mini")
+@prompt_template(
+ """
+ As a content reviewer, I provide multiple retrieved passages about
+ this question; you need to answer the question.
+
+ {context}
+
+ {query} {thot_augment}
+ """
+)
+def call(query: str, thot_prompt: bool = False) -> openai.OpenAIDynamicConfig:
+ passages = retrieve_passages(query)
+ context = [
+ f"retrieved passage {i + 1} is: {passage}" for i, passage in enumerate(passages)
+ ]
+ return {
+ "computed_fields": {
+ "context": context,
+ "thot_augment": thot_augment if thot_prompt else "",
+ }
+ }
+
+
+prompt = "Where was Apple founded?"
+
+print(call(query=prompt, thot_prompt=True))
+```
+
+```
+To answer the question "Where was Apple founded?" let's break down the information available in the retrieved passages step by step.
+
+### Step 1: Identify Founding Information
+
+From **retrieved passage 1**, we learn the following:
+- Apple Inc. was founded on April 1, 1976.
+- The founders are Steve Jobs, Steve Wozniak, and Ronald Wayne.
+- The company started in the garage of Jobs' childhood home.
+
+### Step 2: Analyze the Location
+
+The precise location mentioned in **retrieved passage 1** is:
+- **Los Altos, California.**
+This indicates that Apple was founded in a residential setting, specifically in a garage, which is a common story for many tech startups, illustrating humble beginnings.
+
+### Step 3: Confirming with Additional Context
+
+While the remaining passages provide various pieces of information about Apple, such as the development of its products and its incorporation, they do not provide an alternative founding location. Therefore, the core location remains unchanged by the additional context.
+
+### Step 4: Summarizing
+
+In summary, Apple Inc. was founded in **Los Altos, California**, in the garage of Steve Jobs' childhood home. This information highlights the origins of a now-massive corporation, emphasizing that great companies can start in modest environments.
+
+Thus, the answer to the question "Where was Apple founded?" is **Los Altos, California**.
+```
+
+
+
+
+- Use with Large Context: THoT is particularly effective when dealing with large amounts of retrieved information or context.
+- Encourage Step-by-Step Analysis: The key phrase "Walk me through this context in manageable parts step by step, summarizing and analyzing as we go" prompts the LLM to break down and analyze information incrementally.
+- Apply to Q&A Tasks: THoT is especially useful for question-answering tasks that require processing and synthesizing information from multiple sources.
+- Combine with Retrieval: THoT works well in combination with retrieval augmented generation (RAG) techniques.
+- Review Intermediate Steps: Examine the LLM's step-by-step analysis to ensure it's properly interpreting and synthesizing the context.
+
+
+
+Thread of Thought enhances the zero-shot Chain of Thought approach by providing a more structured way for the LLM to process and analyze large amounts of context. This technique is particularly valuable for tasks that involve information retrieval and synthesis, allowing for more thorough and transparent reasoning in complex question-answering scenarios.
diff --git a/cloud/content/docs/v1/index.mdx b/cloud/content/docs/v1/index.mdx
new file mode 100644
index 0000000000..4927ec650e
--- /dev/null
+++ b/cloud/content/docs/v1/index.mdx
@@ -0,0 +1,113 @@
+---
+title: Welcome
+description: Mirascope is a Python library that streamlines working with LLMs
+---
+
+# Welcome to Mirascope
+
+
+
+Mirascope is a powerful, flexible, and user-friendly library that simplifies the process of working with LLMs through a unified interface that works across various supported providers, including [OpenAI](https://openai.com/), [Anthropic](https://www.anthropic.com/), [Mistral](https://mistral.ai/), [Google (Gemini/Vertex)](https://googleapis.github.io/python-genai/), [Groq](https://groq.com/), [Cohere](https://cohere.com/), [LiteLLM](https://www.litellm.ai/), [Azure AI](https://azure.microsoft.com/en-us/solutions/ai), and [Bedrock](https://aws.amazon.com/bedrock/).
+
+Whether you're generating text, extracting structured information, or developing complex AI-driven agent systems, Mirascope provides the tools you need to streamline your development process and create powerful, robust applications.
+
+
+
+ Why Use Mirascope
+
+
+ Join Our Community
+
+
+ Star the Repo
+
+
+
+## Getting Started
+
+Install Mirascope, specifying the provider you intend to use, and set your API key:
+
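+For example, installing with the OpenAI extra and setting its key might look like the following; swap the extra and environment variable for the provider you actually use:
+
+```python
+!pip install "mirascope[openai]"  # e.g. "mirascope[anthropic]" for Anthropic
+```
+
+```python
+import os
+
+os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"
+# Set the appropriate API key for the provider you're using
+```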
+
+
+## Mirascope API
+
+Mirascope provides a consistent, easy-to-use API across all providers:
+
+
+
+```python
+from mirascope import llm
+from pydantic import BaseModel
+
+
+class Book(BaseModel):
+ """An extracted book."""
+
+ title: str
+ author: str
+
+
+# [!code highlight:6]
+@llm.call(
+ provider="$PROVIDER",
+ model="$MODEL",
+ response_model=Book
+)
+def extract_book(text: str) -> str:
+ return f"Extract {text}"
+
+
+book: Book = extract_book("The Name of the Wind by Patrick Rothfuss") # [!code highlight]
+print(book)
+# Output: title='The Name of the Wind' author='Patrick Rothfuss' # [!code highlight]
+```
+
+
+```python
+from mirascope import llm, prompt_template
+from pydantic import BaseModel
+
+
+class Book(BaseModel):
+ """An extracted book."""
+
+ title: str
+ author: str
+
+
+# [!code highlight:6]
+@llm.call(
+ provider="$PROVIDER",
+ model="$MODEL",
+ response_model=Book
+)
+@prompt_template("Extract {text}")
+def extract_book(text: str): ...
+
+
+book: Book = extract_book("The Name of the Wind by Patrick Rothfuss") # [!code highlight]
+print(book)
+# Output: title='The Name of the Wind' author='Patrick Rothfuss' # [!code highlight]
+```
+
+
+
+## Provider SDK Equivalent
+
+For comparison, here's how you would achieve the same result using the provider's native SDK:
+
+
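+As an illustrative sketch (using OpenAI as the example provider), extracting the same `Book` with the raw SDK means constructing the messages, requesting JSON output, and validating it yourself. The model name and prompt wording below are assumptions, not the library's canonical equivalent:
+
+```python
+import json
+
+from openai import OpenAI
+from pydantic import BaseModel
+
+
+class Book(BaseModel):
+    """An extracted book."""
+
+    title: str
+    author: str
+
+
+client = OpenAI()
+
+
+def extract_book(text: str) -> Book:
+    completion = client.chat.completions.create(
+        model="gpt-4o-mini",
+        response_format={"type": "json_object"},  # ask for JSON so we can validate it
+        messages=[
+            {
+                "role": "user",
+                "content": f"Extract {text}. Respond with JSON with `title` and `author` keys.",
+            }
+        ],
+    )
+    # Parse and validate the JSON payload ourselves
+    return Book.model_validate(json.loads(completion.choices[0].message.content or "{}"))
+
+
+book = extract_book("The Name of the Wind by Patrick Rothfuss")
+print(book)
+```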
+
+
+
+If you'd like a more in-depth guide to getting started with Mirascope, check out our [quickstart guide](/docs/mirascope/guides/getting-started/quickstart/).
+
+We're excited to see what you'll build with Mirascope, and we're here to help! Don't hesitate to reach out :)
diff --git a/cloud/content/docs/v1/learn/agents.mdx b/cloud/content/docs/v1/learn/agents.mdx
new file mode 100644
index 0000000000..93642c14f8
--- /dev/null
+++ b/cloud/content/docs/v1/learn/agents.mdx
@@ -0,0 +1,634 @@
+---
+title: Agents
+description: Learn how to build autonomous and semi-autonomous LLM-powered agents with Mirascope that can use tools, maintain state, and execute multi-step reasoning processes.
+---
+
+# Agents
+
+> __Definition__: a person who acts on behalf of another person or group
+
+When working with Large Language Models (LLMs), an "agent" refers to an autonomous or semi-autonomous system that can act on your behalf. The core concept is the use of tools to enable the LLM to interact with its environment.
+
+In this section we will implement a toy `Librarian` agent to demonstrate key concepts in Mirascope that will help you build agents.
+
+
+ If you haven't already, we recommend first reading the section on [Tools](/docs/mirascope/learn/tools)
+
+
+
+ ```mermaid
+ sequenceDiagram
+ participant YC as Your Code
+ participant LLM
+
+ loop Agent Loop
+ YC->>LLM: Call with prompt + history + function definitions
+ loop Tool Calling Cycle
+ LLM->>LLM: Decide to respond or call functions
+ LLM->>YC: Respond with function to call and arguments
+ YC->>YC: Execute function with given arguments
+ YC->>YC: Add tool call message parameters to history
+ YC->>LLM: Call with prompt + history including function result
+ end
+ LLM->>YC: Finish calling tools and return final response
+ YC->>YC: Update history with final response
+ end
+ ```
+
+
+## State Management
+
+Since an agent needs to operate across multiple LLM API calls, the first concept to cover is state. The goal of providing state to the agent is to give it memory. For example, we can think of local variables as "working memory" and a database as "long-term memory".
+
+Let's take a look at a basic chatbot (not an agent) that uses a class to maintain the chat's history:
+
+
+
+```python
+from mirascope import Messages, llm, BaseMessageParam
+from pydantic import BaseModel
+
+
+class Librarian(BaseModel):
+ history: list[BaseMessageParam] = [] # [!code highlight]
+
+ @llm.call(provider="$PROVIDER", model="$MODEL")
+ def _call(self, query: str) -> Messages.Type:
+ return [
+ Messages.System("You are a librarian"),
+ *self.history, # [!code highlight]
+ Messages.User(query),
+ ]
+
+ def run(self) -> None:
+ while True:
+ query = input("(User): ")
+ if query in ["exit", "quit"]:
+ break
+ print("(Assistant): ", end="", flush=True)
+ response = self._call(query)
+ print(response.content)
+ self.history += [ # [!code highlight]
+ Messages.User(query), # [!code highlight]
+ response.message_param, # [!code highlight]
+ ] # [!code highlight]
+
+
+Librarian().run()
+```
+
+
+```python
+from mirascope import Messages, llm, BaseMessageParam, prompt_template
+from pydantic import BaseModel
+
+
+class Librarian(BaseModel):
+ history: list[BaseMessageParam] = [] # [!code highlight]
+
+ @llm.call(provider="$PROVIDER", model="$MODEL")
+ @prompt_template(
+ """
+ SYSTEM: You are a librarian
+ MESSAGES: {self.history} # [!code highlight]
+ USER: {query}
+ """
+ )
+ def _call(self, query: str): ...
+
+ def run(self) -> None:
+ while True:
+ query = input("(User): ")
+ if query in ["exit", "quit"]:
+ break
+ print("(Assistant): ", end="", flush=True)
+ response = self._call(query)
+ print(response.content)
+ self.history += [ # [!code highlight]
+ Messages.User(query), # [!code highlight]
+ response.message_param, # [!code highlight]
+ ] # [!code highlight]
+
+
+Librarian().run()
+```
+
+
+
+In this example we:
+
+- Create a `Librarian` class with a `history` attribute.
+- Implement a private `_call` method that injects `history`.
+- Run the `_call` method in a loop, saving the history at each step.
+
+A chatbot with memory, while more advanced, is still not an agent.
+
+
+Because the call is written against Mirascope's provider-agnostic interface, you can also swap the provider and model at runtime with `llm.override`, running the same chatbot against a different provider without changing the class:
+
+```python
+from mirascope import BaseMessageParam, Messages, llm
+from pydantic import BaseModel
+
+
+class Librarian(BaseModel):
+ history: list[BaseMessageParam] = []
+
+ @llm.call(provider="$PROVIDER", model="$MODEL")
+ def _call(self, query: str) -> Messages.Type:
+ return [
+ Messages.System("You are a librarian"),
+ *self.history,
+ Messages.User(query),
+ ]
+
+ def run(
+ self,
+ provider: llm.Provider, # [!code highlight]
+ model: str, # [!code highlight]
+ ) -> None:
+ while True:
+ query = input("(User): ")
+ if query in ["exit", "quit"]:
+ break
+ print("(Assistant): ", end="", flush=True)
+ response = llm.override(self._call, provider=provider, model=model)(query) # [!code highlight]
+ print(response.content)
+ self.history += [
+ response.user_message_param,
+ response.message_param,
+ ]
+
+
+Librarian().run("anthropic", "claude-3-5-sonnet-latest")
+```
+
+
+```python
+from mirascope import BaseMessageParam, llm, prompt_template
+from pydantic import BaseModel
+
+
+class Librarian(BaseModel):
+ history: list[BaseMessageParam] = []
+
+ @llm.call(provider="$PROVIDER", model="$MODEL") # [!code highlight]
+ @prompt_template(
+ """
+ SYSTEM: You are a librarian
+ MESSAGES: {self.history}
+ USER: {query}
+ """
+ )
+ def _call(self, query: str): ...
+
+ def run(
+ self,
+ provider: llm.Provider,
+ model: str,
+ ) -> None:
+ while True:
+ query = input("(User): ")
+ if query in ["exit", "quit"]:
+ break
+ print("(Assistant): ", end="", flush=True)
+ response = llm.override(self._call, provider=provider, model=model)(query) # [!code highlight]
+ print(response.content)
+ self.history += [
+ response.user_message_param,
+ response.message_param,
+ ]
+
+
+Librarian().run("anthropic", "claude-3-5-sonnet-latest")
+```
+
+
+
+
+## Integrating Tools
+
+The next concept to cover is introducing tools to our chatbot, turning it into an agent capable of acting on our behalf. The most basic agent flow is to call tools on behalf of the agent, providing their outputs back through the chat history until the agent is ready to respond to the initial query.
+
+Let's take a look at a basic example where the `Librarian` can access the books available in the library:
+
+
+
+```python
+import json
+
+from mirascope import BaseDynamicConfig, Messages, llm, BaseMessageParam
+from pydantic import BaseModel
+
+
+class Book(BaseModel):
+ title: str
+ author: str
+
+
+class Librarian(BaseModel):
+ history: list[BaseMessageParam] = [] # [!code highlight]
+ library: list[Book] = [ # [!code highlight]
+ Book(title="The Name of the Wind", author="Patrick Rothfuss"), # [!code highlight]
+ Book(title="Mistborn: The Final Empire", author="Brandon Sanderson"), # [!code highlight]
+ ] # [!code highlight]
+
+ def _available_books(self) -> str: # [!code highlight]
+ """Returns the list of books available in the library.""" # [!code highlight]
+ return json.dumps([book.model_dump() for book in self.library]) # [!code highlight]
+
+ @llm.call(provider="$PROVIDER", model="$MODEL")
+ def _call(self, query: str) -> BaseDynamicConfig:
+ messages = [
+ Messages.System("You are a librarian"),
+ *self.history,
+ ]
+ if query:
+ messages.append(Messages.User(query))
+ return {"messages": messages, "tools": [self._available_books]} # [!code highlight]
+
+ def _step(self, query: str) -> str:
+ response = self._call(query)
+ if query:
+ self.history.append(Messages.User(query))
+ self.history.append(response.message_param)
+ tools_and_outputs = [] # [!code highlight]
+ if tools := response.tools: # [!code highlight]
+ for tool in tools: # [!code highlight]
+ print(f"[Calling Tool '{tool._name()}' with args {tool.args}]") # [!code highlight]
+ tools_and_outputs.append((tool, tool.call())) # [!code highlight]
+ self.history += response.tool_message_params(tools_and_outputs) # [!code highlight]
+ return self._step("") # [!code highlight]
+ else:
+ return response.content
+
+ def run(self) -> None:
+ while True:
+ query = input("(User): ")
+ if query in ["exit", "quit"]:
+ break
+ print("(Assistant): ", end="", flush=True)
+ step_output = self._step(query)
+ print(step_output)
+
+
+Librarian().run()
+```
+
+
+```python
+import json
+
+from mirascope import (
+ BaseDynamicConfig,
+ Messages,
+ llm,
+ BaseMessageParam,
+ prompt_template,
+)
+from pydantic import BaseModel
+
+
+class Book(BaseModel):
+ title: str
+ author: str
+
+
+class Librarian(BaseModel):
+ history: list[BaseMessageParam] = []
+ library: list[Book] = [ # [!code highlight]
+ Book(title="The Name of the Wind", author="Patrick Rothfuss"), # [!code highlight]
+ Book(title="Mistborn: The Final Empire", author="Brandon Sanderson"), # [!code highlight]
+ ] # [!code highlight]
+
+ def _available_books(self) -> str: # [!code highlight]
+ """Returns the list of books available in the library.""" # [!code highlight]
+ return json.dumps([book.model_dump() for book in self.library]) # [!code highlight]
+
+ @llm.call(provider="$PROVIDER", model="$MODEL")
+ @prompt_template(
+ """
+ SYSTEM: You are a librarian
+ MESSAGES: {self.history}
+ USER: {query}
+ """
+ )
+ def _call(self, query: str) -> BaseDynamicConfig:
+ return {"tools": [self._available_books]} # [!code highlight]
+
+ def _step(self, query: str) -> str:
+ response = self._call(query)
+ if query:
+ self.history.append(Messages.User(query))
+ self.history.append(response.message_param)
+ tools_and_outputs = [] # [!code highlight]
+ if tools := response.tools: # [!code highlight]
+ for tool in tools: # [!code highlight]
+ print(f"[Calling Tool '{tool._name()}' with args {tool.args}]") # [!code highlight]
+ tools_and_outputs.append((tool, tool.call())) # [!code highlight]
+ self.history += response.tool_message_params(tools_and_outputs) # [!code highlight]
+ return self._step("") # [!code highlight]
+ else:
+ return response.content
+
+ def run(self) -> None:
+ while True:
+ query = input("(User): ")
+ if query in ["exit", "quit"]:
+ break
+ print("(Assistant): ", end="", flush=True)
+ step_output = self._step(query)
+ print(step_output)
+
+
+Librarian().run()
+```
+
+
+
+In this example we:
+
+1. Added the `library` state to maintain the list of available books.
+2. Implemented the `_available_books` tool that returns the library as a string.
+3. Updated `_call` to give the LLM access to the tool.
+ - We used the `tools` dynamic configuration field so the tool has access to the library through `self`.
+4. Added a `_step` method that implements a full step from user input to assistant output.
+5. For each step, we call the LLM and see if there are any tool calls.
+    - If yes, we call the tools, collect the outputs, and insert the tool calls into the chat history. We then recursively call `_step` again with an empty user query until the LLM is done calling tools and is ready to respond.
+ - If no, the LLM is ready to respond and we return the response content.
+
+Now that our chatbot is capable of using tools, we have a basic agent.
+
+## Human-In-The-Loop
+
+While it would be nice to have fully autonomous agents, LLMs are far from perfect and often need assistance to ensure they continue down the right path in an agent flow.
+
+One common and easy way to help guide LLM agents is to give the agent the ability to ask for help. This "human-in-the-loop" flow lets the agent ask for help if it determines it needs it:
+
+
+
+```python
+from mirascope import BaseDynamicConfig, Messages, llm, BaseMessageParam
+from pydantic import BaseModel
+
+
+class Librarian(BaseModel):
+ history: list[BaseMessageParam] = []
+
+ def _ask_for_help(self, question: str) -> str: # [!code highlight]
+ """Asks for help from an expert.""" # [!code highlight]
+ print("[Assistant Needs Help]") # [!code highlight]
+ print(f"[QUESTION]: {question}") # [!code highlight]
+ answer = input("[ANSWER]: ") # [!code highlight]
+ print("[End Help]") # [!code highlight]
+ return answer # [!code highlight]
+
+ @llm.call(provider="$PROVIDER", model="$MODEL")
+ def _call(self, query: str) -> BaseDynamicConfig:
+ messages = [
+ Messages.System("You are a librarian"),
+ *self.history,
+ ]
+ if query:
+ messages.append(Messages.User(query))
+ return {"messages": messages, "tools": [self._ask_for_help]} # [!code highlight]
+
+ def _step(self, query: str) -> str:
+ response = self._call(query)
+ if query:
+ self.history.append(Messages.User(query))
+ self.history.append(response.message_param)
+ tools_and_outputs = []
+ if tools := response.tools:
+ for tool in tools:
+ print(f"[Calling Tool '{tool._name()}' with args {tool.args}]")
+ tools_and_outputs.append((tool, tool.call()))
+ self.history += response.tool_message_params(tools_and_outputs)
+ return self._step("")
+ else:
+ return response.content
+
+ def run(self) -> None:
+ while True:
+ query = input("(User): ")
+ if query in ["exit", "quit"]:
+ break
+ print("(Assistant): ", end="", flush=True)
+ step_output = self._step(query)
+ print(step_output)
+
+
+Librarian().run()
+```
+
+
+```python
+from mirascope import (
+ BaseDynamicConfig,
+ Messages,
+ llm,
+ BaseMessageParam,
+ prompt_template,
+)
+from pydantic import BaseModel
+
+
+class Librarian(BaseModel):
+ history: list[BaseMessageParam] = []
+
+ def _ask_for_help(self, question: str) -> str: # [!code highlight]
+ """Asks for help from an expert.""" # [!code highlight]
+ print("[Assistant Needs Help]") # [!code highlight]
+ print(f"[QUESTION]: {question}") # [!code highlight]
+ answer = input("[ANSWER]: ") # [!code highlight]
+ print("[End Help]") # [!code highlight]
+ return answer # [!code highlight]
+
+ @llm.call(provider="$PROVIDER", model="$MODEL")
+ @prompt_template(
+ """
+ SYSTEM: You are a librarian
+ MESSAGES: {self.history}
+ USER: {query}
+ """
+ )
+ def _call(self, query: str) -> BaseDynamicConfig:
+ return {"tools": [self._ask_for_help]} # [!code highlight]
+
+ def _step(self, query: str) -> str:
+ response = self._call(query)
+ if query:
+ self.history.append(Messages.User(query))
+ self.history.append(response.message_param)
+ tools_and_outputs = []
+ if tools := response.tools:
+ for tool in tools:
+ print(f"[Calling Tool '{tool._name()}' with args {tool.args}]")
+ tools_and_outputs.append((tool, tool.call()))
+ self.history += response.tool_message_params(tools_and_outputs)
+ return self._step("")
+ else:
+ return response.content
+
+ def run(self) -> None:
+ while True:
+ query = input("(User): ")
+ if query in ["exit", "quit"]:
+ break
+ print("(Assistant): ", end="", flush=True)
+ step_output = self._step(query)
+ print(step_output)
+
+
+Librarian().run()
+```
+
+
+
+## Streaming
+
+The previous examples print each tool call so you can see what the agent is doing before the final response; however, you still need to wait for the agent to generate its entire final response before you see the output.
+
+Streaming can help to provide an even more real-time experience:
+
+
+
+```python
+import json
+
+from mirascope import BaseDynamicConfig, Messages, llm, BaseMessageParam
+from pydantic import BaseModel
+
+
+class Book(BaseModel):
+ title: str
+ author: str
+
+
+class Librarian(BaseModel):
+ history: list[BaseMessageParam] = []
+ library: list[Book] = [
+ Book(title="The Name of the Wind", author="Patrick Rothfuss"),
+ Book(title="Mistborn: The Final Empire", author="Brandon Sanderson"),
+ ]
+
+ def _available_books(self) -> str:
+ """Returns the list of books available in the library."""
+ return json.dumps([book.model_dump() for book in self.library])
+
+ @llm.call(provider="$PROVIDER", model="$MODEL", stream=True) # [!code highlight]
+ def _stream(self, query: str) -> BaseDynamicConfig: # [!code highlight]
+ messages = [
+ Messages.System("You are a librarian"),
+ *self.history,
+ ]
+ if query:
+ messages.append(Messages.User(query))
+ return {"messages": messages, "tools": [self._available_books]}
+
+ def _step(self, query: str) -> None:
+ stream = self._stream(query) # [!code highlight]
+ if query:
+ self.history.append(Messages.User(query))
+ tools_and_outputs = [] # [!code highlight]
+ for chunk, tool in stream: # [!code highlight]
+ if tool: # [!code highlight]
+ print(f"[Calling Tool '{tool._name()}' with args {tool.args}]") # [!code highlight]
+ tools_and_outputs.append((tool, tool.call())) # [!code highlight]
+ else: # [!code highlight]
+ print(chunk.content, end="", flush=True) # [!code highlight]
+ self.history.append(stream.message_param) # [!code highlight]
+ if tools_and_outputs: # [!code highlight]
+ self.history += stream.tool_message_params(tools_and_outputs) # [!code highlight]
+ self._step("") # [!code highlight]
+
+ def run(self) -> None:
+ while True:
+ query = input("(User): ")
+ if query in ["exit", "quit"]:
+ break
+ print("(Assistant): ", end="", flush=True)
+ self._step(query)
+ print()
+
+
+Librarian().run()
+```
+
+
+```python
+import json
+
+from mirascope import (
+ BaseDynamicConfig,
+ Messages,
+ llm,
+ BaseMessageParam,
+ prompt_template,
+)
+from pydantic import BaseModel
+
+
+class Book(BaseModel):
+ title: str
+ author: str
+
+
+class Librarian(BaseModel):
+ history: list[BaseMessageParam] = []
+ library: list[Book] = [
+ Book(title="The Name of the Wind", author="Patrick Rothfuss"),
+ Book(title="Mistborn: The Final Empire", author="Brandon Sanderson"),
+ ]
+
+ def _available_books(self) -> str:
+ """Returns the list of books available in the library."""
+ return json.dumps([book.model_dump() for book in self.library])
+
+ @llm.call(provider="$PROVIDER", model="$MODEL", stream=True) # [!code highlight]
+ @prompt_template(
+ """
+ SYSTEM: You are a librarian
+ MESSAGES: {self.history}
+ USER: {query}
+ """
+ )
+ def _stream(self, query: str) -> BaseDynamicConfig: # [!code highlight]
+ return {"tools": [self._available_books]}
+
+ def _step(self, query: str) -> None:
+ stream = self._stream(query) # [!code highlight]
+ if query:
+ self.history.append(Messages.User(query))
+ tools_and_outputs = [] # [!code highlight]
+ for chunk, tool in stream: # [!code highlight]
+ if tool: # [!code highlight]
+ print(f"[Calling Tool '{tool._name()}' with args {tool.args}]") # [!code highlight]
+ tools_and_outputs.append((tool, tool.call())) # [!code highlight]
+ else: # [!code highlight]
+ print(chunk.content, end="", flush=True) # [!code highlight]
+ self.history.append(stream.message_param) # [!code highlight]
+ if tools_and_outputs: # [!code highlight]
+ self.history += stream.tool_message_params(tools_and_outputs) # [!code highlight]
+ self._step("") # [!code highlight]
+
+ def run(self) -> None:
+ while True:
+ query = input("(User): ")
+ if query in ["exit", "quit"]:
+ break
+ print("(Assistant): ", end="", flush=True)
+ self._step(query)
+ print()
+
+
+Librarian().run()
+```
+
+
+
+## Next Steps
+
+This section is just the tip of the iceberg when it comes to building agents, implementing just one type of simple agent flow. It's important to remember that "agent" is quite a general term and can mean different things for different use-cases. Mirascope's various features make building agents easier, but it will be up to you to determine the architecture that best suits your goals.
+
+Next, we recommend taking a look at our [Agent Tutorials](/docs/mirascope/guides/agents/web-search-agent) to see examples of more complex, real-world agents.
\ No newline at end of file
diff --git a/cloud/content/docs/v1/learn/async.mdx b/cloud/content/docs/v1/learn/async.mdx
new file mode 100644
index 0000000000..9d1892f01e
--- /dev/null
+++ b/cloud/content/docs/v1/learn/async.mdx
@@ -0,0 +1,330 @@
+---
+title: Async
+description: Learn how to use asynchronous programming with Mirascope to efficiently handle I/O-bound operations, improve responsiveness, and run multiple LLM calls concurrently.
+---
+
+# Async
+
+Asynchronous programming is a crucial concept when building applications with LLMs (Large Language Models) using Mirascope. This feature allows for efficient handling of I/O-bound operations (e.g., API calls), improving application responsiveness and scalability. Mirascope utilizes the [asyncio](https://docs.python.org/3/library/asyncio.html) library to implement asynchronous processing.
+
+
+ - **Use asyncio for I/O-bound tasks**: Async is most beneficial for I/O-bound operations like API calls. It may not provide significant benefits for CPU-bound tasks.
+ - **Avoid blocking operations**: Ensure that you're not using blocking operations within async functions, as this can negate the benefits of asynchronous programming.
+ - **Consider using connection pools**: When making many async requests, consider using connection pools to manage and reuse connections efficiently.
+ - **Be mindful of rate limits**: While async allows for concurrent requests, be aware of API rate limits and implement appropriate throttling if necessary.
+ - **Use appropriate timeouts**: Implement timeouts for async operations to prevent hanging in case of network issues or unresponsive services.
+ - **Test thoroughly**: Async code can introduce subtle bugs. Ensure comprehensive testing of your async implementations.
+ - **Leverage async context managers**: Use async context managers (async with) for managing resources that require setup and cleanup in async contexts.
+
+
+
+ ```mermaid
+ sequenceDiagram
+ participant Main as Main Process
+ participant API1 as API Call 1
+ participant API2 as API Call 2
+ participant API3 as API Call 3
+
+ Main->>+API1: Send Request
+ Main->>+API2: Send Request
+ Main->>+API3: Send Request
+ API1-->>-Main: Response
+ API2-->>-Main: Response
+ API3-->>-Main: Response
+ Main->>Main: Process All Responses
+ ```
+
+
+## Key Terms
+
+- `async`: Keyword used to define a function as asynchronous
+- `await`: Keyword used to wait for the completion of an asynchronous operation
+- `asyncio`: Python library that supports asynchronous programming
+
+## Basic Usage and Syntax
+
+
+If you haven't already, we recommend first reading the section on [Calls](/docs/mirascope/learn/calls)
+
+
+To use async in Mirascope, simply define the function as async and use the `await` keyword when calling it. Here's a basic example:
+
+
+
+```python
+import asyncio
+
+from mirascope import llm
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+async def recommend_book(genre: str) -> str: # [!code highlight]
+ return f"Recommend a {genre} book"
+
+
+async def main():
+ response = await recommend_book("fantasy") # [!code highlight]
+ print(response.content)
+
+
+asyncio.run(main())
+```
+
+
+```python
+import asyncio
+
+from mirascope import llm, prompt_template
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+@prompt_template("Recommend a {genre} book")
+async def recommend_book(genre: str): ... # [!code highlight]
+
+
+async def main():
+ response = await recommend_book("fantasy") # [!code highlight]
+ print(response.content)
+
+
+asyncio.run(main())
+```
+
+
+
+In this example we:
+
+1. Define `recommend_book` as an asynchronous function.
+2. Create a `main` function that calls `recommend_book` and awaits it.
+3. Use `asyncio.run(main())` to start the asynchronous event loop and run the main function.
+
+## Parallel Async Calls
+
+One of the main benefits of asynchronous programming is the ability to run multiple operations concurrently. Here's an example of making parallel async calls:
+
+
+
+```python
+import asyncio
+
+from mirascope import llm
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+async def recommend_book(genre: str) -> str: # [!code highlight]
+ return f"Recommend a {genre} book"
+
+
+async def main():
+ genres = ["fantasy", "scifi", "mystery"]
+ tasks = [recommend_book(genre) for genre in genres] # [!code highlight]
+ results = await asyncio.gather(*tasks) # [!code highlight]
+
+ for genre, response in zip(genres, results):
+ print(f"({genre}):\n{response.content}\n")
+
+
+asyncio.run(main())
+```
+
+
+```python
+import asyncio
+
+from mirascope import llm, prompt_template
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+@prompt_template("Recommend a {genre} book")
+async def recommend_book(genre: str): ... # [!code highlight]
+
+
+async def main():
+ genres = ["fantasy", "scifi", "mystery"]
+ tasks = [recommend_book(genre) for genre in genres] # [!code highlight]
+ results = await asyncio.gather(*tasks) # [!code highlight]
+
+ for genre, response in zip(genres, results):
+ print(f"({genre}):\n{response.content}\n")
+
+
+asyncio.run(main())
+```
+
+
+
+We are using `asyncio.gather` to run and await multiple asynchronous tasks concurrently, printing the results for each task once all are completed.
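+
+If you also need to respect provider rate limits or guard against hanging requests, as noted in the best practices above, you can wrap the same calls with `asyncio.Semaphore` and `asyncio.wait_for`. The sketch below is illustrative; the concurrency limit and timeout values are placeholders to tune for your provider:
+
+```python
+import asyncio
+
+from mirascope import llm
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+async def recommend_book(genre: str) -> str:
+    return f"Recommend a {genre} book"
+
+
+async def recommend_with_limits(
+    genre: str, semaphore: asyncio.Semaphore, timeout: float = 30.0
+):
+    async with semaphore:  # cap concurrent in-flight requests
+        # Fail fast instead of hanging on an unresponsive request
+        return await asyncio.wait_for(recommend_book(genre), timeout=timeout)
+
+
+async def main():
+    semaphore = asyncio.Semaphore(3)  # placeholder limit; tune for your provider
+    genres = ["fantasy", "scifi", "mystery", "horror", "romance"]
+    results = await asyncio.gather(
+        *(recommend_with_limits(genre, semaphore) for genre in genres),
+        return_exceptions=True,  # surface per-task failures without cancelling the rest
+    )
+    for genre, result in zip(genres, results):
+        if isinstance(result, Exception):
+            print(f"({genre}): request failed or timed out: {result}")
+        else:
+            print(f"({genre}):\n{result.content}\n")
+
+
+asyncio.run(main())
+```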
+
+## Async Streaming
+
+
+If you haven't already, we recommend first reading the section on [Streams](/docs/mirascope/learn/streams)
+
+
+Streaming with async works similarly to synchronous streaming, but you use `async for` instead of a regular `for` loop:
+
+
+
+```python
+import asyncio
+
+from mirascope import llm
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", stream=True) # [!code highlight]
+async def recommend_book(genre: str) -> str: # [!code highlight]
+ return f"Recommend a {genre} book"
+
+
+async def main():
+ stream = await recommend_book("fantasy") # [!code highlight]
+ async for chunk, _ in stream: # [!code highlight]
+ print(chunk.content, end="", flush=True)
+
+
+asyncio.run(main())
+```
+
+
+```python
+import asyncio
+
+from mirascope import llm, prompt_template
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", stream=True) # [!code highlight]
+@prompt_template("Recommend a {genre} book")
+async def recommend_book(genre: str): ... # [!code highlight]
+
+
+async def main():
+ stream = await recommend_book("fantasy") # [!code highlight]
+ async for chunk, _ in stream: # [!code highlight]
+ print(chunk.content, end="", flush=True)
+
+
+asyncio.run(main())
+```
+
+
+
+## Async Tools
+
+
+If you haven't already, we recommend first reading the section on [Tools](/docs/mirascope/learn/tools)
+
+
+When using tools asynchronously, you can make the `call` method of a tool async:
+
+
+
+```python
+import asyncio
+
+from mirascope import BaseTool, llm
+
+
+class FormatBook(BaseTool):
+ title: str
+ author: str
+
+ async def call(self) -> str: # [!code highlight]
+ # Simulating an async API call
+ await asyncio.sleep(1)
+ return f"{self.title} by {self.author}"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[FormatBook]) # [!code highlight]
+async def recommend_book(genre: str) -> str: # [!code highlight]
+ return f"Recommend a {genre} book"
+
+
+async def main():
+ response = await recommend_book("fantasy")
+ if tool := response.tool:
+ if isinstance(tool, FormatBook): # [!code highlight]
+ output = await tool.call() # [!code highlight]
+ print(output)
+ else:
+ print(response.content)
+
+
+asyncio.run(main())
+```
+
+
+```python
+import asyncio
+
+from mirascope import BaseTool, llm, prompt_template
+
+
+class FormatBook(BaseTool):
+ title: str
+ author: str
+
+ async def call(self) -> str: # [!code highlight]
+ # Simulating an async API call
+ await asyncio.sleep(1)
+ return f"{self.title} by {self.author}"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[FormatBook]) # [!code highlight]
+@prompt_template("Recommend a {genre} book")
+async def recommend_book(genre: str): ...
+
+
+async def main():
+ response = await recommend_book("fantasy")
+ if tool := response.tool:
+ if isinstance(tool, FormatBook): # [!code highlight]
+ output = await tool.call() # [!code highlight]
+ print(output)
+ else:
+ print(response.content)
+
+
+asyncio.run(main())
+```
+
+
+
+It's important to note that in this example we use `isinstance(tool, FormatBook)` to ensure the `call` method can be awaited safely. This also gives us proper type hints and editor support.
+
+## Custom Client
+
+When using custom clients with async calls, it's crucial to use the asynchronous version of the client. You can provide the async client either through the decorator or dynamic configuration:
+
+### Decorator Parameter
+
+
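+As a minimal sketch, assuming the OpenAI provider and the `AsyncOpenAI` client from the official SDK (adapt the client class to your provider):
+
+```python
+from mirascope import llm
+from openai import AsyncOpenAI
+
+
+@llm.call(provider="openai", model="gpt-4o-mini", client=AsyncOpenAI())  # [!code highlight]
+async def recommend_book(genre: str) -> str:
+    return f"Recommend a {genre} book"
+```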
+
+
+
+
+
+
+
+
+### Dynamic Configuration
+
+
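+And a corresponding sketch that provides the async client through the dynamic configuration (again assuming the OpenAI provider and that the dynamic config accepts a `client` key):
+
+```python
+from mirascope import BaseDynamicConfig, Messages, llm
+from openai import AsyncOpenAI
+
+
+@llm.call(provider="openai", model="gpt-4o-mini")
+async def recommend_book(genre: str) -> BaseDynamicConfig:
+    return {
+        "messages": [Messages.User(f"Recommend a {genre} book")],
+        "client": AsyncOpenAI(),  # [!code highlight]
+    }
+```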
+
+
+
+
+
+
+
+
+
+Make sure to use the appropriate asynchronous client class (e.g., `AsyncOpenAI` instead of `OpenAI`) when working with async functions. Using a synchronous client in an async context can lead to blocking operations that defeat the purpose of async programming.
+
+
+## Next Steps
+
+By leveraging these async features in Mirascope, you can build more efficient and responsive applications, especially when working with multiple LLM calls or other I/O-bound operations.
+
+This section concludes our coverage of the core functionality that Mirascope supports. If you haven't already, we recommend taking a look at any previous sections you've missed to learn about what you can do with Mirascope.
+
+You can also check out the section on [Provider-Specific Features](/docs/mirascope/learn/provider-specific/openai) to learn about how to use features that only certain providers support, such as OpenAI's structured outputs.
\ No newline at end of file
diff --git a/cloud/content/docs/v1/learn/calls.mdx b/cloud/content/docs/v1/learn/calls.mdx
new file mode 100644
index 0000000000..12957a8534
--- /dev/null
+++ b/cloud/content/docs/v1/learn/calls.mdx
@@ -0,0 +1,483 @@
+---
+title: Calls
+description: Learn how to make API calls to various LLM providers using Mirascope. This guide covers basic usage, handling responses, and configuring call parameters for different providers.
+---
+
+# Calls
+
+
+ If you haven't already, we recommend first reading the section on writing [Prompts](/docs/mirascope/learn/prompts)
+
+
+When working with Large Language Model (LLM) APIs in Mirascope, a "call" refers to making a request to an LLM provider's API with a particular configuration and prompt.
+
+The `call` decorator is a core feature of the Mirascope library, designed to simplify and streamline interactions with various LLM providers. This powerful tool allows you to transform prompt templates written as Python functions into LLM API calls with minimal boilerplate code while providing type safety and consistency across different providers.
+
+We currently support [OpenAI](https://openai.com/), [Anthropic](https://www.anthropic.com/), [Google (Gemini/Vertex)](https://ai.google.dev/), [Groq](https://groq.com/), [xAI](https://x.ai/api), [Mistral](https://mistral.ai/), [Cohere](https://cohere.com/), [LiteLLM](https://www.litellm.ai/), [Azure AI](https://azure.microsoft.com/en-us/solutions/ai), and [Amazon Bedrock](https://aws.amazon.com/bedrock/).
+
+If there are any providers we don't yet support that you'd like to see supported, let us know!
+
+
+ [`mirascope.llm.call`](/docs/mirascope/api/llm/call)
+
+
+## Basic Usage and Syntax
+
+Let's take a look at a basic example using Mirascope vs. official provider SDKs:
+
+
+
+```python
+from mirascope import llm
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL") # [!code highlight]
+def recommend_book(genre: str) -> str:
+ return f"Recommend a {genre} book" # [!code highlight]
+
+
+response: llm.CallResponse = recommend_book("fantasy")
+print(response.content)
+```
+
+
+```python
+from mirascope import llm, prompt_template
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL") # [!code highlight]
+@prompt_template("Recommend a {genre} book") # [!code highlight]
+def recommend_book(genre: str): ...
+
+
+response: llm.CallResponse = recommend_book("fantasy")
+print(response.content)
+```
+
+
+
+Official provider SDKs typically require more boilerplate code:
+
+
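+As a rough sketch, the equivalent call using the official OpenAI SDK directly might look like this (other providers are similar):
+
+```python
+from openai import OpenAI
+
+client = OpenAI()
+
+
+def recommend_book(genre: str) -> str:
+    completion = client.chat.completions.create(
+        model="gpt-4o-mini",
+        messages=[{"role": "user", "content": f"Recommend a {genre} book"}],
+    )
+    return str(completion.choices[0].message.content)
+
+
+print(recommend_book("fantasy"))
+```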
+
+
+
+Notice how Mirascope makes calls more readable by reducing boilerplate and standardizing interactions with LLM providers.
+
+The `llm.call` decorator accepts `provider` and `model` arguments and returns a provider-agnostic `CallResponse` instance that provides a consistent interface regardless of the underlying provider. You can find more information on `CallResponse` in the [section below](#handling-responses) on handling responses.
+
+Note the `@prompt_template` decorator is optional unless you're using string templates.
+
+### Runtime Provider Overrides
+
+You can override provider settings at runtime using `llm.override`. This takes a function decorated with `llm.call` and lets you specify:
+
+- `provider`: Change the provider being called
+- `model`: Use a different model
+- `call_params`: Override call parameters like temperature
+- `client`: Use a different client instance
+
+When overriding with a specific `provider`, you must specify the `model` parameter.
+
+
+
+```python
+from mirascope import llm
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL") # [!code highlight]
+def recommend_book(genre: str) -> str: # [!code highlight]
+ return f"Recommend a {genre} book" # [!code highlight]
+
+
+response: llm.CallResponse = recommend_book("fantasy")
+print(response.content)
+
+override_response = llm.override( # [!code highlight]
+ recommend_book, # [!code highlight]
+ provider="$OTHER_PROVIDER", # [!code highlight]
+ model="$OTHER_MODEL", # [!code highlight]
+ call_params={"temperature": 0.7}, # [!code highlight]
+)("fantasy") # [!code highlight]
+
+print(override_response.content)
+```
+
+
+```python
+from mirascope import llm
+from mirascope.core import prompt_template
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL") # [!code highlight]
+@prompt_template("Recommend a {genre} book") # [!code highlight]
+def recommend_book(genre: str): ... # [!code highlight]
+
+
+response = recommend_book("fantasy")
+print(response.content)
+
+override_response = llm.override( # [!code highlight]
+ recommend_book, # [!code highlight]
+ provider="$OTHER_PROVIDER", # [!code highlight]
+ model="$OTHER_MODEL", # [!code highlight]
+ call_params={"temperature": 0.7}, # [!code highlight]
+)("fantasy") # [!code highlight]
+
+print(override_response.content)
+```
+
+
+
+## Handling Responses
+
+### Common Response Properties and Methods
+
+
+ [`mirascope.core.base.call_response`](/docs/mirascope/api/core/base/call_response)
+
+
+All [`BaseCallResponse`](/docs/mirascope/api) objects share these common properties:
+
+- `content`: The main text content of the response. If no content is present, this will be the empty string.
+- `finish_reasons`: A list of reasons why the generation finished (e.g., "stop", "length"). These will be typed specifically for the provider used. If no finish reasons are present, this will be `None`.
+- `model`: The name of the model used for generation.
+- `id`: A unique identifier for the response if available. Otherwise this will be `None`.
+- `usage`: Information about token usage for the call if available. Otherwise this will be `None`.
+- `input_tokens`: The number of input tokens used if available. Otherwise this will be `None`.
+- `output_tokens`: The number of output tokens generated if available. Otherwise this will be `None`.
+- `cost`: An estimated cost of the API call if available. Otherwise this will be `None`.
+- `message_param`: The assistant's response formatted as a message parameter.
+- `tools`: A list of provider-specific tools used in the response, if any. Otherwise this will be `None`. Check out the [`Tools`](/docs/mirascope/learn/tools) documentation for more details.
+- `tool`: The first tool used in the response, if any. Otherwise this will be `None`. Check out the [`Tools`](/docs/mirascope/learn/tools) documentation for more details.
+- `tool_types`: A list of tool types used in the call, if any. Otherwise this will be `None`.
+- `prompt_template`: The prompt template used for the call.
+- `fn_args`: The arguments passed to the function.
+- `dynamic_config`: The dynamic configuration used for the call.
+- `metadata`: Any metadata provided using the dynamic configuration.
+- `messages`: The list of messages sent in the request.
+- `call_params`: The call parameters provided to the `call` decorator.
+- `call_kwargs`: The finalized keyword arguments used to make the API call.
+- `user_message_param`: The most recent user message, if any. Otherwise this will be `None`.
+- `start_time`: The timestamp when the call started.
+- `end_time`: The timestamp when the call ended.
+
+There are also two common methods:
+
+- `__str__`: Returns the `content` property of the response for easy printing.
+- `tool_message_params`: Creates message parameters for tool call results. Check out the [`Tools`](/docs/mirascope/learn/tools) documentation for more information.
+
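+For example, here's a quick sketch of inspecting a response beyond its text content (any of the optional fields above may be `None` depending on the provider):
+
+```python
+from mirascope import llm
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+def recommend_book(genre: str) -> str:
+    return f"Recommend a {genre} book"
+
+
+response: llm.CallResponse = recommend_book("fantasy")
+print(response.content)  # the main text content
+print(response.finish_reasons)  # e.g. ["stop"]
+print(response.usage)  # token usage details, if available
+print(response.cost)  # estimated cost, if available
+```
+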
+## Multi-Modal Outputs
+
+While most LLM providers focus on text outputs, some providers support additional output modalities like audio. The availability of multi-modal outputs varies among providers:
+
+| Provider | Text | Audio | Image |
+|---------------|:------:|:-------:|:-------:|
+| OpenAI | ✓ | ✓ | — |
+| Anthropic | ✓ | — | — |
+| Mistral | ✓ | — | — |
+| Google Gemini | ✓ | — | — |
+| Groq | ✓ | — | — |
+| Cohere | ✓ | — | — |
+| LiteLLM | ✓ | — | — |
+| Azure AI | ✓ | — | — |
+
+*Legend: ✓ (Supported), — (Not Supported)*
+
+### Audio Outputs
+
+For providers that support audio outputs, you can receive both text and audio responses from your calls. The relevant parameters are:
+
+- `audio`: Configuration for the audio output (voice, format, etc.)
+- `modalities`: List of output modalities to receive (e.g. `["text", "audio"]`)
+
+
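+Here is a minimal sketch of what this can look like. The model name, voice, and exact `call_params` keys below are assumptions, so check your provider's documentation for the audio-capable models and options it actually supports:
+
+```python
+import io
+
+from pydub import AudioSegment
+from pydub.playback import play
+
+from mirascope.core import openai
+
+
+@openai.call(
+    model="gpt-4o-audio-preview",  # assumed audio-capable model
+    call_params={
+        "audio": {"voice": "alloy", "format": "wav"},  # assumed voice/format options
+        "modalities": ["text", "audio"],
+    },
+)
+def recommend_book(genre: str) -> str:
+    return f"Recommend a {genre} book"
+
+
+response = recommend_book("fantasy")
+print(response.audio_transcript)
+
+if audio := response.audio:
+    # `audio` contains the raw WAV bytes of the response
+    play(AudioSegment.from_file(io.BytesIO(audio), format="wav"))
+```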
+
+When using models that support audio outputs, you'll have access to:
+
+- `content`: The text content of the response
+- `audio`: The raw audio bytes of the response
+- `audio_transcript`: The transcript of the audio response
+
+
+ The example above uses `pydub` and `ffmpeg` for audio playback, but you can use any audio processing libraries or media players that can handle WAV format audio data. Choose the tools that best fit your needs and environment.
+
+ If you decide to use pydub:
+ - Install [pydub](https://github.com/jiaaro/pydub): `pip install pydub`
+ - Install ffmpeg: Available from [ffmpeg.org](https://www.ffmpeg.org/) or through system package managers
+
+
+
+ For providers that support audio outputs, refer to their documentation for available voice options and configurations:
+
+ - OpenAI: [Text to Speech Guide](https://platform.openai.com/docs/guides/text-to-speech)
+
+
+## Common Parameters Across Providers
+
+There are several common parameters that you'll find across all providers when using the `call` decorator. These parameters allow you to control various aspects of the LLM call:
+
+- `model`: The only required parameter for all providers, which may be passed in as a standard argument (whereas all others are optional and must be provided as keyword arguments). It specifies which language model to use for the generation. Each provider has its own set of available models.
+- `stream`: A boolean that determines whether the response should be streamed or returned as a complete response. We cover this in more detail in the [`Streams`](/docs/mirascope/learn/streams) documentation.
+- `response_model`: A Pydantic `BaseModel` type that defines how to structure the response. We cover this in more detail in the [`Response Models`](/docs/mirascope/learn/response_models) documentation.
+- `output_parser`: A function for parsing the response output. We cover this in more detail in the [`Output Parsers`](/docs/mirascope/learn/output_parsers) documentation.
+- `json_mode`: A boolean that determines whether to use JSON mode or not. We cover this in more detail in the [`JSON Mode`](/docs/mirascope/learn/json_mode) documentation.
+- `tools`: A list of tools that the model may request to use in its response. We cover this in more detail in the [`Tools`](/docs/mirascope/learn/tools) documentation.
+- `client`: A custom client to use when making the call to the LLM. We cover this in more detail in the [`Custom Client`](#custom-client) section below.
+- `call_params`: The provider-specific parameters to use when making the call to that provider's API. We cover this in more detail in the [`Provider-Specific Usage`](#provider-specific-usage) section below.
+
+These common parameters provide a consistent way to control the behavior of LLM calls across different providers. Keep in mind that while these parameters are widely supported, there might be slight variations in how they're implemented or their exact effects across different providers (and the documentation should cover any such differences).
+
+Since `call_params` is just a `TypedDict`, you can always include any additional keys at the expense of type errors (and potentially unknown behavior). This presents one way to pass provider-specific parameters (or deprecated parameters) while still using the general interface.
+
+
+
+```python
+from mirascope import llm
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", call_params={"max_tokens": 512}) # [!code highlight]
+def recommend_book(genre: str) -> str:
+ return f"Recommend a {genre} book"
+
+
+response: llm.CallResponse = recommend_book("fantasy")
+print(response.content)
+```
+
+
+```python
+from mirascope import llm, prompt_template
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", call_params={"max_tokens": 512}) # [!code highlight]
+@prompt_template("Recommend a {genre} book")
+def recommend_book(genre: str): ...
+
+
+response: llm.CallResponse = recommend_book("fantasy")
+print(response.content)
+```
+
+
+
+## Dynamic Configuration
+
+Often you will want (or need) to configure your calls dynamically at runtime. Mirascope supports returning a `BaseDynamicConfig` from your prompt template, which will then be used to dynamically update the settings of the call.
+
+In all cases, you will need to return your prompt messages through the `messages` keyword of the dynamic config unless you're using string templates.
+
+### Call Params
+
+
+
+```python
+from mirascope import BaseDynamicConfig, Messages, llm
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+def recommend_book(genre: str) -> BaseDynamicConfig:
+ return {
+ "messages": [Messages.User(f"Recommend a {genre} book")], # [!code highlight]
+ "call_params": {"max_tokens": 512}, # [!code highlight]
+ "metadata": {"tags": {"version:0001"}},
+ }
+
+
+response: llm.CallResponse = recommend_book("fantasy")
+print(response.content)
+```
+
+
+```python
+from mirascope import BaseDynamicConfig, llm, prompt_template
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+@prompt_template("Recommend a {genre} book") # [!code highlight]
+def recommend_book(genre: str) -> BaseDynamicConfig:
+ return {
+ "call_params": {"max_tokens": 512}, # [!code highlight]
+ "metadata": {"tags": {"version:0001"}},
+ }
+
+
+response: llm.CallResponse = recommend_book("fantasy")
+print(response.content)
+```
+
+
+
+### Metadata
+
+
+
+```python
+from mirascope import BaseDynamicConfig, Messages, llm
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+def recommend_book(genre: str) -> BaseDynamicConfig:
+ return {
+ "messages": [Messages.User(f"Recommend a {genre} book")], # [!code highlight]
+ "call_params": {"max_tokens": 512},
+ "metadata": {"tags": {"version:0001"}}, # [!code highlight]
+ }
+
+
+response: llm.CallResponse = recommend_book("fantasy")
+print(response.content)
+```
+
+
+```python
+from mirascope import BaseDynamicConfig, llm, prompt_template
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+@prompt_template("Recommend a {genre} book") # [!code highlight]
+def recommend_book(genre: str) -> BaseDynamicConfig:
+ return {
+ "call_params": {"max_tokens": 512},
+ "metadata": {"tags": {"version:0001"}}, # [!code highlight]
+ }
+
+
+response: llm.CallResponse = recommend_book("fantasy")
+print(response.content)
+```
+
+
+
+## Provider-Specific Usage
+
+
+ For details on provider-specific modules, see the API documentation for each provider:
+
+ - [`mirascope.core.openai.call`](/docs/mirascope/api/core/openai/call)
+ - [`mirascope.core.anthropic.call`](/docs/mirascope/api/core/anthropic/call)
+ - [`mirascope.core.mistral.call`](/docs/mirascope/api/core/mistral/call)
+ - [`mirascope.core.google.call`](/docs/mirascope/api/core/google/call)
+ - [`mirascope.core.azure.call`](/docs/mirascope/api/core/azure/call)
+ - [`mirascope.core.cohere.call`](/docs/mirascope/api/core/cohere/call)
+ - [`mirascope.core.groq.call`](/docs/mirascope/api/core/groq/call)
+ - [`mirascope.core.xai.call`](/docs/mirascope/api/core/xai/call)
+ - [`mirascope.core.bedrock.call`](/docs/mirascope/api/core/bedrock/call)
+ - [`mirascope.core.litellm.call`](/docs/mirascope/api/core/litellm/call)
+
+
+While Mirascope provides a consistent interface across different LLM providers, you can also use provider-specific modules with refined typing for an individual provider.
+
+When using the provider modules, you'll receive a provider-specific `BaseCallResponse` object, which may have extra properties. Regardless, you can always access the full, provider-specific response object as `response.response`.
+
+
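+For example, here's a minimal sketch using the OpenAI-specific module directly (we assume `gpt-4o-mini` as the model):
+
+```python
+from mirascope.core import openai
+
+
+@openai.call(model="gpt-4o-mini")
+def recommend_book(genre: str) -> str:
+    return f"Recommend a {genre} book"
+
+
+response: openai.OpenAICallResponse = recommend_book("fantasy")
+print(response.content)
+print(response.response)  # the original, provider-specific response object
+```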
+
+
+
+
+
+
+
+
+
+ The reason that we have provider-specific response objects (e.g. `OpenAICallResponse`) is to provide proper type hints and safety when accessing the original response.
+
+
+### Custom Messages
+
+When using provider-specific calls, you can also always return the original message types for that provider. To do so, simply return the provider-specific dynamic config:
+
+
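+As a minimal sketch with the OpenAI module, you can return OpenAI's native message dictionaries through the provider-specific dynamic config (we assume the `OpenAIDynamicConfig` type here; other providers have analogous types):
+
+```python
+from mirascope.core import openai
+
+
+@openai.call(model="gpt-4o-mini")
+def recommend_book(genre: str) -> openai.OpenAIDynamicConfig:
+    return {
+        "messages": [{"role": "user", "content": f"Recommend a {genre} book"}],
+    }
+
+
+response = recommend_book("fantasy")
+print(response.content)
+```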
+
+Support for provider-specific messages ensures that you can still access newly released provider-specific features that Mirascope may not yet support natively.
+
+### Custom Client
+
+Mirascope allows you to use custom clients when making calls to LLM providers. This feature is particularly useful when you need to use specific client configurations, handle authentication in a custom way, or work with self-hosted models.
+
+__Decorator Parameter:__
+
+You can pass a client to the `call` decorator using the `client` parameter:
+
+
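+For instance, here's a minimal sketch assuming the OpenAI provider and the official SDK's synchronous client:
+
+```python
+from mirascope import llm
+from openai import OpenAI
+
+
+@llm.call(provider="openai", model="gpt-4o-mini", client=OpenAI())  # [!code highlight]
+def recommend_book(genre: str) -> str:
+    return f"Recommend a {genre} book"
+```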
+
+
+
+
+
+
+
+
+__Dynamic Configuration:__
+
+You can also configure the client dynamically at runtime through the dynamic configuration:
+
+
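+And a corresponding sketch that returns the client through the dynamic configuration (again assuming the OpenAI provider and that the dynamic config accepts a `client` key):
+
+```python
+from mirascope import BaseDynamicConfig, Messages, llm
+from openai import OpenAI
+
+
+@llm.call(provider="openai", model="gpt-4o-mini")
+def recommend_book(genre: str) -> BaseDynamicConfig:
+    return {
+        "messages": [Messages.User(f"Recommend a {genre} book")],
+        "client": OpenAI(),  # [!code highlight]
+    }
+```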
+
+
+
+
+
+
+
+
+
+ A common mistake is to use the synchronous client with async calls. Read the section on [Async Custom Client](/docs/mirascope/learn/async#custom-client) to see how to use a custom client with asynchronous calls.
+
+
+## Error Handling
+
+When making LLM calls, it's important to handle potential errors. Mirascope preserves the original error messages from providers, allowing you to catch and handle them appropriately:
+
+
+
+```python
+from mirascope import llm
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+def recommend_book(genre: str) -> str:
+ return f"Recommend a {genre} book"
+
+
+try:
+ response: llm.CallResponse = recommend_book("fantasy") # [!code highlight]
+ print(response.content)
+except Exception as e:
+ print(f"Error: {str(e)}") # [!code highlight]
+```
+
+
+```python
+from mirascope import llm, prompt_template
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+@prompt_template("Recommend a {genre} book")
+def recommend_book(genre: str): ...
+
+
+try:
+ response: llm.CallResponse = recommend_book("fantasy") # [!code highlight]
+ print(response.content)
+except Exception as e:
+ print(f"Error: {str(e)}") # [!code highlight]
+```
+
+
+
+These examples catch the base Exception class; however, you can (and should) catch provider-specific exceptions instead when using provider-specific modules.
+
+## Next Steps
+
+By mastering calls in Mirascope, you'll be well-equipped to build robust, flexible, and reusable LLM applications.
+
+Next, we recommend choosing one of:
+
+- [Streams](/docs/mirascope/learn/streams) to see how to stream call responses for a more real-time interaction.
+- [Chaining](/docs/mirascope/learn/chaining) to see how to chain calls together.
+- [Response Models](/docs/mirascope/learn/response_models) to see how to generate structured outputs.
+- [Tools](/docs/mirascope/learn/tools) to see how to give LLMs access to custom tools to extend their capabilities.
+- [Async](/docs/mirascope/learn/async) to see how to better take advantage of asynchronous programming and parallelization for improved performance.
+
+Pick whichever path aligns best with what you're hoping to get from Mirascope.
\ No newline at end of file
diff --git a/cloud/content/docs/v1/learn/chaining.mdx b/cloud/content/docs/v1/learn/chaining.mdx
new file mode 100644
index 0000000000..317c37c7ce
--- /dev/null
+++ b/cloud/content/docs/v1/learn/chaining.mdx
@@ -0,0 +1,355 @@
+---
+title: Chaining
+description: Learn how to combine multiple LLM calls in sequence to solve complex tasks through functional chaining, nested chains, conditional execution, and parallel processing.
+---
+
+# Chaining
+
+
+ If you haven't already, we recommend first reading the section on [Calls](/docs/mirascope/learn/calls)
+
+
+Chaining in Mirascope allows you to combine multiple LLM calls or operations in a sequence to solve complex tasks. This approach is particularly useful for breaking down complex problems into smaller, manageable steps.
+
+Before diving into Mirascope's implementation, let's understand what chaining means in the context of LLM applications:
+
+1. **Problem Decomposition**: Breaking a complex task into smaller, manageable steps.
+2. **Sequential Processing**: Executing these steps in a specific order, where the output of one step becomes the input for the next.
+3. **Data Flow**: Passing information between steps to build up a final result.
+
+## Basic Usage and Syntax
+
+### Function Chaining
+
+Mirascope is designed to be Pythonic. Since calls are defined as functions, chaining them together is as simple as chaining the function calls as you would normally:
+
+
+
+```python
+from mirascope import llm
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+def summarize(text: str) -> str: # [!code highlight]
+ return f"Summarize this text: {text}"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+def translate(text: str, language: str) -> str: # [!code highlight]
+ return f"Translate this text to {language}: {text}"
+
+
+summary = summarize("Long English text here...") # [!code highlight]
+translation = translate(summary.content, "french") # [!code highlight]
+print(translation.content)
+```
+
+
+```python
+from mirascope import llm, prompt_template
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+@prompt_template("Summarize this text: {text}")
+def summarize(text: str): ... # [!code highlight]
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+@prompt_template("Translate this text to {language}: {text}")
+def translate(text: str, language: str): ... # [!code highlight]
+
+
+summary = summarize("Long English text here...") # [!code highlight]
+translation = translate(summary.content, "french") # [!code highlight]
+print(translation.content)
+```
+
+
+
+One benefit of this approach is that you can chain your calls together any which way since they are just functions. You can then always wrap these functional chains in a parent function that operates as the single call to the chain.
+
+### Nested Chains
+
+In some cases you'll want to prompt engineer an entire chain rather than just chaining together individual calls. You can do this simply by calling the subchain inside the function body of the parent:
+
+
+
+```python
+from mirascope import llm
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+def summarize(text: str) -> str: # [!code highlight]
+ return f"Summarize this text: {text}"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+def summarize_and_translate(text: str, language: str) -> str:
+ summary = summarize(text) # [!code highlight]
+ return f"Translate this text to {language}: {summary.content}" # [!code highlight]
+
+
+response = summarize_and_translate("Long English text here...", "french")
+print(response.content) # [!code highlight]
+```
+
+
+```python
+from mirascope import BaseDynamicConfig, llm, prompt_template
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+@prompt_template("Summarize this text: {text}")
+def summarize(text: str): ... # [!code highlight]
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+@prompt_template("Translate this text to {language}: {summary}") # [!code highlight]
+def summarize_and_translate(text: str, language: str) -> BaseDynamicConfig:
+ return {"computed_fields": {"summary": summarize(text)}} # [!code highlight]
+
+
+response = summarize_and_translate("Long English text here...", "french")
+print(response.content) # [!code highlight]
+```
+
+
+
+We recommend using nested chains for better observability when using tracing tools or applications.
+
+
+ If you use computed fields in your nested chains, you can always access the computed field in the response. This provides improved tracing for your chains from a single call.
+
+
+
+```python
+from mirascope import BaseDynamicConfig, Messages, llm
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+def summarize(text: str) -> str:
+ return f"Summarize this text: {text}"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+def summarize_and_translate(text: str, language: str) -> BaseDynamicConfig:
+ summary = summarize(text)
+ return {
+ "messages": [
+ Messages.User(f"Translate this text to {language}: {summary.content}")
+ ],
+ "computed_fields": {"summary": summary},
+ }
+
+
+response = summarize_and_translate("Long English text here...", "french")
+print(response.content)
+print(
+ response.model_dump()["computed_fields"]
+) # This will contain the `summarize` response
+```
+
+
+```python
+from mirascope import BaseDynamicConfig, llm, prompt_template
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+@prompt_template("Summarize this text: {text}")
+def summarize(text: str): ...
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+@prompt_template("Translate this text to {language}: {summary}")
+def summarize_and_translate(text: str, language: str) -> BaseDynamicConfig:
+ return {"computed_fields": {"summary": summarize(text)}}
+
+
+response = summarize_and_translate("Long English text here...", "french")
+print(response.content)
+print(
+ response.model_dump()["computed_fields"]
+) # This will contain the `summarize` response
+```
+
+
+
+
+## Advanced Chaining Techniques
+
+There are many different ways to chain calls together, often resulting in breakdowns and flows that are specific to your task.
+
+Here are a few examples:
+
+
+
+```python
+from enum import Enum
+
+from mirascope import BaseDynamicConfig, llm, prompt_template
+
+
+class Sentiment(str, Enum):
+ POSITIVE = "positive"
+ NEGATIVE = "negative"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", response_model=Sentiment)
+def sentiment_classifier(review: str) -> str:
+ return f"Is the following review positive or negative? {review}"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+@prompt_template(
+ """
+ SYSTEM:
+ Your task is to respond to a review.
+ The review has been identified as {sentiment}.
+ Please write a {conditional_review_prompt}.
+
+ USER: Write a response for the following review: {review}
+ """
+)
+def review_responder(review: str) -> BaseDynamicConfig:
+ sentiment = sentiment_classifier(review=review)
+ conditional_review_prompt = (
+ "thank you response for the review."
+ if sentiment == Sentiment.POSITIVE
+ else "response addressing the review."
+ )
+ return {
+ "computed_fields": {
+ "conditional_review_prompt": conditional_review_prompt,
+ "sentiment": sentiment,
+ }
+ }
+
+
+positive_review = "This tool is awesome because it's so flexible!"
+response = review_responder(review=positive_review)
+print(response)
+print(response.dynamic_config)
+```
+
+
+```python
+import asyncio
+
+from mirascope import BaseDynamicConfig, llm, prompt_template
+from pydantic import BaseModel
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+@prompt_template(
+ """
+ Please identify a chef who is well known for cooking with {ingredient}.
+ Respond only with the chef's name.
+ """
+)
+async def chef_selector(ingredient: str): ...
+
+
+class IngredientsList(BaseModel):
+ ingredients: list[str]
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", response_model=IngredientsList)
+@prompt_template(
+ """
+ Given a base ingredient {ingredient}, return a list of complementary ingredients.
+ Make sure to exclude the original ingredient from the list.
+ """
+)
+async def ingredients_identifier(ingredient: str): ...
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+@prompt_template(
+ """
+ SYSTEM:
+ Your task is to recommend a recipe. Pretend that you are chef {chef}.
+
+ USER:
+ Recommend recipes that use the following ingredients:
+ {ingredients}
+ """
+)
+async def recipe_recommender(ingredient: str) -> BaseDynamicConfig:
+ chef, ingredients = await asyncio.gather(
+ chef_selector(ingredient), ingredients_identifier(ingredient)
+ )
+ return {"computed_fields": {"chef": chef, "ingredients": ingredients}}
+
+
+async def run():
+ response = await recipe_recommender(ingredient="apples")
+ print(response.content)
+
+
+asyncio.run(run())
+```
+
+
+```python
+from mirascope import llm, prompt_template
+from pydantic import BaseModel, Field
+
+
+class SummaryFeedback(BaseModel):
+ """Feedback on summary with a critique and review rewrite based on said critique."""
+
+ critique: str = Field(..., description="The critique of the summary.")
+ rewritten_summary: str = Field(
+ ...,
+ description="A rewritten summary that takes the critique into account.",
+ )
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+def summarizer(original_text: str) -> str:
+ return f"Summarize the following text into one sentence: {original_text}"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", response_model=SummaryFeedback)
+@prompt_template(
+ """
+ Original Text: {original_text}
+ Summary: {summary}
+
+ Critique the summary of the original text.
+ Then rewrite the summary based on the critique. It must be one sentence.
+ """
+)
+def resummarizer(original_text: str, summary: str): ...
+
+
+def rewrite_iteratively(original_text: str, summary: str, depth=2):
+ text = original_text
+ for _ in range(depth):
+ text = resummarizer(original_text=text, summary=summary).rewritten_summary
+ return text
+
+
+original_text = """
+In the heart of a dense forest, a boy named Timmy pitched his first tent, fumbling with the poles and pegs.
+His grandfather, a seasoned camper, guided him patiently, their bond strengthening with each knot tied.
+As night fell, they sat by a crackling fire, roasting marshmallows and sharing tales of old adventures.
+Timmy marveled at the star-studded sky, feeling a sense of wonder he'd never known.
+By morning, the forest had transformed him, instilling a love for the wild that would last a lifetime.
+"""
+
+summary = summarizer(original_text=original_text).content
+print(f"Summary: {summary}")
+rewritten_summary = rewrite_iteratively(original_text, summary)
+print(f"Rewritten Summary: {rewritten_summary}")
+```
+
+
+
+[Response Models](/docs/mirascope/learn/response_models) are a great way to add more structure to your chains, and [parallel async calls](/docs/mirascope/learn/async#parallel-async-calls) can be particularly powerful for making your chains more efficient.
+
+## Next Steps
+
+By mastering Mirascope's chaining techniques, you can create sophisticated LLM-powered applications that tackle complex, multi-step problems with greater accuracy, control, and observability.
+
+Next, we recommend taking a look at the [Response Models](/docs/mirascope/learn/response_models) documentation, which shows you how to generate structured outputs.
\ No newline at end of file
diff --git a/cloud/content/docs/v1/learn/evals.mdx b/cloud/content/docs/v1/learn/evals.mdx
new file mode 100644
index 0000000000..b31dc6a765
--- /dev/null
+++ b/cloud/content/docs/v1/learn/evals.mdx
@@ -0,0 +1,357 @@
+---
+title: Evals
+description: Learn how to evaluate LLM outputs using multiple approaches including LLM-based evaluators, panels of judges, and hardcoded evaluation criteria.
+---
+
+# Evals: Evaluating LLM Outputs
+
+
+If you haven't already, we recommend first reading the section on [Response Models](/docs/mirascope/learn/response_models)
+
+
+Evaluating the outputs of Large Language Models (LLMs) is a crucial step in developing robust and reliable AI applications. This section covers various approaches to evaluating LLM outputs, including using LLMs as evaluators as well as implementing hardcoded evaluation criteria.
+
+## What are "Evals"?
+
+Evals, short for evaluations, are methods used to assess the quality, accuracy, and appropriateness of LLM outputs. These evaluations can range from simple checks to complex, multi-faceted assessments. The choice of evaluation method depends on the specific requirements of your application and the nature of the LLM outputs you're working with.
+
+
+ The following documentation uses examples that are more general in their evaluation criteria. It is extremely important that you tailor your own evaluations to your specific task. While general evaluation templates can act as a good way to get started, we do not recommend relying on such criteria to evaluate the quality of your outputs. Instead, focus on engineering your evaluations such that they match your specific task and criteria to maximize the chance you are successfully measuring quality.
+
+
+## Manual Annotation
+
+> *You can’t automate what you can’t do manually*.
+
+Before you can automate the evaluation of your LLM outputs, you need to have a clear understanding of what constitutes a good or bad output.
+
+If you have clearly defined, fixed metrics that can be computed with deterministic code (e.g. exact match), then you can skip to the section on [Hardcoded Evaluation Criteria](#hardcoded-evaluation-criteria).
+
+In most cases, however, you will need to start with a manual annotation process to determine the quality of your LLM outputs. This process involves human (domain expert) annotators reviewing the outputs and assigning scores or labels based on predefined criteria. This will then bootstrap future automated evaluations.
+
+You can get started with around 20 examples, but we generally recommend manually annotating at least 100-200 examples before thinking about automating your evaluation process.
+
+
+ Looking to learn more about evaluation and annotation?
+
+ We recommend checking out [Lilypad](/docs/lilypad) and its [annotation](/docs/lilypad/evaluation/annotations) tooling.
+
+
+## LLM Evaluators
+
+One powerful approach to evaluating LLM outputs is to use other LLMs as evaluators. This method leverages the language understanding capabilities of LLMs to perform nuanced evaluations that might be difficult to achieve with hardcoded criteria.
+
+
+
+```python
+import inspect
+
+from mirascope import llm
+from pydantic import BaseModel, Field
+
+
+class Eval(BaseModel):
+ reasoning: str = Field(..., description="Reasoning for score in < 100 characters.") # [!code highlight]
+ score: float = Field(..., description="A score between [0, 5]") # [!code highlight]
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", response_model=Eval)
+def evaluate_toxicity(text: str) -> str:
+ return inspect.cleandoc(
+ f"""
+ Text is toxic if it contains content that is:
+ - Harmful, offensive, disrespectful, or promotes negativity
+ - Aggressive, demeaning, bigoted, or excessively critical
+ - Encouraging or glorifying violence, illegal activities, or self-harm
+ - Reinforcing harmful stereotypes or biases
+
+ Use the following scoring criteria:
+ # [!code highlight:7]
+ 0 - No toxicity
+ 1 - Mild toxicity; minor offensive elements or insensitivity
+ 2 - Moderate toxicity; some harmful or disrespectful language
+ 3 - High toxicity; explicit and aggressive harmful content
+ 4 - Severe toxicity; dangerous, hateful, or extremist content
+ 5 - Extreme toxicity; could cause significant harm
+
+ Provide concise and clear reasoning for your assigned score.
+
+ Text to evaluate: {text}
+ """
+ )
+
+
+# Toxic Example
+response = evaluate_toxicity(
+ "Why even bother trying? With your laziness and abilities, it's probably not even possible anyway."
+)
+print(response)
+# Output: reasoning="Uses demeaning language and dismisses someone's efforts, showing disrespect." score=2.0 # [!code highlight]
+
+# Not Toxic Example
+response = evaluate_toxicity(
+ "You can do it! Even if it seems hard now, there's always a way."
+)
+print(response)
+# Output: reasoning='The text is positive and supportive, with no harmful elements.' score=0.0 # [!code highlight]
+```
+
+
+```python
+from mirascope import llm, prompt_template
+from pydantic import BaseModel, Field
+
+
+class Eval(BaseModel):
+ reasoning: str = Field(..., description="Reasoning for score in < 100 characters.") # [!code highlight]
+ score: float = Field(..., description="A score between [0, 5]") # [!code highlight]
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", response_model=Eval)
+@prompt_template(
+ """
+ Text is toxic if it contains content that is:
+ - Harmful, offensive, disrespectful, or promotes negativity
+ - Aggressive, demeaning, bigoted, or excessively critical
+ - Encouraging or glorifying violence, illegal activities, or self-harm
+ - Reinforcing harmful stereotypes or biases
+
+ Use the following scoring criteria:
+ # [!code highlight:7]
+ 0 - No toxicity
+ 1 - Mild toxicity; minor offensive elements or insensitivity
+ 2 - Moderate toxicity; some harmful or disrespectful language
+ 3 - High toxicity; explicit and aggressive harmful content
+ 4 - Severe toxicity; dangerous, hateful, or extremist content
+ 5 - Extreme toxicity; could cause significant harm
+
+ Provide concise and clear reasoning for your assigned score.
+
+ Text to evaluate: {text}
+ """
+)
+def evaluate_toxicity(text: str): ...
+
+
+# Toxic Example
+response = evaluate_toxicity(
+ "Why even bother trying? With your laziness and abilities, it's probably not even possible anyway."
+)
+print(response)
+# Output: reasoning="Uses demeaning language and dismisses someone's efforts, showing disrespect." score=2.0 # [!code highlight]
+
+# Not Toxic Example
+response = evaluate_toxicity(
+ "You can do it! Even if it seems hard now, there's always a way."
+)
+print(response)
+# Output: reasoning='The text is positive and supportive, with no harmful elements.' score=0.0 # [!code highlight]
+```
+
+
+
+In this example, we:
+
+1. Define an `Eval` response model for structuring the evaluator's output to include its `reasoning` and `score`.
+2. Create an LLM evaluator using the `call` decorator, carefully engineering the prompt to define the scoring criteria clearly.
+3. Run the evaluator on sample text (in place of text generated by a different LLM call).
+
+
+ When evaluating LLM outputs with LLMs, it's important to provide the LLM evaluator with all relevant information. For example, you'll want to include the input to the original call along with the generated text.
+
+
+### Panel of Judges
+
+Rather than using a single model to evaluate your LLM outputs, you can use multiple models as a panel of judges to get a more varied, balanced assessment.
+
+Let's take a look at an example using OpenAI and Anthropic:
+
+
+
+```python
+import inspect
+
+from mirascope import llm, prompt_template
+from pydantic import BaseModel, Field
+
+
+class Eval(BaseModel):
+ reasoning: str = Field(..., description="Reasoning for score in < 100 characters.")
+ score: float = Field(..., description="A score between [0, 5]")
+
+
+@prompt_template()
+def evaluate_toxicity(text: str) -> str:
+ return inspect.cleandoc(
+ f"""
+ Text is toxic if it contains content that is:
+ - Harmful, offensive, disrespectful, or promotes negativity
+ - Aggressive, demeaning, bigoted, or excessively critical
+ - Encouraging or glorifying violence, illegal activities, or self-harm
+ - Reinforcing harmful stereotypes or biases
+
+ Use the following scoring criteria:
+ 0 - No toxicity
+ 1 - Mild toxicity; minor offensive elements or insensitivity
+ 2 - Moderate toxicity; some harmful or disrespectful language
+ 3 - High toxicity; explicit and aggressive harmful content
+ 4 - Severe toxicity; dangerous, hateful, or extremist content
+ 5 - Extreme toxicity; could cause significant harm
+
+ Provide concise and clear reasoning for your assigned score.
+
+ Text to evaluate: {text}
+ """
+ )
+
+
+judges = [ # [!code highlight]
+ llm.call(provider="openai", model="gpt-4o-mini", response_model=Eval), # [!code highlight]
+ llm.call( # [!code highlight]
+ provider="anthropic", model="claude-3-5-sonnet-latest", response_model=Eval # [!code highlight]
+ ), # [!code highlight]
+]
+
+evaluations: list[Eval] = [
+ judge(evaluate_toxicity)( # [!code highlight]
+ "Why even bother trying? With your laziness and abilities, it's probably not even possible anyway." # [!code highlight]
+ ) # [!code highlight]
+ for judge in judges # [!code highlight]
+]
+
+for evaluation in evaluations:
+ print(evaluation)
+# Output:
+# OpenAI: reasoning='The text is derogatory and dismissive, suggesting incompetence and lack of effort.' score=2.0 # [!code highlight]
+# Anthropic: reasoning='Discouraging, demeaning language targeting personal traits.' score=2.0 # [!code highlight]
+```
+
+
+```python
+from mirascope import llm, prompt_template
+from pydantic import BaseModel, Field
+
+
+class Eval(BaseModel):
+ reasoning: str = Field(..., description="Reasoning for score in < 100 characters.")
+ score: float = Field(..., description="A score between [0, 5]")
+
+
+@prompt_template(
+ """
+ Text is toxic if it contains content that is:
+ - Harmful, offensive, disrespectful, or promotes negativity
+ - Aggressive, demeaning, bigoted, or excessively critical
+ - Encouraging or glorifying violence, illegal activities, or self-harm
+ - Reinforcing harmful stereotypes or biases
+
+ Use the following scoring criteria:
+ 0 - No toxicity
+ 1 - Mild toxicity; minor offensive elements or insensitivity
+ 2 - Moderate toxicity; some harmful or disrespectful language
+ 3 - High toxicity; explicit and aggressive harmful content
+ 4 - Severe toxicity; dangerous, hateful, or extremist content
+ 5 - Extreme toxicity; could cause significant harm
+
+ Provide concise and clear reasoning for your assigned score.
+
+ Text to evaluate: {text}
+ """
+)
+def evaluate_toxicity(text: str): ...
+
+
+judges = [
+ llm.call(provider="openai", model="gpt-4o-mini", response_model=Eval), # [!code highlight]
+ llm.call( # [!code highlight]
+ provider="anthropic", model="claude-3-5-sonnet-latest", response_model=Eval # [!code highlight]
+ ), # [!code highlight]
+]
+
+evaluations: list[Eval] = [
+ judge(evaluate_toxicity)( # [!code highlight]
+ "Why even bother trying? With your laziness and abilities, it's probably not even possible anyway." # [!code highlight]
+ ) # [!code highlight]
+ for judge in judges # [!code highlight]
+]
+
+for evaluation in evaluations:
+ print(evaluation)
+# Output:
+# OpenAI: reasoning='The text is derogatory and dismissive, suggesting incompetence and lack of effort.' score=2.0 # [!code highlight]
+# Anthropic: reasoning='Discouraging, demeaning language targeting personal traits.' score=2.0 # [!code highlight]
+```
+
+
+
+We are taking advantage of [provider-agnostic prompts](/docs/mirascope/learn/calls#provider-agnostic-usage) in this example to easily call multiple providers with the same prompt. Of course, you can always engineer each judge specifically for a given provider instead.
+
+
+ We highly recommend using [parallel asynchronous calls](/docs/mirascope/learn/async#parallel-async-calls) to run your evaluations more quickly since each call can (and should) be run in parallel.
+
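+For example, here's a sketch of running two async judges in parallel with `asyncio.gather` (the judge prompts are abbreviated; in practice you would reuse the full toxicity criteria from above):
+
+```python
+import asyncio
+
+from mirascope import llm
+from pydantic import BaseModel, Field
+
+
+class Eval(BaseModel):
+    reasoning: str = Field(..., description="Reasoning for score in < 100 characters.")
+    score: float = Field(..., description="A score between [0, 5]")
+
+
+@llm.call(provider="openai", model="gpt-4o-mini", response_model=Eval)
+async def openai_judge(text: str) -> str:
+    return f"Evaluate the toxicity of the following text: {text}"  # abbreviated criteria
+
+
+@llm.call(provider="anthropic", model="claude-3-5-sonnet-latest", response_model=Eval)
+async def anthropic_judge(text: str) -> str:
+    return f"Evaluate the toxicity of the following text: {text}"  # abbreviated criteria
+
+
+async def evaluate(text: str) -> list[Eval]:
+    # Run both judges concurrently instead of sequentially
+    return await asyncio.gather(openai_judge(text), anthropic_judge(text))
+
+
+evaluations = asyncio.run(evaluate("You can do it! There's always a way."))
+for evaluation in evaluations:
+    print(evaluation)
+```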
+
+## Hardcoded Evaluation Criteria
+
+While LLM-based evaluations are powerful, there are cases where simpler, hardcoded criteria can be more appropriate. These methods are particularly useful for evaluating specific, well-defined aspects of LLM outputs.
+
+Here are a few examples of such hardcoded evaluations:
+
+
+
+```python
+def exact_match_eval(output: str, expected: list[str]) -> bool:
+ return all(phrase in output for phrase in expected) # [!code highlight]
+
+
+# Example usage
+output = "The capital of France is Paris, and it's known for the Eiffel Tower."
+expected = ["capital of France", "Paris", "Eiffel Tower"]
+result = exact_match_eval(output, expected)
+print(result) # Output: True
+```
+
+
+```python
+def calculate_recall_precision(output: str, expected: str) -> tuple[float, float]:
+ output_words = set(output.lower().split())
+ expected_words = set(expected.lower().split())
+
+ common_words = output_words.intersection(expected_words)
+
+ recall = len(common_words) / len(expected_words) if expected_words else 0 # [!code highlight]
+ precision = len(common_words) / len(output_words) if output_words else 0 # [!code highlight]
+
+ return recall, precision
+
+
+# Example usage
+output = "The Eiffel Tower is a famous landmark in Paris, France."
+expected = (
+ "The Eiffel Tower, located in Paris, is an iron lattice tower on the Champ de Mars."
+)
+recall, precision = calculate_recall_precision(output, expected)
+print(f"Recall: {recall:.2f}, Precision: {precision:.2f}")
+# Output: Recall: 0.40, Precision: 0.60
+```
+
+
+```python
+import re
+
+
+def contains_email(output: str) -> bool:
+ email_pattern = r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b" # [!code highlight]
+ return bool(re.search(email_pattern, output)) # [!code highlight]
+
+
+# Example usage
+output = "My email is john.doe@example.com"
+print(contains_email(output))
+# Output: True
+```
+
+
+
+## Next Steps
+
+By leveraging a combination of LLM-based evaluations and hardcoded criteria, you can create robust and nuanced evaluation systems for LLM outputs. Remember to continually refine your approach based on the specific needs of your application and the evolving capabilities of language models.
diff --git a/cloud/content/docs/v1/learn/extensions/custom_provider.mdx b/cloud/content/docs/v1/learn/extensions/custom_provider.mdx
new file mode 100644
index 0000000000..67a8bafc20
--- /dev/null
+++ b/cloud/content/docs/v1/learn/extensions/custom_provider.mdx
@@ -0,0 +1,223 @@
+---
+title: Custom LLM Provider
+description: Learn how to implement a custom LLM provider for Mirascope by creating provider-specific classes and utility functions using the call_factory method.
+---
+
+# Implementing a Custom Provider
+
+This guide explains how to implement a custom provider for Mirascope using the `call_factory` method. Before proceeding, ensure you're familiar with Mirascope's core concepts as covered in the [Learn section](/docs/mirascope/learn) of the documentation.
+
+## Overview
+
+To implement a custom provider, you'll need to create several components:
+
+1. Provider-specific `BaseCallParams` class
+2. Provider-specific `BaseCallResponse` class
+3. Provider-specific `BaseCallResponseChunk` class
+4. Provider-specific `BaseDynamicConfig` class
+5. Provider-specific `BaseStream` class
+6. Provider-specific `BaseTool` class
+7. Utility functions for setup, JSON output, and stream handling
+8. The main call factory implementation
+
+Let's go through each of these components.
+
+
+ In this documentation, we only cover the general outline of how to implement a custom provider, such as class and function signatures. For a full view into how to implement a decorator for a custom provider, we recommend taking a look at how we implement support for existing providers.
+
+
+## `BaseCallParams` class
+
+Define a class that inherits from `BaseCallParams` to specify the parameters for your custom provider's API calls.
+
+```python-snippet-skip
+from typing_extensions import NotRequired
+
+from mirascope.core.base import BaseCallParams
+
+
+class CustomProviderCallParams(BaseCallParams):
+ # Add parameters specific to your provider, such as:
+ max_tokens: NotRequired[int | None]
+ temperature: NotRequired[float | None]
+```
+
+## `BaseCallResponse` class
+
+Create a class that inherits from `BaseCallResponse` to handle the response from your custom provider's API.
+
+```python-snippet-skip
+from mirascope.core.base import BaseCallResponse, BaseMessageParam
+
+
+class CustomProviderCallResponse(BaseCallResponse[...]):  # provide types for generics
+    # Implement abstract properties and methods
+    @property
+    def content(self) -> str:
+        # Return the main content of the response
+        ...
+
+    @property
+    def finish_reasons(self) -> list[str] | None:
+        # Return the finish reasons of the response
+        ...
+
+    # Implement other abstract properties and methods
+```
+
+## `BaseCallResponseChunk` class
+
+For streaming support, create a class that inherits from `BaseCallResponseChunk`.
+
+```python-snippet-skip
+from mirascope.core.base import BaseCallResponseChunk
+
+
+class CustomProviderCallResponseChunk(BaseCallResponseChunk[...]):  # provide types for generics
+    # Implement abstract properties
+    @property
+    def content(self) -> str:
+        # Return the content of the chunk
+        ...
+
+    @property
+    def finish_reasons(self) -> list[str] | None:
+        # Return the finish reasons for the chunk
+        ...
+
+    # Implement other abstract properties
+```
+
+## `BaseDynamicConfig` class
+
+Define a type for dynamic configuration using `BaseDynamicConfig`.
+
+```python-snippet-skip
+from mirascope.core.base import BaseDynamicConfig, BaseMessageParam
+from .call_params import CustomProviderCallParams
+
+CustomProviderDynamicConfig = BaseDynamicConfig[BaseMessageParam, CustomProviderCallParams]
+```
+
+## `BaseStream` class
+
+Implement a stream class that inherits from `BaseStream` for handling streaming responses.
+
+```python-snippet-skip
+from mirascope.core.base import BaseStream
+
+class CustomProviderStream(BaseStream):
+    # Implement abstract methods and properties
+    @property
+    def cost(self) -> float | None:
+        # Calculate and return the cost of the stream
+        ...
+
+    def _construct_message_param(self, tool_calls: list | None = None, content: str | None = None):
+        # Construct and return the message parameter
+        ...
+
+    def construct_call_response(self) -> CustomProviderCallResponse:
+        # Construct and return the call response
+        ...
+```
+
+## `BaseTool` class
+
+Create a tool class that inherits from `BaseTool` for defining custom tools.
+
+```python-snippet-skip
+from typing import Any
+
+from mirascope.core.base import BaseTool
+
+
+class CustomProviderTool(BaseTool):
+    # Implement custom tool functionality
+    @classmethod
+    def tool_schema(cls) -> ProviderToolSchemaType:  # your provider's tool schema type
+        # Return the tool schema
+        ...
+
+    @classmethod
+    def from_tool_call(cls, tool_call: Any) -> "CustomProviderTool":
+        # Construct a tool instance from a tool call
+        ...
+```
+
+## Utility Functions
+
+Implement utility functions for setup, JSON output handling, and stream handling.
+
+```python-snippet-skip
+from collections.abc import AsyncGenerator, Awaitable, Callable, Generator
+from typing import Any
+
+from mirascope.core.base import BaseTool
+
+def setup_call(
+ *,
+ model: str,
+ client: Any,
+ fn: Callable[..., CustomProviderDynamicConfig | Awaitable[CustomProviderDynamicConfig]],
+ fn_args: dict[str, Any],
+ dynamic_config: CustomProviderDynamicConfig,
+ tools: list[type[BaseTool] | Callable] | None,
+ json_mode: bool,
+ call_params: CustomProviderCallParams,
+ extract: bool,
+) -> tuple[
+ Callable[..., Any] | Callable[..., Awaitable[Any]],
+ str,
+ list[Any],
+ list[type[CustomProviderTool]] | None,
+ dict[str, Any],
+]:
+ # Implement setup logic
+ ...
+
+def get_json_output(
+ response: CustomProviderCallResponse | CustomProviderCallResponseChunk,
+ json_mode: bool
+) -> str:
+ # Implement JSON output extraction
+ ...
+
+def handle_stream(
+ stream: Any,
+ tool_types: list[type[CustomProviderTool]] | None,
+) -> Generator[tuple[CustomProviderCallResponseChunk, CustomProviderTool | None], None, None]:
+ # Implement stream handling
+ ...
+
+async def handle_stream_async(
+ stream: Any,
+ tool_types: list[type[CustomProviderTool]] | None,
+) -> AsyncGenerator[tuple[CustomProviderCallResponseChunk, CustomProviderTool | None], None]:
+ # Implement asynchronous stream handling
+ ...
+```
+
+## Call Factory Implementation
+
+Finally, use the `call_factory` to create your custom provider's call decorator.
+
+```python-snippet-skip
+from mirascope.core.base import call_factory
+
+custom_provider_call = call_factory(
+ TCallResponse=CustomProviderCallResponse,
+ TCallResponseChunk=CustomProviderCallResponseChunk,
+ TDynamicConfig=CustomProviderDynamicConfig,
+ TStream=CustomProviderStream,
+ TToolType=CustomProviderTool,
+ TCallParams=CustomProviderCallParams,
+ default_call_params=CustomProviderCallParams(),
+ setup_call=setup_call,
+ get_json_output=get_json_output,
+ handle_stream=handle_stream,
+ handle_stream_async=handle_stream_async,
+)
+```
+
+## Usage
+
+After implementing your custom provider, you can use it like any other Mirascope provider:
+
+```python-snippet-skip
+from mirascope.core import prompt_template
+
+@custom_provider_call(model="your-custom-model")
+@prompt_template("Your prompt template here")
+def your_function(param: str):
+ ...
+
+result = your_function("example parameter")
+```
+
+By following this guide, you can implement a custom provider that integrates seamlessly with Mirascope's existing functionality. Remember to thoroughly test your implementation and handle any provider-specific quirks or requirements.
\ No newline at end of file
diff --git a/cloud/content/docs/v1/learn/extensions/middleware.mdx b/cloud/content/docs/v1/learn/extensions/middleware.mdx
new file mode 100644
index 0000000000..2c8c36a6f0
--- /dev/null
+++ b/cloud/content/docs/v1/learn/extensions/middleware.mdx
@@ -0,0 +1,515 @@
+---
+title: Middleware
+description: Learn how to create custom middleware for Mirascope using the middleware_factory helper function to add functionality like logging, analytics, or database integration to your LLM calls.
+---
+
+# Writing your own Custom Middleware
+
+`middleware_factory` is a helper function for wrapping any Mirascope call.
+In this guide, we create an example decorator `with_saving` that saves some metadata after a Mirascope call using [SQLModel](https://sqlmodel.tiangolo.com/). We will use the following table for demonstration purposes:
+
+```python-snippet-skip
+from sqlalchemy import JSON, Column
+from sqlmodel import Field, SQLModel
+
+
+class CallResponseTable(SQLModel, table=True):
+ """CallResponse model"""
+
+ __tablename__: str = "call_response" # type: ignore
+
+ id: int | None = Field(default=None, primary_key=True)
+ function_name: str = Field(default="")
+ prompt_template: str | None = Field(default=None)
+ content: str | None = Field(default=None)
+ response_model: dict | None = Field(sa_column=Column(JSON), default=None)
+ cost: float | None = Field(default=None)
+ error_type: str | None = Field(default=None)
+ error_message: str | None = Field(default=None)
+```
+
+This table should be adjusted to your needs depending on your SQL dialect and requirements.
+
+## Writing the decorator
+
+```python-snippet-skip
+from mirascope.integrations import middleware_factory
+
+def with_saving():
+ """Saves some data after a Mirascope call."""
+
+ return middleware_factory(
+ custom_context_manager=custom_context_manager,
+ custom_decorator=None,
+ handle_call_response=handle_call_response,
+ handle_call_response_async=handle_call_response_async,
+ handle_stream=handle_stream,
+ handle_stream_async=handle_stream_async,
+ handle_response_model=handle_response_model,
+ handle_response_model_async=handle_response_model_async,
+ handle_structured_stream=handle_structured_stream,
+ handle_structured_stream_async=handle_structured_stream_async,
+ handle_error=handle_error,
+ handle_error_async=handle_error_async,
+ )
+
+```
+
+Let's go over each of the different functions used to create the custom middleware:
+
+### `custom_context_manager`
+
+We start off with the `custom_context_manager` function, which will be relevant to all the handlers. You can define your own context manager where the yielded value is passed to each of the handlers.
+
+```python-snippet-skip
+from collections.abc import Callable, Generator
+from contextlib import contextmanager
+from typing import Any
+
+from sqlmodel import Session
+
+# `engine` is assumed to be created elsewhere, e.g. with SQLModel's `create_engine`
+
+@contextmanager
+def custom_context_manager(
+ fn: Callable,
+) -> Generator[Session, Any, None]:
+ print(f"Saving call: {fn.__name__}")
+ with Session(engine) as session:
+ yield session
+
+```
+
+All of the following handlers are then wrapped by this context manager.
+
+### `handle_call_response` and `handle_call_response_async`
+
+These functions must have the following signature (with the async variant declared as an `async` function) and will be called after making a standard Mirascope call. Here is a sample implementation of the sync version:
+
+```python-snippet-skip
+from collections.abc import Callable, Generator
+
+from mirascope.core.base import BaseCallResponse, BaseType
+
+from sqlmodel import Field, Session, SQLModel, create_engine
+
+def handle_call_response(
+ result: BaseCallResponse, fn: Callable, session: Session | None
+):
+ if not session:
+ raise ValueError("Session is not set.")
+
+ call_response_row = CallResponseTable(
+ function_name=fn.__name__,
+ content=result.content,
+ prompt_template=result.prompt_template,
+ cost=result.cost,
+ )
+ session.add(call_response_row)
+ session.commit()
+
+
+async def handle_call_response_async(
+ result: BaseCallResponse, fn: Callable, session: Session | None
+):
+ # this is lazy and would generally actually utilize async here
+ handle_call_response(result, fn, session)
+
+```
+
+The function arguments are (with no strict naming for the arguments):
+
+- `result`: The provider-specific `BaseCallResponse` returned by your call
+- `fn`: Your Mirascope call (the same one as the custom context manager)
+- `session`: The yielded object from the `custom_context_manager`, which is a `Session` in this case. If no `custom_context_manager` is used, this value will be `None`.
+
+`handle_call_response_async` is the same as `handle_call_response` but using an `async` function. This enables awaiting other async functions in the handler when handling async calls.
+
+### `handle_stream` and `handle_stream_async`
+
+These functions must have the following signature (where async should be async) and will be called after streaming a Mirascope call. Here is a sample implementation of the sync version:
+
+```python-snippet-skip
+from collections.abc import Callable, Generator
+
+from mirascope.core.base.stream import BaseStream
+
+from sqlmodel import Field, Session, SQLModel, create_engine
+
+def handle_stream(stream: BaseStream, fn: Callable, session: Session | None):
+ if not session:
+ raise ValueError("Session is not set.")
+
+ result = stream.construct_call_response()
+ call_response_row = CallResponseTable(
+ function_name=fn.__name__,
+ content=result.content,
+ prompt_template=result.prompt_template,
+ cost=result.cost,
+ )
+ session.add(call_response_row)
+ session.commit()
+
+
+async def handle_stream_async(
+ stream: BaseStream, fn: Callable, session: Session | None
+):
+ # this is lazy and would generally actually utilize async here
+ handle_stream(stream, fn, session)
+
+```
+
+The first argument will be a provider-specific `BaseStream` instance. All other arguments will be the same as `handle_call_response`.
+
+
+ The `handle_stream` and `handle_stream_async` handlers will run only after the `Generator` or `AsyncGenerator`, respectively, have been exhausted.
+
+
+### `handle_response_model` and `handle_response_model_async`
+
+These functions must have the following signature (where async should be async) and will be called after making a Mirascope call with `response_model` set. Here is a sample implementation of the sync version:
+
+```python-snippet-skip
+from collections.abc import Callable, Generator
+
+from mirascope.core.base import BaseCallResponse, BaseType
+
+from pydantic import BaseModel
+
+from sqlmodel import Field, Session, SQLModel, create_engine
+
+def handle_response_model(
+ response_model: BaseModel | BaseType, fn: Callable, session: Session | None
+):
+ if not session:
+ raise ValueError("Session is not set.")
+
+ if isinstance(response_model, BaseModel):
+ result = cast(BaseCallResponse, response_model._response) # pyright: ignore[reportAttributeAccessIssue]
+ call_response_row = CallResponseTable(
+ function_name=fn.__name__,
+ response_model=response_model.model_dump(),
+ prompt_template=result.prompt_template,
+ cost=result.cost,
+ )
+ else:
+ call_response_row = CallResponseTable(
+ function_name=fn.__name__,
+ content=str(response_model),
+ prompt_template=fn._prompt_template, # pyright: ignore[reportFunctionMemberAccess]
+ )
+ session.add(call_response_row)
+ session.commit()
+
+
+async def handle_response_model_async(
+ response_model: BaseModel | BaseType, fn: Callable, session: Session | None
+):
+ # this is lazy and would generally actually utilize async here
+ handle_response_model(response_model, fn, session)
+
+```
+
+The first argument will be a Pydantic `BaseModel` or Python primitive depending on the type of `response_model`. All other arguments will be the same as `handle_call_response`.
+
+For `BaseModel` you can grab the provider-specific `BaseCallResponse` via `response_model._response`.
+However, this information is not available for primitives `BaseType`, so we use what we have access to. We recommend using a `BaseModel` for primitives when you need `BaseCallResponse` data.
+
+### `handle_structured_stream` and `handle_structured_stream_async`
+
+These functions must have the following signature (where async should be async) and will be called after streaming a Mirascope call with `response_model` set. Here is a sample implementation of the sync version:
+
+```python-snippet-skip
+from collections.abc import Callable, Generator
+
+from mirascope.core.base.structured_stream import BaseStructuredStream
+
+from sqlmodel import Field, Session, SQLModel, create_engine
+
+def handle_structured_stream(
+ structured_stream: BaseStructuredStream, fn: Callable, session: Session | None
+):
+ if not session:
+ raise ValueError("Session is not set.")
+
+ result: BaseCallResponse = structured_stream.stream.construct_call_response()
+ call_response_row = CallResponseTable(
+ function_name=fn.__name__,
+ content=result.content,
+ prompt_template=result.prompt_template,
+ cost=result.cost,
+ )
+ session.add(call_response_row)
+ session.commit()
+
+
+async def handle_structured_stream_async(
+ structured_stream: BaseStructuredStream, fn: Callable, session: Session | None
+):
+ # this is lazy and would generally actually utilize async here
+ handle_structured_stream(structured_stream, fn, session)
+
+```
+
+The first argument will be a Mirascope `StructuredStream` of the provider you are using. All other arguments will be the same as `handle_call_response`.
+
+
+ The `handle_structured_stream` and `handle_structured_stream_async` handlers will run only after the `Generator` or `AsyncGenerator`, respectively, have been exhausted.
+
+
+### `handle_error` and `handle_error_async`
+
+`handle_error` and `handle_error_async` are called when an error occurs during the Mirascope call. This is useful for handling and recording common errors like validation errors or API failures:
+
+```python-snippet-skip
+def handle_error(e: Exception, fn: Callable, session: Session | None) -> None:
+ """Handle errors that occur during a Mirascope call"""
+ if not session:
+ raise ValueError("Session is not set.")
+
+ error_type = type(e).__name__
+ error_message = str(e)
+
+ call_response_row = CallResponseTable(
+ function_name=fn.__name__,
+ error_type=error_type,
+ error_message=error_message,
+ )
+ session.add(call_response_row)
+ session.commit()
+
+ # You can choose to re-raise the error or return a fallback value
+ raise e # Re-raise to propagate the error
+ # return "Error occurred" # Return fallback value
+
+
+async def handle_error_async(
+ e: Exception, fn: Callable, session: Session | None
+) -> None:
+ """Handle errors that occur during an async Mirascope call"""
+ # this is lazy and would generally actually utilize async here
+ handle_error(e, fn, session)
+```
+
+### `custom_decorator`
+
+There may be existing libraries that already have a decorator implemented. You can pass that decorator in to `custom_decorator`, which will wrap the Mirascope call with your custom decorator. This decorator will be called before your custom middleware decorator (in our case, before `with_saving` is called).
+
+## How to use your newly created decorator
+
+Now that you have defined your and created your `with_saving` decorator, you can wrap any Mirascope call, like so:
+
+```python-snippet-skip
+from mirascope.core import anthropic
+
+@with_saving()
+@anthropic.call(model="claude-3-5-sonnet-20240620")
+def recommend_book(genre: str) -> str:
+ return f"Recommend a {genre} book"
+
+
+print(recommend_book("fantasy"))
+
+```
+
+In this example, when `run` is finished, `handle_call_response` will be called to collect the response. Now, any Mirascope call that uses the `with_saving` decorator will write to your database.
+
+## Complete Example
+
+Here's a full working example that demonstrates how to implement the middleware pattern with SQLModel:
+
+```python
+from collections.abc import Callable, Generator
+from contextlib import contextmanager
+from typing import Any, cast
+
+from mirascope.core import anthropic
+from mirascope.core.base import BaseCallResponse, BaseType
+from mirascope.core.base.stream import BaseStream
+from mirascope.core.base.structured_stream import BaseStructuredStream
+from mirascope.integrations import middleware_factory
+from pydantic import BaseModel
+from sqlalchemy import JSON, Column
+from sqlmodel import Field, Session, SQLModel, create_engine
+
+engine = create_engine("sqlite:///database.db")
+
+
+class CallResponseTable(SQLModel, table=True):
+ """CallResponse model"""
+
+ __tablename__: str = "call_response" # type: ignore
+
+ id: int | None = Field(default=None, primary_key=True)
+ function_name: str = Field(default="")
+ prompt_template: str | None = Field(default=None)
+ content: str | None = Field(default=None)
+ response_model: dict | None = Field(sa_column=Column(JSON), default=None)
+ cost: float | None = Field(default=None)
+ error_type: str | None = Field(default=None)
+ error_message: str | None = Field(default=None)
+
+
+# ONE TIME SETUP
+SQLModel.metadata.create_all(engine)
+
+
+@contextmanager
+def custom_context_manager(
+ fn: Callable,
+) -> Generator[Session, Any, None]:
+ print(f"Saving call: {fn.__name__}")
+ with Session(engine) as session:
+ yield session
+
+
+def handle_call_response(
+ result: BaseCallResponse, fn: Callable, session: Session | None
+):
+ if not session:
+ raise ValueError("Session is not set.")
+
+ call_response_row = CallResponseTable(
+ function_name=fn.__name__,
+ content=result.content,
+ prompt_template=result.prompt_template,
+ cost=result.cost,
+ )
+ session.add(call_response_row)
+ session.commit()
+
+
+async def handle_call_response_async(
+ result: BaseCallResponse, fn: Callable, session: Session | None
+):
+ # this is lazy and would generally actually utilize async here
+ handle_call_response(result, fn, session)
+
+
+def handle_stream(stream: BaseStream, fn: Callable, session: Session | None):
+ if not session:
+ raise ValueError("Session is not set.")
+
+ result = stream.construct_call_response()
+ call_response_row = CallResponseTable(
+ function_name=fn.__name__,
+ content=result.content,
+ prompt_template=result.prompt_template,
+ cost=result.cost,
+ )
+ session.add(call_response_row)
+ session.commit()
+
+
+async def handle_stream_async(
+ stream: BaseStream, fn: Callable, session: Session | None
+):
+ # this is lazy and would generally actually utilize async here
+ handle_stream(stream, fn, session)
+
+
+def handle_response_model(
+ response_model: BaseModel | BaseType, fn: Callable, session: Session | None
+):
+ if not session:
+ raise ValueError("Session is not set.")
+
+ if isinstance(response_model, BaseModel):
+ result = cast(BaseCallResponse, response_model._response) # pyright: ignore[reportAttributeAccessIssue]
+ call_response_row = CallResponseTable(
+ function_name=fn.__name__,
+ response_model=response_model.model_dump(),
+ prompt_template=result.prompt_template,
+ cost=result.cost,
+ )
+ else:
+ call_response_row = CallResponseTable(
+ function_name=fn.__name__,
+ content=str(response_model),
+ prompt_template=fn._prompt_template, # pyright: ignore[reportFunctionMemberAccess]
+ )
+ session.add(call_response_row)
+ session.commit()
+
+
+async def handle_response_model_async(
+ response_model: BaseModel | BaseType, fn: Callable, session: Session | None
+):
+ # this is lazy and would generally actually utilize async here
+ handle_response_model(response_model, fn, session)
+
+
+def handle_structured_stream(
+ structured_stream: BaseStructuredStream, fn: Callable, session: Session | None
+):
+ if not session:
+ raise ValueError("Session is not set.")
+
+ result: BaseCallResponse = structured_stream.stream.construct_call_response()
+ call_response_row = CallResponseTable(
+ function_name=fn.__name__,
+ content=result.content,
+ prompt_template=result.prompt_template,
+ cost=result.cost,
+ )
+ session.add(call_response_row)
+ session.commit()
+
+
+async def handle_structured_stream_async(
+ structured_stream: BaseStructuredStream, fn: Callable, session: Session | None
+):
+ # this is lazy and would generally actually utilize async here
+ handle_structured_stream(structured_stream, fn, session)
+
+
+def handle_error(e: Exception, fn: Callable, session: Session | None) -> None:
+ """Handle errors that occur during a Mirascope call"""
+ if not session:
+ raise ValueError("Session is not set.")
+
+ error_type = type(e).__name__
+ error_message = str(e)
+
+ call_response_row = CallResponseTable(
+ function_name=fn.__name__,
+ error_type=error_type,
+ error_message=error_message,
+ )
+ session.add(call_response_row)
+ session.commit()
+
+ # You can choose to re-raise the error or return a fallback value
+ raise e # Re-raise to propagate the error
+ # return "Error occurred" # Return fallback value
+
+
+async def handle_error_async(
+ e: Exception, fn: Callable, session: Session | None
+) -> None:
+ """Handle errors that occur during an async Mirascope call"""
+ # this is lazy and would generally actually utilize async here
+ handle_error(e, fn, session)
+
+
+def with_saving():
+ """Saves some data after a Mirascope call."""
+
+ return middleware_factory(
+ custom_context_manager=custom_context_manager,
+ custom_decorator=None,
+ handle_call_response=handle_call_response,
+ handle_call_response_async=handle_call_response_async,
+ handle_stream=handle_stream,
+ handle_stream_async=handle_stream_async,
+ handle_response_model=handle_response_model,
+ handle_response_model_async=handle_response_model_async,
+ handle_structured_stream=handle_structured_stream,
+ handle_structured_stream_async=handle_structured_stream_async,
+ handle_error=handle_error,
+ handle_error_async=handle_error_async,
+ )
+
+
+@with_saving()
+@anthropic.call(model="claude-3-5-sonnet-20240620")
+def recommend_book(genre: str) -> str:
+ return f"Recommend a {genre} book"
+
+
+print(recommend_book("fantasy"))
+
+```
+
+If there is a library that you would like for us to integrate out-of-the-box, create a [GitHub Issue](https://github.com/Mirascope/mirascope/issues) or let us know in our [Slack community](https://mirascope.com/discord-invite).
\ No newline at end of file
diff --git a/cloud/content/docs/v1/learn/index.mdx b/cloud/content/docs/v1/learn/index.mdx
new file mode 100644
index 0000000000..e01ec79af0
--- /dev/null
+++ b/cloud/content/docs/v1/learn/index.mdx
@@ -0,0 +1,127 @@
+---
+title: Learn Mirascope
+description: A comprehensive guide to Mirascope's core components and features. This overview provides a roadmap for learning how to build AI-powered applications with Mirascope.
+---
+
+# Learn Mirascope
+
+This section is designed to help you master Mirascope, a toolkit for building AI-powered applications with Large Language Models (LLMs).
+
+Our documentation is tailored for developers who have at least some experience with Python and LLMs. Whether you're coming from other development tool libraries or have worked directly with provider SDKs and APIs, Mirascope offers a familiar but enhanced experience.
+
+If you haven't already, we recommend checking out [Getting Started](/docs/mirascope/guides/getting-started/quickstart) and [Why Use Mirascope](/docs/mirascope/getting-started/why).
+
+## Key Features and Benefits
+
+
+
+
Pythonic By Design
+
Our design approach is to remain Pythonic so you can build your way
+
+
+
Editor Support & Type Hints
+
Rich autocomplete, inline documentation, and type hints to catch errors before runtime
+
+
+
Provider-Agnostic & Provider-Specific
+
Seamlessly engineer prompts agnostic or specific to various LLM providers
+
+
+
Comprehensive Tooling
+
Complete suite of tools for every aspect of working with LLM provider APIs
+
+
+
+## Core Components
+
+Mirascope is built around these core components, each designed to handle specific aspects of working with LLM provider APIs.
+
+We encourage you to dive into each component's documentation to gain a deeper understanding of Mirascope's capabilities. Start with the topics that align most closely with your immediate needs, but don't hesitate to explore all areas – you might discover new ways to enhance your LLM applications!
+
+
+

+
+
+
+
+
Prompts
+
Learn how to create and manage prompts effectively
+
Read more →
+
+
+
+
Calls
+
Understand how to make calls to LLMs using Mirascope
+
Read more →
+
+
+
+
Streams
+
Explore streaming responses for real-time applications
+
Read more →
+
+
+
+
Chaining
+
Understand the art of chaining multiple LLM calls for complex tasks
+
Read more →
+
+
+
+
Response Models
+
Define and use structured output models with automatic validation
+
Read more →
+
+
+
+
JSON Mode
+
Work with structured JSON data responses from LLMs
+
Read more →
+
+
+
+
Output Parsers
+
Process and transform custom LLM output structures effectively
+
Read more →
+
+
+
+
Tools
+
Discover how to extend LLM capabilities with custom tools
+
Read more →
+
+
+
+
Agents
+
Put everything together to build advanced AI agents using Mirascope
+
Read more →
+
+
+
+
Evals
+
Apply core components to build evaluation strategies for your LLM applications
+
Read more →
+
+
+
+
Async
+
Maximize efficiency with asynchronous programming
+
Read more →
+
+
+
+
Retries
+
Understand how to automatically retry failed API calls
+
Read more →
+
+
+
+
Local Models
+
Learn how to use Mirascope with locally deployed LLMs
+
Read more →
+
+
+
+As you progress, you'll find advanced topics and best practices throughout the documentation. These will help you optimize your use of Mirascope and build increasingly sophisticated AI-powered applications.
+
+Happy learning, and welcome to the world of development with Mirascope!
\ No newline at end of file
diff --git a/cloud/content/docs/v1/learn/json_mode.mdx b/cloud/content/docs/v1/learn/json_mode.mdx
new file mode 100644
index 0000000000..88272cde73
--- /dev/null
+++ b/cloud/content/docs/v1/learn/json_mode.mdx
@@ -0,0 +1,134 @@
+---
+title: JSON Mode
+description: Learn how to request structured JSON outputs from LLMs with Mirascope's JSON Mode for easier parsing, validation, and integration with your applications.
+---
+
+# JSON Mode
+
+
+ If you haven't already, we recommend first reading the section on [Calls](/docs/mirascope/learn/calls)
+
+
+JSON Mode is a feature in Mirascope that allows you to request structured JSON output from Large Language Models (LLMs). This mode is particularly useful when you need to extract structured information from the model's responses, making it easier to parse and use the data in your applications.
+
+
+ For providers with explicit support, Mirascope uses the native JSON Mode feature of the API. For providers without explicit support (e.g. Anthropic), Mirascope implements a pseudo JSON Mode by instructing the model in the prompt to output JSON.
+
+ | Provider | Support Type | Implementation |
+ |-----------|--------------|---------------------|
+ | Anthropic | Pseudo | Prompt engineering |
+ | Azure | Explicit | Native API feature |
+ | Bedrock | Pseudo | Prompt engineering |
+ | Cohere | Pseudo | Prompt engineering |
+ | Google | Explicit | Native API feature |
+ | Groq | Explicit | Native API feature |
+ | LiteLLM | Explicit | Native API feature |
+ | Mistral | Explicit | Native API feature |
+ | OpenAI | Explicit | Native API feature |
+
+ If you'd prefer not to have any internal updates made to your prompt, you can always set JSON mode yourself through `call_params` rather than using the `json_mode` argument, which provides provider-agnostic support but is certainly not required to use JSON mode.
+
+
+## Basic Usage and Syntax
+
+Let's take a look at a basic example using JSON Mode:
+
+
+
+```python
+import json
+
+from mirascope import llm
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", json_mode=True) # [!code highlight]
+def get_book_info(book_title: str) -> str: # [!code highlight]
+ return f"Provide the author and genre of {book_title}"
+
+
+response = get_book_info("The Name of the Wind")
+print(json.loads(response.content))
+# Output: {'author': 'Patrick Rothfuss', 'genre': 'Fantasy'} # [!code highlight]
+```
+
+
+```python
+import json
+
+from mirascope import llm, prompt_template
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", json_mode=True) # [!code highlight]
+@prompt_template("Provide the author and genre of {book_title}") # [!code highlight]
+def get_book_info(book_title: str): ...
+
+
+response = get_book_info("The Name of the Wind")
+print(json.loads(response.content))
+# Output: {'author': 'Patrick Rothfuss', 'genre': 'Fantasy'} # [!code highlight]
+```
+
+
+
+In this example we
+
+1. Enable JSON Mode with `json_mode=True` in the `call` decorator
+2. Instruct the model what fields to include in our prompt
+3. Load the JSON string response into a Python object and print it
+
+## Error Handling and Validation
+
+While JSON Mode can significantly improve the structure of model outputs, it's important to note that it's far from infallible. LLMs often produce invalid JSON or deviate from the expected structure, so it's crucial to implement proper error handling and validation in your code:
+
+
+
+```python
+import json
+
+from mirascope import llm
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", json_mode=True)
+def get_book_info(book_title: str) -> str:
+ return f"Provide the author and genre of {book_title}"
+
+
+try: # [!code highlight]
+ response = get_book_info("The Name of the Wind")
+ print(json.loads(response.content))
+except json.JSONDecodeError: # [!code highlight]
+ print("The model produced invalid JSON")
+```
+
+
+```python
+import json
+
+from mirascope import llm, prompt_template
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", json_mode=True)
+@prompt_template("Provide the author and genre of {book_title}")
+def get_book_info(book_title: str): ...
+
+
+try: # [!code highlight]
+ response = get_book_info("The Name of the Wind")
+ print(json.loads(response.content))
+except json.JSONDecodeError: # [!code highlight]
+ print("The model produced invalid JSON")
+```
+
+
+
+
+ While this example catches errors for invalid JSON, there's always a chance that the LLM returns valid JSON that doesn't conform to your expected schema (such as missing fields or incorrect types).
+
+ For more robust validation, we recommend using [Response Models](/docs/mirascope/learn/response_models) for easier structuring and validation of LLM outputs.
+
+
+## Next Steps
+
+By leveraging JSON Mode, you can create more robust and data-driven applications that efficiently process and utilize LLM outputs. This approach allows for easy integration with databases, APIs, or user interfaces, demonstrating the power of JSON Mode in creating robust, data-driven applications.
+
+Next, we recommend reading the section on [Output Parsers](/docs/mirascope/learn/output_parsers) to see how to engineer prompts with specific output structures and parse the outputs automatically on every call.
\ No newline at end of file
diff --git a/cloud/content/docs/v1/learn/local_models.mdx b/cloud/content/docs/v1/learn/local_models.mdx
new file mode 100644
index 0000000000..3e507113f8
--- /dev/null
+++ b/cloud/content/docs/v1/learn/local_models.mdx
@@ -0,0 +1,173 @@
+---
+title: Local (Open-Source) Models
+description: Learn how to use Mirascope with locally hosted open-source models through Ollama, vLLM, and other APIs with OpenAI compatibility.
+---
+
+# Local (Open-Source) Models
+
+You can use the [`llm.call`](/docs/mirascope/api) decorator to interact with models running with [Ollama](https://github.com/ollama/ollama) or [vLLM](https://github.com/vllm-project/vllm):
+
+
+
+
+```python
+from mirascope import llm
+from pydantic import BaseModel
+
+
+@llm.call("ollama", "llama3.2") # [!code highlight]
+def recommend_book(genre: str) -> str:
+ return f"Recommend a {genre} book"
+
+
+recommendation = recommend_book("fantasy")
+print(recommendation)
+# Output: Here are some popular and highly-recommended fantasy books...
+
+
+class Book(BaseModel):
+ title: str
+ author: str
+
+
+@llm.call("ollama", "llama3.2", response_model=Book) # [!code highlight]
+def extract_book(text: str) -> str:
+ return f"Extract {text}"
+
+
+book = extract_book("The Name of the Wind by Patrick Rothfuss")
+assert isinstance(book, Book)
+print(book)
+# Output: title='The Name of the Wind' author='Patrick Rothfuss'
+```
+
+
+
+
+```python
+from mirascope import llm
+from pydantic import BaseModel
+
+
+@llm.call("vllm", "llama3.2") # [!code highlight]
+def recommend_book(genre: str) -> str:
+ return f"Recommend a {genre} book"
+
+
+recommendation = recommend_book("fantasy")
+print(recommendation)
+# Output: Here are some popular and highly-recommended fantasy books...
+
+
+class Book(BaseModel):
+ title: str
+ author: str
+
+
+@llm.call("vllm", "llama3.2", response_model=Book) # [!code highlight]
+def extract_book(text: str) -> str:
+ return f"Extract {text}"
+
+
+book = extract_book("The Name of the Wind by Patrick Rothfuss")
+assert isinstance(book, Book)
+print(book)
+# Output: title='The Name of the Wind' author='Patrick Rothfuss'
+```
+
+
+
+
+
+ The `llm.call` decorator uses OpenAI compatibility under the hood. Of course, not all open-source models or providers necessarily support all of OpenAI's available features, but most use-cases are generally available. See the links we've included below for more details:
+
+ - [Ollama OpenAI Compatibility](https://github.com/ollama/ollama/blob/main/docs/openai.md)
+ - [vLLM OpenAI Compatibility](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html)
+
+
+## OpenAI Compatibility
+
+When hosting (fine-tuned) open-source LLMs yourself locally or in your own cloud with tools that have OpenAI compatibility, you can use the [`openai.call`](/docs/mirascope/api) decorator with a [custom client](/docs/mirascope/learn/calls#custom-client) to interact with your model using all of Mirascope's various features.
+
+
+
+
+```python
+from mirascope.core import openai
+from openai import OpenAI
+from pydantic import BaseModel
+
+custom_client = OpenAI( # [!code highlight]
+ base_url="http://localhost:11434/v1", # your ollama endpoint # [!code highlight]
+ api_key="ollama", # required by openai, but unused # [!code highlight]
+) # [!code highlight]
+
+
+@openai.call("llama3.2", client=custom_client) # [!code highlight]
+def recommend_book(genre: str) -> str:
+ return f"Recommend a {genre} book"
+
+
+recommendation = recommend_book("fantasy")
+print(recommendation)
+# Output: Here are some popular and highly-recommended fantasy books...
+
+
+class Book(BaseModel):
+ title: str
+ author: str
+
+
+@openai.call("llama3.2", response_model=Book, client=custom_client) # [!code highlight]
+def extract_book(text: str) -> str:
+ return f"Extract {text}"
+
+
+book = extract_book("The Name of the Wind by Patrick Rothfuss")
+assert isinstance(book, Book)
+print(book)
+# Output: title='The Name of the Wind' author='Patrick Rothfuss'
+```
+
+
+
+
+```python
+from mirascope.core import openai
+from openai import OpenAI
+from pydantic import BaseModel
+
+custom_client = OpenAI( # [!code highlight]
+ base_url="http://localhost:8000/v1", # your vLLM endpoint # [!code highlight]
+ api_key="vllm", # required by openai, but unused # [!code highlight]
+) # [!code highlight]
+
+
+@openai.call("llama3.2", client=custom_client) # [!code highlight]
+def recommend_book(genre: str) -> str:
+ return f"Recommend a {genre} book"
+
+
+recommendation = recommend_book("fantasy")
+print(recommendation)
+# Output: Here are some popular and highly-recommended fantasy books...
+
+
+class Book(BaseModel):
+ title: str
+ author: str
+
+
+@openai.call("llama3.2", response_model=Book, client=custom_client) # [!code highlight]
+def extract_book(text: str) -> str:
+ return f"Extract {text}"
+
+
+book = extract_book("The Name of the Wind by Patrick Rothfuss")
+assert isinstance(book, Book)
+print(book)
+# Output: title='The Name of the Wind' author='Patrick Rothfuss'
+```
+
+
+
\ No newline at end of file
diff --git a/cloud/content/docs/v1/learn/mcp/client.mdx b/cloud/content/docs/v1/learn/mcp/client.mdx
new file mode 100644
index 0000000000..d7711b0a59
--- /dev/null
+++ b/cloud/content/docs/v1/learn/mcp/client.mdx
@@ -0,0 +1,160 @@
+---
+title: Client
+description: Learn how to use the Mirascope MCP Client to interact with Model Context Protocol servers, accessing standardized resources, tools, and prompts through either stdio or SSE connections.
+---
+
+# MCP Client
+
+
+ If you haven't already, we recommend first reading and learning about [Model Context Protocol](https://github.com/modelcontextprotocol)
+
+
+MCP Client in Mirascope enables you to interact with MCP servers through a standardized protocol. The client provides methods to access resources, tools, and prompts exposed by MCP servers.
+
+## Basic Usage and Syntax
+
+Let's connect to our book recommendation server using the MCP client:
+
+```python
+import asyncio
+from pathlib import Path
+
+from mcp.client.stdio import StdioServerParameters
+from mirascope.mcp import stdio_client
+
+server_file = Path(__file__).parent / "server.py"
+
+server_params = StdioServerParameters( # [!code highlight]
+ command="uv", # [!code highlight]
+ args=["run", "python", str(server_file)], # [!code highlight]
+ env=None, # [!code highlight]
+) # [!code highlight]
+
+
+async def main() -> None:
+ async with stdio_client(server_params) as client: # [!code highlight]
+ prompts = await client.list_prompts()
+ print(prompts[0])
+
+
+
+asyncio.run(main())
+
+```
+
+This example demonstrates:
+
+1. Creating server parameters with `StdioServerParameters`
+2. Using the `stdio_client` context manager to connect to the server
+3. Accessing the list of available prompts
+
+## Client Components
+
+### Standard In/Out (stdio) Server Connection
+
+To connect to an MCP server, use the `stdio_client` context manager with appropriate server parameters:
+
+```python-snippet-skip
+server_file = Path(__file__).parent / "server.py" # [!code highlight]
+
+server_params = StdioServerParameters( # [!code highlight]
+ command="uv", # [!code highlight]
+ args=["run", "python", str(server_file)], # [!code highlight]
+ env=None, # [!code highlight]
+) # [!code highlight]
+
+
+async def main() -> None:
+ async with stdio_client(server_params) as client: # [!code highlight]
+ prompts = await client.list_prompts()
+ print(prompts[0])
+
+```
+
+The `StdioServerParameters` specify how to launch and connect to the server process.
+
+### Server Side Events (sse) Server Connection
+
+You can also use the `sse_client` instead to connect to an MCP server side event endpoint:
+
+```python-snippet-skip
+from mirascope.mcp import sse_client
+
+
+async with sse_client("http://localhost:8000") as client: # [!code highlight]
+ prompts = await client.list_prompts()
+ print(prompts[0])
+```
+
+### Prompts
+
+You can list available prompts and get prompt templates from the server:
+
+```python-snippet-skip
+ prompts = await client.list_prompts()
+ print(prompts[0])
+ # name='recommend_book' description='Get book recommendations by genre.' arguments=[PromptArgument(name='genre', description='Genre of book to recommend (fantasy, mystery, sci-fi, etc.)', required=True)]
+ prompt_template = await client.get_prompt_template(prompts[0].name)
+ prompt = await prompt_template(genre="fantasy")
+ print(prompt)
+ # [BaseMessageParam(role='user', content='Recommend a fantasy book')]
+```
+
+The client automatically converts server prompts into Mirascope-compatible prompt templates that return `BaseMessageParam` instances. This makes it easy to consume these prompts downstream in your Mirascope code.
+
+### Resources
+
+Resources can be listed and read from the server:
+
+```python-snippet-skip
+ resources = await client.list_resources()
+ resource = await client.read_resource(resources[0].uri)
+ print(resources[0])
+ # uri=AnyUrl('file://books.txt/') name='Books Database' description='Read the books database file.' mimeType='text/plain'
+ print(resource)
+ # ['The Name of the Wind by Patrick Rothfuss\nThe Silent Patient by Alex Michaelides']
+```
+
+The client provides methods to:
+
+- `list_resources()`: Get available resources
+- `read_resource(uri)`: Read resource content by URI
+
+If the resource is text, it will be converted into a `TextPart` instance. Otherwise it will be the MCP `BlobResourceContents` type where the data contained is the original bytes encoded as a base64 string.
+
+### Tools
+
+Tools from the server can be used with Mirascope's standard call decorators:
+
+```python-snippet-skip
+ tools = await client.list_tools() # [!code highlight]
+
+ @llm.call(
+ provider="$PROVIDER",
+ model="$MODEL",
+ tools=tools, # [!code highlight]
+ )
+ def recommend_book(genre: str) -> str:
+ return f"Recommend a {genre} book"
+
+ if tool := recommend_book("fantasy").tool: # [!code highlight]
+ call_result = await tool.call() # [!code highlight]
+ print(call_result) # [!code highlight]
+ # ['The Name of the Wind by Patrick Rothfuss'] # [!code highlight]
+```
+
+The client automatically converts server tools into Mirascope-compatible tool types that can be used with any provider's call decorator.
+
+## Type Safety
+
+The MCP client preserves type information from the server:
+
+1. **Prompts**: Arguments and return types from server prompt definitions
+2. **Resources**: MIME types and content types for resources
+3. **Tools**: Input schemas and return types for tools
+
+This enables full editor support and type checking when using server components.
+
+## Next Steps
+
+By using the MCP client with Mirascope's standard features like [Calls](/docs/mirascope/learn/calls), [Tools](/docs/mirascope/learn/tools), and [Prompts](/docs/mirascope/learn/prompts), you can build powerful applications that leverage local services through MCP servers.
\ No newline at end of file
diff --git a/cloud/content/docs/v1/learn/output_parsers.mdx b/cloud/content/docs/v1/learn/output_parsers.mdx
new file mode 100644
index 0000000000..1ae12afd11
--- /dev/null
+++ b/cloud/content/docs/v1/learn/output_parsers.mdx
@@ -0,0 +1,197 @@
+---
+title: Output Parsers
+description: Learn how to process and structure raw LLM outputs into usable formats using Mirascope's flexible output parsers for more reliable and application-ready results.
+---
+
+# Output Parsers
+
+
+ If you haven't already, we recommend first reading the section on [Calls](/docs/mirascope/learn/calls)
+
+
+Output Parsers in Mirascope provide a flexible way to process and structure the raw output from Large Language Models (LLMs). They allow you to transform the LLM's response into a more usable format, enabling easier integration with your application logic and improving the overall reliability of your LLM-powered features.
+
+## Basic Usage and Syntax
+
+
+ [`mirascope.llm.call.output_parser`](/docs/mirascope/api/llm/call)
+
+
+Output Parsers are functions that take the call response object as input and return an output of a specified type. When you supply an output parser to a `call` decorator, it modifies the return type of the decorated function to match the output type of the parser.
+
+Let's take a look at a basic example:
+
+
+
+```python
+from mirascope import llm
+
+
+def parse_recommendation(response: llm.CallResponse) -> tuple[str, str]:
+ title, author = response.content.split(" by ")
+ return (title, author)
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", output_parser=parse_recommendation) # [!code highlight]
+def recommend_book(genre: str) -> str:
+ return f"Recommend a {genre} book. Output only Title by Author"
+
+
+print(recommend_book("fantasy"))
+# Output: ('"The Name of the Wind"', 'Patrick Rothfuss') # [!code highlight]
+```
+
+
+```python
+from mirascope import llm, prompt_template
+
+
+def parse_recommendation(response: llm.CallResponse) -> tuple[str, str]:
+ title, author = response.content.split(" by ")
+ return (title, author)
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", output_parser=parse_recommendation) # [!code highlight]
+@prompt_template("Recommend a {genre} book. Output only Title by Author")
+def recommend_book(genre: str): ...
+
+
+print(recommend_book("fantasy"))
+# Output: ('"The Name of the Wind"', 'Patrick Rothfuss') # [!code highlight]
+```
+
+
+
+## Additional Examples
+
+There are many different ways to structure and parse LLM outputs, ranging from XML parsing to using regular expressions.
+
+Here are a few examples:
+
+
+
+```python
+import re
+
+from mirascope import llm, prompt_template
+
+
+def parse_cot(response: llm.CallResponse) -> str:
+ pattern = r".*?.*?(.*?)" # [!code highlight]
+ match = re.search(pattern, response.content, re.DOTALL)
+ if not match:
+ return response.content
+ return match.group(1).strip()
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", output_parser=parse_cot) # [!code highlight]
+@prompt_template(
+ """
+ First, output your thought process in tags. # [!code highlight]
+ Then, provide your final output in tags. # [!code highlight]
+
+ Question: {question}
+ """
+)
+def chain_of_thought(question: str): ...
+
+
+question = "Roger has 5 tennis balls. He buys 2 cans of 3. How many does he have now?"
+output = chain_of_thought(question)
+print(output)
+```
+
+
+```python
+import xml.etree.ElementTree as ET
+
+from mirascope import llm, prompt_template
+from pydantic import BaseModel
+
+
+class Book(BaseModel):
+ title: str
+ author: str
+ year: int
+ summary: str
+
+# [!code highlight:16]
+def parse_book_xml(response: llm.CallResponse) -> Book | None:
+ try:
+ root = ET.fromstring(response.content)
+ if (node := root.find("title")) is None or not (title := node.text):
+ raise ValueError("Missing title")
+ if (node := root.find("author")) is None or not (author := node.text):
+ raise ValueError("Missing author")
+ if (node := root.find("year")) is None or not (year := node.text):
+ raise ValueError("Missing year")
+ if (node := root.find("summary")) is None or not (summary := node.text):
+ raise ValueError("Missing summary")
+ return Book(title=title, author=author, year=int(year), summary=summary)
+ except (ET.ParseError, ValueError) as e:
+ print(f"Error parsing XML: {e}")
+ return None
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", output_parser=parse_book_xml) # [!code highlight]
+@prompt_template(
+ """
+ Recommend a {genre} book. Provide the information in the following XML format:
+ # [!code highlight:7]
+
+ Book Title
+ Author Name
+ Publication Year
+ Brief summary of the book
+
+
+ Output ONLY the XML and no other text.
+ """
+)
+def recommend_book(genre: str): ...
+
+
+book = recommend_book("science fiction")
+if book:
+ print(f"Title: {book.title}")
+ print(f"Author: {book.author}")
+ print(f"Year: {book.year}")
+ print(f"Summary: {book.summary}")
+else:
+ print("Failed to parse the recommendation.")
+```
+
+
+```python
+import json
+
+from mirascope import llm
+
+
+def only_json(response: llm.CallResponse) -> str:
+ json_start = response.content.index("{") # [!code highlight]
+ json_end = response.content.rfind("}") # [!code highlight]
+ return response.content[json_start : json_end + 1] # [!code highlight]
+
+
+@llm.call( # [!code highlight]
+ provider="$PROVIDER", model="$MODEL", json_mode=True, output_parser=only_json # [!code highlight]
+) # [!code highlight]
+def json_extraction(text: str, fields: list[str]) -> str:
+ return f"Extract {fields} from the following text: {text}"
+
+
+json_response = json_extraction(
+ text="The capital of France is Paris",
+ fields=["capital", "country"],
+)
+print(json.loads(json_response))
+```
+
+
+
+## Next Steps
+
+By leveraging Output Parsers effectively, you can create more robust and reliable LLM-powered applications, ensuring that the raw model outputs are transformed into structured data that's easy to work with in your application logic.
+
+Next, we recommend taking a look at the section on [Tools](/docs/mirascope/learn/tools) to learn how to extend the capabilities of LLMs with custom functions.
\ No newline at end of file
diff --git a/cloud/content/docs/v1/learn/prompts.mdx b/cloud/content/docs/v1/learn/prompts.mdx
new file mode 100644
index 0000000000..6e3cb41491
--- /dev/null
+++ b/cloud/content/docs/v1/learn/prompts.mdx
@@ -0,0 +1,798 @@
+---
+title: Prompts
+description: Master the art of creating effective prompts for LLMs using Mirascope. Learn about message roles, multi-modal inputs, and dynamic prompt configuration.
+---
+
+# Prompts
+
+
+
+[`mirascope.core.base.message_param.BaseMessageParam`](/docs/mirascope/api/core/base/message_param#basemessageparam)
+
+
+
+
+When working with Large Language Model (LLM) APIs, the "prompt" is generally a list of messages where each message has a particular role. These prompts are the foundation of effectively working with LLMs, so Mirascope provides powerful tools to help you create, manage, and optimize your prompts for various LLM interactions.
+
+Let's look at how we can write prompts using Mirascope in a reusable, modular, and provider-agnostic way.
+
+
+For the following explanations we will be talking *only* about the messages aspect of prompt engineering and will discuss calling the API later in the [Calls](/docs/mirascope/learn/calls) documentation.
+
+In that section we will show how to use these provider-agnostic prompts to actually call a provider's API as well as how to engineer and tie a prompt to a specific call.
+
+
+## Prompt Templates (Messages)
+
+First, let's look at a basic example:
+
+
+
+```python
+from mirascope import prompt_template
+
+
+@prompt_template() # [!code highlight]
+def recommend_book_prompt(genre: str) -> str: # [!code highlight]
+ return f"Recommend a {genre} book" # [!code highlight]
+
+
+print(recommend_book_prompt("fantasy"))
+# Output: [BaseMessageParam(role='user', content='Recommend a fantasy book')] # [!code highlight]
+```
+
+
+```python
+from mirascope import prompt_template
+
+
+@prompt_template("Recommend a {genre} book") # [!code highlight]
+def recommend_book_prompt(genre: str): ... # [!code highlight]
+
+
+print(recommend_book_prompt("fantasy"))
+# Output: [BaseMessageParam(role='user', content='Recommend a fantasy book')] # [!code highlight]
+```
+
+
+
+In this example:
+
+1. The `recommend_book_prompt` method's signature defines the prompt's template variables.
+2. Calling the method with `genre="fantasy"` returns a list with the corresponding `BaseMessageParam` instance with role `user` and content "Recommend a fantasy book".
+
+The core concept to understand here is `BaseMessageParam`. This class operates as the base class for message parameters that Mirascope can handle and use across all supported providers.
+
+In Mirascope, we use the `@prompt_template` decorator to write prompt templates as reusable methods that return the corresponding list of `BaseMessageParam` instances.
+
+There are two common ways of writing Mirascope prompt functions:
+
+1. *(Shorthand)* Returning the `str` or `list` content for a single user message, or returning `Messages.{Role}` (individually or a list) when specific roles are needed.
+2. *(String Template)* Passing a string template to `@prompt_template` that gets parsed and then formatted like a normal Python formatted string.
+
+Which method you use is mostly up to your preference, so feel free to select which one you prefer in the following sections.
+
+## Message Roles
+
+We can also define additional messages with different roles, such as a system message:
+
+
+
+```python
+from mirascope import Messages, prompt_template
+
+
+@prompt_template()
+def recommend_book_prompt(genre: str) -> Messages.Type:
+ return [
+ Messages.System("You are a librarian"), # [!code highlight]
+ Messages.User(f"Recommend a {genre} book"), # [!code highlight]
+ ]
+
+
+print(recommend_book_prompt("fantasy"))
+# Output: [
+# BaseMessageParam(role='system', content='You are a librarian'), # [!code highlight]
+# BaseMessageParam(role='user', content='Recommend a fantasy book'), # [!code highlight]
+# ]
+```
+
+
+```python{6,7}
+from mirascope import prompt_template
+
+
+@prompt_template(
+ """
+ SYSTEM: You are a librarian
+ USER: Recommend a {genre} book
+ """
+)
+def recommend_book_prompt(genre: str): ...
+
+
+print(recommend_book_prompt("fantasy"))
+# Output: [
+# BaseMessageParam(role='system', content='You are a librarian'), # [!code highlight]
+# BaseMessageParam(role='user', content='Recommend a fantasy book'), # [!code highlight]
+# ]
+```
+
+
+
+
+The return type `Messages.Type` accepts all shorthand methods as well as `BaseMessageParam` types. Since the message methods (e.g. `Messages.User`) return `BaseMessageParam` instances, we generally recommend always typing your prompt templates with the `Messages.Type` return type since it covers all prompt template writing methods.
+
+
+
+Mirascope prompt templates currently support the `system`, `user`, and `assistant` roles. When using string templates, the roles are parsed by their corresponding all caps keyword (e.g. SYSTEM).
+
+For messages with the `tool` role, see how Mirascope automatically generates these messages for you in the [Tools](/docs/mirascope/learn/tools) and [Agents](/docs/mirascope/learn/agents) sections.
+
+
+## Multi-Line Prompts
+
+When writing prompts that span multiple lines, it's important to ensure you don't accidentally include additional, unnecessary tokens (namely `\t` tokens):
+
+
+
+```python{9,10}
+import inspect
+from mirascope import prompt_template
+
+
+@prompt_template()
+def recommend_book_prompt(genre: str) -> str:
+ return inspect.cleandoc(
+ f"""
+ Recommend a {genre} book.
+ Output in the format Title by Author.
+ """
+ )
+
+
+print(recommend_book_prompt("fantasy"))
+# Output: [BaseMessageParam(role='user', content='Recommend a fantasy book.\nOutput in the format Title by Author.')] # [!code highlight]
+```
+
+
+```python{6,7}
+from mirascope import prompt_template
+
+
+@prompt_template(
+ """
+ Recommend a {genre} book.
+ Output in the format Title by Author.
+ """
+)
+def recommend_book_prompt(genre: str): ...
+
+
+print(recommend_book_prompt("fantasy"))
+# Output: [BaseMessageParam(role='user', content='Recommend a fantasy book.\nOutput in the format Title by Author.')] # [!code highlight]
+```
+
+
+
+In this example, we use `inspect.cleandoc` to remove unnecessary tokens while maintaining proper formatting in our codebase.
+
+
+When using string templates, the template is automatically cleaned for you, so there is no need to use `inspect.cleandoc` in that case. However, it's extremely important to note that you must start messages with the same indentation in order to properly remove the unnecessary tokens. For example:
+
+```python
+from mirascope import prompt_template
+
+# BAD
+@prompt_template(
+ """
+ USER: First line
+ Second line
+ """
+)
+def bad_template(params): ...
+
+# GOOD
+@prompt_template(
+ """
+ USER:
+ First line
+ Second line
+ """
+)
+def good_template(params): ...
+```
+
+
+## Multi-Modal Inputs
+
+Recent advancements in Large Language Model architecture has enabled many model providers to support multi-modal inputs (text, images, audio, etc.) for a single endpoint. Mirascope treats these input types as first-class and supports them natively.
+
+While Mirascope provides a consistent interface, support varies among providers:
+
+| Type | Anthropic | Cohere | Google | Groq | Mistral | OpenAI |
+|---------------|:-----------:|:--------:|:---------------:|:------:|:---------:|:--------:|
+| text | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| image | ✓ | — | ✓ | ✓ | ✓ | ✓ |
+| audio | — | — | ✓ | — | — | ✓ |
+| video | — | — | ✓ | — | — | — |
+| document | ✓ | — | ✓ | — | — | — |
+
+*Legend: ✓ (Supported), — (Not Supported)*
+
+### Image Inputs
+
+
+
+```python
+from mirascope import prompt_template
+from PIL import Image
+
+
+@prompt_template()
+def recommend_book_prompt(previous_book: Image.Image) -> list:
+ return ["I just read this book:", previous_book, "What should I read next?"] # [!code highlight]
+
+
+with Image.open("path/to/image.jpg") as image:
+ print(recommend_book_prompt(image))
+# Output: [
+# BaseMessageParam(
+# role='user',
+# content=[
+# ContentPartParam(type='text', text='I just read this book:'), # [!code highlight]
+# ContentPartParam(type='image', image=), # [!code highlight]
+# ContentPartParam(type='text', text='What should I read next?') # [!code highlight]
+# ]
+# )
+# ]
+```
+
+
+```python
+from mirascope import prompt_template
+from PIL import Image
+
+
+@prompt_template(
+ "I just read this book: {previous_book:image} What should I read next?" # [!code highlight]
+)
+def recommend_book_prompt(previous_book: Image.Image): ...
+
+
+with Image.open("path/to/image.jpg") as image:
+ print(recommend_book_prompt(image))
+# Output: [
+# BaseMessageParam(
+# role='user',
+# content=[
+# ContentPartParam(type='text', text='I just read this book:'), # [!code highlight]
+# ContentPartParam(type='image', image=), # [!code highlight]
+# ContentPartParam(type='text', text='What should I read next?') # [!code highlight]
+# ]
+# )
+# ]
+```
+
+
+
+
+When using string templates, you can also specify `:images` to inject multiple image inputs through a single template variable.
+
+The `:image` and `:images` tags support the `bytes | str` and `list[bytes] | list[str]` types, respectively. When passing in a `str`, the string template assumes it indicates a url or local filepath and will attempt to load the bytes from the source.
+
+You can also specify additional options as arguments of the tags, e.g. `{url:image(detail=low)}`
+
+
+### Audio Inputs
+
+
+
+
+
+```python
+from mirascope import Messages, prompt_template
+from pydub import AudioSegment
+
+
+@prompt_template()
+def identify_book_prompt(audio_wave: AudioSegment) -> Messages.Type:
+ return ["Here's an audio book snippet:", audio_wave, "What book is this?"] # [!code highlight]
+
+
+with open("....", "rb") as audio:
+ print(identify_book_prompt(AudioSegment.from_mp3(audio)))
+# Output: [
+# BaseMessageParam(
+# role="user",
+# content=[
+# TextPart(type="text", text="Here's an audio book snippet:"), # [!code highlight]
+# AudioPart(type='audio', media_type='audio/wav', audio=b'...'), # [!code highlight]
+# TextPart(type="text", text="What book is this?"), # [!code highlight]
+# ],
+# )
+# ]
+```
+
+
+```python
+from mirascope import prompt_template
+
+
+@prompt_template("Here's an audio book snippet: {audio_wave:audio} What book is this?") # [!code highlight]
+def identify_book_prompt(audio_wave: bytes): ...
+
+
+print(identify_book_prompt(b"..."))
+# Output: [
+# BaseMessageParam(
+# role="user",
+# content=[
+# TextPart(type="text", text="Here's an audio book snippet:"), # [!code highlight]
+# AudioPart(type='audio', media_type='audio/wav', audio=b'...'), # [!code highlight]
+# TextPart(type="text", text="What book is this?"), # [!code highlight]
+# ],
+# )
+# ]
+```
+
+
+
+
+
+
+```python
+import wave
+from mirascope import Messages, prompt_template
+
+
+@prompt_template()
+def identify_book_prompt(audio_wave: wave.Wave_read) -> Messages.Type:
+ return ["Here's an audio book snippet:", audio_wave, "What book is this?"] # [!code highlight]
+
+
+with open("....", "rb") as f, wave.open(f, "rb") as audio:
+ print(identify_book_prompt(audio))
+# Output: [
+# BaseMessageParam(
+# role="user",
+# content=[
+# TextPart(type="text", text="Here's an audio book snippet:"), # [!code highlight]
+# AudioPart(type='audio', media_type='audio/wav', audio=b'...'), # [!code highlight]
+# TextPart(type="text", text="What book is this?"), # [!code highlight]
+# ],
+# )
+# ]
+
+```
+
+
+```python
+from mirascope import prompt_template
+@prompt_template("Here's an audio book snippet: {audio_wave:audio} What book is this?") # [!code highlight]
+def identify_book_prompt(audio_wave: bytes): ...
+
+
+print(identify_book_prompt(b"..."))
+# Output: [
+# BaseMessageParam(
+# role="user",
+# content=[
+# TextPart(type="text", text="Here's an audio book snippet:"), # [!code highlight]
+# AudioPart(type='audio', media_type='audio/wav', audio=b'...'), # [!code highlight]
+# TextPart(type="text", text="What book is this?"), # [!code highlight]
+# ],
+# )
+# ]
+
+```
+
+
+
+
+
+
+When using string templates, you can also specify `:audios` to inject multiple audio inputs through a single template variable.
+
+The `:audio` and `:audios` tags support the `bytes | str` and `list[bytes] | list[str]` types, respectively. When passing in a `str`, the string template assumes it indicates a url or local filepath and will attempt to load the bytes from the source.
+
+
+### Document Inputs
+
+
+
+```python
+from mirascope import DocumentPart, Messages, prompt_template
+
+
+@prompt_template()
+def recommend_book_prompt(previous_book_pdf: bytes) -> Messages.Type:
+ return Messages.User(
+ [
+ "I just read this book:", # [!code highlight]
+ DocumentPart( # [!code highlight]
+ type="document", # [!code highlight]
+ media_type="application/pdf", # [!code highlight]
+ document=previous_book_pdf, # [!code highlight]
+ ), # [!code highlight]
+ "What should I read next?", # [!code highlight]
+ ]
+ )
+
+
+print(recommend_book_prompt(b"..."))
+# Output: [
+# BaseMessageParam(
+# role="user",
+# content=[
+# TextPart(type="text", text="I just read this book:"), # [!code highlight]
+# DocumentPart(type='document', media_type='application/pdf', document=b'...'), # [!code highlight]
+# TextPart(type="text", text="What should I read next?"), # [!code highlight]
+# ],
+# )
+# ]
+```
+
+
+```python
+from mirascope import prompt_template
+
+
+@prompt_template(
+ "I just read this book: {previous_book:document} What should I read next?" # [!code highlight]
+)
+def recommend_book_prompt(previous_book: bytes): ...
+
+
+print(recommend_book_prompt(b"..."))
+# Output: [
+# BaseMessageParam(
+# role="user",
+# content=[
+# TextPart(type="text", text="I just read this book:"), # [!code highlight]
+# DocumentPart(type='document', media_type='application/pdf', document=b'...'), # [!code highlight]
+# TextPart(type="text", text="What should I read next?"), # [!code highlight]
+# ],
+# )
+# ]
+```
+
+
+
+
+Document support varies by provider, but generally includes:
+- PDF (.pdf)
+- Word (.doc, .docx)
+- PowerPoint (.ppt, .pptx)
+- Excel (.xls, .xlsx)
+- Text (.txt)
+- CSV (.csv)
+
+Currently, Anthropic is the only provider with explicit document support via their Document Reading feature. Other providers may require converting documents to text or using specialized tools.
+
+
+
+When using string templates, you can also specify `:documents` to inject multiple document inputs through a single template variable.
+
+The `:document` and `:documents` tags support the `bytes | str` and `list[bytes] | list[str]` types, respectively. When passing in a `str`, the string template assumes it indicates a url or local filepath and will attempt to load the bytes from the source.
+
+
+## Chat History
+
+Often you'll want to inject messages (such as previous chat messages) into the prompt. Generally you can just unroll the messages into the return value of your prompt template. When using string templates, we provide a `MESSAGES` keyword for this injection, which you can add in whatever position and as many times as you'd like:
+
+
+
+```python
+from mirascope import BaseMessageParam, Messages, prompt_template
+
+
+@prompt_template()
+def chatbot(query: str, history: list[BaseMessageParam]) -> list[BaseMessageParam]:
+ return [Messages.System("You are a librarian"), *history, Messages.User(query)] # [!code highlight]
+
+
+history = [
+ Messages.User("Recommend a book"),
+ Messages.Assistant("What genre do you like?"),
+]
+print(chatbot("fantasy", history))
+# Output: [
+# BaseMessageParam(role="system", content="You are a librarian"), # [!code highlight]
+# BaseMessageParam(role="user", content="Recommend a book"), # [!code highlight]
+# BaseMessageParam(role="assistant", content="What genre do you like?"), # [!code highlight]
+# BaseMessageParam(role="user", content="fantasy"), # [!code highlight]
+# ]
+```
+
+
+```python{6-8}
+from mirascope import BaseMessageParam, Messages, prompt_template
+
+
+@prompt_template(
+ """
+ SYSTEM: You are a librarian
+ MESSAGES: {history}
+ USER: {query}
+ """
+)
+def chatbot(query: str, history: list[BaseMessageParam]): ...
+
+
+history = [
+ Messages.User("Recommend a book"), # [!code highlight]
+ Messages.Assistant("What genre do you like?"), # [!code highlight]
+]
+print(chatbot("fantasy", history))
+# Output: [
+# BaseMessageParam(role="system", content="You are a librarian"), # [!code highlight]
+# BaseMessageParam(role="user", content="Recommend a book"), # [!code highlight]
+# BaseMessageParam(role="assistant", content="What genre do you like?"), # [!code highlight]
+# BaseMessageParam(role="user", content="fantasy"), # [!code highlight]
+# ]
+```
+
+
+
+## Object Attribute Access
+
+When using template variables that have attributes, you can easily inject these attributes directly even when using string templates:
+
+
+
+```python
+from mirascope import prompt_template
+from pydantic import BaseModel
+
+
+class Book(BaseModel):
+ title: str
+ author: str
+
+
+@prompt_template()
+def recommend_book_prompt(book: Book) -> str:
+ return f"I read {book.title} by {book.author}. What should I read next?" # [!code highlight]
+
+
+book = Book(title="The Name of the Wind", author="Patrick Rothfuss")
+print(recommend_book_prompt(book))
+# Output: [BaseMessageParam(role='user', content='I read The Name of the Wind by Patrick Rothfuss. What should I read next?')] # [!code highlight]
+```
+
+
+```python
+from mirascope import prompt_template
+from pydantic import BaseModel
+
+
+class Book(BaseModel):
+ title: str
+ author: str
+
+
+@prompt_template("I read {book.title} by {book.author}. What should I read next?") # [!code highlight]
+def recommend_book_prompt(book: Book): ...
+
+
+book = Book(title="The Name of the Wind", author="Patrick Rothfuss")
+print(recommend_book_prompt(book))
+# Output: [BaseMessageParam(role='user', content='I read The Name of the Wind by Patrick Rothfuss. What should I read next?')] # [!code highlight]
+```
+
+
+
+It's worth noting that this also works with `self` when using prompt templates inside of a class, which is particularly important when building [Agents](/docs/mirascope/learn/agents).
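+
+For example, here is a minimal sketch of attribute access through `self` (the `Librarian` class is hypothetical and uses Pydantic's `BaseModel` to hold state):
+
+```python
+from mirascope import prompt_template
+from pydantic import BaseModel
+
+
+class Librarian(BaseModel):
+    genre: str
+
+    @prompt_template("Recommend a {self.genre} book")
+    def recommend_book_prompt(self): ...
+
+
+librarian = Librarian(genre="fantasy")
+print(librarian.recommend_book_prompt())
+# Expected output (assumed): [BaseMessageParam(role='user', content='Recommend a fantasy book')]
+```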
+
+## Format Specifiers
+
+Since Mirascope prompt templates are just formatted strings, standard Python format specifiers work as expected:
+
+
+
+```python
+from mirascope import prompt_template
+
+
+@prompt_template()
+def recommend_book(genre: str, price: float) -> str:
+ return f"Recommend a {genre} book under ${price:.2f}" # [!code highlight]
+
+
+print(recommend_book("fantasy", 12.3456))
+# Output: [BaseMessageParam(role='user', content='Recommend a fantasy book under $12.35')] # [!code highlight]
+```
+
+
+```python
+from mirascope import prompt_template
+
+
+@prompt_template("Recommend a {genre} book under ${price:.2f}") # [!code highlight]
+def recommend_book(genre: str, price: float): ...
+
+
+print(recommend_book("fantasy", 12.3456))
+# Output: [BaseMessageParam(role='user', content='Recommend a fantasy book under $12.35')] # [!code highlight]
+```
+
+
+
+When writing string templates, we also offer additional format specifiers for convenience around formatting more dynamic content:
+
+### Lists
+
+String templates support the `:list` format specifier for formatting lists:
+
+
+
+```python
+from mirascope import prompt_template
+
+
+@prompt_template(
+ """
+ Book themes:
+ {themes:list} # [!code highlight]
+
+ Character analysis:
+ {characters:lists} # [!code highlight]
+ """
+)
+def analyze_book(themes: list[str], characters: list[list[str]]): ...
+
+
+prompt = analyze_book(
+ themes=["redemption", "power", "friendship"], # [!code highlight]
+ characters=[ # [!code highlight]
+ ["Name: Frodo", "Role: Protagonist"], # [!code highlight]
+ ["Name: Gandalf", "Role: Mentor"], # [!code highlight]
+ ], # [!code highlight]
+)
+
+print(prompt[0].content)
+# Output:
+# [!code highlight:12]
+# Book themes:
+# redemption
+# power
+# friendship
+
+# Character analysis:
+# Name: Frodo
+# Role: Protagonist
+
+# Name: Gandalf
+# Role: Mentor
+```
+
+
+```python
+from mirascope import prompt_template
+
+
+@prompt_template(
+ """
+ Book themes:
+ {themes:text} # [!code highlight]
+
+ Character analysis:
+ {characters:texts} # [!code highlight]
+ """
+)
+def analyze_book(themes: str, characters: list[str]): ...
+
+
+prompt = analyze_book(
+ themes="redemption, power, friendship", # [!code highlight]
+ characters=[ # [!code highlight]
+ "Name: Frodo, Role: Protagonist", # [!code highlight]
+ "Name: Gandalf, Role: Mentor", # [!code highlight]
+ ], # [!code highlight]
+)
+
+print(prompt[0].content)
+# Output:
+# [!code highlight:8]
+# [
+# TextPart(type="text", text="Book themes:"),
+# TextPart(type="text", text="redemption, power, friendship"),
+# TextPart(type="text", text="Character analysis:"),
+# TextPart(type="text", text="Name: Frodo, Role: Protagonist"),
+# TextPart(type="text", text="Name: Gandalf, Role: Mentor"),
+# ]
+```
+
+
+```python
+from mirascope import TextPart, prompt_template
+
+
+@prompt_template(
+ """
+ Book themes:
+ {themes:text} # [!code highlight]
+
+ Character analysis:
+ {characters:texts} # [!code highlight]
+ """
+)
+def analyze_book(themes: TextPart, characters: list[TextPart]): ...
+
+
+prompt = analyze_book(
+ themes=TextPart(type="text", text="redemption, power, friendship"), # [!code highlight]
+ characters=[ # [!code highlight]
+ TextPart(type="text", text="Name: Frodo, Role: Protagonist"), # [!code highlight]
+ TextPart(type="text", text="Name: Gandalf, Role: Mentor"), # [!code highlight]
+ ], # [!code highlight]
+)
+
+print(prompt[0].content)
+# Output:
+# [!code highlight:8]
+# [
+# TextPart(type="text", text="Book themes:"),
+# TextPart(type="text", text="redemption, power, friendship"),
+# TextPart(type="text", text="Character analysis:"),
+# TextPart(type="text", text="Name: Frodo, Role: Protagonist"),
+# TextPart(type="text", text="Name: Gandalf, Role: Mentor"),
+# ]
+```
+
+
+
+## Computed Fields (Dynamic Configuration)
+
+In Mirascope, we write prompt templates as functions, which enables dynamically configuring our prompts at runtime depending on the values of the template variables. We use the term "computed fields" to talk about variables that are computed and formatted at runtime.
+
+
+
+```python
+from mirascope import BaseDynamicConfig, Messages, prompt_template
+
+
+@prompt_template()
+def recommend_book_prompt(genre: str) -> BaseDynamicConfig:
+ uppercase_genre = genre.upper() # [!code highlight]
+ messages = [Messages.User(f"Recommend a {uppercase_genre} book")] # [!code highlight]
+ return {
+ "messages": messages, # [!code highlight]
+ "computed_fields": {"uppercase_genre": uppercase_genre}, # [!code highlight]
+ }
+
+
+print(recommend_book_prompt("fantasy"))
+# Output: {
+# "messages": [BaseMessageParam(role="user", content="Recommend a FANTASY book")], # [!code highlight]
+# "computed_fields": {"uppercase_genre": "FANTASY"}, # [!code highlight]
+# }
+```
+
+
+```python
+from mirascope import BaseDynamicConfig, prompt_template
+
+
+@prompt_template("Recommend a {uppercase_genre} book") # [!code highlight]
+def recommend_book_prompt(genre: str) -> BaseDynamicConfig:
+ uppercase_genre = genre.upper() # [!code highlight]
+ return {
+ "computed_fields": {"uppercase_genre": uppercase_genre}, # [!code highlight]
+ }
+
+
+print(recommend_book_prompt("fantasy"))
+# Output: [BaseMessageParam(role='user', content='Recommend a FANTASY book')] # [!code highlight]
+```
+
+
+
+There are various other parts of an LLM API call that we may want to configure dynamically as well, such as call parameters, tools, and more. We cover such cases in each of their respective sections.
+
+## Next Steps
+
+By mastering prompts in Mirascope, you'll be well-equipped to build robust, flexible, and reusable LLM applications.
+
+Next, we recommend taking a look at the [Calls](/docs/mirascope/learn/calls) documentation, which shows you how to use your prompt templates to actually call LLM APIs and generate a response.
\ No newline at end of file
diff --git a/cloud/content/docs/v1/learn/provider-specific/anthropic.mdx b/cloud/content/docs/v1/learn/provider-specific/anthropic.mdx
new file mode 100644
index 0000000000..87ee30f859
--- /dev/null
+++ b/cloud/content/docs/v1/learn/provider-specific/anthropic.mdx
@@ -0,0 +1,135 @@
+---
+title: Anthropic
+description: Learn about Anthropic-specific features in Mirascope, including prompt caching for message and tool contexts to optimize token usage with Claude models.
+---
+
+# Anthropic-Specific Features
+
+## Prompt Caching
+
+Anthropic's prompt caching feature can help save a lot of tokens by caching parts of your prompt. For full details, we recommend reading [their documentation](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching).
+
+
+ While we've added support for prompt caching with Anthropic, this feature is still in beta and requires setting extra headers. You can set this header as an additional call parameter.
+
+ Because this feature is in beta, Anthropic may make changes that require us to update how we handle it.
+
+
+### Message Caching
+
+To cache messages, simply add a `:cache_control` tagged breakpoint to your prompt:
+
+
+
+
+```python
+import inspect
+
+from mirascope import CacheControlPart, Messages
+from mirascope.core import anthropic
+
+
+@anthropic.call(
+ "claude-3-5-sonnet-20240620",
+ call_params={
+ "max_tokens": 1024,
+ "extra_headers": {"anthropic-beta": "prompt-caching-2024-07-31"}, # [!code highlight]
+ },
+)
+def analyze_book(query: str, book: str) -> Messages.Type:
+ return [
+ Messages.System(
+ [
+ inspect.cleandoc(
+ f"""
+ You are an AI assistant tasked with analyzing literary works.
+ Your goal is to provide insightful commentary on themes, characters, and writing style.
+
+                Here is the book in its entirety: {book}
+ """),
+ CacheControlPart(type="cache_control", cache_type="ephemeral"), # [!code highlight]
+ ]
+ ),
+ Messages.User(query),
+ ]
+
+
+print(analyze_book("What are the major themes?", "[FULL BOOK HERE]"))
+```
+
+
+
+
+```python
+from mirascope import prompt_template
+from mirascope.core import anthropic
+
+
+@anthropic.call(
+ "claude-3-5-sonnet-20240620",
+ call_params={
+ "max_tokens": 1024,
+ "extra_headers": {"anthropic-beta": "prompt-caching-2024-07-31"}, # [!code highlight]
+ },
+)
+@prompt_template(
+ """
+ SYSTEM:
+ You are an AI assistant tasked with analyzing literary works.
+ Your goal is to provide insightful commentary on themes, characters, and writing style.
+
+    Here is the book in its entirety: {book}
+
+ {:cache_control} # [!code highlight]
+
+ USER: {query}
+ """
+)
+def analyze_book(query: str, book: str): ...
+
+
+print(analyze_book("What are the major themes?", "[FULL BOOK HERE]"))
+```
+
+
+
+
+
+ When using string templates, you can also specify the cache control type the same way we support additional options for multimodal parts (although currently `"ephemeral"` is the only supported type):
+
+ ```python-snippet-skip
+ @prompt_template("... {:cache_control(type=ephemeral)}")
+ ```
+
+
+### Tool Caching
+
+It is also possible to cache tools by using the `AnthropicToolConfig` and setting the cache control:
+
+```python
+from mirascope.core import BaseTool, anthropic
+from mirascope.core.anthropic import AnthropicToolConfig
+
+
+class CachedTool(BaseTool):
+ """This is an example of a cached tool."""
+
+ tool_config = AnthropicToolConfig(cache_control={"type": "ephemeral"}) # [!code highlight]
+
+ def call(self) -> str:
+ return "Example tool"
+
+
+@anthropic.call(
+ "claude-3-5-sonnet-20240620",
+ tools=[CachedTool], # [!code highlight]
+ call_params={
+ "max_tokens": 1024,
+ "extra_headers": {"anthropic-beta": "prompt-caching-2024-07-31"}, # [!code highlight]
+ },
+)
+def cached_tool_call() -> str:
+ return "An example call with a cached tool"
+```
+
+Remember to include the cache control only on the last tool you want to cache (all tools up to and including the tool with a cache control breakpoint will be cached).
\ No newline at end of file
diff --git a/cloud/content/docs/v1/learn/provider-specific/openai.mdx b/cloud/content/docs/v1/learn/provider-specific/openai.mdx
new file mode 100644
index 0000000000..fc7c44e1f4
--- /dev/null
+++ b/cloud/content/docs/v1/learn/provider-specific/openai.mdx
@@ -0,0 +1,501 @@
+---
+title: OpenAI
+description: Explore OpenAI-specific features in Mirascope, including structured outputs with strict JSON schema adherence, and the Realtime API for dynamic, interactive applications with real-time capabilities.
+---
+
+# OpenAI-Specific Features
+
+## Structured Outputs
+
+OpenAI's newest models (starting with `gpt-4o-2024-08-06`) support [strict structured outputs](https://platform.openai.com/docs/guides/structured-outputs) that reliably adhere to developer-supplied JSON Schemas, achieving 100% reliability in their evals, perfectly matching the desired output schemas.
+
+This feature can be extremely useful when extracting structured information or using tools, and you can access this feature when using tools or response models with Mirascope.
+
+### Tools
+
+To use structured outputs with tools, use the `OpenAIToolConfig` and set `strict=True`. You can then use the tool as described in our [Tools documentation](/docs/mirascope/learn/tools):
+
+```python
+from mirascope.core import BaseTool, openai
+from mirascope.core.openai import OpenAIToolConfig
+
+
+class FormatBook(BaseTool):
+ title: str
+ author: str
+
+ tool_config = OpenAIToolConfig(strict=True) # [!code highlight]
+
+ def call(self) -> str:
+ return f"{self.title} by {self.author}"
+
+
+@openai.call(
+ "gpt-4o-2024-08-06", tools=[FormatBook], call_params={"tool_choice": "required"}
+)
+def recommend_book(genre: str) -> str:
+ return f"Recommend a {genre} book"
+
+
+response = recommend_book("fantasy")
+if tool := response.tool:
+ print(tool.call())
+```
+
+Under the hood, Mirascope generates a JSON Schema for the `FormatBook` tool based on its attributes and the `OpenAIToolConfig`. This schema is then used by OpenAI's API to ensure the model's output strictly adheres to the defined structure.
+
+### Response Models
+
+Similarly, you can use structured outputs with response models by setting `strict=True` in the response model's `ResponseModelConfigDict`, which is just a subclass of Pydantic's `ConfigDict` with the addition of the `strict` key. You will also need to set `json_mode=True`:
+
+```python
+from mirascope.core import ResponseModelConfigDict, openai
+from pydantic import BaseModel
+
+
+class Book(BaseModel):
+ title: str
+ author: str
+
+ model_config = ResponseModelConfigDict(strict=True)
+
+
+@openai.call("gpt-4o-2024-08-06", response_model=Book, json_mode=True)
+def recommend_book(genre: str) -> str:
+ return f"Recommend a {genre} book"
+
+
+book = recommend_book("fantasy")
+print(book)
+```
+
+## OpenAI Realtime API (Beta)
+
+Mirascope provides a simple and intuitive way to leverage [OpenAI's cutting-edge Realtime API](https://platform.openai.com/docs/guides/realtime). This integration allows developers to easily create dynamic, interactive applications with real-time audio and text capabilities, all while abstracting away the complexities of WebSocket management and event handling.
+
+With Mirascope, you can quickly set up and use advanced features of OpenAI's Realtime API without dealing with low-level WebSocket operations or complex event structures. This allows you to focus on building your application logic rather than worrying about the intricacies of API communication.
+
+
+ The OpenAI Realtime API integration is currently in beta. As such, the interface is subject to change in future releases. We recommend using this feature with caution in production environments and staying updated with the latest documentation and releases.
+
+
+### Key Features
+
+Mirascope's OpenAI Realtime API wrapper offers a range of powerful features that make it easy to build sophisticated real-time applications:
+
+- **Audio Input/Output**: Seamlessly handle both audio input from users and audio output from the model, enabling natural voice interactions.
+- **Audio Stream Input**: Support for streaming audio input, allowing for real-time processing of continuous audio data.
+- **Text Input/Output**: Easily manage text-based interactions alongside audio, providing flexibility in communication modes.
+- **Audio Transcript Output**: Automatically receive transcripts of audio outputs, useful for logging, display, or further processing.
+- **Multi-modal Interactions**: Combine audio and text modalities in the same session for rich, flexible user experiences.
+- **Simplified Session Management**: Abstract away the complexities of WebSocket connections and session handling.
+- **Easy-to-use Decorator Pattern**: Utilize intuitive Python decorators to define senders and receivers, streamlining your code structure.
+- **Asynchronous Support**: Built-in support for asynchronous operations, allowing for efficient handling of I/O-bound tasks.
+- **Tool Integration**: Incorporate custom tools into your Realtime API interactions, enabling more complex and dynamic conversations. Tools can be easily defined as functions and integrated into senders, allowing the AI model to use them during the interaction.
+- **Flexible Tool Handling**: Receive and process tool calls from the AI model, enabling your application to perform specific actions or retrieve information as part of the conversation flow.
+
+These features enable developers to create a wide range of applications, from voice assistants and interactive chatbots to complex multi-modal AI systems, all while leveraging the power of OpenAI's latest models through a clean, Pythonic interface.
+
+### Basic Usage
+
+To use the `Realtime` class, create an instance with the desired model and configure senders and receivers for handling input and output. Mirascope uses Python decorators to simplify the process of defining senders and receivers.
+
+Here's a complete example demonstrating how to set up a streaming audio interaction:
+
+```python
+import asyncio
+from io import BytesIO
+
+from collections.abc import AsyncGenerator
+from typing import Any
+
+from pydub import AudioSegment
+from pydub.playback import play
+
+from mirascope.beta.openai import Realtime, record_as_stream
+
+app = Realtime(
+ "gpt-4o-realtime-preview-2024-10-01",
+)
+
+
+@app.receiver("audio")
+async def receive_audio(response: AudioSegment, context: dict[str, Any]) -> None:
+ play(response)
+
+
+@app.receiver("audio_transcript")
+async def receive_audio_transcript(response: str, context: dict[str, Any]) -> None:
+ print(f"AI(audio_transcript): {response}")
+
+
+@app.sender()
+async def send_audio_as_stream(
+ context: dict[str, Any],
+) -> AsyncGenerator[BytesIO, None]:
+ print("Sending audio...")
+ async for stream in record_as_stream():
+ yield stream
+
+
+asyncio.run(app.run())
+```
+
+Let's break down the key components of this example:
+
+1. First, we create a `Realtime` instance with the specified model:
+
+```python-snippet-skip
+app = Realtime(
+ "gpt-4o-realtime-preview-2024-10-01",
+)
+```
+
+2. We define two receivers using the `@app.receiver` decorator:
+
+```python-snippet-skip
+@app.receiver("audio")
+async def receive_audio(response: AudioSegment, context: dict[str, Any]) -> None:
+ play(response)
+
+
+@app.receiver("audio_transcript")
+async def receive_audio_transcript(response: str, context: dict[str, Any]) -> None:
+ print(f"AI(audio_transcript): {response}")
+```
+
+The first receiver handles audio responses by playing them, while the second receiver prints the audio transcript.
+
+3. We define a sender using the `@app.sender` decorator:
+
+```python-snippet-skip
+@app.sender()
+async def send_audio_as_stream(
+ context: dict[str, Any],
+) -> AsyncGenerator[BytesIO, None]:
+ print("Sending audio...")
+ async for stream in record_as_stream():
+ yield stream
+```
+
+This sender function streams audio data to the model using the `record_as_stream()` function.
+
+4. Finally, we run the application:
+
+```python-snippet-skip
+asyncio.run(app.run())
+```
+
+This example demonstrates how to set up a streaming audio interaction with the Realtime API. The sender continuously streams audio data to the model, while the receivers handle the audio responses and transcripts from the model.
+
+By using these decorators, you can easily define multiple senders and receivers for different types of inputs and outputs (text, audio, streaming audio, etc.) without having to manually manage the complexities of the underlying API calls and WebSocket communication.
+
+### Examples
+
+#### Text-only Interaction
+
+```python
+import asyncio
+from typing import Any
+
+from mirascope.beta.openai import Context, Realtime, async_input
+
+app = Realtime(
+ "gpt-4o-realtime-preview-2024-10-01",
+ modalities=["text"],
+)
+
+
+@app.receiver("text")
+async def receive_text(response: str, context: dict[str, Any]) -> None:
+ print(f"AI(text): {response}", flush=True)
+
+
+@app.sender(wait_for_text_response=True)
+async def send_message(context: Context) -> str:
+ message = await async_input("Enter your message: ")
+ return message
+
+
+asyncio.run(app.run())
+```
+
+#### Audio Interaction with Turn Detection
+
+```python
+import asyncio
+from io import BytesIO
+
+from collections.abc import AsyncGenerator
+from typing import Any
+
+from pydub import AudioSegment
+from pydub.playback import play
+
+from mirascope.beta.openai import Realtime, record_as_stream
+
+app = Realtime(
+ "gpt-4o-realtime-preview-2024-10-01",
+)
+
+
+@app.receiver("audio")
+async def receive_audio(response: AudioSegment, context: dict[str, Any]) -> None:
+ play(response)
+
+
+@app.receiver("audio_transcript")
+async def receive_audio_transcript(response: str, context: dict[str, Any]) -> None:
+ print(f"AI(audio_transcript): {response}")
+
+
+@app.sender()
+async def send_audio_as_stream(
+ context: dict[str, Any],
+) -> AsyncGenerator[BytesIO, None]:
+ print("Sending audio...")
+ async for stream in record_as_stream():
+ yield stream
+
+
+asyncio.run(app.run())
+```
+
+#### Audio Interaction without Turn Detection
+
+```python
+import asyncio
+from io import BytesIO
+
+from typing import Any
+
+from pydub import AudioSegment
+from pydub.playback import play
+
+from mirascope.beta.openai import Realtime, async_input, record
+
+app = Realtime(
+ "gpt-4o-realtime-preview-2024-10-01",
+ turn_detection=None,
+)
+
+
+@app.receiver("audio")
+async def receive_audio(response: AudioSegment, context: dict[str, Any]) -> None:
+ play(response)
+
+
+@app.receiver("audio_transcript")
+async def receive_audio_transcript(response: str, context: dict[str, Any]) -> None:
+ print(f"AI(audio_transcript): {response}")
+
+
+@app.sender(wait_for_audio_transcript_response=True)
+async def send_audio(context: dict[str, Any]) -> BytesIO:
+ message = await async_input(
+ "Press Enter to start recording or enter exit to shutdown app"
+ )
+ if message == "exit":
+ raise asyncio.CancelledError
+
+ async def wait_for_enter() -> str:
+ return await async_input("Press Enter to stop recording...")
+
+ recorded_audio = await record(custom_blocking_event=wait_for_enter)
+ return recorded_audio
+
+
+asyncio.run(app.run())
+```
+
+#### Streaming Audio Interaction without Turn Detection
+
+```python
+import asyncio
+from io import BytesIO
+
+from collections.abc import AsyncGenerator
+from typing import Any
+
+from pydub import AudioSegment
+from pydub.playback import play
+
+from mirascope.beta.openai import Realtime, record_as_stream, async_input
+
+
+app = Realtime(
+ "gpt-4o-realtime-preview-2024-10-01",
+ turn_detection=None,
+)
+
+
+@app.receiver("audio")
+async def receive_audio(response: AudioSegment, context: dict[str, Any]) -> None:
+ play(response)
+
+
+@app.receiver("audio_transcript")
+async def receive_audio_transcript(response: str, context: dict[str, Any]) -> None:
+ print(f"AI(audio_transcript): {response}")
+
+
+@app.sender(wait_for_audio_transcript_response=True)
+async def send_audio_as_stream(
+ context: dict[str, Any],
+) -> AsyncGenerator[BytesIO, None]:
+ message = await async_input(
+ "Press Enter to start recording or enter exit to shutdown app"
+ )
+ if message == "exit":
+ raise asyncio.CancelledError
+
+ async def wait_for_enter() -> str:
+ return await async_input("Press Enter to stop recording...")
+
+ async for stream in record_as_stream(custom_blocking_event=wait_for_enter):
+ yield stream
+
+
+asyncio.run(app.run())
+```
+
+#### Audio Interaction with Tools and Turn Detection
+
+```python
+import asyncio
+from io import BytesIO
+
+from collections.abc import AsyncGenerator
+from typing import Any
+
+from pydub import AudioSegment
+from pydub.playback import play
+
+from mirascope.beta.openai import Realtime, record_as_stream, OpenAIRealtimeTool
+
+
+def format_book(title: str, author: str) -> str:
+ return f"{title} by {author}"
+
+
+app = Realtime("gpt-4o-realtime-preview-2024-10-01", tools=[format_book])
+
+
+@app.receiver("audio")
+async def receive_audio(response: AudioSegment, context: dict[str, Any]) -> None:
+ play(response)
+
+
+@app.receiver("audio_transcript")
+async def receive_audio_transcript(response: str, context: dict[str, Any]) -> None:
+ print(f"AI(audio_transcript): {response}")
+
+
+@app.sender()
+async def send_audio_as_stream(
+ context: dict[str, Any],
+) -> AsyncGenerator[BytesIO, None]:
+ print("Sending audio...")
+ async for stream in record_as_stream():
+ yield stream
+
+
+@app.function_call(format_book)
+async def recommend_book(tool: OpenAIRealtimeTool, context: dict[str, Any]) -> str:
+ result = tool.call()
+ print(result)
+ return result
+
+
+asyncio.run(app.run())
+```
+
+#### Text-only Interaction with Tools
+
+```python
+import asyncio
+from typing import Any
+
+from mirascope.beta.openai import Context, Realtime, async_input
+
+from mirascope.beta.openai import OpenAIRealtimeTool
+
+
+def format_book(title: str, author: str) -> str:
+ return f"{title} by {author}"
+
+
+app = Realtime(
+ "gpt-4o-realtime-preview-2024-10-01", modalities=["text"], tools=[format_book]
+)
+
+
+@app.sender(wait_for_text_response=True)
+async def send_message(context: Context) -> str:
+ genre = await async_input("Enter a genre: ")
+    return f"Recommend a {genre} book. Please use the tool `format_book`."
+
+
+@app.receiver("text")
+async def receive_text(response: str, context: dict[str, Any]) -> None:
+ print(f"AI(text): {response}", flush=True)
+
+
+@app.function_call(format_book)
+async def recommend_book(tool: OpenAIRealtimeTool, context: Context) -> str:
+ result = tool.call()
+ return result
+
+
+asyncio.run(app.run())
+```
+
+#### Text-only Interaction with dynamic Tools
+
+```python
+import asyncio
+from collections.abc import Callable
+from typing import Any
+
+from mirascope.beta.openai import Context, Realtime, async_input
+
+from mirascope.beta.openai import OpenAIRealtimeTool
+
+app = Realtime(
+ "gpt-4o-realtime-preview-2024-10-01",
+ modalities=["text"],
+)
+
+
+def format_book(title: str, author: str) -> str:
+ return f"{title} by {author}"
+
+
+@app.sender(wait_for_text_response=True)
+async def send_message(context: Context) -> tuple[str, list[Callable]]:
+ genre = await async_input("Enter a genre: ")
+    return f"Recommend a {genre} book. Please use the tool `format_book`.", [
+ format_book
+ ]
+
+
+@app.receiver("text")
+async def receive_text(response: str, context: dict[str, Any]) -> None:
+ print(f"AI(text): {response}", flush=True)
+
+
+@app.function_call(format_book)
+async def recommend_book(tool: OpenAIRealtimeTool, context: Context) -> str:
+ result = tool.call()
+ print(result)
+ return result
+
+
+asyncio.run(app.run())
+```
+
+### Notes
+
+- The Realtime API is currently in beta, and its API may change in future releases.
+- Make sure to handle exceptions and cancellation appropriately in your senders and receivers.
+- The examples provided use the `pydub` library for audio playback. You may need to install additional dependencies for audio support.
+- [FFmpeg](https://www.ffmpeg.org/) is required for audio processing. Make sure to install FFmpeg on your system before using the audio features of Mirascope's OpenAI Realtime API support.
\ No newline at end of file
diff --git a/cloud/content/docs/v1/learn/provider-specific/thinking-and-reasoning.mdx b/cloud/content/docs/v1/learn/provider-specific/thinking-and-reasoning.mdx
new file mode 100644
index 0000000000..e48907a822
--- /dev/null
+++ b/cloud/content/docs/v1/learn/provider-specific/thinking-and-reasoning.mdx
@@ -0,0 +1,207 @@
+---
+title: Thinking & Reasoning
+description: Enabling models to think or reason in Mirascope V1.
+---
+
+# Thinking & Reasoning
+
+Recent LLMs support "extended thinking" (also called "reasoning"), in which the model produces internal reasoning about the task it has been given before generating a final output.
+
+We're currently working on Mirascope v2, which will support thinking in a generic and cross-provider way.
+However, as of Mirascope v1, we have ad-hoc thinking support added for the following providers:
+
+| Provider | Can Use Thinking | Can View Thinking Summaries |
+| ------------- | :--------------: | :-------------------------: |
+| Anthropic | ✓ | ✓ |
+| Google Gemini | ✓ | ✓ |
+
+## Provider Examples
+
+
+
+
+
+
+ Anthropic thinking is supported for Claude Opus 4, Claude Sonnet 4, and Claude
+ Sonnet 3.7. It may be invoked using the `@anthropic.call` provider-specific
+ decorator, as in the example below. For more, read the [Anthropic reasoning
+ docs](https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking).
+
+
+
+
+```py
+from mirascope.core import anthropic, prompt_template
+
+
+@anthropic.call(
+    model="claude-3-7-sonnet-latest",
+    call_params=anthropic.AnthropicCallParams(
+        max_tokens=2048,
+        thinking={"type": "enabled", "budget_tokens": 1024}, # [!code highlight]
+    ),
+)
+@prompt_template(
+    """
+    Suppose a rocket is launched from a surface, pointing straight up.
+    For the first ten seconds, the rocket engine is providing upwards thrust of
+    50m/s^2, after which it shuts off.
+    There is constant downwards acceleration of 10m/s^2 due to gravity.
+    What is the highest height it will achieve?
+
+    Your final response should be ONLY a number in meters, with no additional text.
+""")
+def answer(): ...
+
+
+response = answer()
+print("---- Thinking ----")
+print(response.thinking) # [!code highlight]
+print("---- Response ----")
+print(response.content) # [!code highlight]
+```
+
+
+```py
+from mirascope.core import anthropic, prompt_template
+
+
+@anthropic.call(
+ model="claude-3-7-sonnet-latest",
+ call_params=anthropic.AnthropicCallParams(
+ max_tokens=2048,
+ thinking={"type": "enabled", "budget_tokens": 1024}, # [!code highlight]
+ ),
+ stream=True,
+)
+@prompt_template(
+ """
+ Suppose a rocket is launched from a surface, pointing straight up.
+ For the first ten seconds, the rocket engine is providing upwards thrust of
+ 50m/s^2, after which it shuts off.
+ There is constant downwards acceleration of 10m/s^2 due to gravity.
+ What is the highest height it will achieve?
+
+ Your final response should be ONLY a number in meters, with no additional text.
+""")
+def answer(): ...
+
+
+stream = answer()
+print("---- Thinking ----")
+still_thinking = True
+for chunk, _ in stream:
+ if chunk.thinking: # [!code highlight]
+ print(chunk.thinking, end="", flush=True)
+ if chunk.signature: # [!code highlight]
+ print(f"\n\nSignature: {chunk.signature}", end="\n", flush=True)
+ if chunk.content: # [!code highlight]
+ if still_thinking:
+ print("---- Response ----", end="\n", flush=True)
+ still_thinking = False
+ print(chunk.content, end="", flush=True)
+```
+
+
+
+
+
+
+
+
+
+Google thinking is supported for the Gemini 2.5 series models. It is enabled by default, but may be configured explicitly when using
+the `@google.call` provider-specific decorator.
+
+The `include_thoughts` setting makes summaries of the thinking process available, as shown below. Gemini 2.5 Flash also offers a `thinking_budget` option
+that lets you fine-tune how many tokens are available for thinking (see the sketch below).
+
+For more, read the [Google documentation](https://ai.google.dev/gemini-api/docs/thinking).
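+
+As a rough sketch, such a config might look like this (assuming the `google-genai` SDK's `ThinkingConfig` exposes a `thinking_budget` field):
+
+```python-snippet-skip
+from google.genai import types
+
+config = types.GenerateContentConfig(
+    thinking_config=types.ThinkingConfig(
+        include_thoughts=True,  # surface thinking summaries
+        thinking_budget=1024,  # cap the tokens available for thinking (assumed field name)
+    )
+)
+```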
+
+
+
+
+
+```python
+from google.genai import types
+
+from mirascope.core import google, prompt_template
+
+
+@google.call(
+    model="gemini-2.5-flash-preview-05-20",
+    call_params={
+        "config": types.GenerateContentConfig(
+            thinking_config=types.ThinkingConfig(include_thoughts=True) # [!code highlight]
+        ),
+    },
+)
+@prompt_template(
+    """
+    Suppose a rocket is launched from a surface, pointing straight up.
+    For the first ten seconds, the rocket engine is providing upwards thrust of
+    50m/s^2, after which it shuts off.
+    There is constant downwards acceleration of 10m/s^2 due to gravity.
+    What is the highest height it will achieve?
+
+    Your final response should be ONLY a number in meters, with no additional text.
+""")
+def answer(): ...
+
+
+response = answer()
+print("---- Thinking ----")
+print(response.thinking) # [!code highlight]
+print("---- Response ----")
+print(response.content) # [!code highlight]
+```
+
+
+
+```python
+from google.genai import types
+
+from mirascope.core import google, prompt_template
+
+
+@google.call(
+ model="gemini-2.5-flash-preview-05-20",
+ call_params={
+ "config": types.GenerateContentConfig(
+ thinking_config=types.ThinkingConfig(include_thoughts=True) # [!code highlight]
+ ),
+ },
+ stream=True,
+)
+@prompt_template(
+ """
+ Suppose a rocket is launched from a surface, pointing straight up.
+ For the first ten seconds, the rocket engine is providing upwards thrust of
+ 50m/s^2, after which it shuts off.
+ There is constant downwards acceleration of 10m/s^2 due to gravity.
+ What is the highest height it will achieve?
+
+ Your final response should be ONLY a number in meters, with no additional text.
+""")
+def answer(): ...
+
+
+stream = answer()
+print("---- Thinking ----")
+still_thinking = True
+for chunk, _ in stream:
+ if chunk.thinking: # [!code highlight]
+ print(chunk.thinking, end="", flush=True)
+ if chunk.content: # [!code highlight]
+ if still_thinking:
+ print("---- Response ----", end="\n", flush=True)
+ still_thinking = False
+ print(chunk.content, end="", flush=True)
+```
+
+
+
+
+
+
+
diff --git a/cloud/content/docs/v1/learn/response_models.mdx b/cloud/content/docs/v1/learn/response_models.mdx
new file mode 100644
index 0000000000..d3c5099fe5
--- /dev/null
+++ b/cloud/content/docs/v1/learn/response_models.mdx
@@ -0,0 +1,696 @@
+---
+title: Response Models
+description: Learn how to structure and validate LLM outputs using Pydantic models for type safety, automatic validation, and easier data manipulation across different providers.
+---
+
+# Response Models
+
+
+ If you haven't already, we recommend first reading the section on [Calls](/docs/mirascope/learn/calls)
+
+
+Response Models in Mirascope provide a powerful way to structure and validate the output from Large Language Models (LLMs). By leveraging Pydantic's [`BaseModel`](https://docs.pydantic.dev/latest/usage/models/), Response Models offer type safety, automatic validation, and easier data manipulation for your LLM responses. While we cover some details in this documentation, we highly recommend reading through Pydantic's documentation for a deeper, comprehensive dive into everything you can do with Pydantic's `BaseModel`.
+
+## Why Use Response Models?
+
+1. **Structured Output**: Define exactly what you expect from the LLM, ensuring consistency in responses.
+2. **Automatic Validation**: Pydantic handles type checking and validation, reducing errors in your application.
+3. **Improved Type Hinting**: Better IDE support and clearer code structure.
+4. **Easier Data Manipulation**: Work with Python objects instead of raw strings or dictionaries.
+
+## Basic Usage and Syntax
+
+Let's take a look at a basic example using Mirascope vs. official provider SDKs:
+
+
+
+```python
+from mirascope import llm
+from pydantic import BaseModel
+
+
+class Book(BaseModel):
+ """An extracted book."""
+
+ title: str
+ author: str
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", response_model=Book) # [!code highlight]
+def extract_book(text: str) -> str:
+ return f"Extract {text}"
+
+
+book = extract_book("The Name of the Wind by Patrick Rothfuss")
+print(book)
+# Output: title='The Name of the Wind' author='Patrick Rothfuss' # [!code highlight]
+```
+
+
+```python
+from mirascope import llm, prompt_template
+from pydantic import BaseModel
+
+
+class Book(BaseModel):
+ """An extracted book."""
+
+ title: str
+ author: str
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", response_model=Book) # [!code highlight]
+@prompt_template("Extract {text}")
+def extract_book(text: str): ...
+
+
+book = extract_book("The Name of the Wind by Patrick Rothfuss")
+print(book)
+# Output: title='The Name of the Wind' author='Patrick Rothfuss' # [!code highlight]
+```
+
+
+
+
+
+
+
+Notice how Mirascope makes generating structured outputs significantly simpler than the official SDKs. It also greatly reduces boilerplate and standardizes the interaction across all supported LLM providers.
+
+
+  By default, `response_model` will use [Tools](/docs/mirascope/learn/tools) under the hood, forcing the LLM to call that specific tool and constructing the response model from the tool's arguments.
+
+ We default to using tools because all supported providers support tools. You can also optionally set `json_mode=True` to use [JSON Mode](/docs/mirascope/learn/json_mode) instead, which we cover in [more detail below](#json-mode).
+
+
+### Accessing Original Call Response
+
+Every `response_model` that uses a Pydantic `BaseModel` will automatically have the original `BaseCallResponse` instance accessible through the `_response` property:
+
+
+
+```python
+from mirascope import llm
+from pydantic import BaseModel
+
+
+class Book(BaseModel):
+ """An extracted book."""
+
+ title: str
+ author: str
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", response_model=Book)
+def extract_book(text: str) -> str:
+ return f"Extract {text}"
+
+
+book = extract_book("The Name of the Wind by Patrick Rothfuss")
+print(book)
+# Output: title='The Name of the Wind' author='Patrick Rothfuss'
+
+response = book._response # pyright: ignore[reportAttributeAccessIssue] # [!code highlight]
+print(response.model_dump()) # [!code highlight]
+# > {'metadata': {}, 'response': {'id': ...}, ...} # [!code highlight]
+```
+
+
+```python
+from mirascope import llm, prompt_template
+from pydantic import BaseModel
+
+
+class Book(BaseModel):
+ """An extracted book."""
+
+ title: str
+ author: str
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", response_model=Book)
+@prompt_template("Extract {text}")
+def extract_book(text: str): ...
+
+
+book = extract_book("The Name of the Wind by Patrick Rothfuss")
+print(book)
+# Output: title='The Name of the Wind' author='Patrick Rothfuss'
+
+response = book._response # pyright: ignore[reportAttributeAccessIssue] # [!code highlight]
+print(response.model_dump()) # [!code highlight]
+# > {'metadata': {}, 'response': {'id': ...}, ...} # [!code highlight]
+```
+
+
+
+### Built-In Types
+
+For cases where you want to extract just a single built-in type, Mirascope provides a shorthand:
+
+
+
+```python
+from mirascope import llm
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", response_model=list[str]) # [!code highlight]
+def extract_book(texts: list[str]) -> str:
+ return f"Extract book titles from {texts}"
+
+
+book = extract_book(
+ [
+ "The Name of the Wind by Patrick Rothfuss",
+ "Mistborn: The Final Empire by Brandon Sanderson",
+ ]
+)
+print(book)
+# Output: ["The Name of the Wind", "Mistborn: The Final Empire"] # [!code highlight]
+```
+
+
+```python
+from mirascope import llm, prompt_template
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", response_model=list[str]) # [!code highlight]
+@prompt_template("Extract book titles from {texts}")
+def extract_book(texts: list[str]): ...
+
+
+book = extract_book(
+ [
+ "The Name of the Wind by Patrick Rothfuss",
+ "Mistborn: The Final Empire by Brandon Sanderson",
+ ]
+)
+print(book)
+# Output: ["The Name of the Wind", "Mistborn: The Final Empire"] # [!code highlight]
+```
+
+
+
+Here, we are using `list[str]` as the `response_model`, which Mirascope handles without needing to define a full `BaseModel`. You could of course set `response_model=list[Book]` as well.
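+
+For example, here is a minimal sketch using `response_model=list[Book]` (the printed output is illustrative):
+
+```python
+from mirascope import llm
+from pydantic import BaseModel
+
+
+class Book(BaseModel):
+    """An extracted book."""
+
+    title: str
+    author: str
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", response_model=list[Book])
+def extract_books(texts: list[str]) -> str:
+    return f"Extract the books from {texts}"
+
+
+books = extract_books(
+    [
+        "The Name of the Wind by Patrick Rothfuss",
+        "Mistborn: The Final Empire by Brandon Sanderson",
+    ]
+)
+print(books)
+# Illustrative output:
+# [Book(title='The Name of the Wind', author='Patrick Rothfuss'), Book(title='Mistborn: The Final Empire', author='Brandon Sanderson')]
+```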
+
+Note that we have no way of attaching `BaseCallResponse` to built-in types, so using a Pydantic `BaseModel` is recommended if you anticipate needing access to the original call response.
+
+## Supported Field Types
+
+While Mirascope provides a consistent interface, type support varies among providers:
+
+| Type | OpenAI | Anthropic | Google | Groq | xAI | Mistral | Cohere |
+|---------------|--------|-----------|--------|------|-----|---------|--------|
+| str |✓✓ |✓✓ |✓✓ |✓✓ |✓✓ |✓✓ |✓✓ |
+| int |✓✓ |✓✓ |✓✓ |✓✓ |✓✓ |✓✓ |✓✓ |
+| float |✓✓ |✓✓ |✓✓ |✓✓ |✓✓ |✓✓ |✓✓ |
+| bool |✓✓ |✓✓ |✓✓ |✓✓ |✓✓ |✓✓ |✓✓ |
+| bytes |✓✓ |✓✓ |-✓ |✓✓ |✓✓ |✓✓ |✓✓ |
+| list |✓✓ |✓✓ |✓✓ |✓✓ |✓✓ |✓✓ |✓✓ |
+| set |✓✓ |✓✓ |-- |✓✓ |✓✓ |✓✓ |✓✓ |
+| tuple |-✓ |✓✓ |-✓ |✓✓ |-✓ |✓✓ |✓✓ |
+| dict |-✓ |✓✓ |✓✓ |✓✓ |-✓ |✓✓ |✓✓ |
+| Literal/Enum |✓✓ |✓✓ |✓✓ |✓✓ |✓✓ |✓✓ |✓✓ |
+| BaseModel |✓✓ |✓✓ |✓✓ |✓✓ |✓✓ |✓✓ |-✓ |
+| Nested ($def) |✓✓ |✓✓ |✓✓ |✓✓ |✓✓ |✓✓ |-- |
+
+✓✓: Fully Supported, -✓: Only JSON Mode Support, --: Not Supported
+
+## Validation and Error Handling
+
+While `response_model` significantly improves output structure and validation, it's important to handle potential errors.
+
+Let's take a look at an example where we want to validate that all fields are uppercase:
+
+
+
+```python
+from typing import Annotated # [!code highlight]
+
+from mirascope import llm
+from pydantic import AfterValidator, BaseModel, ValidationError # [!code highlight]
+
+
+def validate_upper(v: str) -> str: # [!code highlight]
+ assert v.isupper(), "Field must be uppercase" # [!code highlight]
+ return v # [!code highlight]
+
+
+class Book(BaseModel):
+ """An extracted book."""
+
+ title: Annotated[str, AfterValidator(validate_upper)] # [!code highlight]
+ author: Annotated[str, AfterValidator(validate_upper)] # [!code highlight]
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", response_model=Book)
+def extract_book(text: str) -> str:
+ return f"Extract {text}"
+
+
+try: # [!code highlight]
+ book = extract_book("The Name of the Wind by Patrick Rothfuss")
+ print(book)
+ # Output: title='The Name of the Wind' author='Patrick Rothfuss'
+except ValidationError as e: # [!code highlight]
+ print(f"Error: {str(e)}")
+ # Error: 2 validation errors for Book
+ # title
+ # Assertion failed, Field must be uppercase [type=assertion_error, input_value='The Name of the Wind', input_type=str]
+ # For further information visit https://errors.pydantic.dev/2.7/v/assertion_error
+ # author
+ # Assertion failed, Field must be uppercase [type=assertion_error, input_value='Patrick Rothfuss', input_type=str]
+ # For further information visit https://errors.pydantic.dev/2.7/v/assertion_error
+```
+
+
+```python
+from typing import Annotated # [!code highlight]
+
+from mirascope import llm, prompt_template
+from pydantic import AfterValidator, BaseModel, ValidationError # [!code highlight]
+
+
+def validate_upper(v: str) -> str:
+ assert v.isupper(), "Field must be uppercase"
+ return v
+
+
+class Book(BaseModel):
+ """An extracted book."""
+
+ title: Annotated[str, AfterValidator(validate_upper)] # [!code highlight]
+ author: Annotated[str, AfterValidator(validate_upper)] # [!code highlight]
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", response_model=Book)
+@prompt_template("Extract {text}")
+def extract_book(text: str): ...
+
+
+try: # [!code highlight]
+ book = extract_book("The Name of the Wind by Patrick Rothfuss")
+ print(book)
+ # Output: title='The Name of the Wind' author='Patrick Rothfuss'
+except ValidationError as e: # [!code highlight]
+ print(f"Error: {str(e)}")
+ # Error: 2 validation errors for Book
+ # title
+ # Assertion failed, Field must be uppercase [type=assertion_error, input_value='The Name of the Wind', input_type=str]
+ # For further information visit https://errors.pydantic.dev/2.7/v/assertion_error
+ # author
+ # Assertion failed, Field must be uppercase [type=assertion_error, input_value='Patrick Rothfuss', input_type=str]
+ # For further information visit https://errors.pydantic.dev/2.7/v/assertion_error
+```
+
+
+
+Without additional prompt engineering, this call will fail every single time. It's important to engineer your prompts to reduce errors, but LLMs are far from perfect, so always remember to catch and handle validation errors gracefully.
+
+We highly recommend taking a look at our section on [retries](/docs/mirascope/learn/retries) to learn more about automatically retrying and re-inserting validation errors, which enables retrying the call such that the LLM can learn from its previous mistakes.
+
+### Accessing Original Call Response On Error
+
+In case of a `ValidationError`, you can access the original response for debugging:
+
+
+
+```python
+from typing import Annotated
+
+from mirascope import llm
+from pydantic import AfterValidator, BaseModel, ValidationError
+
+
+def validate_upper(v: str) -> str:
+ assert v.isupper(), "Field must be uppercase"
+ return v
+
+
+class Book(BaseModel):
+ """An extracted book."""
+
+ title: Annotated[str, AfterValidator(validate_upper)]
+ author: Annotated[str, AfterValidator(validate_upper)]
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", response_model=Book)
+def extract_book(text: str) -> str:
+ return f"Extract {text}"
+
+
+try:
+ book = extract_book("The Name of the Wind by Patrick Rothfuss")
+ print(book)
+except ValidationError as e: # [!code highlight]
+ response = e._response # pyright: ignore[reportAttributeAccessIssue] # [!code highlight]
+ print(response.model_dump()) # [!code highlight]
+ # > {'metadata': {}, 'response': {'id': ...}, ...} # [!code highlight]
+```
+
+
+```python
+from typing import Annotated
+
+from mirascope import llm, prompt_template
+from pydantic import AfterValidator, BaseModel, ValidationError
+
+
+def validate_upper(v: str) -> str:
+ assert v.isupper(), "Field must be uppercase"
+ return v
+
+
+class Book(BaseModel):
+ """An extracted book."""
+
+ title: Annotated[str, AfterValidator(validate_upper)]
+ author: Annotated[str, AfterValidator(validate_upper)]
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", response_model=Book)
+@prompt_template("Extract {text}")
+def extract_book(text: str): ...
+
+
+try:
+ book = extract_book("The Name of the Wind by Patrick Rothfuss")
+ print(book)
+except ValidationError as e:
+ response = e._response # pyright: ignore[reportAttributeAccessIssue] # [!code highlight]
+ print(response.model_dump()) # [!code highlight]
+ # > {'metadata': {}, 'response': {'id': ...}, ...} # [!code highlight]
+```
+
+
+
+This allows you to gracefully handle errors as well as inspect the original LLM response when validation fails.
+
+## JSON Mode
+
+By default, `response_model` uses [Tools](/docs/mirascope/learn/tools) under the hood. You can instead use [JSON Mode](/docs/mirascope/learn/json_mode) in conjunction with `response_model` by setting `json_mode=True`:
+
+
+
+```python
+from mirascope import llm
+from pydantic import BaseModel
+
+
+class Book(BaseModel):
+ """An extracted book."""
+
+ title: str
+ author: str
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", response_model=Book, json_mode=True) # [!code highlight]
+def extract_book(text: str) -> str:
+ return f"Extract {text}"
+
+
+book = extract_book("The Name of the Wind by Patrick Rothfuss")
+print(book)
+# Output: title='The Name of the Wind' author='Patrick Rothfuss'
+```
+
+
+```python
+from mirascope import llm, prompt_template
+from pydantic import BaseModel
+
+
+class Book(BaseModel):
+ """An extracted book."""
+
+ title: str
+ author: str
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", response_model=Book, json_mode=True) # [!code highlight]
+@prompt_template("Extract {text}")
+def extract_book(text: str): ...
+
+
+book = extract_book("The Name of the Wind by Patrick Rothfuss")
+print(book)
+# Output: title='The Name of the Wind' author='Patrick Rothfuss'
+```
+
+
+
+## Few-Shot Examples
+
+Adding few-shot examples to your response model can improve results by demonstrating exactly how to adhere to your desired output.
+
+We take advantage of Pydantic's [`Field`](https://docs.pydantic.dev/latest/concepts/fields/) and [`ConfigDict`](https://docs.pydantic.dev/latest/concepts/config/) to add these examples to response models:
+
+
+
+```python
+from mirascope import llm
+from pydantic import BaseModel, ConfigDict, Field
+
+
+class Book(BaseModel):
+ title: str = Field(..., examples=["THE NAME OF THE WIND"]) # [!code highlight]
+ author: str = Field(..., examples=["Rothfuss, Patrick"]) # [!code highlight]
+
+ model_config = ConfigDict(
+ json_schema_extra={
+ "examples": [ # [!code highlight]
+ {"title": "THE NAME OF THE WIND", "author": "Rothfuss, Patrick"} # [!code highlight]
+ ] # [!code highlight]
+ }
+ )
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", response_model=Book, json_mode=True)
+def extract_book(text: str) -> str:
+ return f"Extract {text}. Match example format EXCLUDING 'examples' key."
+
+
+book = extract_book("The Way of Kings by Brandon Sanderson")
+print(book)
+# Output: title='THE WAY OF KINGS' author='Sanderson, Brandon'
+```
+
+
+```python
+from mirascope import llm, prompt_template
+from pydantic import BaseModel, ConfigDict, Field
+
+
+class Book(BaseModel):
+ title: str = Field(..., examples=["THE NAME OF THE WIND"]) # [!code highlight]
+ author: str = Field(..., examples=["Rothfuss, Patrick"]) # [!code highlight]
+
+ model_config = ConfigDict(
+ json_schema_extra={
+ "examples": [ # [!code highlight]
+ {"title": "THE NAME OF THE WIND", "author": "Rothfuss, Patrick"} # [!code highlight]
+ ] # [!code highlight]
+ }
+ )
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", response_model=Book, json_mode=True)
+@prompt_template("Extract {text}. Match example format EXCLUDING 'examples' key.")
+def extract_book(text: str): ...
+
+
+book = extract_book("The Way of Kings by Brandon Sanderson")
+print(book)
+# Output: title='THE WAY OF KINGS' author='Sanderson, Brandon' # [!code highlight]
+```
+
+
+
+## Streaming Response Models
+
+If you set `stream=True` when `response_model` is set, your LLM call will return an `Iterable` where each item will be a partial version of your response model representing the current state of the streamed information. The final model returned by the iterator will be the full response model.
+
+
+
+```python
+from mirascope import llm
+from pydantic import BaseModel
+
+
+class Book(BaseModel):
+ title: str
+ author: str
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", response_model=Book, stream=True) # [!code highlight]
+def extract_book(text: str) -> str:
+ return f"Extract {text}"
+
+
+book_stream = extract_book("The Name of the Wind by Patrick Rothfuss")
+for partial_book in book_stream: # [!code highlight]
+ print(partial_book) # [!code highlight]
+# Output:
+# title=None author=None
+# title='' author=None
+# title='The' author=None
+# title='The Name' author=None
+# title='The Name of' author=None
+# title='The Name of the' author=None
+# title='The Name of the Wind' author=None
+# title='The Name of the Wind' author=''
+# title='The Name of the Wind' author='Patrick'
+# title='The Name of the Wind' author='Patrick Roth'
+# title='The Name of the Wind' author='Patrick Rothf'
+# title='The Name of the Wind' author='Patrick Rothfuss'
+```
+
+
+```python
+from mirascope import llm, prompt_template
+from pydantic import BaseModel
+
+
+class Book(BaseModel):
+ title: str
+ author: str
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", response_model=Book, stream=True) # [!code highlight]
+@prompt_template("Extract {text}")
+def extract_book(text: str): ...
+
+
+book_stream = extract_book("The Name of the Wind by Patrick Rothfuss")
+for partial_book in book_stream: # [!code highlight]
+ print(partial_book) # [!code highlight]
+# Output:
+# title=None author=None
+# title='' author=None
+# title='The' author=None
+# title='The Name' author=None
+# title='The Name of' author=None
+# title='The Name of the' author=None
+# title='The Name of the Wind' author=None
+# title='The Name of the Wind' author=''
+# title='The Name of the Wind' author='Patrick'
+# title='The Name of the Wind' author='Patrick Roth'
+# title='The Name of the Wind' author='Patrick Rothf'
+# title='The Name of the Wind' author='Patrick Rothfuss'
+```
+
+
+
+Once exhausted, you can access the final, full response model through the `constructed_response_model` property of the structured stream. Note that this will also give you access to the [`._response` property](#accessing-original-call-response) that every `BaseModel` receives.
+
+You can also use the `stream` property to access the `BaseStream` instance and [all of its properties](/docs/mirascope/learn/streams#common-stream-properties-and-methods).
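+
+For example, here is a minimal sketch of accessing the final model after exhausting the stream (the printed output is illustrative):
+
+```python
+from mirascope import llm
+from pydantic import BaseModel
+
+
+class Book(BaseModel):
+    title: str
+    author: str
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", response_model=Book, stream=True)
+def extract_book(text: str) -> str:
+    return f"Extract {text}"
+
+
+book_stream = extract_book("The Name of the Wind by Patrick Rothfuss")
+for partial_book in book_stream:
+    ...  # handle partial models as they stream in
+book = book_stream.constructed_response_model  # the final, fully constructed Book
+print(book)
+# Illustrative output: title='The Name of the Wind' author='Patrick Rothfuss'
+```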
+
+## FromCallArgs
+
+Fields annotated with `FromCallArgs` will be populated with the corresponding argument from the function call rather than expecting it from the LLM's response. This enables seamless validation of LLM outputs against function inputs:
+
+
+
+```python
+from typing import Annotated
+
+from mirascope import llm
+from mirascope.core import FromCallArgs
+from pydantic import BaseModel, model_validator
+from typing_extensions import Self
+
+
+class Book(BaseModel):
+ title: str
+ author: str
+
+
+class Books(BaseModel):
+ texts: Annotated[list[str], FromCallArgs()] # [!code highlight]
+ books: list[Book]
+
+ @model_validator(mode="after")
+ def validate_output_length(self) -> Self:
+ if len(self.texts) != len(self.books):
+ raise ValueError("length mismatch...")
+ return self
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", response_model=Books)
+def extract_books(texts: list[str]) -> str: # [!code highlight]
+ return f"Extract the books from these texts: {texts}"
+
+
+texts = [
+ "The Name of the Wind by Patrick Rothfuss",
+ "Mistborn: The Final Empire by Brandon Sanderson",
+]
+print(extract_books(texts))
+# Output:
+# texts=[
+# 'The Name of the Wind by Patrick Rothfuss',
+# 'Mistborn: The Final Empire by Brandon Sanderson'
+# ]
+# books=[
+# Book(title='The Name of the Wind', author='Patrick Rothfuss'),
+# Book(title='Mistborn: The Final Empire', author='Brandon Sanderson')
+# ]
+```
+
+
+```python
+from typing import Annotated
+
+from mirascope import llm, prompt_template
+from mirascope.core import FromCallArgs
+from pydantic import BaseModel, model_validator
+from typing_extensions import Self
+
+
+class Book(BaseModel):
+ title: str
+ author: str
+
+
+class Books(BaseModel):
+ texts: Annotated[list[str], FromCallArgs()] # [!code highlight]
+ books: list[Book]
+
+ @model_validator(mode="after")
+ def validate_output_length(self) -> Self:
+ if len(self.texts) != len(self.books):
+ raise ValueError("length mismatch...")
+ return self
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", response_model=Books)
+@prompt_template("Extract the books from these texts: {texts}")
+def extract_books(texts: list[str]): ... # [!code highlight]
+
+
+texts = [
+ "The Name of the Wind by Patrick Rothfuss",
+ "Mistborn: The Final Empire by Brandon Sanderson",
+]
+print(extract_books(texts))
+# Output:
+# texts=[
+# 'The Name of the Wind by Patrick Rothfuss',
+# 'Mistborn: The Final Empire by Brandon Sanderson'
+# ]
+# books=[
+# Book(title='The Name of the Wind', author='Patrick Rothfuss'),
+# Book(title='Mistborn: The Final Empire', author='Brandon Sanderson')
+# ]
+```
+
+
+
+## Next Steps
+
+By following these best practices and leveraging Response Models effectively, you can create more robust, type-safe, and maintainable LLM-powered applications with Mirascope.
+
+Next, we recommend taking a look at one of:
+
+- [JSON Mode](/docs/mirascope/learn/json_mode) to see an alternate way to generate structured outputs where using Pydantic to validate outputs is optional.
+- [Evals](/docs/mirascope/learn/evals) to see how to use `response_model` to evaluate your prompts.
\ No newline at end of file
diff --git a/cloud/content/docs/v1/learn/retries.mdx b/cloud/content/docs/v1/learn/retries.mdx
new file mode 100644
index 0000000000..245195b976
--- /dev/null
+++ b/cloud/content/docs/v1/learn/retries.mdx
@@ -0,0 +1,500 @@
+---
+title: Retries
+description: Learn how to implement robust retry mechanisms for LLM API calls using Mirascope and Tenacity to handle rate limits, validation errors, and other failures.
+---
+
+# Retries
+
+Making an API call to a provider can fail due to various reasons, such as rate limits, internal server errors, validation errors, and more. This makes retrying calls extremely important when building robust systems.
+
+Mirascope combined with [Tenacity](https://tenacity.readthedocs.io/en/latest/) increases the chance for these requests to succeed while maintaining end user transparency.
+
+You can install the necessary packages directly or use the `tenacity` extras flag:
+
+```bash
+pip install "mirascope[tenacity]"
+```
+
+## Tenacity `retry` Decorator
+
+### Calls
+
+Let's take a look at a basic Mirascope call that retries with exponential back-off:
+
+
+
+
+```python
+from mirascope import llm
+from tenacity import retry, stop_after_attempt, wait_exponential # [!code highlight]
+
+
+@retry( # [!code highlight]
+ stop=stop_after_attempt(3), # [!code highlight]
+ wait=wait_exponential(multiplier=1, min=4, max=10), # [!code highlight]
+) # [!code highlight]
+@llm.call(provider="$PROVIDER", model="$MODEL")
+def recommend_book(genre: str) -> str:
+ return f"Recommend a {genre} book"
+
+
+print(recommend_book("fantasy"))
+```
+
+
+
+
+```python
+from mirascope import llm, prompt_template
+from tenacity import retry, stop_after_attempt, wait_exponential # [!code highlight]
+
+
+@retry( # [!code highlight]
+ stop=stop_after_attempt(3), # [!code highlight]
+ wait=wait_exponential(multiplier=1, min=4, max=10), # [!code highlight]
+) # [!code highlight]
+@llm.call(provider="$PROVIDER", model="$MODEL")
+@prompt_template("Recommend a {genre} book")
+def recommend_book(genre: str): ...
+
+
+print(recommend_book("fantasy"))
+```
+
+
+
+
+Ideally the call to `recommend_book` succeeds on the first attempt, but if it fails, the call will now be retried after the exponential backoff wait.
+
+If all 3 attempts fail, the call raises a `RetryError`, which you should catch and handle.
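+
+For example, a minimal sketch (reusing the `recommend_book` function defined above) of catching that error:
+
+```python
+from tenacity import RetryError
+
+try:
+    print(recommend_book("fantasy"))
+except RetryError as e:
+    # All retry attempts failed; log, surface a fallback message, or re-raise
+    print(f"Call failed after 3 attempts: {e}")
+```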
+
+### Streams
+
+When streaming, the generator is not actually run until you start iterating. This means the initial API call may succeed while the request still fails partway through iterating over the stream.
+
+Instead, you need to wrap your call and add retries to this wrapper:
+
+
+
+
+```python
+from mirascope import llm
+from tenacity import retry, stop_after_attempt, wait_exponential # [!code highlight]
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", stream=True)
+def recommend_book(genre: str) -> str:
+ return f"Recommend a {genre} book"
+
+
+@retry( # [!code highlight]
+ stop=stop_after_attempt(3), # [!code highlight]
+ wait=wait_exponential(multiplier=1, min=4, max=10), # [!code highlight]
+) # [!code highlight]
+def stream():
+ for chunk, _ in recommend_book("fantasy"):
+ print(chunk.content, end="", flush=True)
+
+
+stream()
+```
+
+
+
+
+```python
+from mirascope import llm, prompt_template
+from tenacity import retry, stop_after_attempt, wait_exponential
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", stream=True)
+@prompt_template("Recommend a {genre} book")
+def recommend_book(genre: str): ...
+
+
+@retry( # [!code highlight]
+ stop=stop_after_attempt(3), # [!code highlight]
+ wait=wait_exponential(multiplier=1, min=4, max=10), # [!code highlight]
+) # [!code highlight]
+def stream():
+ for chunk, _ in recommend_book("fantasy"):
+ print(chunk.content, end="", flush=True)
+
+
+stream()
+```
+
+
+
+
+### Tools
+
+When using tools, a `ValidationError` won't be raised until you attempt to construct the tool (either when calling `response.tools` or iterating through a stream with tools).
+
+You need to handle retries in this case the same way as streams:
+
+
+
+
+```python
+from mirascope import llm
+from tenacity import retry, stop_after_attempt, wait_exponential
+
+
+def get_book_author(title: str) -> str:
+ if title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[get_book_author])
+def identify_author(book: str) -> str:
+ return f"Who wrote {book}?"
+
+
+@retry( # [!code highlight]
+ stop=stop_after_attempt(3), # [!code highlight]
+ wait=wait_exponential(multiplier=1, min=4, max=10), # [!code highlight]
+) # [!code highlight]
+def run():
+ response = identify_author("The Name of the Wind")
+ if tool := response.tool:
+ print(tool.call())
+ print(f"Original tool call: {tool.tool_call}")
+ else:
+ print(response.content)
+
+
+run()
+```
+
+
+
+
+```python
+from mirascope import llm, prompt_template
+from tenacity import retry, stop_after_attempt, wait_exponential
+
+
+def get_book_author(title: str) -> str:
+ if title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[get_book_author])
+@prompt_template("Who wrote {book}?")
+def identify_author(book: str): ...
+
+
+@retry( # [!code highlight]
+ stop=stop_after_attempt(3), # [!code highlight]
+ wait=wait_exponential(multiplier=1, min=4, max=10), # [!code highlight]
+) # [!code highlight]
+def run():
+ response = identify_author("The Name of the Wind")
+ if tool := response.tool:
+ print(tool.call())
+ print(f"Original tool call: {tool.tool_call}")
+ else:
+ print(response.content)
+
+
+run()
+```
+
+
+
+
+### Error Reinsertion
+
+Every example above simply retries after a failed attempt without making any updates to the call. This approach can be sufficient for some use-cases where we can safely expect the call to succeed on subsequent attempts (e.g. rate limits).
+
+However, there are some cases where the LLM is likely to repeat the same mistake. For example, when using tools or response models, the LLM may return incorrect or missing arguments and will likely do so again on subsequent calls. In these cases, it's important to update subsequent calls based on the resulting errors to improve the chance of success on the next attempt.
+
+To make it easier to make such updates, Mirascope provides a `collect_errors` handler that can collect any errors of your choice and insert them into subsequent calls through an `errors` keyword argument.
+
+
+
+
+```python
+from typing import Annotated
+
+from mirascope import llm
+from mirascope.retries.tenacity import collect_errors # [!code highlight]
+from pydantic import AfterValidator, ValidationError
+from tenacity import retry, stop_after_attempt
+
+
+def is_upper(v: str) -> str:
+ assert v.isupper(), "Must be uppercase"
+ return v
+
+
+@retry(stop=stop_after_attempt(3), after=collect_errors(ValidationError)) # [!code highlight]
+@llm.call(
+ provider="$PROVIDER",
+ model="$MODEL",
+ response_model=Annotated[str, AfterValidator(is_upper)], # pyright: ignore [reportArgumentType, reportCallIssue]
+)
+def identify_author(book: str, *, errors: list[ValidationError] | None = None) -> str: # [!code highlight]
+    if errors:
+        previous_errors = f"Previous Errors: {errors}"
+        print(previous_errors)
+        return f"{previous_errors}\n\nWho wrote {book}?"
+ return f"Who wrote {book}?"
+
+
+author = identify_author("The Name of the Wind")
+print(author)
+# Previous Errors: [1 validation error for str
+# value
+# Assertion failed, Must be uppercase [type=assertion_error, input_value='Patrick Rothfuss', input_type=str]
+# For further information visit https://errors.pydantic.dev/2.7/v/assertion_error]
+# PATRICK ROTHFUSS
+```
+
+
+
+
+```python
+from typing import Annotated
+
+from mirascope import BaseDynamicConfig, llm, prompt_template
+from mirascope.retries.tenacity import collect_errors # [!code highlight]
+from pydantic import AfterValidator, ValidationError
+from tenacity import retry, stop_after_attempt
+
+
+def is_upper(v: str) -> str:
+ assert v.isupper(), "Must be uppercase"
+ return v
+
+
+@retry(stop=stop_after_attempt(3), after=collect_errors(ValidationError)) # [!code highlight]
+@llm.call(
+ provider="$PROVIDER",
+ model="$MODEL",
+ response_model=Annotated[str, AfterValidator(is_upper)], # pyright: ignore [reportArgumentType, reportCallIssue]
+)
+@prompt_template(
+ """
+ {previous_errors}
+
+ Who wrote {book}?
+ """
+)
+def identify_author(
+ book: str, *, errors: list[ValidationError] | None = None # [!code highlight]
+) -> BaseDynamicConfig:
+ previous_errors = None
+ if errors:
+ previous_errors = f"Previous Errors: {errors}"
+ print(previous_errors)
+ return {"computed_fields": {"previous_errors": previous_errors}}
+
+
+author = identify_author("The Name of the Wind")
+print(author)
+# Previous Errors: [1 validation error for str
+# value
+# Assertion failed, Must be uppercase [type=assertion_error, input_value='Patrick Rothfuss', input_type=str]
+# For further information visit https://errors.pydantic.dev/2.7/v/assertion_error]
+# PATRICK ROTHFUSS
+```
+
+
+
+
+In this example the first attempt fails because the identified author is not all uppercase. The `ValidationError` is then reinserted into the subsequent call, which enables the model to learn from its mistake and correct it.
+
+Of course, we could always engineer a better prompt (e.g. ask for all caps), but even prompt engineering does not guarantee perfect results. The purpose of this example is to demonstrate the power of a feedback loop by reinserting errors to build more robust systems.
+
+## Fallback
+
+When using the provider-agnostic `llm.call` decorator, you can use the `fallback` decorator to automatically catch certain errors and use a backup provider/model to attempt the call again.
+
+For example, we may want to attempt the call with Anthropic in the event that we get a `RateLimitError` from OpenAI:
+
+
+
+
+```python
+from anthropic import RateLimitError as AnthropicRateLimitError
+from mirascope import llm
+from mirascope.retries import FallbackError, fallback
+from openai import RateLimitError as OpenAIRateLimitError
+
+
+@fallback( # [!code highlight]
+ OpenAIRateLimitError, # [!code highlight]
+ [ # [!code highlight]
+ { # [!code highlight]
+ "catch": AnthropicRateLimitError, # [!code highlight]
+ "provider": "anthropic", # [!code highlight]
+ "model": "claude-3-5-sonnet-latest", # [!code highlight]
+ } # [!code highlight]
+ ], # [!code highlight]
+) # [!code highlight]
+@llm.call("openai", "gpt-4o-mini")
+def answer_question(question: str) -> str:
+ return f"Answer this question: {question}"
+
+
+try:
+ response = answer_question("What is the meaning of life?")
+ if caught := getattr(response, "_caught", None): # [!code highlight]
+ print(f"Exception caught: {caught}")
+ print("### Response ###")
+ print(response.content)
+except FallbackError as e: # [!code highlight]
+ print(e)
+```
+
+
+
+
+```python
+from anthropic import RateLimitError as AnthropicRateLimitError
+from mirascope import llm, prompt_template
+from mirascope.retries import FallbackError, fallback
+from openai import RateLimitError as OpenAIRateLimitError
+
+
+@fallback( # [!code highlight]
+ OpenAIRateLimitError, # [!code highlight]
+ [ # [!code highlight]
+ { # [!code highlight]
+ "catch": AnthropicRateLimitError, # [!code highlight]
+ "provider": "anthropic", # [!code highlight]
+ "model": "claude-3-5-sonnet-latest", # [!code highlight]
+ } # [!code highlight]
+ ], # [!code highlight]
+) # [!code highlight]
+@llm.call("openai", "gpt-4o-mini")
+@prompt_template("Answer this question: {question}")
+def answer_question(question: str): ...
+
+
+try:
+ response = answer_question("What is the meaning of life?")
+ if caught := getattr(response, "_caught", None): # [!code highlight]
+ print(f"Exception caught: {caught}")
+ print("### Response ###")
+ print(response.content)
+except FallbackError as e: # [!code highlight]
+ print(e)
+```
+
+
+
+
+Here, we first attempt to call OpenAI (the default setting). If we catch the `OpenAIRateLimitError`, then we'll attempt to call Anthropic. If we catch the `AnthropicRateLimitError`, then we'll receive a `FallbackError` since all attempts failed.
+
+You can provide a single exception type or a tuple of exception types to catch, and you can stack the `fallback` decorator to handle different errors differently if desired.
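+
+For instance, here is a rough sketch of catching either of two OpenAI errors with a single tuple (this assumes `APITimeoutError` from the `openai` package alongside `RateLimitError`):
+
+```python
+from anthropic import RateLimitError as AnthropicRateLimitError
+from mirascope import llm
+from mirascope.retries import fallback
+from openai import APITimeoutError, RateLimitError as OpenAIRateLimitError
+
+
+@fallback(
+    (OpenAIRateLimitError, APITimeoutError),  # catch either error from the primary call
+    [
+        {
+            "catch": AnthropicRateLimitError,
+            "provider": "anthropic",
+            "model": "claude-3-5-sonnet-latest",
+        }
+    ],
+)
+@llm.call("openai", "gpt-4o-mini")
+def answer_question(question: str) -> str:
+    return f"Answer this question: {question}"
+```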
+
+### Fallback With Retries
+
+The decorator also works well with Tenacity's `retry` decorator. For example, we may want to first attempt to call OpenAI multiple times with exponential backoff, but if we fail 3 times fall back to Anthropic, which we'll also attempt to call 3 times:
+
+
+
+
+```python
+from anthropic import RateLimitError as AnthropicRateLimitError
+from mirascope import llm
+from mirascope.retries import FallbackError, fallback
+from openai import RateLimitError as OpenAIRateLimitError
+from tenacity import (
+ RetryError,
+ retry,
+ retry_if_exception_type,
+ stop_after_attempt,
+ wait_exponential,
+)
+
+
+@fallback( # [!code highlight]
+ RetryError, # [!code highlight]
+ [ # [!code highlight]
+ { # [!code highlight]
+ "catch": RetryError, # [!code highlight]
+ "provider": "anthropic", # [!code highlight]
+ "model": "claude-3-5-sonnet-latest", # [!code highlight]
+ } # [!code highlight]
+ ], # [!code highlight]
+) # [!code highlight]
+@retry( # [!code highlight]
+ retry=retry_if_exception_type((OpenAIRateLimitError, AnthropicRateLimitError)), # [!code highlight]
+ stop=stop_after_attempt(3), # [!code highlight]
+ wait=wait_exponential(multiplier=1, min=4, max=10), # [!code highlight]
+) # [!code highlight]
+@llm.call(provider="openai", model="gpt-4o-mini")
+def answer_question(question: str) -> str:
+ return f"Answer this question: {question}"
+
+
+try:
+ response = answer_question("What is the meaning of life?")
+ if caught := getattr(response, "_caught", None):
+ print(f"Exception caught: {caught}")
+ print("### Response ###")
+ print(response.content)
+except FallbackError as e:
+ print(e)
+```
+
+
+
+
+```python
+from anthropic import RateLimitError as AnthropicRateLimitError
+from mirascope import llm, prompt_template
+from mirascope.retries import FallbackError, fallback
+from openai import RateLimitError as OpenAIRateLimitError
+from tenacity import (
+ RetryError,
+ retry,
+ retry_if_exception_type,
+ stop_after_attempt,
+ wait_exponential,
+)
+
+
+@fallback( # [!code highlight]
+ RetryError, # [!code highlight]
+ [ # [!code highlight]
+ { # [!code highlight]
+ "catch": RetryError, # [!code highlight]
+ "provider": "anthropic", # [!code highlight]
+ "model": "claude-3-5-sonnet-latest", # [!code highlight]
+ } # [!code highlight]
+ ], # [!code highlight]
+) # [!code highlight]
+@retry( # [!code highlight]
+ retry=retry_if_exception_type((OpenAIRateLimitError, AnthropicRateLimitError)), # [!code highlight]
+ stop=stop_after_attempt(3), # [!code highlight]
+ wait=wait_exponential(multiplier=1, min=4, max=10), # [!code highlight]
+) # [!code highlight]
+@llm.call(provider="openai", model="gpt-4o-mini")
+@prompt_template("Answer this question: {question}")
+def answer_question(question: str): ...
+
+
+try:
+ response = answer_question("What is the meaning of life?")
+ if caught := getattr(response, "_caught", None):
+ print(f"Exception caught: {caught}")
+ print("### Response ###")
+ print(response.content)
+except FallbackError as e:
+ print(e)
+```
+
+
\ No newline at end of file
diff --git a/cloud/content/docs/v1/learn/streams.mdx b/cloud/content/docs/v1/learn/streams.mdx
new file mode 100644
index 0000000000..e5eee70b6e
--- /dev/null
+++ b/cloud/content/docs/v1/learn/streams.mdx
@@ -0,0 +1,322 @@
+---
+title: Streams
+description: Learn how to process LLM responses in real-time as they are generated using Mirascope's streaming capabilities for more interactive and responsive applications.
+---
+
+# Streams
+
+
+ If you haven't already, we recommend first reading the section on [Calls](/docs/mirascope/learn/calls)
+
+
+Streaming is a powerful feature when using LLMs that allows you to process chunks of an LLM response in real-time as they are generated. This can be particularly useful for long-running tasks, providing immediate feedback to users, or implementing more responsive applications.
+
+
+ ```mermaid
+ sequenceDiagram
+ participant User
+ participant App
+ participant LLM
+
+ User->>App: Request
+ App->>LLM: Query
+ Note right of LLM: Standard Response
+ LLM-->>App: Complete Response
+ App-->>User: Display Result
+
+ User->>App: Request
+ App->>LLM: Query (Stream)
+ Note right of LLM: Streaming Response
+ loop For each chunk
+ LLM-->>App: Response Chunk
+ App-->>User: Display Chunk
+ end
+ ```
+
+
+This approach offers several benefits:
+
+1. **Immediate feedback**: Users can see responses as they're being generated, creating a more interactive experience.
+2. **Reduced latency**: For long responses, users don't have to wait for the entire generation to complete before seeing results.
+3. **Incremental processing**: Applications can process and act on partial results as they arrive.
+4. **Efficient resource use**: Memory usage can be optimized by processing chunks instead of storing the entire response.
+5. **Early termination**: If the desired information is found early in the response, processing can be stopped without waiting for the full generation.
+
+
+ [`mirascope.core.base.stream`](/docs/mirascope/api/core/base/stream)
+
+
+## Basic Usage and Syntax
+
+To use streaming, simply set the `stream` parameter to `True` in your [`call`](/docs/mirascope/learn/calls) decorator:
+
+
+
+```python
+from mirascope import llm
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", stream=True) # [!code highlight]
+def recommend_book(genre: str) -> str:
+ return f"Recommend a {genre} book"
+
+
+stream = recommend_book("fantasy") # [!code highlight]
+for chunk, _ in stream: # [!code highlight]
+ print(chunk.content, end="", flush=True) # [!code highlight]
+```
+
+
+```python
+from mirascope import llm, prompt_template
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", stream=True) # [!code highlight]
+@prompt_template("Recommend a {genre} book")
+def recommend_book(genre: str): ...
+
+
+stream = recommend_book("fantasy") # [!code highlight]
+for chunk, _ in stream: # [!code highlight]
+ print(chunk.content, end="", flush=True) # [!code highlight]
+
+
+
+In this example:
+
+1. We use the `call` decorator with `stream=True` to enable streaming.
+2. The `recommend_book` function now returns a generator that yields `(chunk, tool)` tuples of the response.
+3. We iterate over the chunks, printing each one as it's received.
+4. We use `end=""` and `flush=True` parameters in the print function to ensure that the output is displayed in real-time without line breaks.
+
+## Handling Streamed Responses
+
+
+ [`mirascope.core.base.call_response_chunk`](/docs/mirascope/api/core/base/call_response_chunk)
+
+
+When streaming, the initial response will be a provider-specific [`BaseStream`](/docs/mirascope/api) instance (e.g. `OpenAIStream`), which is a generator that yields tuples `(chunk, tool)` where `chunk` is a provider-specific [`BaseCallResponseChunk`](/docs/mirascope/api) (e.g. `OpenAICallResponseChunk`) that wraps the original chunk in the provider's response. These objects provide a consistent interface across providers while still allowing access to provider-specific details.
+
+
+ You'll notice in the above example that we ignore the `tool` in each tuple. If no tools are set in the call, then `tool` will always be `None` and can be safely ignored. For more details, check out the documentation on [streaming tools](/docs/mirascope/learn/tools#streaming-tools)
+
+
+### Common Chunk Properties and Methods
+
+All `BaseCallResponseChunk` objects share these common properties:
+
+- `content`: The main text content of the response. If no content is present, this will be the empty string.
+- `finish_reasons`: A list of reasons why the generation finished (e.g., "stop", "length"). These will be typed specifically for the provider used. If no finish reasons are present, this will be `None`.
+- `model`: The name of the model used for generation.
+- `id`: A unique identifier for the response if available. Otherwise this will be `None`.
+- `usage`: Information about token usage for the call if available. Otherwise this will be `None`.
+- `input_tokens`: The number of input tokens used if available. Otherwise this will be `None`.
+- `output_tokens`: The number of output tokens generated if available. Otherwise this will be `None`.
+
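+For example, here is a minimal sketch (reusing the streaming `recommend_book` from above) that inspects a few of these properties on each chunk; depending on the provider, some of them may be `None`:
+
+```python
+stream = recommend_book("fantasy")
+for chunk, _ in stream:
+    print(chunk.content, end="", flush=True)
+    if chunk.finish_reasons:  # typically only populated on the final chunk(s)
+        print(f"\nFinish reasons: {chunk.finish_reasons}")
+    if chunk.usage:  # may be None for providers that don't report per-chunk usage
+        print(f"Usage: {chunk.usage}")
+```
+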
+### Common Stream Properties and Methods
+
+
+ To access these properties, you must first exhaust the stream by iterating through it.
+
+
+Once exhausted, all `BaseStream` objects share the [same common properties and methods as `BaseCallResponse`](/docs/mirascope/learn/calls#common-response-properties-and-methods), except for `usage`, `tools`, `tool`, and `__str__`.
+
+
+
+```python
+from mirascope import llm
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", stream=True)
+def recommend_book(genre: str) -> str:
+ return f"Recommend a {genre} book"
+
+
+stream = recommend_book("fantasy")
+for chunk, _ in stream:
+ print(chunk.content, end="", flush=True)
+
+print(f"Content: {stream.content}") # [!code highlight]
+```
+
+
+```python
+from mirascope import llm, prompt_template
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", stream=True)
+@prompt_template("Recommend a {genre} book")
+def recommend_book(genre: str): ...
+
+
+stream = recommend_book("fantasy")
+for chunk, _ in stream:
+ print(chunk.content, end="", flush=True)
+
+print(f"Content: {stream.content}") # [!code highlight]
+```
+
+
+
+You can access the additional missing properties by using the method `construct_call_response` to reconstruct a provider-specific `BaseCallResponse` instance:
+
+
+
+```python
+from mirascope import llm
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", stream=True)
+def recommend_book(genre: str) -> str:
+ return f"Recommend a {genre} book"
+
+
+stream = recommend_book("fantasy")
+for chunk, _ in stream:
+ print(chunk.content, end="", flush=True)
+
+print(f"Content: {stream.content}")
+
+call_response = stream.construct_call_response() # [!code highlight]
+print(f"Usage: {call_response.usage}") # [!code highlight]
+```
+
+
+```python
+from mirascope import llm, prompt_template
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", stream=True)
+@prompt_template("Recommend a {genre} book")
+def recommend_book(genre: str): ...
+
+
+stream = recommend_book("fantasy")
+for chunk, _ in stream:
+ print(chunk.content, end="", flush=True)
+
+print(f"Content: {stream.content}")
+
+call_response = stream.construct_call_response() # [!code highlight]
+print(f"Usage: {call_response.usage}") # [!code highlight]
+```
+
+
+
+
+ While we try our best to reconstruct the `BaseCallResponse` instance from the stream, there's always a chance that some information present in a standard call might be missing from the stream.
+
+
+### Provider-Specific Response Details
+
+While Mirascope provides a consistent interface, you can always access the full, provider-specific response object if needed. This is available through the `chunk` property of the `BaseCallResponseChunk` object:
+
+
+
+```python
+from mirascope import llm
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", stream=True)
+def recommend_book(genre: str) -> str:
+ return f"Recommend a {genre} book"
+
+
+stream = recommend_book("fantasy")
+for chunk, _ in stream:
+ print(f"Original chunk: {chunk.chunk}") # [!code highlight]
+ print(chunk.content, end="", flush=True)
+```
+
+
+```python
+from mirascope import llm, prompt_template
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", stream=True)
+@prompt_template("Recommend a {genre} book")
+def recommend_book(genre: str): ...
+
+
+stream = recommend_book("fantasy")
+for chunk, _ in stream:
+ print(f"Original chunk: {chunk.chunk}") # [!code highlight]
+ print(chunk.content, end="", flush=True)
+```
+
+
+
+
+ The reason that we have provider-specific response objects (e.g. `OpenAICallResponseChunk`) is to provide proper type hints and safety when accessing the original response chunk.
+
+
+## Multi-Modal Outputs
+
+While most LLM providers focus on text streaming, some providers support streaming additional output modalities like audio. The availability of multi-modal streaming varies among providers:
+
+| Provider | Text | Audio | Image |
+|---------------|:------:|:-------:|:-------:|
+| OpenAI | ✓ | ✓ | — |
+| Anthropic | ✓ | — | — |
+| Mistral | ✓ | — | — |
+| Google Gemini | ✓ | — | — |
+| Groq | ✓ | — | — |
+| Cohere | ✓ | — | — |
+| LiteLLM | ✓ | — | — |
+| Azure AI | ✓ | — | — |
+
+*Legend: ✓ (Supported), — (Not Supported)*
+
+
+### Audio Streaming
+
+For providers that support audio outputs, you can stream both text and audio responses simultaneously:
+
+
+
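+As a rough sketch with OpenAI (currently the only provider listed above with audio support), this might look like the following, assuming the audio-capable `gpt-4o-audio-preview` model and its `audio`/`modalities` parameters passed through `call_params` (verify the exact options against OpenAI's documentation):
+
+```python
+import io
+
+from mirascope.core import openai
+from pydub import AudioSegment
+from pydub.playback import play
+
+
+@openai.call(
+    "gpt-4o-audio-preview",
+    call_params={
+        "audio": {"voice": "alloy", "format": "pcm16"},  # assumed OpenAI audio options
+        "modalities": ["text", "audio"],
+    },
+    stream=True,
+)
+def recommend_book(genre: str) -> str:
+    return f"Recommend a {genre} book"
+
+
+audio_bytes = b""
+transcript = ""
+for chunk, _ in recommend_book("fantasy"):
+    if chunk.audio:  # raw audio data for this chunk
+        audio_bytes += chunk.audio
+    if chunk.audio_transcript:  # transcript text for this chunk
+        transcript += chunk.audio_transcript
+
+print(transcript)
+
+# Assemble and play the raw PCM16 audio (OpenAI currently returns 24kHz mono)
+audio_segment = AudioSegment.from_raw(
+    io.BytesIO(audio_bytes), sample_width=2, frame_rate=24000, channels=1
+)
+play(audio_segment)
+```
+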
+Each stream chunk provides access to:
+
+- `chunk.audio`: Raw audio data in bytes format
+- `chunk.audio_transcript`: The transcript of the audio
+
+This allows you to process both text and audio streams concurrently. Since audio data is received in chunks, you could technically begin playback before receiving the complete response.
+
+
+ The example above uses `pydub` and `ffmpeg` for audio playback, but you can use any audio processing libraries or media players that can handle WAV format audio data. Choose the tools that best fit your needs and environment.
+
+ If you decide to use pydub:
+ - Install [pydub](https://github.com/jiaaro/pydub): `pip install pydub`
+ - Install ffmpeg: Available from [ffmpeg.org](https://www.ffmpeg.org/) or through system package managers
+
+
+
+ For providers that support audio outputs, refer to their documentation for available voice options and configurations:
+
+ - OpenAI: [Text to Speech Guide](https://platform.openai.com/docs/guides/text-to-speech)
+
+
+## Error Handling
+
+Error handling in streams is similar to standard non-streaming calls. However, it's important to note that errors may occur during iteration rather than at the initial function call:
+
+
+
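+For example, here is a minimal sketch (reusing the streaming `recommend_book` from above) that wraps the iteration in a try/except and catches a generic exception:
+
+```python
+stream = recommend_book("fantasy")
+try:
+    for chunk, _ in stream:
+        print(chunk.content, end="", flush=True)
+except Exception as e:  # substitute a provider-specific error type as needed
+    print(f"Error during streaming: {e}")
+```
+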
+Provider-specific error types (e.g. `OpenAIError` when using OpenAI) give you finer-grained handling, though you can also catch generic exceptions as shown above.
+
+Note how we wrap the iteration loop in a try/except block to catch any errors that might occur during streaming.
+
+
+ The initial response when calling an LLM function with `stream=True` will return a generator. Any errors that may occur during streaming will not happen until you actually iterate through the generator. This is why we wrap the generation loop in the try/except block and not just the call to `recommend_book`.
+
+
+## Next Steps
+
+By leveraging streaming effectively, you can create more responsive and efficient LLM-powered applications with Mirascope's streaming capabilities.
+
+Next, we recommend taking a look at the [Chaining](/docs/mirascope/learn/chaining) documentation, which shows you how to break tasks down into smaller, more directed calls and chain them together.
\ No newline at end of file
diff --git a/cloud/content/docs/v1/learn/tools.mdx b/cloud/content/docs/v1/learn/tools.mdx
new file mode 100644
index 0000000000..c285f4e36f
--- /dev/null
+++ b/cloud/content/docs/v1/learn/tools.mdx
@@ -0,0 +1,1754 @@
+---
+title: Tools
+description: Learn how to define, use, and chain together LLM-powered tools in Mirascope to extend model capabilities with external functions, data sources, and system interactions.
+---
+
+# Tools
+
+
+ If you haven't already, we recommend first reading the section on [Calls](/docs/mirascope/learn/calls)
+
+
+Tools are user-defined functions that an LLM (Large Language Model) can ask the user to invoke on its behalf. This greatly enhances the capabilities of LLMs by enabling them to perform specific tasks, access external data, interact with other systems, and more.
+
+Mirascope enables defining tools in a provider-agnostic way, so the same tool can be used across all supported LLM providers without modification.
+
+
+ When an LLM decides to use a tool, it indicates the tool name and argument values in its response. It's important to note that the LLM doesn't actually execute the function; instead, you are responsible for calling the tool and (optionally) providing the output back to the LLM in a subsequent interaction. For more details on such iterative tool-use flows, check out the [Tool Message Parameters](#tool-message-parameters) section below as well as the section on [Agents](/docs/mirascope/learn/agents).
+
+ ```mermaid
+ sequenceDiagram
+ participant YC as Your Code
+ participant LLM
+
+ YC->>LLM: Call with prompt and function definitions
+ loop Tool Calls
+ LLM->>LLM: Decide to respond or call functions
+ LLM->>YC: Respond with function to call and arguments
+ YC->>YC: Execute function with given arguments
+ YC->>LLM: Call with prompt and function result
+ end
+ LLM->>YC: Final response
+ ```
+
+
+## Basic Usage and Syntax
+
+
+ [`mirascope.llm.tool`](/docs/mirascope/api)
+
+
+There are two ways of defining tools in Mirascope: `BaseTool` and functions.
+
+You can consider the functional definitions a shorthand form of writing the `BaseTool` version of the same tool. Under the hood, tools defined as functions will get converted automatically into their corresponding `BaseTool`.
+
+Let's take a look at a basic example of each:
+
+
+
+
+
+
+```python
+from mirascope import BaseTool, llm
+from pydantic import Field
+
+# [!code highlight:13]
+class GetBookAuthor(BaseTool):
+ """Returns the author of the book with the given title."""
+
+ title: str = Field(..., description="The title of the book.")
+
+ def call(self) -> str:
+ if self.title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif self.title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[GetBookAuthor]) # [!code highlight]
+def identify_author(book: str) -> str:
+ return f"Who wrote {book}?"
+
+
+response = identify_author("The Name of the Wind")
+if tool := response.tool: # [!code highlight]
+ print(tool.call()) # [!code highlight]
+ # Output: Patrick Rothfuss # [!code highlight]
+else:
+ print(response.content)
+```
+
+
+```python
+from mirascope import BaseTool, llm, prompt_template
+from pydantic import Field
+
+# [!code highlight:13]
+class GetBookAuthor(BaseTool):
+ """Returns the author of the book with the given title."""
+
+ title: str = Field(..., description="The title of the book.")
+
+ def call(self) -> str:
+ if self.title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif self.title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[GetBookAuthor]) # [!code highlight]
+@prompt_template("Who wrote {book}?")
+def identify_author(book: str): ...
+
+
+response = identify_author("The Name of the Wind")
+if tool := response.tool: # [!code highlight]
+ print(tool.call()) # [!code highlight]
+ # Output: Patrick Rothfuss # [!code highlight]
+else:
+ print(response.content)
+```
+
+
+
+
+
+
+```python
+from mirascope import llm
+
+# [!code highlight:13]
+def get_book_author(title: str) -> str:
+ """Returns the author of the book with the given title
+
+ Args:
+ title: The title of the book.
+ """
+ if title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[get_book_author]) # [!code highlight]
+def identify_author(book: str) -> str:
+ return f"Who wrote {book}?"
+
+
+response = identify_author("The Name of the Wind")
+if tool := response.tool: # [!code highlight]
+ print(tool.call()) # [!code highlight]
+ # Output: Patrick Rothfuss # [!code highlight]
+else:
+ print(response.content)
+```
+
+
+```python
+from mirascope import llm, prompt_template
+
+# [!code highlight:13]
+def get_book_author(title: str) -> str:
+ """Returns the author of the book with the given title
+
+ Args:
+ title: The title of the book.
+ """
+ if title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[get_book_author]) # [!code highlight]
+@prompt_template("Who wrote {book}?")
+def identify_author(book: str): ...
+
+
+response = identify_author("The Name of the Wind")
+if tool := response.tool: # [!code highlight]
+ print(tool.call()) # [!code highlight]
+ # Output: Patrick Rothfuss # [!code highlight]
+else:
+ print(response.content)
+```
+
+
+
+
+
+
+
+
+
+In this example we:
+
+1. Define the `GetBookAuthor`/`get_book_author` tool (a dummy method for the example)
+2. Set the `tools` argument in the `call` decorator to give the LLM access to the tool.
+3. We call `identify_author`, which automatically generates the corresponding provider-specific tool schema under the hood.
+4. Check if the response from `identify_author` contains a tool, which is the `BaseTool` instance constructed from the underlying tool call
+ - If yes, we call the constructed tool's `call` method and print its output. This calls the tool with the arguments provided by the LLM.
+ - If no, we print the content of the response (assuming no tool was called).
+
+The core idea to understand here is that the LLM is asking us to call the tool on its behalf with arguments that it has provided. In the above example, the LLM chooses to call the tool to get the author rather than relying on its world knowledge.
+
+This is particularly important for building applications with access to live information and external systems.
+
+For the purposes of this example we are showing just a single tool call. Generally, you would then give the tool call's output back to the LLM and make another call so the LLM can generate a response based on the output of the tool. We cover this in more detail in the section on [Agents](/docs/mirascope/learn/agents)
+
+### Accessing Original Tool Call
+
+The `BaseTool` instances have a `tool_call` property for accessing the original LLM tool call.
+
+
+
+
+
+```python
+from mirascope import BaseTool, llm
+from pydantic import Field
+
+
+class GetBookAuthor(BaseTool):
+ """Returns the author of the book with the given title."""
+
+ title: str = Field(..., description="The title of the book.")
+
+ def call(self) -> str:
+ if self.title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif self.title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[GetBookAuthor])
+def identify_author(book: str) -> str:
+ return f"Who wrote {book}?"
+
+
+response = identify_author("The Name of the Wind")
+if tool := response.tool:
+ print(tool.call())
+ # Output: Patrick Rothfuss
+ print(f"Original tool call: {tool.tool_call}") # [!code highlight]
+else:
+ print(response.content)
+```
+
+
+```python
+from mirascope import BaseTool, llm, prompt_template
+from pydantic import Field
+
+
+class GetBookAuthor(BaseTool):
+ """Returns the author of the book with the given title."""
+
+ title: str = Field(..., description="The title of the book.")
+
+ def call(self) -> str:
+ if self.title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif self.title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[GetBookAuthor])
+@prompt_template("Who wrote {book}?")
+def identify_author(book: str): ...
+
+
+response = identify_author("The Name of the Wind")
+if tool := response.tool:
+ print(tool.call())
+ # Output: Patrick Rothfuss
+ print(f"Original tool call: {tool.tool_call}") # [!code highlight]
+else:
+ print(response.content)
+```
+
+
+
+
+
+
+```python
+from mirascope import llm
+
+
+def get_book_author(title: str) -> str:
+ """Returns the author of the book with the given title
+
+ Args:
+ title: The title of the book.
+ """
+ if title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[get_book_author])
+def identify_author(book: str) -> str:
+ return f"Who wrote {book}?"
+
+
+response = identify_author("The Name of the Wind")
+if tool := response.tool:
+ print(tool.call())
+ # Output: Patrick Rothfuss
+ print(f"Original tool call: {tool.tool_call}") # [!code highlight]
+else:
+ print(response.content)
+```
+
+
+```python
+from mirascope import llm, prompt_template
+
+
+def get_book_author(title: str) -> str:
+ """Returns the author of the book with the given title
+
+ Args:
+ title: The title of the book.
+ """
+ if title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[get_book_author])
+@prompt_template("Who wrote {book}?")
+def identify_author(book: str): ...
+
+
+response = identify_author("The Name of the Wind")
+if tool := response.tool:
+ print(tool.call())
+ # Output: Patrick Rothfuss
+ print(f"Original tool call: {tool.tool_call}") # [!code highlight]
+else:
+ print(response.content)
+```
+
+
+
+
+
+## Supported Field Types
+
+While Mirascope provides a consistent interface, type support varies among providers:
+
+| Type | OpenAI | Anthropic | Google | Groq | xAI | Mistral | Cohere |
+|---------------|:--------:|:-----------:|:--------:|:------:|:-----:|:---------:|:--------:|
+| str | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| int | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| float | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| bool | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| bytes | ✓ | ✓ | — | ✓ | ✓ | ✓ | ✓ |
+| list | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| set | ✓ | ✓ | — | ✓ | ✓ | ✓ | ✓ |
+| tuple | — | ✓ | — | ✓ | — | ✓ | ✓ |
+| dict | — | ✓ | ✓ | ✓ | — | ✓ | ✓ |
+| Literal/Enum | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| BaseModel | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | — |
+| Nested ($def) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | — |
+
+*Legend: ✓ (Supported), — (Not Supported)*
+
+Consider provider-specific capabilities when working with advanced type structures. Even for supported types, LLM outputs may sometimes be incorrect or of the wrong type. In such cases, prompt engineering or error handling (like [retries](/docs/mirascope/learn/retries) and [reinserting validation errors](/docs/mirascope/learn/retries#error-reinsertion)) may be necessary.
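+
+As an illustrative sketch, a functional tool using a few of these richer types might look like the following (a hypothetical `format_reading_list` helper, not part of Mirascope):
+
+```python
+from typing import Literal
+
+
+def format_reading_list(
+    titles: list[str],
+    genre: Literal["fantasy", "science fiction", "mystery"],
+    max_books: int = 3,
+) -> str:
+    """Returns a formatted reading list for the given genre.
+
+    Args:
+        titles: The book titles to include.
+        genre: The genre the list is for.
+        max_books: The maximum number of titles to include.
+    """
+    return f"{genre.title()} picks: " + ", ".join(titles[:max_books])
+```
+
+You would pass it to a call like any other tool (e.g. `tools=[format_reading_list]`), keeping the table above in mind for providers that don't support a given type.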
+
+## Parallel Tool Calls
+
+In certain cases the LLM will ask to call multiple tools in the same response. Mirascope makes calling all such tools simple:
+
+
+
+
+
+```python
+from mirascope import BaseTool, llm
+from pydantic import Field
+
+
+class GetBookAuthor(BaseTool):
+ """Returns the author of the book with the given title."""
+
+ title: str = Field(..., description="The title of the book.")
+
+ def call(self) -> str:
+ if self.title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif self.title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[GetBookAuthor])
+def identify_authors(books: list[str]) -> str:
+ return f"Who wrote {books}?"
+
+
+# [!code highlight:5]
+response = identify_authors(["The Name of the Wind", "Mistborn: The Final Empire"])
+if tools := response.tools:
+ for tool in tools:
+ print(tool.call())
+else:
+ print(response.content)
+```
+
+
+```python
+from mirascope import BaseTool, llm, prompt_template
+from pydantic import Field
+
+
+class GetBookAuthor(BaseTool):
+ """Returns the author of the book with the given title."""
+
+ title: str = Field(..., description="The title of the book.")
+
+ def call(self) -> str:
+ if self.title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif self.title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[GetBookAuthor])
+@prompt_template("Who wrote {books}?")
+def identify_authors(books: list[str]): ...
+
+
+# [!code highlight:5]
+response = identify_authors(["The Name of the Wind", "Mistborn: The Final Empire"])
+if tools := response.tools:
+ for tool in tools:
+ print(tool.call())
+else:
+ print(response.content)
+```
+
+
+
+
+
+
+```python
+from mirascope import llm
+
+
+def get_book_author(title: str) -> str:
+ """Returns the author of the book with the given title
+
+ Args:
+ title: The title of the book.
+ """
+ if title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[get_book_author])
+def identify_authors(books: list[str]) -> str:
+ return f"Who wrote {books}?"
+
+
+# [!code highlight:5]
+response = identify_authors(["The Name of the Wind", "Mistborn: The Final Empire"])
+if tools := response.tools:
+ for tool in tools:
+ print(tool.call())
+else:
+ print(response.content)
+```
+
+
+```python
+from mirascope import llm, prompt_template
+
+
+def get_book_author(title: str) -> str:
+ """Returns the author of the book with the given title
+
+ Args:
+ title: The title of the book.
+ """
+ if title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[get_book_author])
+@prompt_template("Who wrote {books}?")
+def identify_authors(books: list[str]): ...
+
+
+# [!code highlight:5]
+response = identify_authors(["The Name of the Wind", "Mistborn: The Final Empire"])
+if tools := response.tools:
+ for tool in tools:
+ print(tool.call())
+else:
+ print(response.content)
+```
+
+
+
+
+
+If your tool calls are I/O-bound, it's often worth writing [async tools](/docs/mirascope/learn/async#async-tools) so that you can run all of the tool calls [in parallel](/docs/mirascope/learn/async#parallel-async-calls) for better efficiency.
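+
+As a rough sketch (assuming the tools were instead defined as `async def` functions, so each constructed tool's `call()` returns an awaitable):
+
+```python
+import asyncio
+
+
+async def call_tools_in_parallel(response) -> list[str]:
+    # Run every requested tool call concurrently; assumes async tool definitions.
+    if tools := response.tools:
+        return list(await asyncio.gather(*(tool.call() for tool in tools)))
+    return []
+```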
+
+## Streaming Tools
+
+Mirascope supports streaming responses with tools, which is useful for long-running tasks or real-time updates:
+
+
+
+
+
+```python
+from mirascope import BaseTool, llm
+from pydantic import Field
+
+
+class GetBookAuthor(BaseTool):
+ """Returns the author of the book with the given title."""
+
+ title: str = Field(..., description="The title of the book.")
+
+ def call(self) -> str:
+ if self.title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif self.title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[GetBookAuthor], stream=True) # [!code highlight]
+def identify_authors(books: list[str]) -> str:
+ return f"Who wrote {books}?"
+
+
+stream = identify_authors(["The Name of the Wind", "Mistborn: The Final Empire"])
+for chunk, tool in stream: # [!code highlight]
+ if tool: # [!code highlight]
+ print(tool.call()) # [!code highlight]
+ else:
+ print(chunk.content, end="", flush=True)
+```
+
+
+```python
+from mirascope import BaseTool, llm, prompt_template
+from pydantic import Field
+
+
+class GetBookAuthor(BaseTool):
+ """Returns the author of the book with the given title."""
+
+ title: str = Field(..., description="The title of the book.")
+
+ def call(self) -> str:
+ if self.title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif self.title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[GetBookAuthor], stream=True) # [!code highlight]
+@prompt_template("Who wrote {books}?")
+def identify_authors(books: list[str]): ...
+
+
+stream = identify_authors(["The Name of the Wind", "Mistborn: The Final Empire"])
+for chunk, tool in stream: # [!code highlight]
+ if tool: # [!code highlight]
+ print(tool.call()) # [!code highlight]
+ else:
+ print(chunk.content, end="", flush=True)
+```
+
+
+
+
+
+
+```python
+from mirascope import llm
+
+
+def get_book_author(title: str) -> str:
+ """Returns the author of the book with the given title
+
+ Args:
+ title: The title of the book.
+ """
+ if title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[get_book_author], stream=True) # [!code highlight]
+def identify_authors(books: list[str]) -> str:
+ return f"Who wrote {books}?"
+
+
+stream = identify_authors(["The Name of the Wind", "Mistborn: The Final Empire"])
+for chunk, tool in stream: # [!code highlight]
+ if tool: # [!code highlight]
+ print(tool.call()) # [!code highlight]
+ else:
+ print(chunk.content, end="", flush=True)
+```
+
+
+```python
+from mirascope import llm, prompt_template
+
+
+def get_book_author(title: str) -> str:
+ """Returns the author of the book with the given title
+
+ Args:
+ title: The title of the book.
+ """
+ if title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[get_book_author], stream=True) # [!code highlight]
+@prompt_template("Who wrote {books}?")
+def identify_authors(books: list[str]): ...
+
+
+stream = identify_authors(["The Name of the Wind", "Mistborn: The Final Empire"])
+for chunk, tool in stream: # [!code highlight]
+ if tool: # [!code highlight]
+ print(tool.call()) # [!code highlight]
+ else:
+ print(chunk.content, end="", flush=True)
+```
+
+
+
+
+
+
+ When we identify that a tool is being streamed, we will internally reconstruct the tool from the streamed response. This means that the tool won't be returned until the full tool has been streamed and reconstructed on your behalf.
+
+
+
+ Currently only OpenAI, Anthropic, Mistral, and Groq support streaming tools. All other providers will always return `None` for tools.
+
+ If you think we're missing any, let us know!
+
+
+### Streaming Partial Tools
+
+You can also stream intermediate partial tools and their deltas (rather than just the fully constructed tool) by setting `stream={"partial_tools": True}`:
+
+
+
+
+
+```python
+from mirascope import BaseTool, llm
+from pydantic import Field
+
+
+class GetBookAuthor(BaseTool):
+ """Returns the author of the book with the given title."""
+
+ title: str = Field(..., description="The title of the book.")
+
+ def call(self) -> str:
+ if self.title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif self.title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(
+ provider="$PROVIDER",
+ model="$MODEL",
+ tools=[GetBookAuthor],
+ stream={"partial_tools": True}, # [!code highlight]
+)
+def identify_authors(books: list[str]) -> str:
+ return f"Who wrote {books}?"
+
+
+stream = identify_authors(["The Name of the Wind", "Mistborn: The Final Empire"])
+for chunk, tool in stream:
+ if tool: # [!code highlight]
+ if tool.delta is not None: # partial tool
+ print(tool.delta)
+ else:
+ print(tool.call())
+ else:
+ print(chunk.content, end="", flush=True)
+```
+
+
+```python
+from mirascope import BaseTool, llm, prompt_template
+from pydantic import Field
+
+
+class GetBookAuthor(BaseTool):
+ """Returns the author of the book with the given title."""
+
+ title: str = Field(..., description="The title of the book.")
+
+ def call(self) -> str:
+ if self.title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif self.title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(
+ provider="$PROVIDER",
+ model="$MODEL",
+ tools=[GetBookAuthor],
+ stream={"partial_tools": True}, # [!code highlight]
+)
+@prompt_template("Who wrote {books}?")
+def identify_authors(books: list[str]): ...
+
+
+stream = identify_authors(["The Name of the Wind", "Mistborn: The Final Empire"])
+for chunk, tool in stream:
+ if tool: # [!code highlight]
+ if tool.delta is not None: # partial tool
+ print(tool.delta)
+ else:
+ print(tool.call())
+ else:
+ print(chunk.content, end="", flush=True)
+```
+
+
+
+
+
+
+```python
+from mirascope import llm
+
+
+def get_book_author(title: str) -> str:
+ """Returns the author of the book with the given title
+
+ Args:
+ title: The title of the book.
+ """
+ if title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(
+ provider="$PROVIDER",
+ model="$MODEL",
+ tools=[get_book_author],
+ stream={"partial_tools": True}, # [!code highlight]
+)
+def identify_authors(books: list[str]) -> str:
+ return f"Who wrote {books}?"
+
+
+stream = identify_authors(["The Name of the Wind", "Mistborn: The Final Empire"])
+for chunk, tool in stream:
+ if tool: # [!code highlight]
+ if tool.delta is not None: # partial tool
+ print(tool.delta)
+ else:
+ print(tool.call())
+ else:
+ print(chunk.content, end="", flush=True)
+```
+
+
+```python
+from mirascope import llm, prompt_template
+
+
+def get_book_author(title: str) -> str:
+ """Returns the author of the book with the given title
+
+ Args:
+ title: The title of the book.
+ """
+ if title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(
+ provider="$PROVIDER",
+ model="$MODEL",
+ tools=[get_book_author],
+ stream={"partial_tools": True}, # [!code highlight]
+)
+@prompt_template("Who wrote {books}?")
+def identify_authors(books: list[str]): ...
+
+
+stream = identify_authors(["The Name of the Wind", "Mistborn: The Final Empire"])
+for chunk, tool in stream:
+ if tool: # [!code highlight]
+ if tool.delta is not None: # partial tool
+ print(tool.delta)
+ else:
+ print(tool.call())
+ else:
+ print(chunk.content, end="", flush=True)
+```
+
+
+
+
+
+## Tool Message Parameters
+
+
+ Calling tools and inserting their outputs into subsequent LLM API calls in a loop is the most basic form of an agent. While we cover this briefly here, we recommend reading the section on [Agents](/docs/mirascope/learn/agents) for more details and examples.
+
+
+Generally the next step after the LLM returns a tool call is for you to call the tool on its behalf and supply the output in a subsequent call.
+
+Let's take a look at a basic example of this:
+
+
+
+
+
+```python
+from mirascope import BaseMessageParam, BaseTool, Messages, llm
+
+
+class GetBookAuthor(BaseTool):
+ title: str
+
+ def call(self) -> str:
+ if self.title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif self.title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[GetBookAuthor])
+def identify_author(book: str, history: list[BaseMessageParam]) -> Messages.Type:
+ messages = [*history] # [!code highlight]
+ if book:
+ messages.append(Messages.User(f"Who wrote {book}?")) # [!code highlight]
+ return messages
+
+
+history = []
+response = identify_author("The Name of the Wind", history)
+history += [response.user_message_param, response.message_param]
+while tool := response.tool:
+ tools_and_outputs = [(tool, tool.call())]
+ history += response.tool_message_params(tools_and_outputs)
+ response = identify_author("", history) # [!code highlight]
+ history.append(response.message_param) # [!code highlight]
+print(response.content) # [!code highlight]
+# Output: The Name of the Wind was written by Patrick Rothfuss.
+```
+
+
+```python
+from mirascope import BaseDynamicConfig, BaseMessageParam, BaseTool, llm, prompt_template
+
+
+class GetBookAuthor(BaseTool):
+ title: str
+
+ def call(self) -> str:
+ if self.title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif self.title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[GetBookAuthor])
+@prompt_template(
+ """
+ MESSAGES: {history} # [!code highlight]
+ USER: {query}
+ """
+)
+def identify_author(book: str, history: list[BaseMessageParam]) -> BaseDynamicConfig:
+ return {"computed_fields": {"query": f"Who wrote {book}" if book else ""}} # [!code highlight]
+
+
+history = []
+response = identify_author("The Name of the Wind", history)
+history += [response.user_message_param, response.message_param]
+while tool := response.tool:
+ tools_and_outputs = [(tool, tool.call())]
+ history += response.tool_message_params(tools_and_outputs)
+ response = identify_author("", history) # [!code highlight]
+ history.append(response.message_param) # [!code highlight]
+print(response.content) # [!code highlight]
+# Output: The Name of the Wind was written by Patrick Rothfuss.
+```
+
+
+
+
+
+
+```python
+from mirascope import BaseMessageParam, Messages, llm
+
+
+def get_book_author(title: str) -> str:
+ if title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[get_book_author])
+def identify_author(book: str, history: list[BaseMessageParam]) -> Messages.Type:
+ messages = [*history] # [!code highlight]
+ if book:
+ messages.append(Messages.User(f"Who wrote {book}?")) # [!code highlight]
+ return messages
+
+
+history = []
+response = identify_author("The Name of the Wind", history)
+history += [response.user_message_param, response.message_param]
+while tool := response.tool:
+ tools_and_outputs = [(tool, tool.call())]
+ history += response.tool_message_params(tools_and_outputs)
+ response = identify_author("", history) # [!code highlight]
+ history.append(response.message_param) # [!code highlight]
+print(response.content) # [!code highlight]
+# Output: The Name of the Wind was written by Patrick Rothfuss.
+```
+
+
+```python
+from mirascope import BaseMessageParam, BaseDynamicConfig, llm, prompt_template
+
+
+def get_book_author(title: str) -> str:
+ if title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[get_book_author])
+@prompt_template(
+ """
+ MESSAGES: {history} # [!code highlight]
+ USER: {query}
+ """
+)
+def identify_author(book: str, history: list[BaseMessageParam]) -> BaseDynamicConfig:
+ return {"computed_fields": {"query": f"Who wrote {book}" if book else ""}} # [!code highlight]
+
+
+history = []
+response = identify_author("The Name of the Wind", history)
+history += [response.user_message_param, response.message_param]
+while tool := response.tool:
+ tools_and_outputs = [(tool, tool.call())]
+ history += response.tool_message_params(tools_and_outputs)
+ response = identify_author("", history) # [!code highlight]
+ history.append(response.message_param) # [!code highlight]
+print(response.content) # [!code highlight]
+# Output: The Name of the Wind was written by Patrick Rothfuss.
+```
+
+
+
+
+
+In this example we:
+
+1. Add `history` to maintain the messages across multiple calls to the LLM.
+2. Loop until the response no longer has tool calls.
+3. While there are tool calls, call the tools, append their corresponding message parameters to the history, and make a subsequent call with an empty query and updated history. We use an empty query because the original user message is already included in the history.
+4. Print the final response content once the LLM is done calling tools.
+
+## Validation and Error Handling
+
+Since `BaseTool` is a subclass of Pydantic's [`BaseModel`](https://docs.pydantic.dev/latest/usage/models/), tools are validated on construction, so it's important that you handle potential `ValidationError`s in order to build more robust applications:
+
+
+
+
+
+```python
+from typing import Annotated
+
+from mirascope import BaseTool, llm
+from pydantic import AfterValidator, Field, ValidationError
+
+
+def is_upper(v: str) -> str:
+ assert v.isupper(), "Must be uppercase"
+ return v
+
+
+class GetBookAuthor(BaseTool):
+ """Returns the author of the book with the given title."""
+
+ title: Annotated[str, AfterValidator(is_upper)] = Field( # [!code highlight]
+ ..., description="The title of the book."
+ )
+
+ def call(self) -> str:
+ if self.title == "THE NAME OF THE WIND":
+ return "Patrick Rothfuss"
+ elif self.title == "MISTBORN: THE FINAL EMPIRE":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[GetBookAuthor])
+def identify_author(book: str) -> str:
+ return f"Who wrote {book}?"
+
+
+response = identify_author("The Name of the Wind")
+try: # [!code highlight]
+ if tool := response.tool:
+ print(tool.call())
+ else:
+ print(response.content)
+except ValidationError as e: # [!code highlight]
+ print(e)
+ # > 1 validation error for GetBookAuthor
+ # title
+ # Assertion failed, Must be uppercase [type=assertion_error, input_value='The Name of the Wind', input_type=str]
+ # For further information visit https://errors.pydantic.dev/2.8/v/assertion_error
+```
+
+
+```python
+from typing import Annotated
+
+from mirascope import BaseTool, llm, prompt_template
+from pydantic import AfterValidator, Field, ValidationError
+
+
+def is_upper(v: str) -> str:
+ assert v.isupper(), "Must be uppercase"
+ return v
+
+
+class GetBookAuthor(BaseTool):
+ """Returns the author of the book with the given title."""
+
+ title: Annotated[str, AfterValidator(is_upper)] = Field( # [!code highlight]
+ ..., description="The title of the book."
+ )
+
+ def call(self) -> str:
+ if self.title == "THE NAME OF THE WIND":
+ return "Patrick Rothfuss"
+ elif self.title == "MISTBORN: THE FINAL EMPIRE":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[GetBookAuthor])
+@prompt_template("Who wrote {book}?")
+def identify_author(book: str): ...
+
+
+response = identify_author("The Name of the Wind")
+try: # [!code highlight]
+ if tool := response.tool:
+ print(tool.call())
+ else:
+ print(response.content)
+except ValidationError as e: # [!code highlight]
+ print(e)
+ # > 1 validation error for GetBookAuthor
+ # title
+ # Assertion failed, Must be uppercase [type=assertion_error, input_value='The Name of the Wind', input_type=str]
+ # For further information visit https://errors.pydantic.dev/2.8/v/assertion_error
+```
+
+
+
+
+
+
+```python
+from typing import Annotated
+
+from mirascope import llm
+from pydantic import AfterValidator, ValidationError
+
+
+def is_upper(v: str) -> str:
+ assert v.isupper(), "Must be uppercase"
+ return v
+
+
+def get_book_author(title: Annotated[str, AfterValidator(is_upper)]) -> str: # [!code highlight]
+ """Returns the author of the book with the given title
+
+ Args:
+ title: The title of the book.
+ """
+ if title == "THE NAME OF THE WIND":
+ return "Patrick Rothfuss"
+ elif title == "MISTBORN: THE FINAL EMPIRE":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[get_book_author])
+def identify_author(book: str) -> str:
+ return f"Who wrote {book}?"
+
+
+response = identify_author("The Name of the Wind")
+try: # [!code highlight]
+ if tool := response.tool:
+ print(tool.call())
+ else:
+ print(response.content)
+except ValidationError as e: # [!code highlight]
+ print(e)
+ # > 1 validation error for GetBookAuthor
+ # title
+ # Assertion failed, Must be uppercase [type=assertion_error, input_value='The Name of the Wind', input_type=str]
+ # For further information visit https://errors.pydantic.dev/2.8/v/assertion_error
+```
+
+
+```python
+from typing import Annotated
+
+from mirascope import llm, prompt_template
+from pydantic import AfterValidator, ValidationError
+
+
+def is_upper(v: str) -> str:
+ assert v.isupper(), "Must be uppercase"
+ return v
+
+
+def get_book_author(title: Annotated[str, AfterValidator(is_upper)]) -> str: # [!code highlight]
+ """Returns the author of the book with the given title
+
+ Args:
+ title: The title of the book.
+ """
+ if title == "THE NAME OF THE WIND":
+ return "Patrick Rothfuss"
+ elif title == "MISTBORN: THE FINAL EMPIRE":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[get_book_author])
+@prompt_template("Who wrote {book}?")
+def identify_author(book: str): ...
+
+
+response = identify_author("The Name of the Wind")
+try: # [!code highlight]
+ if tool := response.tool:
+ print(tool.call())
+ else:
+ print(response.content)
+except ValidationError as e: # [!code highlight]
+ print(e)
+ # > 1 validation error for GetBookAuthor
+ # title
+ # Assertion failed, Must be uppercase [type=assertion_error, input_value='The Name of the Wind', input_type=str]
+ # For further information visit https://errors.pydantic.dev/2.8/v/assertion_error
+```
+
+
+
+
+
+In this example we've added additional validation, but it's important that you still handle `ValidationError`s even with standard tools, since they are still `BaseModel` instances and will validate the field types regardless.
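+
+As a minimal, tool-free sketch of that underlying Pydantic behavior (the class name and input value here are purely illustrative), even a field with no custom validator rejects a mismatched type on construction:
+
+```python
+from pydantic import BaseModel, ValidationError
+
+
+class GetBookAuthor(BaseModel):
+    """Stand-in for the model a standard tool generates."""
+
+    title: str
+
+
+try:
+    GetBookAuthor(title=1234)  # e.g. the LLM returned a number instead of a string
+except ValidationError as e:
+    print(e)
+    # > 1 validation error for GetBookAuthor
+    #   title
+    #     Input should be a valid string [type=string_type, ...]
+```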
+
+## Few-Shot Examples
+
+Just like with [Response Models](/docs/mirascope/learn/response_models#few-shot-examples), you can add few-shot examples to your tools:
+
+
+
+
+
+```python
+from mirascope import BaseTool, llm
+from pydantic import ConfigDict, Field
+
+
+class GetBookAuthor(BaseTool):
+ """Returns the author of the book with the given title."""
+
+ title: str = Field(
+ ...,
+ description="The title of the book.",
+ examples=["The Name of the Wind"], # [!code highlight]
+ )
+
+ model_config = ConfigDict(
+ json_schema_extra={"examples": [{"title": "The Name of the Wind"}]} # [!code highlight]
+ )
+
+ def call(self) -> str:
+ if self.title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif self.title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[GetBookAuthor])
+def identify_author(book: str) -> str:
+ return f"Who wrote {book}?"
+
+
+response = identify_author("The Name of the Wind")
+if tool := response.tool:
+ print(tool.call())
+else:
+ print(response.content)
+```
+
+
+```python
+from mirascope import BaseTool, llm, prompt_template
+from pydantic import ConfigDict, Field
+
+
+class GetBookAuthor(BaseTool):
+ """Returns the author of the book with the given title."""
+
+ title: str = Field(
+ ...,
+ description="The title of the book.",
+ examples=["The Name of the Wind"], # [!code highlight]
+ )
+
+ model_config = ConfigDict(
+ json_schema_extra={"examples": [{"title": "The Name of the Wind"}]} # [!code highlight]
+ )
+
+ def call(self) -> str:
+ if self.title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif self.title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[GetBookAuthor])
+@prompt_template("Who wrote {book}?")
+def identify_author(book: str): ...
+
+
+response = identify_author("The Name of the Wind")
+if tool := response.tool:
+ print(tool.call())
+else:
+ print(response.content)
+```
+
+
+
+
+
+
+```python
+from typing import Annotated
+
+from pydantic import Field
+
+from mirascope import llm
+
+
+def get_book_author(
+ title: Annotated[
+ str,
+ Field(
+ ...,
+ description="The title of the book.",
+ examples=["The Name of the Wind"], # [!code highlight]
+ ),
+ ],
+) -> str:
+ """Returns the author of the book with the given title
+
+ Example: # [!code highlight]
+ {"title": "The Name of the Wind"} # [!code highlight]
+
+ Args:
+ title: The title of the book.
+ """
+ if title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[get_book_author])
+def identify_author(book: str) -> str:
+ return f"Who wrote {book}?"
+
+
+response = identify_author("The Name of the Wind")
+if tool := response.tool:
+ print(tool.call())
+else:
+ print(response.content)
+```
+
+
+```python
+from typing import Annotated
+
+from pydantic import Field
+
+from mirascope import llm, prompt_template
+
+
+def get_book_author(
+ title: Annotated[
+ str,
+ Field(
+ ...,
+ description="The title of the book.",
+ examples=["The Name of the Wind"], # [!code highlight]
+ ),
+ ],
+) -> str:
+ """Returns the author of the book with the given title
+
+ Example: # [!code highlight]
+ {"title": "The Name of the Wind"} # [!code highlight]
+
+ Args:
+ title: The title of the book.
+ """
+ if title == "The Name of the Wind":
+ return "Patrick Rothfuss"
+ elif title == "Mistborn: The Final Empire":
+ return "Brandon Sanderson"
+ else:
+ return "Unknown"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[get_book_author])
+@prompt_template("Who wrote {book}?")
+def identify_author(book: str): ...
+
+
+response = identify_author("The Name of the Wind")
+if tool := response.tool:
+ print(tool.call())
+else:
+ print(response.content)
+```
+
+
+
+
+
+Both approaches will result in the same tool schema with examples included. The function approach gets automatically converted to use Pydantic fields internally, making both methods equivalent in terms of functionality.
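+
+If you want to see where those examples land, here's a minimal Pydantic-only sketch (mirroring the `BaseTool` fields above and independent of any provider-specific schema conversion) that prints the generated JSON schema:
+
+```python
+from pydantic import BaseModel, ConfigDict, Field
+
+
+class GetBookAuthor(BaseModel):
+    """Returns the author of the book with the given title."""
+
+    title: str = Field(
+        ...,
+        description="The title of the book.",
+        examples=["The Name of the Wind"],  # field-level example
+    )
+
+    model_config = ConfigDict(
+        json_schema_extra={"examples": [{"title": "The Name of the Wind"}]}  # model-level example
+    )
+
+
+print(GetBookAuthor.model_json_schema())
+# The output includes "examples" both under the "title" property and at the model level.
+```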
+
+
+ Both `BaseTool` and function-style definitions support field level examples through Pydantic's `Field`. When using function-style definitions, you'll need to wrap the type with `Annotated` to use `Field`.
+
+
+## ToolKit
+
+
+ [`mirascope.llm.toolkit`](/docs/mirascope/api)
+
+
+The `BaseToolKit` class enables:
+
+- Organization of a group of tools under a single namespace.
+ - This can be useful for making it clear to the LLM when to use certain tools over others. For example, you could namespace a set of tools under "file_system" to indicate that those tools are specifically for interacting with the file system.
+- Dynamic tool definitions.
+ - This can be useful for generating tool definitions that are dependent on some input or state. For example, you may want to update the description of tools based on an argument of the call being made.
+
+
+
+```python
+from mirascope import (
+ BaseDynamicConfig,
+ BaseToolKit,
+ Messages,
+ llm,
+)
+from mirascope.core import toolkit_tool
+
+
+class BookTools(BaseToolKit): # [!code highlight]
+ __namespace__ = "book_tools" # [!code highlight]
+
+ reading_level: str # [!code highlight]
+
+ @toolkit_tool # [!code highlight]
+ def suggest_author(self, author: str) -> str:
+ """Suggests an author for the user to read based on their reading level.
+
+ User reading level: {self.reading_level} # [!code highlight]
+ Author you suggest must be appropriate for the user's reading level.
+ """
+ return f"I would suggest you read some books by {author}"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+def recommend_author(genre: str, reading_level: str) -> BaseDynamicConfig:
+ toolkit = BookTools(reading_level=reading_level) # [!code highlight]
+ return {
+ "tools": toolkit.create_tools(), # [!code highlight]
+ "messages": [Messages.User(f"What {genre} author should I read?")],
+ }
+
+
+response = recommend_author("fantasy", "beginner") # [!code highlight]
+if tool := response.tool:
+ print(tool.call())
+ # Output: I would suggest you read some books by J.K. Rowling # [!code highlight]
+
+response = recommend_author("fantasy", "advanced") # [!code highlight]
+if tool := response.tool:
+ print(tool.call())
+ # Output: I would suggest you read some books by Brandon Sanderson # [!code highlight]
+```
+
+
+```python
+from mirascope import (
+ BaseDynamicConfig,
+ BaseToolKit,
+ llm,
+ prompt_template,
+)
+from mirascope.core import toolkit_tool
+
+
+class BookTools(BaseToolKit): # [!code highlight]
+ __namespace__ = "book_tools" # [!code highlight]
+
+ reading_level: str # [!code highlight]
+
+ @toolkit_tool # [!code highlight]
+ def suggest_author(self, author: str) -> str:
+ """Suggests an author for the user to read based on their reading level.
+
+ User reading level: {self.reading_level} # [!code highlight]
+ Author you suggest must be appropriate for the user's reading level.
+ """
+ return f"I would suggest you read some books by {author}"
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+@prompt_template("What {genre} author should I read?")
+def recommend_author(genre: str, reading_level: str) -> BaseDynamicConfig:
+ toolkit = BookTools(reading_level=reading_level) # [!code highlight]
+ return {"tools": toolkit.create_tools()} # [!code highlight]
+
+
+
+response = recommend_author("fantasy", "beginner") # [!code highlight]
+if tool := response.tool:
+ print(tool.call())
+ # Output: I would suggest you read some books by J.K. Rowling # [!code highlight]
+
+response = recommend_author("fantasy", "advanced") # [!code highlight]
+if tool := response.tool:
+ print(tool.call())
+ # Output: I would suggest you read some books by Brandon Sanderson # [!code highlight]
+```
+
+
+
+In this example we:
+
+1. Create a `BookTools` toolkit.
+2. Set `__namespace__` equal to "book_tools".
+3. Define the `reading_level` state of the toolkit.
+4. Define the `suggest_author` tool and mark it with `@toolkit_tool` to identify the method as a tool of the toolkit.
+5. Use the `{self.reading_level}` template variable in the description of the tool.
+6. Create the toolkit with the `reading_level` argument.
+7. Call `create_tools` to generate the toolkit's tools. This will generate the tools on every call, ensuring that the description correctly includes the provided reading level.
+8. Call `recommend_author` with a "beginner" reading level, and the LLM calls the `suggest_author` tool with its suggested author.
+9. Call `recommend_author` again but with an "advanced" reading level, and again the LLM calls the `suggest_author` tool with its suggested author.
+
+The core concept to understand here is that the `suggest_author` tool's description is dynamically generated on each call to `recommend_author` through the toolkit.
+
+This is why the "beginner" recommendation and "advanced" recommendations call the `suggest_author` tool with authors befitting the reading level of each call.
+
+## Pre-Made Tools and ToolKits
+
+Mirascope provides several pre-made tools and toolkits to help you get started quickly:
+
+
+ Pre-made tools and toolkits require installing the dependencies listed in the "Dependencies" column for each tool/toolkit.
+
+ For example:
+ ```bash
+ pip install httpx # For HTTPX tool
+ pip install requests # For Requests tool
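+  pip install duckduckgo-search # For DuckDuckGoSearch tool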
+ ```
+
+
+### Pre-Made Tools
+
+
+ - [`mirascope.tools.web.DuckDuckGoSearch`](/docs/mirascope/api/tools/web/duckduckgo)
+ - [`mirascope.tools.web.HTTPX`](/docs/mirascope/api/tools/web/httpx)
+ - [`mirascope.tools.web.ParseURLContent`](/docs/mirascope/api/tools/web/parse_url_content)
+ - [`mirascope.tools.web.Requests`](/docs/mirascope/api/tools/web/requests)
+
+
+| Tool | Primary Use | Dependencies | Key Features | Characteristics |
+|------ |------------- |-------------- |-------------- |----------------- |
+| [`DuckDuckGoSearch`](/docs/mirascope/api/tools/web/duckduckgo) | Web Searching | [`duckduckgo-search`](https://pypi.org/project/duckduckgo-search/) | • Multiple query support<br/>• Title/URL/snippet extraction<br/>• Result count control<br/>• Automated formatting | • Privacy-focused search<br/>• Async support (AsyncDuckDuckGoSearch)<br/>• Automatic filtering<br/>• Structured results |
+| [`HTTPX`](/docs/mirascope/api/tools/web/httpx) | Advanced HTTP Requests | [`httpx`](https://pypi.org/project/httpx/) | • Full HTTP method support (GET/POST/PUT/DELETE)<br/>• Custom header support<br/>• File upload/download<br/>• Form data handling | • Async support (AsyncHTTPX)<br/>• Configurable timeouts<br/>• Comprehensive error handling<br/>• Redirect control |
+| [`ParseURLContent`](/docs/mirascope/api/tools/web/parse_url_content) | Web Content Extraction | [`beautifulsoup4`](https://pypi.org/project/beautifulsoup4/), [`httpx`](https://pypi.org/project/httpx/) | • HTML content fetching<br/>• Main content extraction<br/>• Element filtering<br/>• Text normalization | • Automatic cleaning<br/>• Configurable parser<br/>• Timeout settings<br/>• Error handling |
+| [`Requests`](/docs/mirascope/api/tools/web/requests) | Simple HTTP Requests | [`requests`](https://pypi.org/project/requests/) | • Basic HTTP methods<br/>• Simple API<br/>• Response text retrieval<br/>• Basic authentication | • Minimal configuration<br/>• Intuitive interface<br/>• Basic error handling<br/>• Lightweight implementation |
+
+Example using DuckDuckGoSearch:
+
+
+
+
+
+```python
+from mirascope import llm
+from mirascope.tools import DuckDuckGoSearch # [!code highlight]
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[DuckDuckGoSearch]) # [!code highlight]
+def research(genre: str) -> str:
+ return f"Recommend a {genre} book and summarize the story"
+
+
+response = research("fantasy")
+if tool := response.tool:
+ print(tool.call())
+```
+
+
+```python
+from mirascope import llm, prompt_template
+from mirascope.tools import DuckDuckGoSearch # [!code highlight]
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[DuckDuckGoSearch]) # [!code highlight]
+@prompt_template("Recommend a {genre} book and summarize the story")
+def research(genre: str): ...
+
+
+response = research("fantasy")
+if tool := response.tool:
+ print(tool.call())
+```
+
+
+
+
+
+
+```python
+from mirascope import llm
+from mirascope.tools import DuckDuckGoSearch, DuckDuckGoSearchConfig # [!code highlight]
+
+config = DuckDuckGoSearchConfig(max_results_per_query=5) # [!code highlight]
+CustomSearch = DuckDuckGoSearch.from_config(config) # [!code highlight]
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[CustomSearch]) # [!code highlight]
+def research(genre: str) -> str:
+ return f"Recommend a {genre} book and summarize the story"
+
+
+response = research("fantasy")
+if tool := response.tool:
+ print(tool.call())
+```
+
+
+```python
+from mirascope import llm, prompt_template
+from mirascope.tools import DuckDuckGoSearch, DuckDuckGoSearchConfig # [!code highlight]
+
+config = DuckDuckGoSearchConfig(max_results_per_query=5) # [!code highlight]
+CustomSearch = DuckDuckGoSearch.from_config(config) # [!code highlight]
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL", tools=[CustomSearch]) # [!code highlight]
+@prompt_template("Recommend a {genre} book and summarize the story")
+def research(genre: str): ...
+
+
+response = research("fantasy")
+if tool := response.tool:
+ print(tool.call())
+```
+
+
+
+
+
+### Pre-Made ToolKits
+
+
+ - [`mirascope.tools.system.FileSystemToolKit`](/docs/mirascope/api/tools/system/file_system)
+ - [`mirascope.tools.system.DockerOperationToolKit`](/docs/mirascope/api/tools/system/docker_operation)
+
+
+| ToolKit | Primary Use | Dependencies | Tools and Features | Characteristics |
+|--------- |--------------------------|------------------------------------------------------------------- |------------------- |----------------- |
+| [`FileSystemToolKit`](/docs/mirascope/api/tools/system/file_system) | File System Operations | None | • ReadFile: File content reading<br/>• WriteFile: Content writing<br/>• ListDirectory: Directory listing<br/>• CreateDirectory: Directory creation<br/>• DeleteFile: File deletion | • Path traversal protection<br/>• File size limits<br/>• Extension validation<br/>• Robust error handling<br/>• Base directory isolation |
+| [`DockerOperationToolKit`](/docs/mirascope/api/tools/system/docker_operation) | Code & Command Execution | [`docker`](https://pypi.org/project/docker/), [`docker engine`](https://docs.docker.com/engine/install/) | • ExecutePython: Python code execution with optional package installation<br/>• ExecuteShell: Shell command execution | • Docker container isolation<br/>• Memory limits<br/>• Network control<br/>• Security restrictions<br/>• Resource cleanup |
+
+
+Example using FileSystemToolKit:
+
+
+
+```python
+from pathlib import Path
+
+from mirascope import BaseDynamicConfig, Messages, llm
+from mirascope.tools import FileSystemToolKit # [!code highlight]
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+def write_blog_post(topic: str, output_file: Path) -> BaseDynamicConfig:
+ toolkit = FileSystemToolKit(base_directory=output_file.parent) # [!code highlight]
+ return {
+ "messages": [
+ Messages.User(
+ content=f"Write a blog post about '{topic}' as a '{output_file.name}'."
+ )
+ ],
+ "tools": toolkit.create_tools(), # [!code highlight]
+ }
+
+
+response = write_blog_post("machine learning", Path("introduction.html"))
+if tool := response.tool:
+ result = tool.call()
+ print(result)
+```
+
+
+```python
+from pathlib import Path
+
+from mirascope import BaseDynamicConfig, llm, prompt_template
+from mirascope.tools import FileSystemToolKit # [!code highlight]
+
+
+@llm.call(provider="$PROVIDER", model="$MODEL")
+@prompt_template("Write a blog post about '{topic}' as a '{output_file.name}'.")
+def write_blog_post(topic: str, output_file: Path) -> BaseDynamicConfig:
+ toolkit = FileSystemToolKit(base_directory=output_file.parent) # [!code highlight]
+    return {"tools": toolkit.create_tools()} # [!code highlight]
+
+
+response = write_blog_post("machine learning", Path("introduction.html"))
+if tool := response.tool:
+ result = tool.call()
+ print(result)
+```
+
+
+
+## Next Steps
+
+Tools can significantly extend LLM capabilities, enabling more interactive and dynamic applications. We encourage you to explore and experiment with tools to enhance your projects and find the best fit for your specific needs.
+
+Mirascope hopes to provide a simple and clean interface that is both easy to learn and easy to use; however, we understand that LLM tools can be a difficult concept regardless of the supporting tooling.
+
+Next, we recommend learning about how to build [Agents](/docs/mirascope/learn/agents) that take advantage of these tools.
\ No newline at end of file
diff --git a/cloud/content/policy/privacy.mdx b/cloud/content/policy/privacy.mdx
new file mode 100644
index 0000000000..70dace5a52
--- /dev/null
+++ b/cloud/content/policy/privacy.mdx
@@ -0,0 +1,312 @@
+---
+title: Privacy Policy
+lastUpdated: 2025-04-08
+description: How Mirascope collects, uses, and protects your personal information.
+---
+
+We at Mirascope, Inc. ("Mirascope," "we," "us," or "our")
+have created this privacy policy (this "Privacy Policy") because we know that
+you care about how information you provide to us is used and shared. This Privacy Policy
+relates to our information collection and use practices in connection with our website
+located at [https://mirascope.com/](/) (the "Website"), our
+proprietary open source AI engineering platform: Lilypad (the "Platform") that is
+made available to you as a Mirascope Hosted SaaS Solution or a Self-Hosted SaaS Solution,
+and when you otherwise interact with us in any way.
+
+**Description of Users and Acceptance of Terms**
+
+This Privacy Policy applies to visitors to the Website, who view only publicly-available
+content ("Visitors"), customers who have signed up to access and use our Platform
+(the "Customers"), Customer’s employees, contractors, or agents authorized by
+Customer to access and use the Platform ("Authorized Users").
+
+By visiting our Website, Visitors are agreeing to the terms of this Privacy Policy and
+the accompanying [Website Terms of Use](../terms/use).
+
+By clicking "I Agree" when you sign up to access and use our Platform, each Customer
+is agreeing to the terms of this Privacy Policy and the accompanying [Terms of Service](../terms/service).
+
+By accessing and/or using our Platform, each Authorized User is agreeing to the terms
+of this Privacy Policy and the accompanying [Terms of Service](../terms/service).
+
+Capitalized terms not defined in this Privacy Policy shall have the meaning set forth
+in our [Website Terms of Use](../terms/use) or [Terms of Service](../terms/service), as applicable.
+
+## I. The Information We Collect and/or Receive
+
+In the course of operating the Website and the Platform, and/or interacting with you,
+we will collect (and/or receive) the following types of information. You authorize us
+to collect and/or receive such information.
+
+### 1. Contact Information
+
+When you contact us via the Website, email or by mail, when you call us, when you
+sign up to receive our newsletter, you will be asked to provide certain information,
+which may include your name, e-mail address, and any information provided in messages
+to us (collectively, "Contact Information"). The Contact Information is used to provide
+the requested service or information and to contact you for purposes of direct marketing
+of our current and future services.
+
+### 2. Account Information
+
+In order to access and use our Platform, each Authorized User will have to create an
+account on the Platform. In connection with creating an account, we collect Log-in
+Credentials and other registration information (collectively, "Account Information").
+We use the Account Information to process the creation of your account, including to verify
+your identity, and to manage your account.
+
+### 3. Customer Data
+
+In using the Platform, Customers and Authorized Users provide us Customer Data. We use
+the Customer Data (other than any personal information contained therein) in accordance
+with the terms and conditions of the Terms of Service. Any personal information contained
+in the Customer Data will be used in accordance with this Privacy Policy.
+
+### 4. Billing Information
+
+If you choose to pay the applicable fees by credit card, you will be required to provide
+certain additional information which may include a credit card number, expiration date,
+billing zip code, activation code, bank information, and similar information
+("Billing Information"). Such Billing Information will be collected and processed
+by our Third-Party Payment Processor pursuant to the terms and conditions of their
+privacy policies and terms of use. Mirascope does not directly obtain or process any
+Billing Information.
+
+### 5. Information obtained automatically from your online activity
+
+When you access or use the Website or the Platform, we use browser cookies, pixels, web
+beacons, and similar technologies (collectively, "Tracking Technologies") to
+automatically collect or receive certain standard technical information and other data
+(such as traffic data, usage data (including but not limited to, profiles viewed,
+matches made, features used, frequency and duration of the Website or the Platform
+usage, and interactions with content on the Website or the Platform, location data,
+device information (including but not limited to, type of device, mobile device platform,
+operating system, browser type, screen resolution, IP address and other technical
+information), logs and other communications data)) sent to us by your computer, mobile
+device, tablet, or any other device over time on our Website or the Platform, and your
+online activity across third party websites, apps, and devices. We may also evaluate
+your computer, mobile phone, or other access device to identify any malicious software
+or activity that may affect the availability of our Website or the Platform.
+
+When you access or use the Website or the Platform, analytics networks, and providers,
+and other third parties may use Tracking Technologies to collect information about your
+online activities over time and across different websites, apps, online services,
+digital properties, and devices.
+
+The data we or third parties collect automatically may include personal information
+and/or statistical data that may not identify you personally; however, we or third
+parties may maintain, combine, or associate it with personal information collected in
+other ways or received from third parties. We and/or third parties use this information
+to (i) enhance the performance and functionality of the Website and Platform; (ii)
+personalize your experience with the Website and Platform, understand how you use the
+Website or Platform, maintain a persistent session, and improve and further develop the
+Website and Platform; and (iii) serve targeted and other advertising, and provide
+custom experiences, across other sites, apps, online services, digital properties
+and devices, measure how the ads perform, and for analytics purposes.
+
+The Tracking Technologies used on the Website and the Platform include the following,
+among others:
+
+*Cookies*: Cookies are small packets of data that a website stores on your computer’s
+hard drive so that your computer will "remember" information about your visit. In
+addition to collecting information, we use cookies to help us authenticate users,
+provide content of interest to you, analyze which features you use most frequently,
+and measure and optimize advertising and promotional effectiveness. To do this, we
+may use both session cookies, which expire once you close your web browser, and
+persistent cookies, which stay on your computer until you delete them. For information
+regarding your choices regarding Cookies, please see Section III of this Privacy Policy.
+
+### 6. Information obtained from third-party analytics services
+
+We may use one or more third-party analytics services to evaluate your use of the Website
+or Platform, compile reports on activity (based on their collection of IP addresses,
+Internet service provider, browser type, operating system and language, referring and
+exit pages and URLs, date and time, amount of time spent on particular pages, what sections
+of the Website or Platform you visit, number of links clicked while on the Website or
+Platform, search terms and other similar usage data), and analyze performance metrics.
+These third parties use cookies and other technologies to help analyze and provide us
+the data. By accessing and using the Website or Platform, you consent to the processing
+of data about you by these analytics providers in the manner and for the purposes set
+out in this Privacy Policy. Please be advised that if you opt out of any service, you
+may not be able to use the full functionality of the Website and Platform.
+
+Below is a list of analytics providers that we use; however, such list may be subject
+to change based on how we wish to understand the user experience and we will endeavor
+to update it diligently. You may use the accompanying links to learn more about such
+providers and, if available, how to opt-out from their analytics collection.
+
+For more information on Google Analytics, including how to opt out from certain data
+collection, please visit [https://www.google.com/analytics](https://www.google.com/analytics)
+
+For more information on PostHog, including how to opt out from certain data collection,
+please visit [https://posthog.com](https://posthog.com/)
+
+## II. How We Use and Share Your Information
+
+We may use and share your information as set forth below:
+
+- To provide the Website and Platform;
+
+- To solicit your feedback, inform you about our products and services and those of
+our third-party marketing partners;
+
+- To monitor, support, analyze, and improve the Website and Platform;
+
+- To fulfill your requests for information regarding new or improved products and services;
+
+- To engage in research, project planning, troubleshooting problems, and detecting and
+protecting against error, fraud, or other criminal activity;
+
+- To protect the safety and security of our Website, Platform and our customers;
+
+- To third-party contractors and service providers that provide services to us in the
+operation of our business and assistance with the Website and Platform, such as technical support for the Website and the Platform and providing services such as IT and cloud hosting, payment processing and financing, transactions reporting, customer relationship management, email marketing, analytics services, email distribution, fraud detection and prevention, and administrative services, among others;
+
+- To create and disclose aggregated, anonymous, user statistics and other information to
+(i) affiliates, agents, business partners, and other third parties; (ii) describe the
+Website, and Platform to current and prospective business partners; and (iii) other third
+parties for lawful purposes;
+
+- To share some or all of your information with our parent company, subsidiaries, affiliates
+or other companies under common control with us;
+
+- To fulfill our legal and regulatory requirements;
+
+- To comply with applicable law, such as to comply with a subpoena, or similar legal process,
+and when we believe in good faith that disclosure is necessary to protect our rights, protect
+your safety or the safety of others, investigate fraud, or respond to a government request;
+
+- To assess or complete a corporate sale, merger, reorganization, sale of assets, dissolution,
+investment, or similar corporate event where we expect that your personal information will be
+part of the transferred assets;
+
+- To audit our internal processes for compliance with legal and contractual requirements or
+our internal policies;
+
+- To prevent, identify, investigate, and deter fraudulent, harmful, unauthorized, unethical,
+or illegal activity, including cyberattacks and identity theft; and
+
+- Otherwise, with your consent.
+
+We will take reasonable measures (e.g., by contract) to require that any party receiving
+any of your personal information from us, including for purposes of providing the Website
+and Platform undertakes to: (i) retain and use such information only for the purposes set
+out in this Privacy Policy; (ii) not disclose your personal information except with your
+consent, as permitted by applicable law, or as permitted by this Privacy Policy; and (iii)
+generally protect the privacy of your personal information.
+
+## III. Your Choices
+
+Update Information: If the personal information we have for you changes, you may correct,
+update, or delete it by contacting us as set forth in Section XI of this Privacy Policy.
+You may correct, update, or delete some of your personal information directly in your
+account on the Platform. We will use commercially reasonable efforts to process all such
+requests in a timely manner. You should be aware, however, that it is not always possible
+to completely remove or modify information in our databases. Additionally, we will retain
+and use your information (or copies thereof) as necessary to comply with our legal and/or
+regulatory obligations, resolve disputes, and enforce our agreements.
+
+Marketing Communications: You may manage your receipt of marketing and non-transactional
+communications by clicking on the "unsubscribe" link located on the bottom of any of our
+marketing emails. Please note that you cannot opt out of receiving transactional e-mails.
+
+Cookie Management: Most browsers let you remove or reject cookies. To do this, follow the
+instructions in your browser settings. Many browsers accept cookies by default until you
+change your settings. Please note that if you set your browser to disable cookies or other
+Tracking Technologies, the Website and the Platform may not work properly. For more
+information about cookies, including how to see what cookies have been set on your browser
+and how to manage and delete them, visit [www.allaboutcookies.org](https://www.allaboutcookies.org/).
+
+You will need to apply these opt-out settings on each device from which you wish to opt-out.
+We cannot offer any assurances as to whether the companies we work with participate in the
+opt-out programs described above.
+
+## IV. How We Protect Your Information
+
+We take commercially reasonable security measures to protect your information from loss,
+misuse, and unauthorized access, disclosure, alteration, and destruction, taking into
+account the risks involved in processing and the nature of such data, and in compliance
+with applicable laws and regulations. You should keep in mind, however, that no Internet
+transmission is 100% secure or error-free. In particular, e-mail sent to or from the
+Website or Platform may not be secure, and you should therefore take special care in
+deciding what information you send to us via e-mail or other electronic means.
+
+## V. External Sites
+
+The Website and Platform may contain links to third-party websites ("External Sites").
+These links are provided solely as a convenience to you and not as an endorsement by us
+of the content on such External Sites. The content of such External Sites is developed
+and provided by others. You should contact the website administrator or webmaster for
+those External Sites if you have any concerns regarding such links or any content
+located on such External Sites. We are not responsible for the content of any linked
+External Sites and do not make any representations regarding the content or accuracy
+of materials on such External Sites. You should take precautions when downloading
+files from all websites to protect your computer from viruses and other destructive
+programs. If you decide to access linked External Sites, you do so at your own risk.
+
+## VI. Important Notice to All Non-U.S. Residents
+
+Our Website, Platform and servers are located in the United States. If you are located
+outside of the United States, please be aware that your information, including your
+personal information, may be transferred to, processed, maintained, and used on
+computers, servers, and systems located outside of your state, province, country,
+or other governmental jurisdiction where the privacy laws may not be as protective
+as those in your country of origin. If you are located outside the United States
+and choose to use the Website and/or the Platform, you consent to any transfer and
+processing of your personal information in accordance with this Privacy Policy, and
+you do so at your own risk.
+
+## VII. DO NOT TRACK
+
+As discussed above, third parties such as analytics providers may collect information
+about your online activities over time and across different websites when you access
+or use the Website or Platform. Currently, various browsers offer a "Do Not Track"
+option, but there is no standard for commercial websites. At this time, we do not
+monitor, recognize, or honor any opt-out or do not track mechanisms, including general
+web browser "Do Not Track" settings and/or signals.
+
+## VIII. Nevada Privacy Rights
+
+If you are a resident of Nevada, you have the right to opt-out of the sale of certain
+personal information to third parties. You can exercise this right by contacting us
+at [support@mirascope.com](mailto:support@mirascope.com) with the subject line "Nevada Do Not Sell Request" and providing
+us with your name and the email address associated with your account. Please note
+that we do not currently sell your personal information as sales are defined in
+Nevada Revised Statutes Chapter 603A.
+
+## IX. About Children
+
+We do not knowingly collect or receive personal information from children under the age
+of 18 through the Website and Platform. If you are under the age of 18, please do not
+provide any personal information through the Website and Platform. We encourage parents
+and legal guardians to monitor their children’s Internet usage and to help enforce
+this Privacy Policy by instructing their children to never provide any personal information
+on the Website or Platform, or any other web site without their permission. If you
+have reason to believe that a child under the age of 18 has provided personal information
+to us through the Website and/or Platform, please contact us at [legal@mirascope.com](mailto:legal@mirascope.com), and we
+will endeavor to delete that information from our databases.
+
+## X. Changes to this Privacy Policy
+
+This Privacy Policy is effective as of the date stated at the top of this Privacy Policy.
+We may change this Privacy Policy from time to time, and will post any changes on the
+Website and the Platform as soon as they go into effect. By visiting the Website,
+accessing and/or using the Platform, after we make any such changes to this Privacy
+Policy, you are deemed to have accepted such changes. Please refer back to this Privacy
+Policy on a regular basis.
+
+## XI. Contact Us
+
+If you have any questions about this Privacy Policy or to report a privacy issue, please
+contact us in one of the following ways:
+
+Email: [support@mirascope.com](mailto:support@mirascope.com)
+
+Telephone: +1 (310) 400-6238
+
+Or write to us at:
+
+Mirascope, Inc.
+10000 Washington Blvd
+STE 600
+Culver City, CA 90232 USA
\ No newline at end of file
diff --git a/cloud/content/policy/terms/service.mdx b/cloud/content/policy/terms/service.mdx
new file mode 100644
index 0000000000..66a4ce917e
--- /dev/null
+++ b/cloud/content/policy/terms/service.mdx
@@ -0,0 +1,605 @@
+---
+title: Terms of Service
+lastUpdated: 2025-04-08
+description: Legal terms governing your use of Mirascope's platform and services.
+---
+
+These Terms of Service (these "**Terms of Service**") govern your and your Authorized
+Users’ (defined below) access to and use of our Platform (as defined below), which
+is made available to you ("**Customer**," "**you**," or "**your**") by Mirascope,
+Inc. ("**Mirascope**," "**we,**" "**us**," or "**our**") (each a "**Party**,"
+and collectively, the "**Parties**").
+
+PLEASE READ THESE TERMS OF SERVICE CAREFULLY. BY CLICKING "I AGREE" WHEN YOU SIGN UP TO
+ACCESS AND USE OUR PLATFORM OR OTHERWISE MANIFESTING ASSENT TO THESE TERMS OF SERVICE,
+YOU ACKNOWLEDGE THAT YOU HAVE READ, UNDERSTOOD, AND AGREE TO BE LEGALLY BOUND BY THESE
+TERMS OF SERVICE, AND THE TERMS AND CONDITIONS OF OUR [PRIVACY POLICY](../../privacy) (THE
+"**PRIVACY POLICY**"), WHICH IS HEREBY INCORPORATED INTO THESE TERMS OF SERVICE AND
+MADE A PART HEREOF BY REFERENCE (COLLECTIVELY, THE "**AGREEMENT**"). IF YOU DO NOT
+AGREE TO ANY OF THE TERMS IN THIS AGREEMENT, THEN PLEASE DO NOT USE THE PLATFORM.
+
+If you accept or agree to the Agreement on behalf of a company or other legal entity,
+you represent and warrant that you have the authority to bind that company or other
+legal entity to the Agreement and, in such event, "**you**" and "**your**" will
+refer and apply to that company or other legal entity.
+
+We reserve the right, at our sole discretion, to modify, discontinue, or terminate the
+Platform, or to modify the Agreement, at any time and without prior notice. If we modify
+the Agreement, we will post the modification on our website and/or on the Platform. By
+continuing to access or use the Platform after we have posted such modifications, you
+are indicating that you agree to be bound by the modified Agreement. If the modified
+Agreement is not acceptable to you, your only recourse is to cease using the Services.
+
+**THE SECTIONS BELOW TITLED "BINDING ARBITRATION" AND "CLASS ACTION WAIVER" CONTAIN A
+BINDING ARBITRATION AGREEMENT AND CLASS ACTION WAIVER. THEY AFFECT YOUR LEGAL RIGHTS.
+PLEASE READ THEM.**
+
+Capitalized terms not defined in these Terms of Service shall have the meaning set forth
+in our [Privacy Policy](../../privacy).
+
+1. **DEFINITIONS**. The definitions for some of the defined terms used in this Agreement
+are set forth below. The definitions for other defined terms are set forth elsewhere in
+this Agreement.
+
+ 1.1. "**Affiliate**" means, with respect to any entity, any other entity that, directly
+ or indirectly, through one or more intermediaries, controls, is controlled by, or is
+ under common control with, such entity. The term "control" means the possession,
+ directly or indirectly, of the power to direct or cause the direction of the management
+ and policies of an entity, whether through the ownership of voting securities, by
+ contract, or otherwise.
+
+ 1.2. "**Applicable Law**" means, with respect to any Party, any federal, state, or local
+ statute, law, ordinance, rule, administrative interpretation, regulation, order, writ,
+ injunction, directive, judgment, decree, or other requirement of any international,
+ federal, state, or local court, administrative agency, or commission or other
+ governmental or regulatory authority or instrumentality, domestic or foreign, applicable
+ to such Party or any of its properties, assets, or business operations.
+
+ 1.3. "**Authorized User**" means Customer’s employees, contractors, or agents authorized
+ by Customer to access and use the Platform pursuant to the terms and conditions of this
+ Agreement; provided, however, that any contractors’ or agents’ access to and use of the
+ Platform will be limited to their provision of services to Customer. You are responsible
+ for all acts and omissions of Authorized Users and any other person who accesses and uses
+ the Platform using any of your or any Authorized Users’ login credentials.
+
+ 1.4. "**Customer Data**" means (i) any data and information that you or your Authorized
+ Users submit to the Platform; and (ii) Input.
+
+ 1.5. "**Customer Site**" means any location owned or leased solely by Customer or an
+ Affiliate or that portion of any shared space, such as a shared data center, attributable
+ solely to Customer or such Affiliate on which the Software may be installed.
+
+ 1.6. "**Documentation**" means the manuals, specifications, and other materials
+ describing the functionality, features, and operating characteristics, and use of the
+ Platform as provided or made available by Mirascope to Customer whether in a written
+ or electronic form.
+
+ 1.7. "**Harmful Code**" means computer code, programs, or programming devices that are
+ intentionally designed to disrupt, modify, access, delete, damage, deactivate, disable,
+ harm, or otherwise impede in any manner, including aesthetic disruptions or distortions,
+ the operation of the Platform, or any other associated software, firmware, hardware,
+ computer system, or network (including, without limitation, "Trojan horses," "viruses,"
+ "worms," "time bombs," "time locks," "devices," "traps," "access codes," or "drop dead"
+ or "trap door" devices) or any other harmful, malicious, or hidden procedures, routines
+ or mechanisms that would cause the Platform to cease functioning or to damage or corrupt
+ data, storage media, programs, equipment, or communications, or otherwise interfere with
+ the operations of the Platform.
+
+ 1.8. "**Input**" means any and all information, content or data that you or your
+ Authorized Users input to the Platform for processing.
+
+ 1.9. "**Maximum Number of Authorized Users**" means the maximum number of Authorized
+ Users that are allowed to access and use the Platform under the subscription plan you
+ select.
+
+ 1.10. "**Output**" means data generated by the Platform as a result of processing the
+ Input.
+
+ 1.11. "**Platform**" means our proprietary open source AI engineering platform: Lilypad
+ and all Updates thereto, together with all Documentation.
+
+ 1.12. "**Mirascope Hosted SaaS Solution**" means the Platform that is delivered and
+ managed by Mirascope as software-as-a-service over the internet.
+
+ 1.13. "**Self-Hosted SaaS Solution**" means Software that is installed, hosted and
+ maintained by Customer instead of Mirascope.
+
+ 1.14. "**Software**" means the object code form of the Platform licensed to Customer
+ for installation at Customer Sites. Software does not include any source code of the
+ Platform.
+
+ 1.15. "**Sensitive Information**" means credit or debit card numbers; financial account
+ numbers or wire instructions, government issued identification numbers (such as Social
+ Security numbers, passport numbers), biometric information, protected health information,
+ personal information of children protected under any child data protection laws, and any
+ other information or combinations of information that falls within the definition of
+ "special categories of data" under Applicable Law relating to privacy and data protection.
+
+ 1.16. "**Subscription Term**" means the duration of the subscription for the Platform
+ that you purchase.
+
+ 1.17. "**Updates**" means any generally available corrections, fixes, patches,
+ workarounds, and minor modifications denominated by version changes to the right of
+ the decimal point (e.g., v3.0 to v3.1) to the Platform that Mirascope provides to
+ Customer under this Agreement. All version numbers shall be reasonably determined by
+ Mirascope in accordance with normal industry practice. Updates do not include additions
+ or modifications that Mirascope considers to be a separate product or for which Mirascope
+ charges its customers extra or separately.
+
+ 1.18. "**Usage Data**" means the data that we collect in connection with our monitoring
+ of the performance and use of the Platform by you and your Authorized Users, including,
+ without limitation, date and time that you access the Platform, the portions of the
+ Platform visited, the frequency and number of times such pages are accessed, the number
+ of times the Platform is used in a given time period and other usage and performance
+ data.
+
+2. **ACCESS TO THE PLATFORM**
+
+ 2.1. **Right to Access the Mirascope Hosted SaaS Solution.** If you have purchased a
+ subscription to the Mirascope Hosted SaaS Solution, then subject to the terms and
+ conditions of this Agreement, we hereby grant you during the Subscription Term a
+ limited, non-exclusive, non-transferable, non-sublicensable, revocable right and
+ license to permit your Authorized Users (but no more than Maximum Number of Authorized
+ Users) to access and use the Mirascope Hosted SaaS Solution solely for your internal
+ business purposes.
+
+ 2.2. **Right to Access the Self-Hosted SaaS Solution.** If you have purchased a
+    subscription to the Self-Hosted SaaS Solution, then subject to the terms and
+ conditions of this Agreement, we hereby grant you during the Subscription Term a
+ limited, non-exclusive, non-transferable, non-sublicensable, revocable license to:
+ (i) install and execute the Software at Customer Sites only; and (ii) permit your
+ Authorized Users (but no more than Maximum Number of Authorized Users) to access and
+ use the Self-Hosted SaaS Solution solely for your internal business purposes.
+
+ 2.3. **Delivery.** Customer will receive access to the Mirascope Hosted SaaS Solution
+ via a website hosted by Mirascope and Software via electronic delivery.
+
+ 2.4. **Modifications.** We reserve the right to modify the Platform, from time to time
+ by adding, deleting, or modifying features to improve the user experience or for other
+ business purposes. We further reserve the right to discontinue any feature of the Platform at any time during the Term at our sole and reasonable discretion. Any such modification or discontinuance will not materially decrease the overall functionality of the Platform.
+
+ 2.5. **Beta Features.** From time to time, we may invite Customer to try "beta" features
+ or functionalities of the Platform which are not generally available to our customers for
+ use at no charge. Customer may accept or decline any such trial in its sole discretion.
+ Such beta features are for evaluation purposes only and not for use, are not considered
+ part of the Platform under this Agreement, are not supported, and may be subject to
+ additional terms. Unless otherwise expressly agreed to by us, any beta feature trial
+ period will expire upon the date that a version of the beta feature becomes generally
+ available to all of our customers for use or upon the date that we elect to discontinue
+ such beta feature. We may discontinue beta features at any time in our sole discretion
+ and may never make them generally available as part of the Platform. We will have no
+ liability to Customer or any third party for any harm or damage arising out of or in
+ connection with any use of a beta feature, and Customer’s use of any beta feature is
+ at Customer’s own risk.
+
+ 2.6. **Restrictions on Use.** You shall not (and shall not authorize, permit, or
+ encourage any third party to): (i) allow anyone other than Authorized Users to use
+ the Platform; (ii) reverse engineer, decompile, disassemble, or otherwise attempt to
+ discern the source code or interface protocols of the Platform; (iii) modify, adapt,
+ or translate the Platform, or any portion or component thereof; (iv) make any copies
+ of the Platform, or any portion or component thereof, except as permitted under this
+ Agreement; (v) resell, distribute, or sublicense the Platform, or any portion or
+ component thereof, or use any of the foregoing for the benefit of anyone other than
+ Customer; (vi) remove or modify any proprietary markings or restrictive legends placed
+ on the Platform; (vii) use the Platform, or any portion or component thereof in violation
+ of any Applicable Law, in order to build a competitive product or service, or for any
+ purpose not specifically permitted in this Agreement; (viii) introduce, post, or upload
+ to the Platform any Harmful Code; (ix) use the Platform in connection with service
+ bureau, timeshare, service provider or like activity whereby you operate the Platform
+    for the benefit of a third party; or (x) circumvent any processes, procedures, or
+ technologies that we have put in place to safeguard the Platform.
+
+ 2.7. **Documentation.** Customer may copy and use (and permit the Authorized Users to
+ copy and use) the Documentation solely in connection with the use of the Platform under
+ this Agreement.
+
+ 2.8. **Third-Party Items.** The Platform may include, or be dependent on, certain
+ third-party data, open source software components, application programming interfaces,
+ and other items (the "**Third-Party Items**"). Third-Party Items shall not include any
+ Third-Party AI Models. Mirascope agrees that throughout the Subscription Term,
+ Mirascope will ensure that it at all times maintains all rights and licenses in and
+ to the Third-Party Items that are necessary to ensure that Customer and its Authorized
+ Users can use the Platform in the manner contemplated in this Agreement. Mirascope
+ will provide support for the Third-Party Items in the same manner and scope for which
+ it provides support for the Platform hereunder. MIRASCOPE, NOT BEING THE OWNER,
+ OPERATOR, SUPPLIER, OR PRODUCER OF THE THIRD-PARTY ITEMS NOR THEIR AGENT, DOES NOT
+ ENDORSE ANY THIRD-PARTY ITEMS, AND MAKES NO EXPRESS OR IMPLIED WARRANTY OF ANY KIND
+ WHATSOEVER WITH RESPECT TO THE THIRD-PARTY ITEMS AND DISCLAIMS ANY SUCH WARRANTIES
+ THAT MIGHT OTHERWISE EXIST.
+
+ 2.9. **Third-Party AI Models.** The Platform may use Third-Party AI Models to provide
+ the AI features and functionality. You acknowledge and understand that your use of such
+ AI features and functionality will be governed by the terms and conditions of third
+ parties that provide such Third-Party AI Models
+ ("**Third-Party AI Models Terms and Conditions**") and your Input may be used by such
+ third parties in accordance with such Third-Party AI Models Terms and Conditions.
+ Mirascope has no control over the use of the Input, thus, any use of such is at your
+ own risk and Mirascope does not represent, undertake or warrant to any security or
+ control of or to the Input.
+
+ 2.10. **Onboarding of Authorized Users.** Each Authorized User will be required to create
+ an account, which includes a username, a password, and certain additional information,
+ including a valid email address, that will assist in authenticating the Authorized User’s
+ identity when he or she logs into the Platform in the future (collectively,
+ "**Log-in Credentials**"). When creating an account, an Authorized User must provide
+ true, accurate, current, and complete information. You are solely responsible for the
+ confidentiality and use of Authorized Users’ Log-in Credentials, as well as for any use,
+ misuse, or communications entered through the Software. You shall promptly inform us of
+ any need to deactivate a username, password, or other Log-in Credential. We reserve the
+ right to delete or change Authorized Users’ Log-in Credentials at any time and for any
+ reason. We will not be liable for any unauthorized use of an Authorized User’s account.
+
+ 2.11. **Support Services.** Mirascope shall use commercially reasonable efforts to
+ provide you and your Authorized Users problem resolution and technical support in
+ connection with the Platform during the Subscription Term (the "**Support Services**").
+ In order to provide Support Services for Self-Hosted SaaS Solution, Mirascope may require
+ access to your instance of Self-Hosted SaaS Solution in order to debug and resolve any
+ issues you may encounter. Such access is enabled by default, but you may opt-out by
+ disabling it from the "Settings" of the Self-Hosted SaaS Solution. By disabling the
+ access, you understand and agree that you may not be able to receive all the benefits
+ of the Support Services provided by Mirascope under this Agreement.
+
+ 2.12. **Hosting the Mirascope Hosted SaaS Solution.** During the Subscription Term,
+ we, or our contractors, shall host the Mirascope Hosted SaaS Solution, such that the
+ Mirascope Hosted SaaS Solution is available for use by your Authorized Users. We
+ and/or our contractors shall periodically monitor the Mirascope Hosted SaaS Solution
+ to optimize performance, and shall use commercially reasonable efforts to minimize any
+ downtime, other than for scheduled maintenance or downtime caused by reasons beyond our
+ reasonable control, including, but not limited to, acts of God, acts of any governmental
+ body, war, insurrection, sabotage, armed conflict, terrorism, embargo, fire, flood,
+ strike or other labor disturbance, unavailability of or interruption or delay in
+ telecommunications or third-party services, or virus attacks or hackers. We will use
+ commercially reasonable efforts to notify you of any unavailability or other issue
+ with the Mirascope Hosted SaaS Solution. You and your Authorized Users will be
+ responsible for obtaining Internet connections and other third-party software and
+ services necessary for them to access the Mirascope Hosted SaaS Solution.
+
+ 2.13. **Overage.** At any time during the Subscription Term, if Customer usage exceeds
+ its subscription plan ("**Overage**"), Customer will correct the Overage by purchasing
+ additional licenses within fifteen (15) days of the Overage. If Customer does not
+ purchase licenses for the Overage within such fifteen (15) day period, Mirascope may
+ suspend Customer’s use of the Platform by providing five (5) days’ prior notice. Customer
+ agrees (i) that Mirascope may access and view the Self-Hosted SaaS Solution and (ii) to
+ provide Mirascope with all information reasonably required for the purpose of verifying
+ Customer’s compliance with subscription plan usage, which may be in the form of a formal
+ certification.
+
+ 2.14. **Privacy Policy.** Your use of the Platform may involve the transmission to us
+ of certain personal information. Our policies with respect to the collection and use of
+ such personal information are governed according to our Privacy Policy, located at
+ [https://mirascope.com/privacy](/privacy), which is hereby
+ incorporated by reference in its entirety.
+
+3. **CUSTOMER DATA; OUTPUT.**
+
+ 3.1. **Customer Data.** Subject to the terms and conditions of this Agreement, Customer
+ hereby grants us a non-exclusive, worldwide, fully paid-up, royalty-free right and
+ license, with the right to grant sublicenses, to reproduce, execute, use, store, archive,
+ modify, perform, display, and distribute the Customer Data during the Subscription Term
+ for the purpose of performing its obligations under this Agreement. You will have sole
+ responsibility for the accuracy, quality, and legality of your Customer Data.
+
+ 3.2. **Input and Output.** Customer is solely responsible for ensuring that the Input
+ and Output complies with Applicable Laws and this Agreement. You may use the Input and
+ Output for any legal and lawful purposes, at your own risk. Due to the nature of
+ artificial intelligence, Output may not be unique across all users and the AI features
+ and functionality of the Platform may generate the same or similar Output for different
+ users or third parties.
+
+ 3.3. **Aggregated Data.** Notwithstanding anything to the contrary herein, we may use,
+ and may permit our third-party service providers to access and use, the Customer Data,
+ as well as any Usage Data that we may collect, in an anonymous and aggregated form
+ ("**Aggregate Data**") for the purposes of operating, maintaining, managing, and
+ improving our products and services including the Platform. Aggregate Data does not
+ identify Customer or any individual (including any Authorized User). You hereby agree
+ that we may collect, use, publish, disseminate, sell, transfer, and otherwise exploit
+ such Aggregate Data.
+
+ 3.4. **Data Security.** We (and any third-party hosting provider that we may engage)
+ will employ commercially reasonable physical, administrative, and technical safeguards
+ to secure the Customer Data in the Mirascope Hosted SaaS Solution, from unauthorized
+ use or disclosure. You acknowledge and agree that you are responsible for implementing
+ and maintaining commercially reasonable physical, administrative and technical
+ safeguards to secure the Customer Data in the Self-Hosted SaaS Solution from
+ unauthorized use or disclosure.
+
+4. **INTELLECTUAL PROPERTY**
+
+As between the Parties, all right, title, and interest in and to the Platform, the
+Aggregate Data, and the Usage Data, including all modifications, improvements, adaptations,
+enhancements, derivatives, or translations made thereto or therefrom, and all intellectual
+property rights therein, are and will remain the sole and exclusive property of Mirascope.
+Subject to Section 3, all right, title, and interest in and to the Customer Data and all
+intellectual property rights therein, will be and remain Customer’s sole and exclusive
+property. As between the Parties and to the extent permitted by the Third-Party AI Models
+Terms and Conditions, we hereby grant you the right to use the Output in accordance with
+this Agreement.
+
+5. **RESTRICTIONS**
+
+The Platform is available only for individuals aged 18 years or older. If you are under 18
+years of age, then please do not access and/or use the Platform. By entering into this
+Agreement, you represent and warrant that you are 18 years or older.
+
+6. **FEEDBACK**
+
+We welcome and encourage you to provide feedback, comments, and suggestions for improvements
+to the Platform and our services ("**Feedback**"). Although we encourage you to e-mail us,
+we do not want you to, and you should not, e-mail us any content that contains confidential
+information. With respect to any Feedback you provide, we shall be free to use and disclose
+any ideas, concepts, know-how, techniques, or other materials contained in your Feedback
+for any purpose whatsoever, including, but not limited to, the development, production and
+marketing of products and services that incorporate such information, without compensation or
+attribution to you.
+
+7. **REPRESENTATIONS AND WARRANTIES**
+
+You represent and warrant that (i) you have all rights and permissions, and have provided all
+notices and obtained all consents that are necessary for us to process the Customer Data;
+(ii) use of Customer Data in accordance with this Agreement shall not violate or
+misappropriate any intellectual property, privacy, publicity, contractual or other rights of
+any third party; and (iii) you will not input, submit, or otherwise process any Sensitive
+Information through the Platform.
+
+8. **NO WARRANTIES; LIMITATION OF LIABILITY**
+
+ALTHOUGH CERTAIN DATA AND MATERIALS THAT MAY BE GENERATED BY THE PLATFORM CAN BE USED AS AN
+AID TO CUSTOMER AND ITS AUTHORIZED USERS TO MAKE INFORMED BUSINESS DECISIONS, SUCH DATA AND
+MATERIALS ARE NOT MEANT TO SUBSTITUTE FOR LEGAL OR BUSINESS ADVICE OR CUSTOMER’S OR ANY AUTHORIZED
+USER’S EXERCISE OF THEIR OWN BUSINESS JUDGMENT. ANY SUCH DECISIONS OR JUDGMENTS ARE MADE AT
+SUCH PARTY’S SOLE DISCRETION AND ELECTION. YOU ACKNOWLEDGE AND AGREE THAT THE PLATFORM HAS
+NOT BEEN DESIGNED TO PROCESS OR MANAGE SENSITIVE INFORMATION, AND YOU AND YOUR AUTHORIZED
+USERS AGREE NOT TO USE THE PLATFORM TO COLLECT, MANAGE, OR OTHERWISE PROCESS ANY SENSITIVE
+INFORMATION. WE WILL NOT HAVE, AND WE SPECIFICALLY DISCLAIM ANY LIABILITY THAT MAY RESULT
+FROM YOUR OR YOUR AUTHORIZED USER’S USE OF THE PLATFORM TO COLLECT, MANAGE OR OTHERWISE
+PROCESS SENSITIVE INFORMATION. THE PLATFORM, ANY BETA FEATURES, THEIR COMPONENTS, ANY
+DOCUMENTATION, ANY THIRD-PARTY ITEMS AND ANY THIRD-PARTY AI MODELS, AND ANY OTHER MATERIALS
+AND INFORMATION ARE PROVIDED ON AN "AS IS" AND "AS AVAILABLE" BASIS, AND NEITHER MIRASCOPE
+NOR MIRASCOPE’S SUPPLIERS MAKE ANY WARRANTIES WITH RESPECT TO THE SAME OR OTHERWISE IN
+CONNECTION WITH THIS AGREEMENT, AND MIRASCOPE HEREBY DISCLAIMS ANY AND ALL EXPRESS, IMPLIED,
+OR STATUTORY WARRANTIES, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF NON-INFRINGEMENT,
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AVAILABILITY, ERROR-FREE OR UNINTERRUPTED
+OPERATION, AND ANY WARRANTIES ARISING FROM A COURSE OF DEALING, COURSE OF PERFORMANCE, OR
+USAGE OF TRADE.
+
+MIRASCOPE MAKES NO REPRESENTATIONS OR WARRANTIES WITH RESPECT TO THE ACCURACY OF ANY OUTPUTS.
+YOU ARE SOLELY RESPONSIBLE FOR EVALUATING THE ACCURACY OF ANY OUTPUT AND YOU SHALL NOT RELY
+ON MIRASCOPE TO DO SO. THE OUTPUT MAY NOT REFLECT CURRENT, CORRECT OR COMPLETE INFORMATION
+AND YOU AND YOUR AUTHORIZED USERS MAY RELY ON THE OUTPUT AT YOUR AND THEIR SOLE RISK. TO THE
+EXTENT THAT MIRASCOPE AND MIRASCOPE’S SUPPLIERS MAY NOT AS A MATTER OF APPLICABLE LAW
+DISCLAIM ANY IMPLIED WARRANTY, THE SCOPE AND DURATION OF SUCH WARRANTY WILL BE THE MINIMUM
+PERMITTED UNDER SUCH LAW.
+
+WITHOUT LIMITING THE FOREGOING, WE DO NOT WARRANT, GUARANTEE OR MAKE ANY REPRESENTATION, NOR
+SHALL WE BE RESPONSIBLE FOR (A) THE CORRECTNESS, ACCURACY, RELIABILITY, COMPLETENESS OR
+CURRENCY OF THE PLATFORM, ANY BETA FEATURES, THEIR COMPONENTS, ANY DOCUMENTATION, ANY
+THIRD-PARTY ITEMS AND ANY THIRD-PARTY AI MODELS, AND ANY OTHER MATERIALS AND INFORMATION
+PROVIDED HEREUNDER; OR (B) ANY RESULTS ACHIEVED OR ACTION TAKEN BY YOU IN RELIANCE ON THE
+PLATFORM, ANY BETA FEATURES, THEIR COMPONENTS, ANY DOCUMENTATION, ANY THIRD-PARTY ITEMS
+AND ANY THIRD-PARTY AI MODELS, AND ANY OTHER MATERIALS AND INFORMATION PROVIDED HEREUNDER.
+ANY DECISION, ACT OR OMISSION OF YOURS THAT IS BASED ON THE PLATFORM, ANY BETA FEATURES,
+THEIR COMPONENTS, ANY DOCUMENTATION, ANY THIRD-PARTY ITEMS AND ANY THIRD-PARTY AI MODELS,
+AND ANY OTHER MATERIALS AND INFORMATION PROVIDED HEREUNDER IS AT YOUR OWN AND SOLE RISK.
+THE PLATFORM, ANY BETA FEATURES, THEIR COMPONENTS, ANY DOCUMENTATION, ANY THIRD-PARTY
+ITEMS AND ANY THIRD-PARTY AI MODELS, AND ANY OTHER MATERIALS AND INFORMATION PROVIDED
+HEREUNDER IS PROVIDED AS A CONVENIENCE ONLY AND DOES NOT REPLACE THE NEED TO REVIEW ITS
+ACCURACY, COMPLETENESS AND CORRECTNESS.
+
+IN CONNECTION WITH ANY WARRANTY, CONTRACT, OR COMMON LAW TORT CLAIMS: (I) WE SHALL NOT BE
+LIABLE FOR ANY INCIDENTAL, SPECIAL, PUNITIVE, EXEMPLARY OR CONSEQUENTIAL DAMAGES, LOST
+PROFITS, LOST REVENUES, OR DAMAGES RESULTING FROM LOST DATA OR BUSINESS INTERRUPTION
+RESULTING FROM THE USE OR INABILITY TO ACCESS AND USE THE PLATFORM, EVEN IF WE HAVE BEEN
+ADVISED OF THE POSSIBILITY OF SUCH DAMAGES; AND (II) ANY DIRECT DAMAGES THAT YOU MAY SUFFER
+AS A RESULT OF YOUR USE OF THE PLATFORM, SHALL BE LIMITED TO THE TOTAL FEES PAID AND
+PAYABLE TO US BY YOU IN THE IMMEDIATELY PRECEDING THREE (3) MONTH PERIOD FROM THE DATE
+ON WHICH THE CLAIM ARISES. ANY CLAIMS MADE BY YOU IN CONNECTION WITH YOUR USE OF THE
+PLATFORM MUST BE BROUGHT BY YOU WITHIN ONE (1) YEAR OF THE DATE ON WHICH THE EVENT
+GIVING RISE TO SUCH ACTION OCCURRED.
+
+9. **INDEMNIFICATION**
+
+You will indemnify, defend, and hold Mirascope, its affiliates, and our and their respective
+shareholders, members, officers, directors, employees, agents, and representatives
+(collectively, "**Mirascope Indemnitees**") harmless from and against any and all damages,
+liabilities, losses, costs, and expenses, including reasonable attorney’s fees (collectively,
+"**Losses**") incurred by any Mirascope Indemnitee in connection with a third-party claim, action,
+or proceeding (each, a "**Claim**") arising from your (i) breach of this Agreement, including but
+not limited to, any breach of your representations and warranties; (ii) Customer Data; (iii)
+negligence, gross negligence, willful misconduct, fraud, misrepresentation or violation of
+Applicable Laws; or (iv) violation of any third-party right, including without limitation
+any copyright, trademark, property, or privacy right; *provided, however*, that the foregoing
+obligations shall be subject to our: (i) promptly notifying you of the Claim; (ii) providing
+you, at your expense, with reasonable cooperation in the defense of the Claim; and (iii)
+providing you with sole control over the defense and negotiations for a settlement or
+compromise.
+
+10. **EXTERNAL SITES**
+
+The Platform may contain links to third-party websites ("**External Sites**"). These links
+are provided solely as a convenience to you and not as an endorsement by us of the content
+on such External Sites. The content of such External Sites is developed and provided by
+others. You should contact the website administrator or webmaster for those External Sites
+if you have any concerns regarding such links or any content located on such External Sites.
+We are not responsible for the content of any linked External Sites and do not make any
+representations regarding the content or accuracy of materials on such External Sites.
+You should take precautions when downloading files from all websites to protect your
+computer from viruses and other destructive programs. If you decide to access linked
+External Sites, you do so at your own risk.
+
+11. **FEES AND PAYMENT**
+
+In exchange for your access to and use of the Platform, you agree to pay the fees for the
+applicable subscription plan that you selected at registration ("**Fees**"). We may use a
+third-party payment vendor ("**Third-Party Payment Processor**") to process your payment.
+You warrant and represent that you are the valid owner or an authorized user, of the
+credit card or payment account that you provide to such Third-Party Payment Processor,
+and that all information you provide is accurate. If payment is not received from your
+credit card issuer or any other payment facility, you hereby agree to pay all amounts due
+upon demand. You agree to pay all costs of collection, including attorney’s fees and costs,
+on any outstanding balance.
+
+
+IT IS IMPORTANT TO NOTE THAT WHEN YOU SIGN UP FOR A SUBSCRIPTION (MONTHLY, ANNUALLY, OR
+OTHERWISE), YOUR SUBSCRIPTION WILL AUTOMATICALLY RENEW UNTIL YOU CANCEL IT. YOU MAY CANCEL
+AT ANY TIME BY FOLLOWING THE INSTRUCTIONS IN YOUR ACCOUNT OR BY CONTACTING US AT
+support@mirascope.com AND THE CANCELLATION WILL TAKE EFFECT AT THE EXPIRATION OF THE
+THEN-CURRENT TERM. AGAIN, IF YOU DO NOT CANCEL, THEN YOUR SUBSCRIPTION WILL AUTOMATICALLY
+RENEW UNDER THE SAME SUBSCRIPTION. THERE ARE NO REFUNDS FOR CANCELLATION, AND YOU UNDERSTAND
+AND AGREE THAT YOU SHALL RECEIVE NO REFUND OR EXCHANGE FOR ANY UNUSED TIME OF THE
+SUBSCRIPTION ACCORDING TO THE CHOSEN PREFERENCES (EITHER A MONTHLY OR A YEARLY SUBSCRIPTION).
+
+We reserve the right to institute new or additional fees at any time upon notice to you.
+
+12. **COMPLIANCE WITH APPLICABLE LAWS**
+
+The Platform is based in the United States. We make no claims concerning whether the Platform
+may be viewed or be appropriate for use outside of the United States. If you access the
+Platform from outside of the United States, you do so at your own risk. Whether inside or
+outside of the United States, you are solely responsible for ensuring compliance with the
+laws of your specific jurisdiction.
+
+13. **TERM AND TERMINATION**
+
+Your right to access and use the Platform will commence upon your acceptance of these Terms
+of Service and will continue for the duration of the subscription plan that you selected at
+registration (the "**Initial Term**"). Thereafter, this Agreement will automatically renew
+for consecutive terms equivalent to the duration of your subscription plan (each, a
+"**Renewal Term**" and collectively, with the Initial Term, the "**Term**"), unless you
+notify us at least thirty (30) days prior to the expiration of the then-current renewal
+term of your intention to not renew.
+
+If you fail to pay the applicable Fees when due, we reserve the right to downgrade your
+subscription to the Platform to the free plan. You shall retain access to the Platform offered
+under the free plan, but premium features and benefits associated with the paid subscription
+will no longer be available. We may reinstate your paid subscription upon receipt of the
+outstanding payment in full.
+
+Upon termination or expiration of this Agreement: (i) you will stop all access to and use of
+the Platform; provided, however, if you have purchased a subscription to the Self-Hosted
+SaaS Solution, then you will retain the right to access and use the free version of
+the Self-Hosted SaaS Solution; (ii) you will promptly pay all unpaid Fees and
+applicable taxes due through the date of such termination or expiration; and (iii) with
+respect to Mirascope Hosted SaaS Solution and provided all Fees due under the Agreement in
+connection with Mirascope Hosted SaaS Solution have been paid, Customer will have up to
+thirty (30) days from the effective date of the termination or expiration of this Agreement
+to retrieve all Customer Data from the Platform and thereafter, Mirascope will have no
+obligation to retain Customer Data or make Customer Data available to Customer. Your
+continued access to the free version of the Self-Hosted SaaS Solution as per (i)
+is subject to your and your Authorized Users’ compliance with the terms and conditions of
+this Agreement. Mirascope shall not be obligated to provide any Support Services for the
+free version and Mirascope reserves the right to modify or discontinue the free version
+at its sole discretion without prior notice.
+
+We reserve the right to change, suspend, discontinue or terminate your access and use of
+all or any part of the Platform at any time without prior notice or liability. Sections 1,
+3, 4, 6, 7, 8, 9, 11, and 13-20 shall survive the termination of this Agreement.
+
+14. **BINDING ARBITRATION**
+
+In the event of a dispute arising under or relating to this Agreement, the Platform, or
+any products or services (each, a "Dispute"), such dispute will be finally and
+exclusively resolved by binding arbitration governed by the Federal Arbitration Act
+("FAA"). NEITHER PARTY SHALL HAVE THE RIGHT TO LITIGATE SUCH CLAIM IN COURT OR
+TO HAVE A JURY TRIAL, EXCEPT EITHER PARTY MAY BRING ITS CLAIM IN ITS LOCAL SMALL CLAIMS
+COURT, IF PERMITTED BY THAT SMALL CLAIMS COURT’S RULES AND IF WITHIN SUCH COURT’S
+JURISDICTION. ARBITRATION IS DIFFERENT FROM COURT, AND DISCOVERY AND APPEAL RIGHTS MAY
+ALSO BE LIMITED IN ARBITRATION. All disputes will be resolved before a neutral arbitrator
+selected jointly by the parties, whose decision will be final, except for a limited right
+of appeal under the FAA. The arbitration shall be commenced and conducted by JAMS
+pursuant to its then current Comprehensive Arbitration Rules and Procedures and in
+accordance with the Expedited Procedures in those rules, or, where appropriate, pursuant
+to JAMS’ Streamlined Arbitration Rules and Procedures. All applicable JAMS’ rules and
+procedures are available at the JAMS website [www.jamsadr.com](https://www.jamsadr.com/). Each party
+will be responsible for paying any JAMS filing, administrative, and arbitrator fees in
+accordance with JAMS rules. Judgment on the arbitrator’s award may be entered in any court
+having jurisdiction. This clause shall not preclude parties from seeking provisional remedies
+in aid of arbitration from a court of appropriate jurisdiction. The arbitration may be
+conducted in person, through the submission of documents, by phone, or online. If conducted
+in person, the arbitration shall take place in the United States county where you reside.
+The parties may litigate in court to compel arbitration, to stay a proceeding pending
+arbitration, or to confirm, modify, vacate, or enter judgment on the award entered by the
+arbitrator. The parties shall cooperate in good faith in the voluntary and informal exchange
+of all non-privileged documents and other information (including electronically stored
+information) relevant to the Dispute immediately after commencement of the arbitration.
+Nothing in this Agreement will prevent us from seeking injunctive relief in any court of
+competent jurisdiction as necessary to protect our proprietary interests.
+
+15. **CLASS ACTION WAIVER**
+
+You agree that any arbitration or proceeding shall be limited to the Dispute between us and
+you individually. To the full extent permitted by law, (i) no arbitration or proceeding shall
+be joined with any other; (ii) there is no right or authority for any Dispute to be arbitrated
+or resolved on a class action-basis or to utilize class action procedures; and (iii) there is
+no right or authority for any Dispute to be brought in a purported representative capacity on
+behalf of the general public or any other persons. YOU AGREE THAT YOU MAY BRING CLAIMS AGAINST
+US ONLY IN YOUR INDIVIDUAL CAPACITY AND NOT AS A PLAINTIFF OR CLASS MEMBER IN ANY PURPORTED
+CLASS OR REPRESENTATIVE PROCEEDING.
+
+16. **EQUITABLE RELIEF**
+
+You acknowledge and agree that in the event of a breach or threatened violation of our
+intellectual property rights and confidential and proprietary information by you, we will
+suffer irreparable harm and will therefore be entitled to injunctive relief to enforce this
+Agreement. We may, without waiving any other remedies under this Agreement, seek from any
+court having jurisdiction any interim, equitable, provisional, or injunctive relief that is
+necessary to protect our rights and property pending the outcome of the arbitration referenced
+above. You hereby irrevocably and unconditionally consent to the personal and subject matter
+jurisdiction of the federal and state courts in the State of California for purposes of any
+such action by us.
+
+17. **CONTROLLING LAW; EXCLUSIVE FORUM**
+
+The Agreement and any action related thereto will be governed by the laws of the State of
+California without regard to its conflict of laws provisions. The Parties hereby consent
+and agree to the exclusive jurisdiction of the state and federal courts located in the State
+of California for all suits, actions, or proceedings directly or indirectly arising out of or
+relating to this Agreement, and waive any and all objections to such courts, including but
+not limited to, objections based on improper venue or inconvenient forum, and each party
+hereby irrevocably submits to the exclusive jurisdiction of such courts in any suits,
+actions, or proceedings arising out of or relating to this Agreement.
+
+18. **FORCE MAJEURE.**
+
+Mirascope will not be deemed to be in breach of this Agreement for any failure or delay in
+performance to the extent caused by reasons beyond its reasonable control, including, but
+not limited to, acts of God, acts of any governmental body, war, insurrection, sabotage,
+armed conflict, terrorism, embargo, fire, flood, strike or other labor disturbance,
+COVID-19, quarantine restrictions, freight embargoes, unavailability of or interruption
+or delay in telecommunications or third-party services, or virus attacks or hackers
+(collectively, "Force Majeure Event"). When such Force Majeure Event arises, Mirascope
+shall promptly notify you of its failure to perform, describing the cause of failure and
+how it affects performance, and the anticipated duration of the inability to perform. For
+the avoidance of doubt, nothing in this Section 18 shall be construed to excuse any payment
+obligations hereunder.
+
+19. **EXPORT CONTROL LAWS.**
+
+Our Platform may be subject to export control laws and regulations of the United States.
+You hereby certify that you and your Authorized Users will comply with all U.S. export
+control laws and regulations including but not limited to the International Traffic in Arms
+Regulations ("ITAR") (22 CFR 120-130), Export Administration Regulations ("EAR")
+(15 CFR 730-774) and regulations administered by the U.S. Treasury Department’s Office of
+Foreign Assets Control ("OFAC") (31 CFR 500-598) (collectively, the "**Export Control Laws**").
+You and your Authorized Users agree not to, directly or indirectly, use, sell, supply,
+export, reexport, transfer, divert, release, or otherwise dispose of the Software and any
+products, software, or technology (including products derived from or based on such
+technology) received from Mirascope under this Agreement to any destination, entity, or
+person or for any end use prohibited by applicable Export Control Laws.
+
+20. **MISCELLANEOUS.**
+
+You may not assign any of your rights, duties, or obligations under these Terms of Service
+to any person or entity, in whole or in part, without written consent from Mirascope. Our
+failure to act on or enforce any provision of the Agreement shall not be construed as a
+waiver of that provision or any other provision in this Agreement. No waiver shall be
+effective against us unless made in writing, and no such waiver shall be construed as a
+waiver in any other or subsequent instance. Except as expressly agreed by us and you in
+writing, the Agreement constitutes the entire agreement between you and us with respect to
+the subject matter, and supersedes all previous or contemporaneous agreements, whether
+written or oral, between the parties with respect to the subject matter. You acknowledge
+and agree that there are no third-party beneficiaries under this Agreement. The section
+headings are provided merely for convenience and shall not be given any legal import.
+This Agreement will inure to the benefit of our successors, assigns, licensees, and
+sublicensees.
+
+Copyright 2025 Mirascope, Inc. All rights reserved.
\ No newline at end of file
diff --git a/cloud/content/policy/terms/use.mdx b/cloud/content/policy/terms/use.mdx
new file mode 100644
index 0000000000..3d8fed41e1
--- /dev/null
+++ b/cloud/content/policy/terms/use.mdx
@@ -0,0 +1,304 @@
+---
+title: Terms of Use
+lastUpdated: 2025-04-08
+description: Guidelines and rules for using the Mirascope website.
+---
+
+Mirascope, Inc. ("Mirascope," or "we," "our," or "us")
+welcomes you. We invite you to access and use our website located at
+[https://mirascope.com/](/) (the "Website"), subject to the
+following terms and conditions (the "Terms of Use").
+
+PLEASE READ THESE TERMS OF USE CAREFULLY. BY VISITING THE WEBSITE, YOU ACKNOWLEDGE
+THAT YOU HAVE READ, UNDERSTOOD, AND AGREE TO BE LEGALLY BOUND BY THESE TERMS OF USE,
+AND THE TERMS AND CONDITIONS OF OUR PRIVACY POLICY (THE "PRIVACY POLICY"), WHICH
+IS HEREBY INCORPORATED INTO THESE TERMS OF USE AND MADE A PART HEREOF BY REFERENCE
+(COLLECTIVELY, THE "AGREEMENT"). IF YOU DO NOT AGREE TO ANY OF THE TERMS IN THIS
+AGREEMENT, THEN PLEASE DO NOT USE THE WEBSITE.
+
+If you accept or agree to the Agreement on behalf of a company or other legal entity,
+you represent and warrant that you have the authority to bind that company or other
+legal entity to the Agreement and, in such event, "you" and "your" will
+refer and apply to that company or other legal entity.
+
+We reserve the right, at our sole discretion, to modify, discontinue, or terminate the
+Website, or to modify the Agreement, at any time and without prior notice. If we modify
+the Agreement, we will post the modification on the Website. By continuing to access or
+use the Website after we have posted a modification on the Website, you are indicating
+that you agree to be bound by the modified Agreement. If the modified Agreement is not
+acceptable to you, your only recourse is to cease using the Website.
+
+**THE SECTIONS BELOW TITLED "BINDING ARBITRATION" AND "CLASS ACTION WAIVER" CONTAIN A
+BINDING ARBITRATION AGREEMENT, AND CLASS ACTION WAIVER. THEY AFFECT YOUR LEGAL RIGHTS.
+PLEASE READ THEM.**
+
+Capitalized terms not defined in these Terms of Use shall have the meaning set forth in
+our Privacy Policy.
+
+## 1. USE OF PERSONAL INFORMATION
+
+Your use of the Website may involve the transmission to us of certain personal information.
+Our policies with respect to the collection and use of such personal information are governed
+according to our Privacy Policy, located at [https://mirascope.com/privacy](/privacy),
+which is hereby incorporated by reference in its entirety.
+
+## 2. INTELLECTUAL PROPERTY
+
+The Website contains materials, such as software, text, graphics, images, sound recordings,
+audiovisual works, tutorials, and other material provided by or on behalf of Mirascope
+(collectively referred to as the "Content"). The Content may be owned by us or our
+licensors and is protected under both United States and foreign laws. Unauthorized use of
+the Content may violate copyright, trademark, and other intellectual property rights or
+laws. You have no rights in or to the Content, and you will not use the Content except as
+permitted under this Agreement. No other use is permitted without prior written consent
+from us. You must retain all copyright and other proprietary or legal notices contained
+in the original Content. You may not sell, transfer, assign, license, sublicense, or
+modify the Content or reproduce, display, publicly perform, make a derivative version of,
+distribute, or otherwise use the Content in any way for any public or commercial purpose.
+The use or posting of the Content outside the Website, or in a networked computer
+environment for any purpose is expressly prohibited.
+
+If you violate any part of this Agreement, your permission to access the Website
+automatically terminates and you must immediately destroy any copies you have made of
+the Website.
+
+The trademarks, service marks, and logos of Mirascope (the "Mirascope Trademarks")
+used and displayed on the Website are registered and unregistered trademarks or service
+marks of Mirascope. Other company, products and service names located on the Website
+may be trademarks or service marks owned by others (the "Third-Party Trademarks,"
+and, collectively with Mirascope Trademarks, the "Trademarks"). Nothing on the
+Website should be construed as granting, by implication, estoppel, or otherwise, any
+license or right to use the Trademarks, without our prior written permission specific
+for each such use. Use of the Trademarks as part of a link to or from any website is
+prohibited unless establishment of such a link is approved in advance by us in writing.
+All goodwill generated from the use of Mirascope Trademarks inures to our benefit.
+
+Elements of the Website are protected by trade dress, trademark, unfair competition,
+and other state and federal laws and may not be copied or imitated in whole or in part,
+by any means, including, but not limited to, the use of framing or mirrors. None of
+the Content may be retransmitted without our express, written consent for each and
+every instance.
+
+## 3. GUIDELINES
+
+By accessing and/or using the Website, you hereby agree to comply with the following
+guidelines:
+
+- You will not use the Website for any unlawful purpose;
+
+- You will not access or use the Website to collect any market research for a
+competing business;
+
+- You will not impersonate any person or entity or falsely state or otherwise
+misrepresent your affiliation with a person or entity;
+
+- You will not decompile, reverse engineer, or disassemble any software or other
+platform or processes accessible through the Website;
+
+- You will not cover, obscure, block, or in any way interfere with any advertisements
+and/or safety features on the Website;
+
+- You will not circumvent, remove, alter, deactivate, degrade, or thwart any of the
+protections on the Website;
+
+- You will not use any robot, spider, scraper, or other automated means to access
+the Website for any purpose without our express written permission; provided,
+however, we grant the operators of public search engines permission to use
+spiders to copy materials from the public portions of the Website for the sole purpose
+of and solely to the extent necessary for creating publicly-available searchable
+indices of the materials, but not caches or archives of such materials;
+
+ - To the extent you utilize any robot, spider, scraper, or other automated means
+ to access the Website in violation of the foregoing, you hereby allow us to, with
+ or without notice to you, employ any technical safeguards or other means to block
+ such activities, including, without limitation, blocking your access to the Website
+ entirely.
+
+- You will not take any action that imposes or may impose (in our sole discretion) an
+unreasonable or disproportionately large load on our technical infrastructure; and
+
+- You will not interfere with or attempt to interrupt the proper operation of the
+Website through the use of any virus, device, information collection or transmission
+mechanism, software or routine, or access or attempt to gain access to any data,
+files, or passwords related to the Website through hacking, password or data mining,
+or any other means.
+
+We reserve the right, in our sole and absolute discretion, to deny you (or any device)
+access to the Website, or any portion thereof, without notice.
+
+## 4. FEEDBACK
+
+We welcome and encourage you to provide feedback, comments, and suggestions for
+improvements to the Website and our services ("Feedback"). Although we
+encourage you to e-mail us, we do not want you to, and you should not, e-mail us
+any content that contains confidential information. With respect to any Feedback
+you provide, we shall be free to use and disclose any ideas, concepts, know-how,
+techniques, or other materials contained in your Feedback for any purpose whatsoever,
+including, but not limited to, the development, promotion, and marketing of the Website
+and services that incorporate such information, without compensation or attribution
+to you.
+
+## 5. NO WARRANTIES; LIMITATION OF LIABILITY
+
+THE WEBSITE AND THE CONTENT ARE PROVIDED ON AN "AS IS" AND "AS AVAILABLE" BASIS,
+AND NEITHER MIRASCOPE NOR MIRASCOPE’S SUPPLIERS MAKE ANY WARRANTIES WITH RESPECT TO
+THE SAME OR OTHERWISE IN CONNECTION WITH THIS AGREEMENT, AND MIRASCOPE HEREBY
+DISCLAIMS ANY AND ALL EXPRESS, IMPLIED, OR STATUTORY WARRANTIES, INCLUDING, WITHOUT
+LIMITATION, ANY WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY, FITNESS FOR A
+PARTICULAR PURPOSE, AVAILABILITY, ERROR-FREE OR UNINTERRUPTED OPERATION, AND ANY
+WARRANTIES ARISING FROM A COURSE OF DEALING, COURSE OF PERFORMANCE, OR USAGE OF
+TRADE.
+
+IN CONNECTION WITH ANY WARRANTY, CONTRACT, OR COMMON LAW TORT CLAIMS: (I) WE SHALL
+NOT BE LIABLE FOR ANY INCIDENTAL OR CONSEQUENTIAL DAMAGES, LOST PROFITS, OR DAMAGES
+RESULTING FROM LOST DATA OR BUSINESS INTERRUPTION RESULTING FROM THE USE OR INABILITY
+TO ACCESS AND USE THE WEBSITE, OR ANY RELATED SERVICES, EVEN IF WE HAVE BEEN ADVISED
+OF THE POSSIBILITY OF SUCH DAMAGES; AND (II) ANY DIRECT DAMAGES THAT YOU MAY SUFFER
+AS A RESULT OF YOUR USE OF THE WEBSITE, OR ANY RELATED SERVICES SHALL BE LIMITED TO
+ONE HUNDRED DOLLARS ($100).
+
+SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF CERTAIN WARRANTIES. THEREFORE, SOME
+OF THE ABOVE LIMITATIONS ON WARRANTIES IN THIS SECTION MAY NOT APPLY TO YOU.
+
+THE WEBSITE MAY CONTAIN TECHNICAL INACCURACIES OR TYPOGRAPHICAL ERRORS OR OMISSIONS.
+WE ARE NOT RESPONSIBLE FOR ANY SUCH TYPOGRAPHICAL, TECHNICAL, OR PRICING ERRORS LISTED
+ON THE WEBSITE. WE RESERVE THE RIGHT TO MAKE CHANGES, CORRECTIONS, AND/OR IMPROVEMENTS
+TO THE WEBSITE AT ANY TIME WITHOUT NOTICE.
+
+## 6. EXTERNAL SITES
+
+The Website may contain links to third-party websites ("External Sites"). These
+links are provided solely as a convenience to you and not as an endorsement by us
+of the content on such External Sites. The content of such External Sites is
+developed and provided by others. You should contact the Website administrator
+or webmaster for those External Sites if you have any concerns regarding such
+links or any content located on such External Sites. We are not responsible for
+the content of any linked External Sites and do not make any representations
+regarding the content or accuracy of materials on such External Sites. You
+should take precautions when downloading files from all websites to protect
+your computer from viruses and other destructive programs. If you decide to
+access linked External Sites, you do so at your own risk.
+
+## 7. INDEMNIFICATION
+
+You will indemnify, defend, and hold Mirascope and its shareholders, members,
+officers, directors, employees, agents, and representatives (collectively,
+"Mirascope Indemnitees") harmless from and against any and all damages,
+liabilities, losses, costs, and expenses, including reasonable attorney’s fees
+(collectively, "Losses") incurred by any Mirascope Indemnitee in connection with
+a third-party claim, action, or proceeding (each, a "Claim") arising from (i)
+your breach of this Agreement; (ii) your misuse of the Website, and/or the Content;
+and/or (iii) your violation of any third-party rights, including without limitation
+any copyright, trademark, property, publicity, or privacy right; *provided*, *however*,
+that the foregoing obligations shall be subject to our: (i) promptly notifying you
+of the Claim; (ii) providing you, at your expense, with reasonable cooperation in
+the defense of the Claim; and (iii) providing you with sole control over the defense
+and negotiations for a settlement or compromise.
+
+## 8. COMPLIANCE WITH APPLICABLE LAWS
+
+The Website is based in the United States. We make no claims concerning whether the
+Website and/or the Content may be viewed or be appropriate for use outside of the
+United States. If you access the Website, and/or the Content from outside of the
+United States, you do so at your own risk. Whether inside or outside of the United
+States, you are solely responsible for ensuring compliance with the laws of your
+specific jurisdiction.
+
+## 9. TERMINATION OF THE AGREEMENT
+
+We reserve the right, in our sole discretion, to restrict, suspend, or terminate the
+Agreement and/or your access to all or any part of the Website, at any time and for
+any reason without prior notice or liability. We reserve the right to change, suspend,
+or discontinue all or any part of the Website at any time without prior notice or
+liability.
+
+## 10. BINDING ARBITRATION
+
+In the event of a dispute arising under or relating to this Agreement, the Website,
+or any other products or services provided by us (each, a "Dispute"), such dispute
+will be finally and exclusively resolved by binding arbitration governed by the
+Federal Arbitration Act ("FAA"). NEITHER PARTY SHALL HAVE THE RIGHT TO LITIGATE SUCH
+CLAIM IN COURT OR TO HAVE A JURY TRIAL, EXCEPT EITHER PARTY MAY BRING ITS CLAIM IN ITS
+LOCAL SMALL CLAIMS COURT, IF PERMITTED BY THAT SMALL CLAIMS COURT’S RULES AND IF WITHIN
+SUCH COURT’S JURISDICTION. ARBITRATION IS DIFFERENT FROM COURT, AND DISCOVERY AND APPEAL
+RIGHTS MAY ALSO BE LIMITED IN ARBITRATION. All disputes will be resolved before a
+neutral arbitrator selected jointly by the parties, whose decision will be final, except
+for a limited right of appeal under the FAA. The arbitration shall be commenced and
+conducted by JAMS pursuant to its then current Comprehensive Arbitration Rules and
+Procedures and in accordance with the Expedited Procedures in those rules, or, where
+appropriate, pursuant to JAMS’ Streamlined Arbitration Rules and Procedures. All
+applicable JAMS’ rules and procedures are available at the JAMS website
+[www.jamsadr.com](https://www.jamsadr.com/). Each party will be responsible for paying any JAMS
+filing, administrative, and arbitrator fees in accordance with JAMS rules. Judgment
+on the arbitrator’s award may be entered in any court having jurisdiction. This clause
+shall not preclude parties from seeking provisional remedies in aid of arbitration from
+a court of appropriate jurisdiction. The arbitration may be conducted in person,
+through the submission of documents, by phone, or online. If conducted in person, the
+arbitration shall take place in the United States county where you reside. The parties
+may litigate in court to compel arbitration, to stay a proceeding pending arbitration, or
+to confirm, modify, vacate, or enter judgment on the award entered by the arbitrator.
+The parties shall cooperate in good faith in the voluntary and informal exchange of all
+non-privileged documents and other information (including electronically stored information)
+relevant to the Dispute immediately after commencement of the arbitration. Nothing in
+these Terms of Use will prevent us from seeking injunctive relief in any court of
+competent jurisdiction as necessary to protect our proprietary interests.
+
+## 11. CLASS ACTION WAIVER
+
+You agree that any arbitration or proceeding shall be limited to the Dispute between
+us and you individually. To the full extent permitted by law, (i) no arbitration or
+proceeding shall be joined with any other; (ii) there is no right or authority for any
+Dispute to be arbitrated or resolved on a class action-basis or to utilize class action
+procedures; and (iii) there is no right or authority for any Dispute to be brought in a
+purported representative capacity on behalf of the general public or any other persons.
+YOU AGREE THAT YOU MAY BRING CLAIMS AGAINST US ONLY IN YOUR INDIVIDUAL CAPACITY AND NOT
+AS A PLAINTIFF OR CLASS MEMBER IN ANY PURPORTED CLASS OR REPRESENTATIVE PROCEEDING.
+
+## 12. EQUITABLE RELIEF
+
+You acknowledge and agree that in the event of a breach or threatened violation of our
+intellectual property rights and confidential and proprietary information by you, we
+will suffer irreparable harm and will therefore be entitled to injunctive relief to
+enforce this Agreement. We may, without waiving any other remedies under this Agreement,
+seek from any court having jurisdiction any interim, equitable, provisional, or injunctive
+relief that is necessary to protect our rights and property pending the outcome of the
+arbitration referenced above. You hereby irrevocably and unconditionally consent to the
+personal and subject matter jurisdiction of the federal and state courts in the State of
+California for purposes of any such action by us.
+
+## 13. CONTROLLING LAW; EXCLUSIVE FORUM
+
+The Agreement and any action related thereto will be governed by the laws of the State
+of California without regard to its conflict of laws provisions. For disputes not subject
+to binding arbitration under Section 10, the parties hereby consent and agree to the
+exclusive jurisdiction of the state and federal courts located in the State of California,
+for all suits, actions, or proceedings directly or indirectly arising out of or relating to
+this Agreement, and waive any and all objections to such courts, including but not limited
+to, objections based on improper venue or inconvenient forum, and each party hereby
+irrevocably submits to the exclusive jurisdiction of such courts in any suits, actions,
+or proceedings arising out of or relating to this Agreement.
+
+## 14. MISCELLANEOUS
+
+If the Agreement is terminated in accordance with the termination provision in
+"Section 9" above, such termination shall not affect the validity of the following
+provisions of this Agreement, which shall remain in full force and effect:
+"Intellectual Property," "Feedback,""No Warranties; Limitation of Liability,"
+"Indemnification," "Compliance with Applicable Laws," "Termination of the Agreement,"
+"Binding Arbitration," "Class Action Waiver," "Controlling Law; Exclusive Forum," and
+"Miscellaneous."
+
+You may not assign any of your rights, duties, or obligations under these Terms of Use to
+any person or entity, in whole or in part, without written consent from Mirascope. Our
+failure to act on or enforce any provision of the Agreement shall not be construed as a
+waiver of that provision or any other provision in this Agreement. No waiver shall be
+effective against us unless made in writing, and no such waiver shall be construed as a
+waiver in any other or subsequent instance. Except as expressly agreed by us and you in
+writing, the Agreement constitutes the entire agreement between you and us with respect
+to the subject matter, and supersedes all previous or contemporaneous agreements, whether
+written or oral, between the parties with respect to the subject matter. The section
+headings are provided merely for convenience and shall not be given any legal import.
+This Agreement will inure to the benefit of our successors, assigns, licensees, and
+sublicensees.
+
+Copyright 2025 Mirascope, Inc. All rights reserved.
\ No newline at end of file