@@ -631,4 +631,4 @@ Librarian().run()
This section is just the tip of the iceberg when it comes to building agents, implementing just one type of simple agent flow. It's important to remember that "agent" is quite a general term and can mean different things for different use-cases. Mirascope's various features make building agents easier, but it will be up to you to determine the architecture that best suits your goals.
-Next, we recommend taking a look at our [Agent Tutorials](/docs/mirascope/guides/agents/web-search-agent) to see examples of more complex, real-world agents.
\ No newline at end of file
+Next, we recommend taking a look at our [Agent Tutorials](/docs/v1/guides/agents/web-search-agent) to see examples of more complex, real-world agents.
\ No newline at end of file
diff --git a/cloud/content/docs/v1/learn/async.mdx b/cloud/content/docs/v1/learn/async.mdx
index 9d1892f01..a10ceaa34 100644
--- a/cloud/content/docs/v1/learn/async.mdx
+++ b/cloud/content/docs/v1/learn/async.mdx
@@ -44,7 +44,7 @@ Asynchronous programming is a crucial concept when building applications with LL
## Basic Usage and Syntax
-If you haven't already, we recommend first reading the section on [Calls](/docs/mirascope/learn/calls)
+If you haven't already, we recommend first reading the section on [Calls](/docs/v1/learn/calls)
To use async in Mirascope, simply define the function as async and use the `await` keyword when calling it. Here's a basic example:
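Here's a minimal sketch (assuming an OpenAI model and the `llm.call` decorator; exact model names are just examples):

```python
import asyncio

from mirascope import llm


@llm.call(provider="openai", model="gpt-4o-mini")
async def recommend_book(genre: str) -> str:
    return f"Recommend a {genre} book"


async def main():
    # Awaiting the decorated async function returns the full call response
    response = await recommend_book("fantasy")
    print(response.content)


asyncio.run(main())
```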
@@ -158,7 +158,7 @@ We are using `asyncio.gather` to run and await multiple asynchronous tasks concu
## Async Streaming
-If you haven't already, we recommend first reading the section on [Streams](/docs/mirascope/learn/streams)
+If you haven't already, we recommend first reading the section on [Streams](/docs/v1/learn/streams)
Streaming with async works similarly to synchronous streaming, but you use `async for` instead of a regular `for` loop:
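As a rough sketch (assuming the same decorator with `stream=True`), the chunks can be consumed like this:

```python
import asyncio

from mirascope import llm


@llm.call(provider="openai", model="gpt-4o-mini", stream=True)
async def recommend_book(genre: str) -> str:
    return f"Recommend a {genre} book"


async def main():
    stream = await recommend_book("fantasy")
    # Each iteration yields a (chunk, tool) tuple; the tool is None when no tool is called
    async for chunk, _ in stream:
        print(chunk.content, end="", flush=True)


asyncio.run(main())
```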
@@ -211,7 +211,7 @@ asyncio.run(main())
## Async Tools
-If you haven't already, we recommend first reading the section on [Tools](/docs/mirascope/learn/tools)
+If you haven't already, we recommend first reading the section on [Tools](/docs/v1/learn/tools)
When using tools asynchronously, you can make the `call` method of a tool async:
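For example, a sketch of a tool with an async `call` method might look like this (the tool body itself is hypothetical):

```python
import asyncio

from mirascope import llm
from mirascope.core import BaseTool


class FormatBook(BaseTool):
    """Returns a nicely formatted book recommendation."""

    title: str
    author: str

    async def call(self) -> str:
        # Simulate asynchronous work (e.g., an API or database lookup)
        await asyncio.sleep(0)
        return f"{self.title} by {self.author}"


@llm.call(provider="openai", model="gpt-4o-mini", tools=[FormatBook])
async def recommend_book(genre: str) -> str:
    return f"Recommend a {genre} book"


async def main():
    response = await recommend_book("fantasy")
    if tool := response.tool:
        # Await the async tool call to get its result
        print(await tool.call())
    else:
        print(response.content)


asyncio.run(main())
```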
@@ -327,4 +327,4 @@ By leveraging these async features in Mirascope, you can build more efficient an
This section concludes the core functionality Mirascope supports. If you haven't already, we recommend taking a look at any previous sections you've missed to learn about what you can do with Mirascope.
-You can also check out the section on [Provider-Specific Features](/docs/mirascope/learn/provider-specific/openai) to learn about how to use features that only certain providers support, such as OpenAI's structured outputs.
\ No newline at end of file
+You can also check out the section on [Provider-Specific Features](/docs/v1/learn/provider-specific/openai) to learn about how to use features that only certain providers support, such as OpenAI's structured outputs.
\ No newline at end of file
diff --git a/cloud/content/docs/v1/learn/calls.mdx b/cloud/content/docs/v1/learn/calls.mdx
index 12957a853..fad0f1a89 100644
--- a/cloud/content/docs/v1/learn/calls.mdx
+++ b/cloud/content/docs/v1/learn/calls.mdx
@@ -6,7 +6,7 @@ description: Learn how to make API calls to various LLM providers using Mirascop
# Calls
- If you haven't already, we recommend first reading the section on writing [Prompts](/docs/mirascope/learn/prompts)
+ If you haven't already, we recommend first reading the section on writing [Prompts](/docs/v1/learn/prompts)
When working with Large Language Model (LLM) APIs in Mirascope, a "call" refers to making a request to a LLM provider's API with a particular setting and prompt.
@@ -18,7 +18,7 @@ We currently support [OpenAI](https://openai.com/), [Anthropic](https://www.anth
If there are any providers we don't yet support that you'd like to see supported, let us know!
- [`mirascope.llm.call`](/docs/mirascope/api/llm/call)
+ [`mirascope.llm.call`](/docs/v1/api/llm/call)
## Basic Usage and Syntax
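As a minimal sketch (assuming an OpenAI model), a basic call might look like this:

```python
from mirascope import llm


@llm.call(provider="openai", model="gpt-4o-mini")
def recommend_book(genre: str) -> str:
    return f"Recommend a {genre} book"


# Calling the decorated function makes the API request and returns a call response object
response = recommend_book("fantasy")
print(response.content)
```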
@@ -134,10 +134,10 @@ print(override_response.content)
### Common Response Properties and Methods
- [`mirascope.core.base.call_response`](/docs/mirascope/api/core/base/call_response)
+ [`mirascope.core.base.call_response`](/docs/v1/api/core/base/call_response)
-All [`BaseCallResponse`](/docs/mirascope/api) objects share these common properties:
+All [`BaseCallResponse`](/docs/v1/api) objects share these common properties:
- `content`: The main text content of the response. If no content is present, this will be the empty string.
- `finish_reasons`: A list of reasons why the generation finished (e.g., "stop", "length"). These will be typed specifically for the provider used. If no finish reasons are present, this will be `None`.
@@ -148,8 +148,8 @@ All [`BaseCallResponse`](/docs/mirascope/api) objects share these common propert
- `output_tokens`: The number of output tokens generated if available. Otherwise this will be `None`.
- `cost`: An estimated cost of the API call if available. Otherwise this will be `None`.
- `message_param`: The assistant's response formatted as a message parameter.
-- `tools`: A list of provider-specific tools used in the response, if any. Otherwise this will be `None`. Check out the [`Tools`](/docs/mirascope/learn/tools) documentation for more details.
-- `tool`: The first tool used in the response, if any. Otherwise this will be `None`. Check out the [`Tools`](/docs/mirascope/learn/tools) documentation for more details.
+- `tools`: A list of provider-specific tools used in the response, if any. Otherwise this will be `None`. Check out the [`Tools`](/docs/v1/learn/tools) documentation for more details.
+- `tool`: The first tool used in the response, if any. Otherwise this will be `None`. Check out the [`Tools`](/docs/v1/learn/tools) documentation for more details.
- `tool_types`: A list of tool types used in the call, if any. Otherwise this will be `None`.
- `prompt_template`: The prompt template used for the call.
- `fn_args`: The arguments passed to the function.
@@ -165,7 +165,7 @@ All [`BaseCallResponse`](/docs/mirascope/api) objects share these common propert
There are also two common methods:
- `__str__`: Returns the `content` property of the response for easy printing.
-- `tool_message_params`: Creates message parameters for tool call results. Check out the [`Tools`](/docs/mirascope/learn/tools) documentation for more information.
+- `tool_message_params`: Creates message parameters for tool call results. Check out the [`Tools`](/docs/v1/learn/tools) documentation for more information.
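For illustration, here's a small sketch (reusing the `recommend_book` call defined earlier) that reads a few of these common properties:

```python
response = recommend_book("fantasy")

print(response.content)         # main text content of the response
print(response.finish_reasons)  # e.g. ["stop"]
print(response.model)           # the model actually used
print(response.usage)           # token usage details, if available
print(response.cost)            # estimated cost, or None if unavailable
```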
## Multi-Modal Outputs
@@ -218,11 +218,11 @@ When using models that support audio outputs, you'll have access to:
There are several common parameters that you'll find across all providers when using the `call` decorator. These parameters allow you to control various aspects of the LLM call:
- `model`: The only required parameter for all providers, which may be passed in as a standard argument (whereas all others are optional and must be provided as keyword arguments). It specifies which language model to use for the generation. Each provider has its own set of available models.
-- `stream`: A boolean that determines whether the response should be streamed or returned as a complete response. We cover this in more detail in the [`Streams`](/docs/mirascope/learn/streams) documentation.
-- `response_model`: A Pydantic `BaseModel` type that defines how to structure the response. We cover this in more detail in the [`Response Models`](/docs/mirascope/learn/response_models) documentation.
-- `output_parser`: A function for parsing the response output. We cover this in more detail in the [`Output Parsers`](/docs/mirascope/learn/output_parsers) documentation.
-- `json_mode`: A boolean that deterines whether to use JSON mode or not. We cover this in more detail in the [`JSON Mode`](/docs/mirascope/learn/json_mode) documentation.
-- `tools`: A list of tools that the model may request to use in its response. We cover this in more detail in the [`Tools`](/docs/mirascope/learn/tools) documentation.
+- `stream`: A boolean that determines whether the response should be streamed or returned as a complete response. We cover this in more detail in the [`Streams`](/docs/v1/learn/streams) documentation.
+- `response_model`: A Pydantic `BaseModel` type that defines how to structure the response. We cover this in more detail in the [`Response Models`](/docs/v1/learn/response_models) documentation.
+- `output_parser`: A function for parsing the response output. We cover this in more detail in the [`Output Parsers`](/docs/v1/learn/output_parsers) documentation.
+- `json_mode`: A boolean that determines whether to use JSON mode or not. We cover this in more detail in the [`JSON Mode`](/docs/v1/learn/json_mode) documentation.
+- `tools`: A list of tools that the model may request to use in its response. We cover this in more detail in the [`Tools`](/docs/v1/learn/tools) documentation.
- `client`: A custom client to use when making the call to the LLM. We cover this in more detail in the [`Custom Client`](#custom-client) section below.
- `call_params`: The provider-specific parameters to use when making the call to that provider's API. We cover this in more detail in the [`Provider-Specific Usage`](#provider-specific-usage) section below.
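To make these concrete, here's a sketch combining a few of them (the parameter values are placeholders, and it assumes the provider accepts a plain dict for `call_params`):

```python
from mirascope import llm


@llm.call(
    provider="openai",
    model="gpt-4o-mini",
    json_mode=True,  # ask the provider for JSON-formatted output
    call_params={"temperature": 0.7},  # provider-specific settings
)
def summarize(text: str) -> str:
    return f"Summarize the following text as JSON: {text}"


response = summarize("Mirascope makes it easy to call LLMs from Python.")
print(response.content)
```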
@@ -354,16 +354,16 @@ print(response.content)
For details on provider-specific modules, see the API documentation for each provider:
- - [`mirascope.core.openai.call`](/docs/mirascope/api/core/openai/call)
- - [`mirascope.core.anthropic.call`](/docs/mirascope/api/core/anthropic/call)
- - [`mirascope.core.mistral.call`](/docs/mirascope/api/core/mistral/call)
- - [`mirascope.core.google.call`](/docs/mirascope/api/core/google/call)
- - [`mirascope.core.azure.call`](/docs/mirascope/api/core/azure/call)
- - [`mirascope.core.cohere.call`](/docs/mirascope/api/core/cohere/call)
- - [`mirascope.core.groq.call`](/docs/mirascope/api/core/groq/call)
- - [`mirascope.core.xai.call`](/docs/mirascope/api/core/xai/call)
- - [`mirascope.core.bedrock.call`](/docs/mirascope/api/core/bedrock/call)
- - [`mirascope.core.litellm.call`](/docs/mirascope/api/core/litellm/call)
+ - [`mirascope.core.openai.call`](/docs/v1/api/core/openai/call)
+ - [`mirascope.core.anthropic.call`](/docs/v1/api/core/anthropic/call)
+ - [`mirascope.core.mistral.call`](/docs/v1/api/core/mistral/call)
+ - [`mirascope.core.google.call`](/docs/v1/api/core/google/call)
+ - [`mirascope.core.azure.call`](/docs/v1/api/core/azure/call)
+ - [`mirascope.core.cohere.call`](/docs/v1/api/core/cohere/call)
+ - [`mirascope.core.groq.call`](/docs/v1/api/core/groq/call)
+ - [`mirascope.core.xai.call`](/docs/v1/api/core/xai/call)
+ - [`mirascope.core.bedrock.call`](/docs/v1/api/core/bedrock/call)
+ - [`mirascope.core.litellm.call`](/docs/v1/api/core/litellm/call)
While Mirascope provides a consistent interface across different LLM providers, you can also use provider-specific modules with refined typing for an individual provider.
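For example, a sketch using the Anthropic-specific module directly (the model name is just an example) might look like this:

```python
from mirascope.core import anthropic


@anthropic.call("claude-3-5-sonnet-latest")
def recommend_book(genre: str) -> str:
    return f"Recommend a {genre} book"


# The response is typed as an Anthropic-specific call response
response = recommend_book("fantasy")
print(response.content)
```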
@@ -422,7 +422,7 @@ You can also configure the client dynamically at runtime through the dynamic con
- A common mistake is to use the synchronous client with async calls. Read the section on [Async Custom Client](/docs/mirascope/learn/async#custom-client) to see how to use a custom client with asynchronous calls.
+ A common mistake is to use the synchronous client with async calls. Read the section on [Async Custom Client](/docs/v1/learn/async#custom-client) to see how to use a custom client with asynchronous calls.
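As a sketch, passing the async client explicitly might look like this (assuming OpenAI's `AsyncOpenAI` client):

```python
from mirascope.core import openai
from openai import AsyncOpenAI


@openai.call("gpt-4o-mini", client=AsyncOpenAI())
async def recommend_book(genre: str) -> str:
    return f"Recommend a {genre} book"
```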
## Error Handling
@@ -474,10 +474,10 @@ By mastering calls in Mirascope, you'll be well-equipped to build robust, flexib
Next, we recommend choosing one of:
-- [Streams](/docs/mirascope/learn/streams) to see how to stream call responses for a more real-time interaction.
-- [Chaining](/docs/mirascope/learn/chaining) to see how to chain calls together.
-- [Response Models](/docs/mirascope/learn/response_models) to see how to generate structured outputs.
-- [Tools](/docs/mirascope/learn/tools) to see how to give LLMs access to custom tools to extend their capabilities.
-- [Async](/docs/mirascope/learn/async) to see how to better take advantage of asynchronous programming and parallelization for improved performance.
+- [Streams](/docs/v1/learn/streams) to see how to stream call responses for a more real-time interaction.
+- [Chaining](/docs/v1/learn/chaining) to see how to chain calls together.
+- [Response Models](/docs/v1/learn/response_models) to see how to generate structured outputs.
+- [Tools](/docs/v1/learn/tools) to see how to give LLMs access to custom tools to extend their capabilities.
+- [Async](/docs/v1/learn/async) to see how to better take advantage of asynchronous programming and parallelization for improved performance.
Pick whichever path aligns best with what you're hoping to get from Mirascope.
\ No newline at end of file
diff --git a/cloud/content/docs/v1/learn/chaining.mdx b/cloud/content/docs/v1/learn/chaining.mdx
index 317c37c7c..7fff4e773 100644
--- a/cloud/content/docs/v1/learn/chaining.mdx
+++ b/cloud/content/docs/v1/learn/chaining.mdx
@@ -6,7 +6,7 @@ description: Learn how to combine multiple LLM calls in sequence to solve comple
# Chaining
- If you haven't already, we recommend first reading the section on [Calls](/docs/mirascope/learn/calls)
+ If you haven't already, we recommend first reading the section on [Calls](/docs/v1/learn/calls)
Chaining in Mirascope allows you to combine multiple LLM calls or operations in a sequence to solve complex tasks. This approach is particularly useful for breaking down complex problems into smaller, manageable steps.
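As a rough sketch (function names and models are illustrative), a simple chain can pass one call's output into the next:

```python
from mirascope import llm


@llm.call(provider="openai", model="gpt-4o-mini")
def summarize(text: str) -> str:
    return f"Summarize this text: {text}"


@llm.call(provider="openai", model="gpt-4o-mini")
def translate(text: str, language: str) -> str:
    return f"Translate this text to {language}: {text}"


# Chain the calls: the summary becomes the input to the translation
summary = summarize("Long article text...")
translation = translate(summary.content, "french")
print(translation.content)
```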
@@ -346,10 +346,10 @@ print(f"Rewritten Summary: {rewritten_summary}")
-[Response Models](/docs/mirascope/learn/response_models) are a great way to add more structure to your chains, and [parallel async calls](/docs/mirascope/learn/async#parallel-async-calls) can be particularly powerful for making your chains more efficient.
+[Response Models](/docs/v1/learn/response_models) are a great way to add more structure to your chains, and [parallel async calls](/docs/v1/learn/async#parallel-async-calls) can be particularly powerful for making your chains more efficient.
## Next Steps
By mastering Mirascope's chaining techniques, you can create sophisticated LLM-powered applications that tackle complex, multi-step problems with greater accuracy, control, and observability.
-Next, we recommend taking a look at the [Response Models](/docs/mirascope/learn/response_models) documentation, which shows you how to generate structured outputs.
\ No newline at end of file
+Next, we recommend taking a look at the [Response Models](/docs/v1/learn/response_models) documentation, which shows you how to generate structured outputs.
\ No newline at end of file
diff --git a/cloud/content/docs/v1/learn/evals.mdx b/cloud/content/docs/v1/learn/evals.mdx
index b31dc6a76..f8dfc6860 100644
--- a/cloud/content/docs/v1/learn/evals.mdx
+++ b/cloud/content/docs/v1/learn/evals.mdx
@@ -6,7 +6,7 @@ description: Learn how to evaluate LLM outputs using multiple approaches includi
# Evals: Evaluating LLM Outputs
-If you haven't already, we recommend first reading the section on [Response Models](/docs/mirascope/learn/response_models)
+If you haven't already, we recommend first reading the section on [Response Models](/docs/v1/learn/response_models)
Evaluating the outputs of Large Language Models (LLMs) is a crucial step in developing robust and reliable AI applications. This section covers various approaches to evaluating LLM outputs, including using LLMs as evaluators as well as implementing hardcoded evaluation criteria.
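As one hedged sketch of the LLM-as-judge approach (the scoring schema here is just an example), a judge can return a structured evaluation via a response model:

```python
from pydantic import BaseModel, Field

from mirascope import llm


class Eval(BaseModel):
    reasoning: str = Field(..., description="Reasoning for the score.")
    score: float = Field(..., description="A score between 0 and 5.")


@llm.call(provider="openai", model="gpt-4o-mini", response_model=Eval)
def evaluate_toxicity(text: str) -> str:
    return f"Evaluate the toxicity of the following text on a 0 to 5 scale: {text}"


evaluation = evaluate_toxicity("You are incompetent and worthless.")
print(evaluation.score, evaluation.reasoning)
```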
@@ -284,10 +284,10 @@ for evaluation in evaluations:
-We are taking advantage of [provider-agnostic prompts](/docs/mirascope/learn/calls#provider-agnostic-usage) in this example to easily call multiple providers with the same prompt. Of course, you can always engineer each judge specifically for a given provider instead.
+We are taking advantage of [provider-agnostic prompts](/docs/v1/learn/calls#provider-agnostic-usage) in this example to easily call multiple providers with the same prompt. Of course, you can always engineer each judge specifically for a given provider instead.
- We highly recommend using [parallel asynchronous calls](/docs/mirascope/learn/async#parallel-async-calls) to run your evaluations more quickly since each call can (and should) be run in parallel.
+ We highly recommend using [parallel asynchronous calls](/docs/v1/learn/async#parallel-async-calls) to run your evaluations more quickly since each call can (and should) be run in parallel.
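For instance, a sketch of fanning judge calls out in parallel (reusing the `Eval` model from the sketch above, with an async variant of the judge) might look like this:

```python
import asyncio

from mirascope import llm


@llm.call(provider="openai", model="gpt-4o-mini", response_model=Eval)
async def async_evaluate_toxicity(text: str) -> str:
    return f"Evaluate the toxicity of the following text on a 0 to 5 scale: {text}"


async def run_evals(texts: list[str]) -> list[Eval]:
    # Run every judge call concurrently rather than sequentially
    return await asyncio.gather(*(async_evaluate_toxicity(text) for text in texts))


evaluations = asyncio.run(run_evals(["first response", "second response"]))
```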
## Hardcoded Evaluation Criteria
diff --git a/cloud/content/docs/v1/learn/extensions/custom_provider.mdx b/cloud/content/docs/v1/learn/extensions/custom_provider.mdx
index 67a8bafc2..2a1364d59 100644
--- a/cloud/content/docs/v1/learn/extensions/custom_provider.mdx
+++ b/cloud/content/docs/v1/learn/extensions/custom_provider.mdx
@@ -5,7 +5,7 @@ description: Learn how to implement a custom LLM provider for Mirascope by creat
# Implementing a Custom Provider
-This guide explains how to implement a custom provider for Mirascope using the `call_factory` method. Before proceeding, ensure you're familiar with Mirascope's core concepts as covered in the [Learn section](/docs/mirascope/learn) of the documentation.
+This guide explains how to implement a custom provider for Mirascope using the `call_factory` method. Before proceeding, ensure you're familiar with Mirascope's core concepts as covered in the [Learn section](/docs/v1/learn) of the documentation.
## Overview
diff --git a/cloud/content/docs/v1/learn/index.mdx b/cloud/content/docs/v1/learn/index.mdx
index e01ec79af..621e5c491 100644
--- a/cloud/content/docs/v1/learn/index.mdx
+++ b/cloud/content/docs/v1/learn/index.mdx
@@ -9,7 +9,7 @@ This section is designed to help you master Mirascope, a toolkit for building AI
Our documentation is tailored for developers who have at least some experience with Python and LLMs. Whether you're coming from other development tool libraries or have worked directly with provider SDKs and APIs, Mirascope offers a familiar but enhanced experience.
-If you haven't already, we recommend checking out [Getting Started](/docs/mirascope/guides/getting-started/quickstart) and [Why Use Mirascope](/docs/mirascope/getting-started/why).
+If you haven't already, we recommend checking out [Getting Started](/docs/v1/guides/getting-started/quickstart) and [Why Use Mirascope](/docs/v1/getting-started/why).
## Key Features and Benefits
@@ -46,79 +46,79 @@ We encourage you to dive into each component's documentation to gain a deeper un
Evals
Apply core components to build evaluation strategies for your LLM applications
-
Read more →
+
Read more →