diff --git a/cloud/app/components/home-page.tsx b/cloud/app/components/home-page.tsx index aca881c25..6ff1a6b1d 100644 --- a/cloud/app/components/home-page.tsx +++ b/cloud/app/components/home-page.tsx @@ -244,7 +244,7 @@ export const MirascopeBlock = ({ onScrollToTop }: MirascopeBlockProps) => {
name='PARIS' country='France' ``` -You can learn more about Mirascope’s response model [here](/docs/mirascope/learn/response_models). +You can learn more about Mirascope’s response model [here](/docs/v1/learn/response_models). ## Versioning and Observability in LangChain (And How Lilypad Does It Better) diff --git a/cloud/content/blog/langchain-sucks.mdx b/cloud/content/blog/langchain-sucks.mdx index 9d7f2f4e9..fe5aa939a 100644 --- a/cloud/content/blog/langchain-sucks.mdx +++ b/cloud/content/blog/langchain-sucks.mdx @@ -186,7 +186,7 @@ For instance, the method signature for `invoke` below indicates it expects `”t ![LangChain No Type Safety](/assets/blog/langchain-sucks/langchain-no-type-safety.webp) -In contrast, Mirascope implements type safety via its Pydantic-based [response model](/docs/mirascope/learn/response_models/), enforcing what values functions return, and how LLM calls interact with those functions. +In contrast, Mirascope implements type safety via its Pydantic-based [response model](/docs/v1/learn/response_models/), enforcing what values functions return, and how LLM calls interact with those functions. We provide full linting and editor support, offering warnings, errors, and autocomplete as you code. This helps catch potential issues early and ensures code consistency. diff --git a/cloud/content/blog/langfuse-integration.mdx b/cloud/content/blog/langfuse-integration.mdx index 4a6b8738d..5d51d5ff6 100644 --- a/cloud/content/blog/langfuse-integration.mdx +++ b/cloud/content/blog/langfuse-integration.mdx @@ -14,7 +14,7 @@ Mirascope integrates with Langfuse with a decorator `@with_langfuse`. This gives ### Call -This is a basic call example that will work across all [Mirascope call function settings](/docs/mirascope/learn/calls), including [streams](/docs/mirascope/learn/streams), [async](/docs/mirascope/learn/async), and more. +This is a basic call example that will work across all [Mirascope call function settings](/docs/v1/learn/calls), including [streams](/docs/v1/learn/streams), [async](/docs/v1/learn/async), and more. ```python import os @@ -41,7 +41,7 @@ And that’s it! Now your Mirascope class methods will be sent to Langfuse trace ### Response Models -Mirascope's [`response_model`](/docs/mirascope/learn/response_models) argument enables extracting or generating structured outputs with LLMs. You can easily observe these structured outputs in Langfuse as well so you can assess the quality of your data and ensure your results are accurate. +Mirascope's [`response_model`](/docs/v1/learn/response_models) argument enables extracting or generating structured outputs with LLMs. You can easily observe these structured outputs in Langfuse as well so you can assess the quality of your data and ensure your results are accurate. ```python import os diff --git a/cloud/content/blog/llamaindex-vs-langchain.mdx b/cloud/content/blog/llamaindex-vs-langchain.mdx index d76559c50..aa4dc79b5 100644 --- a/cloud/content/blog/llamaindex-vs-langchain.mdx +++ b/cloud/content/blog/llamaindex-vs-langchain.mdx @@ -116,7 +116,7 @@ When it comes to prompt versioning, LangChain offers this capability in its Lang ### Mirascope: One Prompt That Type Checks Inputs Automatically -Rather than attempt to tell you how you should formulate prompts, Mirascope provides its [`prompt_template`](/docs/mirascope/learn/prompts) decorator so you can write prompt templates as standard Python functions. 
+Rather than attempt to tell you how you should formulate prompts, Mirascope provides its [`prompt_template`](/docs/v1/learn/prompts) decorator so you can write prompt templates as standard Python functions. Our `prompt_template` decorator allows you to automatically generate prompt messages simply by calling the function, which returns the list of `BaseMessageParam` instances. @@ -334,7 +334,7 @@ Therefore all you need to know is vanilla Python and the Pydantic library. We al And unlike other frameworks, we believe that everything that can impact the quality of an LLM call (from model parameters to prompts) should live together with the call. The LLM call is therefore the [central organizing unit of our code](/blog/engineers-should-handle-prompting-llms) around which everything gets versioned and tested. -A prime example of this is Mirascope’s use of [`call_params`](/docs/mirascope/learn/calls/#provider-specific-parameters), which contains all the parameters of the LLM call and typically lives inside the call: +A prime example of this is Mirascope’s use of [`call_params`](/docs/v1/learn/calls/#provider-specific-parameters), which contains all the parameters of the LLM call and typically lives inside the call: ```python from mirascope.core import openai diff --git a/cloud/content/blog/llm-agents.mdx b/cloud/content/blog/llm-agents.mdx index b425bf3ed..1f937aaa3 100644 --- a/cloud/content/blog/llm-agents.mdx +++ b/cloud/content/blog/llm-agents.mdx @@ -37,7 +37,7 @@ In this article, we explain how LLM agents work, their core components and use c ## Components of an LLM Agent -[LLM agents](/docs/mirascope/learn/agents) are built around four key parts that work together to enable reasoning, planning, and execution. +[LLM agents](/docs/v1/learn/agents) are built around four key parts that work together to enable reasoning, planning, and execution. These components include: @@ -62,7 +62,7 @@ Planning refers to the process where the agent figures out the steps needed to c For instance, if the agent is built for customer support, it might plan by first identifying the user’s question, then retrieving the necessary information, and finally generating a helpful response. -Popular techniques like [Chain-of-thought (CoT)](/docs/mirascope/guides/prompt-engineering/text-based/chain-of-thought/) and Tree-of-thought (ToT) prompting or decision trees can help structure this process - enabling the agent to think through problems step by step, especially when dealing with complex multi-step tasks. +Popular techniques like [Chain-of-thought (CoT)](/docs/v1/guides/prompt-engineering/text-based/chain-of-thought/) and Tree-of-thought (ToT) prompting or decision trees can help structure this process - enabling the agent to think through problems step by step, especially when dealing with complex multi-step tasks. It can also use feedback mechanisms allowing the agent to iteratively refine its execution plan based on past actions and observations. @@ -109,7 +109,7 @@ Below are some of the most notable examples of [frameworks](/blog/llm-frameworks ### Mirascope -Mirascope lets you build autonomous or semi-autonomous systems that handle tasks, make decisions, and interact with users to integrate with tools, manage state, and even bring [humans into the loop](/docs/mirascope/learn/agents/) when needed. 
+Mirascope lets you build autonomous or semi-autonomous systems that handle tasks, make decisions, and interact with users to integrate with tools, manage state, and even bring [humans into the loop](/docs/v1/learn/agents/) when needed. Mirascope provides several key benefits, such as: @@ -157,7 +157,7 @@ if tool := response.tool: ``` -If the function doesn’t contain a docstring or can’t be changed for some reason (like if it’s third-party code) then you can use Mirascope’s [`BaseTool` class](/docs/mirascope/learn/tools/#__tabbed_2_1) to define the tool. +If the function doesn’t contain a docstring or can’t be changed for some reason (like if it’s third-party code) then you can use Mirascope’s [`BaseTool` class](/docs/v1/learn/tools/#__tabbed_2_1) to define the tool. Below, we specify the `GetCityPopulation` class to be a tool with a defined argument: @@ -353,7 +353,7 @@ A typical [LLM pipeline](/blog/llm-pipeline/) uses retrieved data or documents t That means if our agent (here a chatbot) believes it already has enough context or can handle a request in another way, it might skip retrieval entirely. -You can find more details about the code below in our [tutorial library](/docs/mirascope/guides/agents/local-chat-with-codebase/). +You can find more details about the code below in our [tutorial library](/docs/v1/guides/agents/local-chat-with-codebase/). **Prerequisites**: diff --git a/cloud/content/blog/llm-application-development.mdx b/cloud/content/blog/llm-application-development.mdx index ba192294f..725dd4628 100644 --- a/cloud/content/blog/llm-application-development.mdx +++ b/cloud/content/blog/llm-application-development.mdx @@ -141,9 +141,9 @@ Some more benefits: For some frameworks, important details like prompt formatting and model settings are usually scattered across the codebase. This makes it harder to manage changes and often forces developers to write extra boilerplate just to switch providers or keep things running smoothly. -Mirascope centralizes these concerns with the [@llm.call decorator](/docs/mirascope/learn/calls). You define the provider, model, and response format in one place. The interface is the same across providers like OpenAI, Anthropic, and Mistral, so switching is as simple as updating the provider argument, with no need to rewrite boilerplate. +Mirascope centralizes these concerns with the [@llm.call decorator](/docs/v1/learn/calls). You define the provider, model, and response format in one place. The interface is the same across providers like OpenAI, Anthropic, and Mistral, so switching is as simple as updating the provider argument, with no need to rewrite boilerplate. -To keep prompts clean and reusable, Mirascope also provides the [`@prompt_template` decorator](/docs/mirascope/learn/prompts). You can stack it with `@llm.call` on any Python function, keeping templates next to the logic that uses them: +To keep prompts clean and reusable, Mirascope also provides the [`@prompt_template` decorator](/docs/v1/learn/prompts). You can stack it with `@llm.call` on any Python function, keeping templates next to the logic that uses them: ```python from mirascope import llm, prompt_template @@ -182,7 +182,7 @@ print(response.content) Most frameworks don’t handle response validation, requiring you to write boilerplate to check if the LLM actually returned what you expected. -Mirascope solves this with its [built-in response validation based on Pydantic](/docs/mirascope/learn/response_models). 
You simply define what the output should look like, and Mirascope automatically validates it before returning. This built-in automation eliminates repetitive guardrails you’d otherwise code by hand, giving you a clean, type-safe Python object that becomes your function’s return value. +Mirascope solves this with its [built-in response validation based on Pydantic](/docs/v1/learn/response_models). You simply define what the output should look like, and Mirascope automatically validates it before returning. This built-in automation eliminates repetitive guardrails you’d otherwise code by hand, giving you a clean, type-safe Python object that becomes your function’s return value. Hovering your cursor over a response in your IDE shows an inferred instance of your response model: @@ -194,7 +194,7 @@ This means you immediately know whether the return type matches your expectation Whenever the output deviates from your model, Mirascope shows a clear ValidationError at runtime. -Start building scalable LLM applications with Mirascope’s [developer-friendly toolkit](https://github.com/mirascope/mirascope). You can also find code samples and more information in our [documentation](/docs/mirascope/learn). +Start building scalable LLM applications with Mirascope’s [developer-friendly toolkit](https://github.com/mirascope/mirascope). You can also find code samples and more information in our [documentation](/docs/v1/learn). ### 3. Lilypad diff --git a/cloud/content/blog/llm-applications.mdx b/cloud/content/blog/llm-applications.mdx index 90802104c..644a7f2f1 100644 --- a/cloud/content/blog/llm-applications.mdx +++ b/cloud/content/blog/llm-applications.mdx @@ -21,7 +21,7 @@ As we suggested above, the number of potential applications using large language ### 1. Text Classification -You can leverage an LLM’s deep understanding of language to [accurately categorize text](/docs/mirascope/guides/more-advanced/text-classification/) based on its content, intent, or sentiment. +You can leverage an LLM’s deep understanding of language to [accurately categorize text](/docs/v1/guides/more-advanced/text-classification/) based on its content, intent, or sentiment. This can be done in a wide variety of contexts like customer support, content moderation, sentiment analysis, and even more specialized domains like legal document classification or medical record categorization. @@ -63,7 +63,7 @@ Output: ### 2. Text Summarization -[Text summarization](/docs/mirascope/guides/more-advanced/text-summarization/) basically takes a longer text and distills it into a more manageable format. Unlike earlier models that merely extracted and rearranged sentences, modern LLMs “digest” and generate more useful, abstractive summaries. +[Text summarization](/docs/v1/guides/more-advanced/text-summarization/) basically takes a longer text and distills it into a more manageable format. Unlike earlier models that merely extracted and rearranged sentences, modern LLMs “digest” and generate more useful, abstractive summaries. This means they synthesize new sentences that encapsulate the core ideas, offering a more fluid and nuanced representation of the original content. @@ -120,7 +120,7 @@ Generative pre-trained transformers (GPTs) are large language models based on th ### 3. Search with Sources -In [Search with Sources](/docs/mirascope/guides/more-advanced/search-with-sources/), the model retrieves information from verified sources through a search tool, and then supports its answers with citations from these sources. 
It lists these alongside its responses, ensuring the information is grounded in verifiable data. +In [Search with Sources](/docs/v1/guides/more-advanced/search-with-sources/), the model retrieves information from verified sources through a search tool, and then supports its answers with citations from these sources. It lists these alongside its responses, ensuring the information is grounded in verifiable data. Retrieving and citing information from verified sources offers better accuracy and improves the credibility of LLM answers by allowing users to easily fact check these. @@ -155,7 +155,7 @@ Output: ### 4. PDF Extraction -[This application](/docs/mirascope/guides/more-advanced/extract-from-pdf/) lets you extract specific content from PDF files - whether that’s text, images, or structured data - without the need for the labor-intensive and error-prone manual methods of the past. +[This application](/docs/v1/guides/more-advanced/extract-from-pdf/) lets you extract specific content from PDF files - whether that’s text, images, or structured data - without the need for the labor-intensive and error-prone manual methods of the past. Before language models, extracting data required manual effort or, at best, the use of natural language processing (NLP) tools that required configuration to correctly identify and categorize information within a document. Such tools lacked the flexibility to adapt to new or unexpected categories without manual effort. @@ -206,7 +206,7 @@ Output: ### 5. Knowledge Graph Extraction -You can [extract a (highly structured) knowledge graph](/docs/mirascope/guides/more-advanced/knowledge-graph/) from messy, unstructured data by mapping out the relationships between different entities. This kind of application is well-suited to domains where understanding complex connections is critical, like legal analysis, academic research, and fraud detection. +You can [extract a (highly structured) knowledge graph](/docs/v1/guides/more-advanced/knowledge-graph/) from messy, unstructured data by mapping out the relationships between different entities. This kind of application is well-suited to domains where understanding complex connections is critical, like legal analysis, academic research, and fraud detection. NLP already makes use of knowledge graphs to identify and link entities within unstructured text. However, LLMs improve the process by more accurately recognizing entities and linking all those entities to the appropriate entries in the knowledge graph. @@ -257,7 +257,7 @@ The knowledge graph contains information about the challenges of renewable energ ### 6. Generate Image Captions -[Generating image captions](/docs/mirascope/guides/more-advanced/generating-captions/) involves creating descriptive text that accurately reflects the content of an image. +[Generating image captions](/docs/v1/guides/more-advanced/generating-captions/) involves creating descriptive text that accurately reflects the content of an image. This task, once done only by humans, has seen considerable automation in recent years through the use of machine learning algorithms like conditional random fields (CRFs) and support vector machines (SVMs). While these methods are effective, they often require considerable time and computational resources. 
diff --git a/cloud/content/blog/llm-as-judge.mdx b/cloud/content/blog/llm-as-judge.mdx index 1c3822f37..8238649f4 100644 --- a/cloud/content/blog/llm-as-judge.mdx +++ b/cloud/content/blog/llm-as-judge.mdx @@ -250,7 +250,7 @@ Also in the example we use Mirascope and Pydantic to prompt the judges and enfor Mirascope in particular follows a “use-when-you-need” design philosophy that lets you choose modules off the shelf and easily slot these into existing workflows, **rather than having to adopt [larger abstractions](/blog/langchain-runnables/) wholesale and cope with their lack of transparency**, as other [frameworks](/blog/llm-frameworks/) might encourage you to do. -The example below is from the Mirascope docs about [evaluations](/docs/mirascope/learn/evals/#panel-of-judges). +The example below is from the Mirascope docs about [evaluations](/docs/v1/learn/evals/#panel-of-judges). ### Set Up the Environment @@ -329,7 +329,7 @@ This involves creating a list of judges, each making a call to a specific provid Mirascope call functions like `openai.call` let you easily switch providers using minimal boilerplate since Mirascope abstracts away the details of working with their APIs. -We also support a range of [popular providers](/docs/mirascope/), like OpenAI, Mistral, Gemini, Groq, and more. +We also support a range of [popular providers](/docs/v1/), like OpenAI, Mistral, Gemini, Groq, and more. Next, we run `evaluate_helpfulness` with a sample text through both LLMs as judges and parse the response of each with `Eval` to ensure structured and validated output. diff --git a/cloud/content/blog/llm-chaining.mdx b/cloud/content/blog/llm-chaining.mdx index 93baa1751..5dc77974f 100644 --- a/cloud/content/blog/llm-chaining.mdx +++ b/cloud/content/blog/llm-chaining.mdx @@ -694,7 +694,7 @@ The Lilypad Playground allows non-technical users, such as domain experts, to in Just as avoiding dependency on a framework’s update cycle - like waiting for LangChain to update when the OpenAI SDK changes - allows you to adopt improvements without delay, avoiding vendor lock-in gives you the ability to switch between LLM providers without extensive code rewriting. -Mirascope’s [`llm.call` decorator](/docs/mirascope/learn/calls/) allows you to quickly change providers in LLM calls by specifying the provider name in the argument list. This flexibility ensures that your application remains adaptable, allowing you to switch to a different model endpoint as needed. +Mirascope’s [`llm.call` decorator](/docs/v1/learn/calls/) allows you to quickly change providers in LLM calls by specifying the provider name in the argument list. This flexibility ensures that your application remains adaptable, allowing you to switch to a different model endpoint as needed. Also, Mirascope's native Python interface lets you integrate models and functionality that may not yet be supported natively by Mirascope, either through provider-specific decorators or by using the provider’s SDK/API directly. This means you can seamlessly mix and match different LLMs - including the latest releases from providers like OpenAI - without being restricted by framework limitations. @@ -791,4 +791,4 @@ It also shows autosuggestions: Mirascope’s lightweight Python toolkit lets you build scalable AI-powered chains while giving you the freedom to use only the components you need - no rigid frameworks, just modular flexibility. 
-You can easily get started by exploring our examples and guides on both our [documentation pages](/docs/mirascope/) and on [GitHub](https://github.com/mirascope/mirascope/). +You can easily get started by exploring our examples and guides on both our [documentation pages](/docs/v1/) and on [GitHub](https://github.com/mirascope/mirascope/). diff --git a/cloud/content/blog/llm-evaluation.mdx b/cloud/content/blog/llm-evaluation.mdx index fcd2323f4..521925acf 100644 --- a/cloud/content/blog/llm-evaluation.mdx +++ b/cloud/content/blog/llm-evaluation.mdx @@ -312,7 +312,7 @@ for evaluation in evaluations: ### Mirascope Evaluation Guide -See our [documentation](/docs/mirascope/learn/evals/) for rich examples and techniques to help you write evaluations for a range of LLM-driven applications. +See our [documentation](/docs/v1/learn/evals/) for rich examples and techniques to help you write evaluations for a range of LLM-driven applications. ## 6 Best Practices for Evaluating Language Models diff --git a/cloud/content/blog/llm-frameworks.mdx b/cloud/content/blog/llm-frameworks.mdx index fb2d2554f..9675801f5 100644 --- a/cloud/content/blog/llm-frameworks.mdx +++ b/cloud/content/blog/llm-frameworks.mdx @@ -186,7 +186,7 @@ The above snippet shows how you can build a RAG pipeline using [LangChain](/blog #### A Convenient Approach to LLM Calling -Mirascope’s [call decorator](/docs/mirascope/learn/calls/) (e.g., `@openai.call()`) converts regular Python functions into [prompts](/blog/advanced-prompt-engineering/) by turning the function signature and return statement into an API request. +Mirascope’s [call decorator](/docs/v1/learn/calls/) (e.g., `@openai.call()`) converts regular Python functions into [prompts](/blog/advanced-prompt-engineering/) by turning the function signature and return statement into an API request. This automates prompt formatting, model selection, and response parsing, while ensuring type safety and smooth integration with various LLM providers. @@ -585,7 +585,7 @@ It does this by enabling responses to be processed in real-time as they’re gen This streaming capability is useful for applications where users benefit from incremental responses, making the interaction more dynamic and responsive without waiting for the entire output to be generated​​ -For examples of complex, real-world agents, we recommend checking out our [agent tutorials](/docs/mirascope/guides/agents/web-search-agent/). +For examples of complex, real-world agents, we recommend checking out our [agent tutorials](/docs/v1/guides/agents/web-search-agent/). ### Microsoft Semantic Kernel diff --git a/cloud/content/blog/llm-integration.mdx b/cloud/content/blog/llm-integration.mdx index e28624559..723a8d9b3 100644 --- a/cloud/content/blog/llm-integration.mdx +++ b/cloud/content/blog/llm-integration.mdx @@ -451,6 +451,6 @@ Future-proof your AI workflows and applications with our toolkit’s pythonic ap It offers the right level of abstraction for simplifying development and experimentation with large language models, while not boxing you in, and ensuring scalability for projects of any size. -Want to learn more about Mirascope’s tools for building generative AI agents? You can find Mirascope code samples both in our [documentation](/docs/mirascope/) and our [GitHub repository](https://github.com/mirascope/mirascope). +Want to learn more about Mirascope’s tools for building generative AI agents? 
You can find Mirascope code samples both in our [documentation](/docs/v1/) and our [GitHub repository](https://github.com/mirascope/mirascope). Lilypad code samples can also be found in our [documentation](/docs/lilypad). These resources can help you incorporate automation into your LLM integration workflow, speeding up deployment and reducing manual tasks. diff --git a/cloud/content/blog/llm-orchestration.mdx b/cloud/content/blog/llm-orchestration.mdx index bc816fcbf..ff4217151 100644 --- a/cloud/content/blog/llm-orchestration.mdx +++ b/cloud/content/blog/llm-orchestration.mdx @@ -76,7 +76,7 @@ def recommend_movie_prompt(movie_titles: list[str]) -> openai.OpenAIDynamicConfi response = recommend_movie_propmt(["The Dark Knight", "Forrest Gump"]) ``` -This uses Mirascope's [`prompt_template`](/docs/mirascope/learn/prompts) decorator in tandem with the [`openai.call`](/docs/mirascope/learn/calls) decorator, which centralizes internal prompt logic and turns the prompt into an actual LLM API call, respectively. +This uses Mirascope's [`prompt_template`](/docs/v1/learn/prompts) decorator in tandem with the [`openai.call`](/docs/v1/learn/calls) decorator, which centralizes internal prompt logic and turns the prompt into an actual LLM API call, respectively. The above code also illustrates our principle of colocation, where everything impacting the quality of a call is colocated and versioned together. This can be seen in the computed field `titles_in_quotes` computed within the prompt function. @@ -149,7 +149,7 @@ if tool := response.tool: ``` -In the above code, we set up the function `get_flight_information` to generate a function call with its docstring, and then we register this function as a tool with OpenAI (Mirascope also provides ways of turning functions into tools [without needing a docstring](/docs/mirascope/learn/tools)). +In the above code, we set up the function `get_flight_information` to generate a function call with its docstring, and then we register this function as a tool with OpenAI (Mirascope also provides ways of turning functions into tools [without needing a docstring](/docs/v1/learn/tools)). ### 2. Data Preparation @@ -193,7 +193,7 @@ This includes maintaining [version control of prompts](/blog/prompt-versioning) #### Changing Model Providers -Mirascope allows you to [change model providers](/docs/mirascope/learn/calls) in just three lines (corresponding to the highlighted code below): +Mirascope allows you to [change model providers](/docs/v1/learn/calls) in just three lines (corresponding to the highlighted code below): 1. Change the `from mirascope.core import {provider}` import to the new provider. 2. Update any specific call params such as `model`. @@ -253,7 +253,7 @@ To support scalability, frameworks leverage technologies that enable high throug #### Streaming Model Responses -A common strategy is to stream large responses as chunks, rather than waiting for the entire response to be generated. Mirascope [streams](/docs/mirascope/learn/streams), which you can enable by setting `stream=True` in the [`call`](/docs/mirascope/learn/calls) decorator, provide `BaseCallResponseChunk` convenience wrappers around the original response chunks, as well as tools (if provided) that are constructed on your behalf: +A common strategy is to stream large responses as chunks, rather than waiting for the entire response to be generated. 
Mirascope [streams](/docs/v1/learn/streams), which you can enable by setting `stream=True` in the [`call`](/docs/v1/learn/calls) decorator, provide `BaseCallResponseChunk` convenience wrappers around the original response chunks, as well as tools (if provided) that are constructed on your behalf: ```python from mirascope.core import openai, prompt_template @@ -277,7 +277,7 @@ Streaming reduces the wait time associated with generating and delivering large #### Async Streaming -[Asynchronous streaming](/docs/mirascope/learn/async/#async-streaming) supports concurrency, as offered by our `stream_book_recommendation` function that sets up an asynchronous stream to get responses from the natural language processing model: +[Asynchronous streaming](/docs/v1/learn/async/#async-streaming) supports concurrency, as offered by our `stream_book_recommendation` function that sets up an asynchronous stream to get responses from the natural language processing model: ```python import asyncio diff --git a/cloud/content/blog/llm-prompt.mdx b/cloud/content/blog/llm-prompt.mdx index f22057dfb..34532664e 100644 --- a/cloud/content/blog/llm-prompt.mdx +++ b/cloud/content/blog/llm-prompt.mdx @@ -562,7 +562,7 @@ From there, gradually iterate and refine the prompt (as we saw earlier with mult The choice of model - whether it’s GPT-3.5, GPT-4, Claude, or others - can impact the quality of your responses. That’s why testing your prompt across multiple AI models is generally a good idea. -Mirascope’s `propt_template` decorator (combined with the [call decorator](/docs/mirascope/learn/calls)) serves as a kind of [template](/blog/langchain-prompt-template) to formulate model-agnostic prompts without needing to make major changes to the prompt structure or content. +Mirascope’s `prompt_template` decorator (combined with the [call decorator](/docs/v1/learn/calls)) serves as a kind of [template](/blog/langchain-prompt-template) to formulate model-agnostic prompts without needing to make major changes to the prompt structure or content. ### Colocate Parameters and Code with LLM Calls diff --git a/cloud/content/blog/llm-tools.mdx b/cloud/content/blog/llm-tools.mdx index 1a0ee6f78..c07e3d89d 100644 --- a/cloud/content/blog/llm-tools.mdx +++ b/cloud/content/blog/llm-tools.mdx @@ -40,7 +40,7 @@ All this means you don't need to learn unnecessarily complex concepts or fancy s Mirascope follows Pythonic conventions. Take function call sequencing, for example. We don't make you implement directed acyclic graphs (DAGs) outright - which introduces unnecessary complexities. Instead, we code call sequences using regular Python that’s readable, lightweight, and easy to maintain. -An example of this is our [`prompt_template`](/docs/mirascope/learn/prompts) decorator, which enables writing prompt templates as simply Python functions. +An example of this is our [`prompt_template`](/docs/v1/learn/prompts) decorator, which enables writing prompt templates simply as Python functions. An example of our `prompt_template` decorator is below, where the prompt template provides a string for generating a prompt that requests music recommendations based on pairs of durations and genres. The computed field `durations_x_genres` constructs these pairs and integrates them into the prompt template for the final prompt output. ```python import os @@ -176,7 +176,7 @@ Chat().run() #### Streamlined Function Calling (Tools) -Mirascope lets you extend model capabilities [by adding tools](/docs/mirascope/learn/tools) (i.e., function calling) to your workflows.
Tools allow LLMs to access external information, perform calculations, run code, and more, and are straightforward to set up with Mirascope’s pythonic conventions. +Mirascope lets you extend model capabilities [by adding tools](/docs/v1/learn/tools) (i.e., function calling) to your workflows. Tools allow LLMs to access external information, perform calculations, run code, and more, and are straightforward to set up with Mirascope’s pythonic conventions. One of the simplest examples of using Mirascope's tools convenience is automatically generating a tool schema and object from a function with a docstring, allowing you to pass the function directly into your calls. For example: @@ -210,7 +210,7 @@ if tool := response.tool: A call response using Mirascope gives you a wrapper around the original response from the model, giving you access to the `call()` method. This way, you can easily invoke the function and manage its arguments. -Our code uses Google-style Python docstrings, but we support other styles too, including ReST, Numpydoc, and Epydoc-style docstrings. For more code samples and use cases, please refer to our documentation on [tools (function calling)](/docs/mirascope/learn/tools). +Our code uses Google-style Python docstrings, but we support other styles too, including ReST, Numpydoc, and Epydoc-style docstrings. For more code samples and use cases, please refer to our documentation on [tools (function calling)](/docs/v1/learn/tools). #### Extraction of Structured Information from Unstructured LLM Outputs @@ -245,7 +245,7 @@ print(task_details) Mirascope defines the schema for extraction via Pydantic models, which are classes derived from [`Pydantic.BaseModel`](https://docs.pydantic.dev/latest/api/base_model/). -Mirascope currently supports [extracting built-in types](/docs/mirascope/learn/response_models/#built-in-types) including: `str`, `int`, `float`, `bool`, `list`, `set`, `tuple`, and `Enum`. +Mirascope currently supports [extracting built-in types](/docs/v1/learn/response_models/#built-in-types) including: `str`, `int`, `float`, `bool`, `list`, `set`, `tuple`, and `Enum`. ### LangChain diff --git a/cloud/content/blog/mirascope-v1-release.mdx b/cloud/content/blog/mirascope-v1-release.mdx index 10d9e58f7..f5db16cd5 100644 --- a/cloud/content/blog/mirascope-v1-release.mdx +++ b/cloud/content/blog/mirascope-v1-release.mdx @@ -43,7 +43,7 @@ Our community has been incredible, providing consistent feedback that has influe It's worth going through our solutions to each of these problems to properly highlight why we made the changes we did in our v1 release. Of course, there were many other points of feedback we addressed as well as additional features we've included in the release. -Take a look at our [migration guide](/docs/mirascope/getting-started/migration/) and [learn documentation](/docs/mirascope/learn) for a deeper dive that covers everything in detail. +Take a look at our [migration guide](/docs/v1/getting-started/migration/) and [learn documentation](/docs/v1/learn) for a deeper dive that covers everything in detail. ### Separation of State and Arguments @@ -217,8 +217,8 @@ We invite you to try out Mirascope V1, share your experiences, and join us in sh Ready to get started? Here's how: -1. Check out our [Quick Start Guide](/docs/mirascope/guides/getting-started/quickstart) to set up Mirascope in minutes. -2. Explore our [Usage Documentation](/docs/mirascope/learn) for in-depth guides and examples. +1. 
Check out our [Quick Start Guide](/docs/v1/guides/getting-started/quickstart) to set up Mirascope in minutes. +2. Explore our [Usage Documentation](/docs/v1/learn) for in-depth guides and examples. 3. Join our [Discord Community](https://mirascope.com/discord-invite) to discuss questions, stay up to date on announcements, or even show off what you've built. Let's build the future of AI together! diff --git a/cloud/content/blog/openai-function-calling.mdx b/cloud/content/blog/openai-function-calling.mdx index 25922bed1..ce1bacc5c 100644 --- a/cloud/content/blog/openai-function-calling.mdx +++ b/cloud/content/blog/openai-function-calling.mdx @@ -270,7 +270,7 @@ print("After Tools Response:", response.content) ### Use Other Model Providers-Not Just OpenAI -As long as your functions are documented with docstrings, you can work with [other providers](/docs/mirascope/learn/calls) without needing to make many code changes, as Mirascope reformats the code in the background. +As long as your functions are documented with docstrings, you can work with [other providers](/docs/v1/learn/calls) without needing to make many code changes, as Mirascope reformats the code in the background. As well, you can define tools in Mirascope using `BaseTool`, which works across all providers by default so you don’t have to change anything when switching providers. diff --git a/cloud/content/blog/prompt-chaining.mdx b/cloud/content/blog/prompt-chaining.mdx index 3bef65606..d399c1b25 100644 --- a/cloud/content/blog/prompt-chaining.mdx +++ b/cloud/content/blog/prompt-chaining.mdx @@ -959,7 +959,7 @@ Each prompt should also perform a specific function and have a single responsibi ### Manage Data Carefully -Ensure that the format of the desired output of one prompt matches the expected input format of the next prompt to avoid errors. Schemas can easily help you achieve this, and the Mirascope library offers its [`response_model`](/docs/mirascope/learn/response_models) argument, which is built on top of Pydantic, to help you define schemas, extract structured outputs from large language models, and validate those outputs. +Ensure that the format of the desired output of one prompt matches the expected input format of the next prompt to avoid errors. Schemas can easily help you achieve this, and the Mirascope library offers its [`response_model`](/docs/v1/learn/response_models) argument, which is built on top of Pydantic, to help you define schemas, extract structured outputs from large language models, and validate those outputs. For example, if a prompt expects a JSON object with specific fields, ensure the preceding prompt generates data in that exact structure. As well, be prepared to transform data to meet the input requirements of subsequent prompts. This might involve parsing JSON, reformatting strings, or converting data types. diff --git a/cloud/content/blog/prompt-engineering-tools.mdx b/cloud/content/blog/prompt-engineering-tools.mdx index 94b28bb94..af7c5fd14 100644 --- a/cloud/content/blog/prompt-engineering-tools.mdx +++ b/cloud/content/blog/prompt-engineering-tools.mdx @@ -276,7 +276,7 @@ if tool := response.tool: Mirascope supports varying docstring styles, including Google-, ReST-, Numpydoc-, and Epydoc-style. -To learn more about Mirascope, you can refer to its extensive [documentation](/docs/mirascope/learn). +To learn more about Mirascope, you can refer to its extensive [documentation](/docs/v1/learn). 
## LangSmith — Specialized in Logging and Experimenting With Prompts diff --git a/cloud/content/blog/prompt-engineering-vs-fine-tuning.mdx b/cloud/content/blog/prompt-engineering-vs-fine-tuning.mdx index fc2b26e8e..ca2a39b43 100644 --- a/cloud/content/blog/prompt-engineering-vs-fine-tuning.mdx +++ b/cloud/content/blog/prompt-engineering-vs-fine-tuning.mdx @@ -141,7 +141,7 @@ Below are listed _some_ of our own core [best practices](/blog/prompt-engineerin 1. __Write prompts in as clear and contextually rich a way as possible__. Vague prompts are ambiguous and often lack the context necessary to show the direction you want the model to take in its responses. 2. [__Colocate everything__](/blog/engineers-should-handle-prompting-llms) __that affects the quality of an LLM call with the call itself__. This means grouping model type, temperature, and other parameters altogether (along with placing the prompt in the near vicinity), as opposed to scattering these around the codebase. This was actually a big sticking point for us when first using the OpenAI SDK and [LangChain](/blog/langchain-sucks), which didn’t seem to care about keeping everything together, and was one of the reasons we designed [Mirascope](/docs/mirascope). 3. Following on from the previous point, [__version and manage__](/blog/prompt-versioning) __everything that you’ve colocated together, as a single unit__. We can’t stress this enough - this makes it easy to track changes and roll back to previous versions. We even have a [dedicated tool for automatically versioning and tracing](/docs/lilypad) for easier prompt management. -4. Prioritize validating LLM outputs and __use advanced retry mechanisms to handle errors effectively__. Apply tools like [Tenacity](https://tenacity.readthedocs.io/en/latest/) to automate retries, reinserting errors into subsequent prompts, allowing the model to refine its output with each attempt. [Mirascope provides utilities](/docs/mirascope/learn/retries/#error-reinsertion) to ease this process by collecting validation errors and reusing them contextually in calls that follow, improving model performance without you needing to make manual adjustments. +4. Prioritize validating LLM outputs and __use advanced retry mechanisms to handle errors effectively__. Apply tools like [Tenacity](https://tenacity.readthedocs.io/en/latest/) to automate retries, reinserting errors into subsequent prompts, allowing the model to refine its output with each attempt. [Mirascope provides utilities](/docs/v1/learn/retries/#error-reinsertion) to ease this process by collecting validation errors and reusing them contextually in calls that follow, improving model performance without you needing to make manual adjustments. ### Guidelines for Fine-Tuning diff --git a/cloud/content/blog/prompt-evaluation.mdx b/cloud/content/blog/prompt-evaluation.mdx index 6c69d0d2e..b37d59256 100644 --- a/cloud/content/blog/prompt-evaluation.mdx +++ b/cloud/content/blog/prompt-evaluation.mdx @@ -790,4 +790,4 @@ As you see above, the test example helps confirm that the evaluation logic works Ready to build smarter, more effective prompts? Mirascope lets you write custom evaluation criteria for AI applications using the Python you already know, and slots readily into existing developer workflows, making it easy to get started. -You can find more Mirascope [evaluation tutorials](/docs/mirascope/guides/evals/evaluating-web-search-agent/) both on our [documentation site](/docs/mirascope) and our [GitHub page](https://github.com/mirascope/mirascope). 
+You can find more Mirascope [evaluation tutorials](/docs/v1/guides/evals/evaluating-web-search-agent/) both on our [documentation site](/docs/v1) and our [GitHub page](https://github.com/mirascope/mirascope). diff --git a/cloud/content/blog/rag-application.mdx b/cloud/content/blog/rag-application.mdx index 74eca0849..c5f60c6d4 100644 --- a/cloud/content/blog/rag-application.mdx +++ b/cloud/content/blog/rag-application.mdx @@ -127,7 +127,7 @@ This has led to many different development approaches, as developers experiment For example, it’s unclear whether current frameworks offer the right level of abstraction. For instance, some argue that LangChain’s runnables, often used in RAG applications, [are too high-level](/blog/langchain-runnables/) for scenarios where you need more granular control over each step of the pipeline. -(That’s why we favor the simplicity of a [Pythonic approach](/docs/mirascope/getting-started/why/), like Mirascope’s, which leverages Python’s native syntax to allow fine-grained customization of each pipeline stage.) +(That’s why we favor the simplicity of a [Pythonic approach](/docs/v1/getting-started/why/), like Mirascope’s, which leverages Python’s native syntax to allow fine-grained customization of each pipeline stage.) ### Challenge #4: Evaluating Content Generated by RAG diff --git a/cloud/content/blog/synthetic-data-generation.mdx b/cloud/content/blog/synthetic-data-generation.mdx index eb2c8c15c..020d8311c 100644 --- a/cloud/content/blog/synthetic-data-generation.mdx +++ b/cloud/content/blog/synthetic-data-generation.mdx @@ -134,7 +134,7 @@ Plus, evaluating the quality and relevance of the data further complicates the p As we’ve seen, language models are great at learning and representing complex patterns from vast amounts of data. -Below, we walk you through [an example](/docs/mirascope/guides/more-advanced/generating-synthetic-data/) of using OpenAI's GPT-4o-mini to create ecommerce data on home appliances, including attributes like name, price, and inventory. +Below, we walk you through [an example](/docs/v1/guides/more-advanced/generating-synthetic-data/) of using OpenAI's GPT-4o-mini to create ecommerce data on home appliances, including attributes like name, price, and inventory. The main libraries we’ll use are: @@ -222,7 +222,7 @@ Colocation simplifies development by keeping the logic, structure, and [prompt]( This differs from practices encouraged by other major [LLM frameworks](/blog/llm-frameworks/) that don’t promote colocation, with the effect that prompts, logic, and configuration can get scattered around the codebase, leading to greater challenges in debugging, testing, and iterating on functionality. -Mirascope’s call decorators are also model agnostic, requiring minimal code changes for switching LLM providers (our library interfaces with [many popular providers](/docs/mirascope/)). +Mirascope’s call decorators are also model agnostic, requiring minimal code changes for switching LLM providers (our library interfaces with [many popular providers](/docs/v1/)). Mirascope’s `response_model` parameter lets you take full advantage of Pydantic’s type annotations, which integrate features like autocomplete, type hints, and linting directly in your IDE.
diff --git a/cloud/content/docs/v1/api/core/anthropic/call.mdx b/cloud/content/docs/v1/api/core/anthropic/call.mdx index 60bfb9369..e5176a44b 100644 --- a/cloud/content/docs/v1/api/core/anthropic/call.mdx +++ b/cloud/content/docs/v1/api/core/anthropic/call.mdx @@ -12,7 +12,7 @@ A decorator for calling the Anthropic API with a typed function. -[Calls](/docs/mirascope/learn/calls) +[Calls](/docs/v1/learn/calls) @@ -86,7 +86,7 @@ print(response.content) "type_str": "BaseTool", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" }, { "type_str": "Callable", @@ -168,7 +168,7 @@ print(response.content) "type_str": "AnthropicCallResponse", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/anthropic/call_response#anthropiccallresponse" + "doc_url": "/docs/v1/api/core/anthropic/call_response#anthropiccallresponse" }, { "type_str": "ResponseModelT", @@ -216,7 +216,7 @@ print(response.content) "type_str": "AnthropicCallParams", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/anthropic/call_params#anthropiccallparams" + "doc_url": "/docs/v1/api/core/anthropic/call_params#anthropiccallparams" }, "description": "The `AnthropicCallParams` call parameters to use\nin the API call." } diff --git a/cloud/content/docs/v1/api/core/anthropic/call_params.mdx b/cloud/content/docs/v1/api/core/anthropic/call_params.mdx index c30859f0c..930e02b37 100644 --- a/cloud/content/docs/v1/api/core/anthropic/call_params.mdx +++ b/cloud/content/docs/v1/api/core/anthropic/call_params.mdx @@ -8,7 +8,7 @@ description: API documentation for mirascope.core.anthropic.call_params -[Calls](/docs/mirascope/learn/calls#provider-specific-parameters) +[Calls](/docs/v1/learn/calls#provider-specific-parameters) @@ -19,7 +19,7 @@ The parameters to use when calling the Anthropic API. [Anthropic API Reference](https://docs.anthropic.com/en/api/messages) **Bases:** - + -[Calls](/docs/mirascope/learn/calls#handling-responses) +[Calls](/docs/v1/learn/calls#handling-responses) @@ -39,7 +39,7 @@ print(response.content) ``` **Bases:** - + -[Streams](/docs/mirascope/learn/streams#handling-streamed-responses) +[Streams](/docs/v1/learn/streams#handling-streamed-responses) @@ -47,7 +47,7 @@ for chunk, _ in stream: ``` **Bases:** - + AnthropicDynamicConfig -**Type:** +**Type:** ## AsyncAnthropicDynamicConfig -**Type:** +**Type:** The function return type for functions wrapped with the `anthropic_call` decorator. diff --git a/cloud/content/docs/v1/api/core/anthropic/stream.mdx b/cloud/content/docs/v1/api/core/anthropic/stream.mdx index 0e4abb25d..44c22f327 100644 --- a/cloud/content/docs/v1/api/core/anthropic/stream.mdx +++ b/cloud/content/docs/v1/api/core/anthropic/stream.mdx @@ -12,7 +12,7 @@ The `AnthropicStream` class for convenience around streaming LLM calls. -[Streams](/docs/mirascope/learn/streams) +[Streams](/docs/v1/learn/streams) @@ -43,7 +43,7 @@ for chunk, _ in stream: ``` **Bases:** - + diff --git a/cloud/content/docs/v1/api/core/anthropic/tool.mdx b/cloud/content/docs/v1/api/core/anthropic/tool.mdx index efe8b596a..1371f0bf9 100644 --- a/cloud/content/docs/v1/api/core/anthropic/tool.mdx +++ b/cloud/content/docs/v1/api/core/anthropic/tool.mdx @@ -12,7 +12,7 @@ The `OpenAITool` class for easy tool usage with OpenAI LLM calls. -[Tools](/docs/mirascope/learn/tools) +[Tools](/docs/v1/learn/tools) @@ -21,7 +21,7 @@ The `OpenAITool` class for easy tool usage with OpenAI LLM calls. 
A tool configuration for Anthropic-specific features. **Bases:** - + + diff --git a/cloud/content/docs/v1/api/core/azure/call.mdx b/cloud/content/docs/v1/api/core/azure/call.mdx index 3ec6f9821..d77020b51 100644 --- a/cloud/content/docs/v1/api/core/azure/call.mdx +++ b/cloud/content/docs/v1/api/core/azure/call.mdx @@ -12,7 +12,7 @@ A decorator for calling the Azure API with a typed function. -[Calls](/docs/mirascope/learn/calls) +[Calls](/docs/v1/learn/calls) @@ -86,7 +86,7 @@ print(response.content) "type_str": "BaseTool", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" }, { "type_str": "Callable", @@ -168,7 +168,7 @@ print(response.content) "type_str": "AzureCallResponse", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/azure/call_response#azurecallresponse" + "doc_url": "/docs/v1/api/core/azure/call_response#azurecallresponse" }, { "type_str": "ResponseModelT", @@ -216,7 +216,7 @@ print(response.content) "type_str": "AzureCallParams", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/azure/call_params#azurecallparams" + "doc_url": "/docs/v1/api/core/azure/call_params#azurecallparams" }, "description": "The `AzureCallParams` call parameters to use in the\nAPI call." } diff --git a/cloud/content/docs/v1/api/core/azure/call_params.mdx b/cloud/content/docs/v1/api/core/azure/call_params.mdx index 9bceaf116..61288e820 100644 --- a/cloud/content/docs/v1/api/core/azure/call_params.mdx +++ b/cloud/content/docs/v1/api/core/azure/call_params.mdx @@ -10,7 +10,7 @@ description: API documentation for mirascope.core.azure.call_params -[Calls](/docs/mirascope/learn/calls#provider-specific-parameters) +[Calls](/docs/v1/learn/calls#provider-specific-parameters) @@ -26,7 +26,7 @@ The parameters to use when calling the Azure API. [Azure API Reference](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-inference) **Bases:** - + -[Calls](/docs/mirascope/learn/calls#handling-responses) +[Calls](/docs/v1/learn/calls#handling-responses) @@ -39,7 +39,7 @@ print(response.content) ``` **Bases:** - + -[Streams](/docs/mirascope/learn/streams#handling-streamed-responses) +[Streams](/docs/v1/learn/streams#handling-streamed-responses) @@ -40,7 +40,7 @@ for chunk, _ in stream: ``` **Bases:** - + AsyncAzureDynamicConfig -**Type:** +**Type:** ## AzureDynamicConfig -**Type:** +**Type:** The function return type for functions wrapped with the `azure_call` decorator. diff --git a/cloud/content/docs/v1/api/core/azure/stream.mdx b/cloud/content/docs/v1/api/core/azure/stream.mdx index f46bfca47..782c50421 100644 --- a/cloud/content/docs/v1/api/core/azure/stream.mdx +++ b/cloud/content/docs/v1/api/core/azure/stream.mdx @@ -10,7 +10,7 @@ The `AzureStream` class for convenience around streaming LLM calls. -[Streams](/docs/mirascope/learn/streams) +[Streams](/docs/v1/learn/streams) @@ -36,7 +36,7 @@ for chunk, _ in stream: ``` **Bases:** - + diff --git a/cloud/content/docs/v1/api/core/azure/tool.mdx b/cloud/content/docs/v1/api/core/azure/tool.mdx index 2ef083ffc..bccb130ee 100644 --- a/cloud/content/docs/v1/api/core/azure/tool.mdx +++ b/cloud/content/docs/v1/api/core/azure/tool.mdx @@ -12,14 +12,14 @@ The `AzureTool` class for easy tool usage with Azure LLM calls. 
-[Tools](/docs/mirascope/learn/tools) +[Tools](/docs/v1/learn/tools) ## GenerateAzureStrictToolJsonSchema **Bases:** - + ## AzureToolConfig @@ -27,7 +27,7 @@ The `AzureTool` class for easy tool usage with Azure LLM calls. A tool configuration for Azure-specific features. **Bases:** - + + diff --git a/cloud/content/docs/v1/api/core/base/call_factory.mdx b/cloud/content/docs/v1/api/core/base/call_factory.mdx index 7fd39bb9e..4e32b5569 100644 --- a/cloud/content/docs/v1/api/core/base/call_factory.mdx +++ b/cloud/content/docs/v1/api/core/base/call_factory.mdx @@ -30,7 +30,7 @@ A factory method for creating provider-specific call decorators. "type_str": "_BaseCallResponseT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/call_response#basecallresponse" + "doc_url": "/docs/v1/api/core/base/call_response#basecallresponse" } ] }, @@ -53,7 +53,7 @@ A factory method for creating provider-specific call decorators. "type_str": "_BaseCallResponseChunkT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/call_response_chunk#basecallresponsechunk" + "doc_url": "/docs/v1/api/core/base/call_response_chunk#basecallresponsechunk" } ] }, @@ -76,7 +76,7 @@ A factory method for creating provider-specific call decorators. "type_str": "_BaseToolT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" } ] }, @@ -99,7 +99,7 @@ A factory method for creating provider-specific call decorators. "type_str": "_BaseStreamT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/stream#basestream" + "doc_url": "/docs/v1/api/core/base/stream#basestream" } ] }, @@ -111,7 +111,7 @@ A factory method for creating provider-specific call decorators. "type_str": "BaseCallParams", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/call_params#basecallparams" + "doc_url": "/docs/v1/api/core/base/call_params#basecallparams" }, "description": "The default call parameters to use, which must match the\n`TCallParams` type if provided." }, @@ -149,7 +149,7 @@ A factory method for creating provider-specific call decorators. "type_str": "_BaseDynamicConfigT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/dynamic_config#basedynamicconfig" + "doc_url": "/docs/v1/api/core/base/dynamic_config#basedynamicconfig" }, { "type_str": "_AsyncBaseDynamicConfigT", @@ -161,7 +161,7 @@ A factory method for creating provider-specific call decorators. "type_str": "_BaseCallParamsT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/call_params#basecallparams" + "doc_url": "/docs/v1/api/core/base/call_params#basecallparams" }, { "type_str": "_ResponseT", @@ -191,7 +191,7 @@ A factory method for creating provider-specific call decorators. "type_str": "_BaseToolT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" } ] }, @@ -222,7 +222,7 @@ A factory method for creating provider-specific call decorators. "type_str": "_BaseDynamicConfigT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/dynamic_config#basedynamicconfig" + "doc_url": "/docs/v1/api/core/base/dynamic_config#basedynamicconfig" }, { "type_str": "_AsyncBaseDynamicConfigT", @@ -234,7 +234,7 @@ A factory method for creating provider-specific call decorators. 
"type_str": "_BaseCallParamsT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/call_params#basecallparams" + "doc_url": "/docs/v1/api/core/base/call_params#basecallparams" }, { "type_str": "_ResponseT", @@ -264,7 +264,7 @@ A factory method for creating provider-specific call decorators. "type_str": "_BaseToolT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" } ] } @@ -300,13 +300,13 @@ A factory method for creating provider-specific call decorators. "type_str": "_BaseCallResponseT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/call_response#basecallresponse" + "doc_url": "/docs/v1/api/core/base/call_response#basecallresponse" }, { "type_str": "_BaseCallResponseChunkT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/call_response_chunk#basecallresponsechunk" + "doc_url": "/docs/v1/api/core/base/call_response_chunk#basecallresponsechunk" } ] } @@ -337,13 +337,13 @@ A factory method for creating provider-specific call decorators. "type_str": "_BaseCallResponseChunkT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/call_response_chunk#basecallresponsechunk" + "doc_url": "/docs/v1/api/core/base/call_response_chunk#basecallresponsechunk" }, { "type_str": "_BaseToolT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" } ] }, @@ -372,13 +372,13 @@ A factory method for creating provider-specific call decorators. "type_str": "_BaseCallResponseChunkT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/call_response_chunk#basecallresponsechunk" + "doc_url": "/docs/v1/api/core/base/call_response_chunk#basecallresponsechunk" }, { "type_str": "_BaseToolT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" } ] }, @@ -404,19 +404,19 @@ A factory method for creating provider-specific call decorators. "type_str": "_BaseCallResponseT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/call_response#basecallresponse" + "doc_url": "/docs/v1/api/core/base/call_response#basecallresponse" }, { "type_str": "_BaseCallResponseChunkT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/call_response_chunk#basecallresponsechunk" + "doc_url": "/docs/v1/api/core/base/call_response_chunk#basecallresponsechunk" }, { "type_str": "_BaseDynamicConfigT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/dynamic_config#basedynamicconfig" + "doc_url": "/docs/v1/api/core/base/dynamic_config#basedynamicconfig" }, { "type_str": "_AsyncBaseDynamicConfigT", @@ -428,13 +428,13 @@ A factory method for creating provider-specific call decorators. 
"type_str": "_BaseCallParamsT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/call_params#basecallparams" + "doc_url": "/docs/v1/api/core/base/call_params#basecallparams" }, { "type_str": "_BaseStreamT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/stream#basestream" + "doc_url": "/docs/v1/api/core/base/stream#basestream" }, { "type_str": "_SyncBaseClientT", diff --git a/cloud/content/docs/v1/api/core/base/call_response.mdx b/cloud/content/docs/v1/api/core/base/call_response.mdx index bf6b57411..b1b0d75ca 100644 --- a/cloud/content/docs/v1/api/core/base/call_response.mdx +++ b/cloud/content/docs/v1/api/core/base/call_response.mdx @@ -53,7 +53,7 @@ This module contains the base call response class. "type_str": "_BaseCallResponseT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/call_response#basecallresponse" + "doc_url": "/docs/v1/api/core/base/call_response#basecallresponse" } ] }, @@ -83,7 +83,7 @@ This module contains the base call response class. "type_str": "_BaseToolT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" }, { "type_str": "str", @@ -161,7 +161,7 @@ This module contains the base call response class. "type_str": "_BaseCallResponseT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/call_response#basecallresponse" + "doc_url": "/docs/v1/api/core/base/call_response#basecallresponse" } ] }, @@ -191,13 +191,13 @@ This module contains the base call response class. "type_str": "_BaseToolT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" }, { "type_str": "JsonableType", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/types#jsonabletype" + "doc_url": "/docs/v1/api/core/base/types#jsonabletype" } ] } @@ -235,7 +235,7 @@ This module contains the base call response class. A base abstract interface for LLM call responses. **Bases:** -, , +, , -[Streams](/docs/mirascope/learn/streams#handling-streamed-responses) +[Streams](/docs/v1/learn/streams#handling-streamed-responses) @@ -19,7 +19,7 @@ This module contains the `BaseCallResponseChunk` class. A base abstract interface for LLM streaming response chunks. **Bases:** -, , +, , DynamicConfigMessages **Bases:** -, +, DynamicConfigCallParams **Bases:** -, +, DynamicConfigClient **Bases:** -, +, DynamicConfigMessagesCallParams **Bases:** -, +, DynamicConfigMessagesClient **Bases:** -, +, DynamicConfigCallParamsClient **Bases:** -, +, DynamicConfigFull **Bases:** -, +, BaseDynamicConfig -**Type:** +**Type:** The base type in a function as an LLM call to return for dynamic configuration. diff --git a/cloud/content/docs/v1/api/core/base/merge_decorators.mdx b/cloud/content/docs/v1/api/core/base/merge_decorators.mdx index bd3518870..d96d7ce2a 100644 --- a/cloud/content/docs/v1/api/core/base/merge_decorators.mdx +++ b/cloud/content/docs/v1/api/core/base/merge_decorators.mdx @@ -55,7 +55,7 @@ All function metadata (e.g. docstrings, function name) is preserved through the "type_str": "_P", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/toolkit#p" + "doc_url": "/docs/v1/api/core/base/toolkit#p" }, { "type_str": "_R", @@ -203,7 +203,7 @@ All function metadata (e.g. 
docstrings, function name) is preserved through the "type_str": "_P", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/toolkit#p" + "doc_url": "/docs/v1/api/core/base/toolkit#p" }, { "type_str": "_R", diff --git a/cloud/content/docs/v1/api/core/base/message_param.mdx b/cloud/content/docs/v1/api/core/base/message_param.mdx index 14dd7b35b..b100fcfe1 100644 --- a/cloud/content/docs/v1/api/core/base/message_param.mdx +++ b/cloud/content/docs/v1/api/core/base/message_param.mdx @@ -13,7 +13,7 @@ A base class for message parameters. -[Prompts](/docs/mirascope/learn/prompts#prompt-templates-messages) +[Prompts](/docs/v1/learn/prompts#prompt-templates-messages) @@ -76,13 +76,13 @@ A base class for message parameters. "type_str": "TextPart", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/message_param#textpart" + "doc_url": "/docs/v1/api/core/base/message_param#textpart" }, { "type_str": "ImagePart", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/message_param#imagepart" + "doc_url": "/docs/v1/api/core/base/message_param#imagepart" }, { "type_str": "ImageURLPart", @@ -94,7 +94,7 @@ A base class for message parameters. "type_str": "AudioPart", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/message_param#audiopart" + "doc_url": "/docs/v1/api/core/base/message_param#audiopart" }, { "type_str": "AudioURLPart", diff --git a/cloud/content/docs/v1/api/core/base/prompt.mdx b/cloud/content/docs/v1/api/core/base/prompt.mdx index 2bb765973..e53e54eee 100644 --- a/cloud/content/docs/v1/api/core/base/prompt.mdx +++ b/cloud/content/docs/v1/api/core/base/prompt.mdx @@ -96,7 +96,7 @@ Returns the list of parsed message parameters. "type_str": "BaseMessageParam", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/message_param#basemessageparam" + "doc_url": "/docs/v1/api/core/base/message_param#basemessageparam" } ] } @@ -128,7 +128,7 @@ Returns the dynamic config of the prompt. 
"type_str": "BaseDynamicConfig", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/dynamic_config#basedynamicconfig" + "doc_url": "/docs/v1/api/core/base/dynamic_config#basedynamicconfig" } }} /> @@ -270,7 +270,7 @@ print(response.content) "type_str": "BaseDynamicConfig", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/dynamic_config#basedynamicconfig" + "doc_url": "/docs/v1/api/core/base/dynamic_config#basedynamicconfig" } ] } @@ -297,7 +297,7 @@ print(response.content) "type_str": "_BaseCallResponseT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/call_response#basecallresponse" + "doc_url": "/docs/v1/api/core/base/call_response#basecallresponse" } ] } @@ -346,7 +346,7 @@ print(response.content) "type_str": "BaseDynamicConfig", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/dynamic_config#basedynamicconfig" + "doc_url": "/docs/v1/api/core/base/dynamic_config#basedynamicconfig" } ] } @@ -373,7 +373,7 @@ print(response.content) "type_str": "_BaseStreamT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/stream#basestream" + "doc_url": "/docs/v1/api/core/base/stream#basestream" } ] } @@ -422,7 +422,7 @@ print(response.content) "type_str": "BaseDynamicConfig", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/dynamic_config#basedynamicconfig" + "doc_url": "/docs/v1/api/core/base/dynamic_config#basedynamicconfig" } ] } @@ -498,7 +498,7 @@ print(response.content) "type_str": "BaseDynamicConfig", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/dynamic_config#basedynamicconfig" + "doc_url": "/docs/v1/api/core/base/dynamic_config#basedynamicconfig" } ] } @@ -609,13 +609,13 @@ print(response.content) "type_str": "_BaseCallResponseT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/call_response#basecallresponse" + "doc_url": "/docs/v1/api/core/base/call_response#basecallresponse" }, { "type_str": "_BaseStreamT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/stream#basestream" + "doc_url": "/docs/v1/api/core/base/stream#basestream" }, { "type_str": "_ResponseModelT", @@ -752,7 +752,7 @@ asyncio.run(run()) "type_str": "BaseDynamicConfig", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/dynamic_config#basedynamicconfig" + "doc_url": "/docs/v1/api/core/base/dynamic_config#basedynamicconfig" } ] } @@ -792,7 +792,7 @@ asyncio.run(run()) "type_str": "_BaseCallResponseT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/call_response#basecallresponse" + "doc_url": "/docs/v1/api/core/base/call_response#basecallresponse" } ] } @@ -854,7 +854,7 @@ asyncio.run(run()) "type_str": "BaseDynamicConfig", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/dynamic_config#basedynamicconfig" + "doc_url": "/docs/v1/api/core/base/dynamic_config#basedynamicconfig" } ] } @@ -894,7 +894,7 @@ asyncio.run(run()) "type_str": "_BaseStreamT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/stream#basestream" + "doc_url": "/docs/v1/api/core/base/stream#basestream" } ] } @@ -956,7 +956,7 @@ asyncio.run(run()) "type_str": "BaseDynamicConfig", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/dynamic_config#basedynamicconfig" + "doc_url": 
"/docs/v1/api/core/base/dynamic_config#basedynamicconfig" } ] } @@ -1058,7 +1058,7 @@ asyncio.run(run()) "type_str": "BaseDynamicConfig", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/dynamic_config#basedynamicconfig" + "doc_url": "/docs/v1/api/core/base/dynamic_config#basedynamicconfig" } ] } @@ -1195,7 +1195,7 @@ asyncio.run(run()) "type_str": "_BaseCallResponseT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/call_response#basecallresponse" + "doc_url": "/docs/v1/api/core/base/call_response#basecallresponse" } ] }, @@ -1214,7 +1214,7 @@ asyncio.run(run()) "type_str": "_BaseStreamT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/stream#basestream" + "doc_url": "/docs/v1/api/core/base/stream#basestream" } ] }, @@ -1288,7 +1288,7 @@ A decorator for setting the `prompt_template` of a `BasePrompt` or `call`. -[Prompts](/docs/mirascope/learn/prompts#prompt-templates-messages) +[Prompts](/docs/v1/learn/prompts#prompt-templates-messages) @@ -1394,7 +1394,7 @@ print(response.metadata) "type_str": "Metadata", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/metadata#metadata" + "doc_url": "/docs/v1/api/core/base/metadata#metadata" } } ]} diff --git a/cloud/content/docs/v1/api/core/base/stream.mdx b/cloud/content/docs/v1/api/core/base/stream.mdx index 10a221c1d..50f26e5c3 100644 --- a/cloud/content/docs/v1/api/core/base/stream.mdx +++ b/cloud/content/docs/v1/api/core/base/stream.mdx @@ -15,7 +15,7 @@ This module contains the base classes for streaming responses from LLMs. A base class for streaming responses from LLMs. **Bases:** -, +, @@ -821,7 +821,7 @@ Constructs the call response. "type_str": "_BaseCallResponseT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/call_response#basecallresponse" + "doc_url": "/docs/v1/api/core/base/call_response#basecallresponse" } ] } @@ -843,7 +843,7 @@ Constructs the call response. "type_str": "BaseStream", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/stream#basestream" + "doc_url": "/docs/v1/api/core/base/stream#basestream" } ] } @@ -882,7 +882,7 @@ Constructs the call response. "type_str": "_BaseDynamicConfigT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/dynamic_config#basedynamicconfig" + "doc_url": "/docs/v1/api/core/base/dynamic_config#basedynamicconfig" }, { "type_str": "_AsyncBaseDynamicConfigT", @@ -894,7 +894,7 @@ Constructs the call response. "type_str": "_BaseCallParamsT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/call_params#basecallparams" + "doc_url": "/docs/v1/api/core/base/call_params#basecallparams" }, { "type_str": "_ResponseT", @@ -924,7 +924,7 @@ Constructs the call response. "type_str": "_BaseToolT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" } ] }, @@ -955,7 +955,7 @@ Constructs the call response. "type_str": "_BaseDynamicConfigT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/dynamic_config#basedynamicconfig" + "doc_url": "/docs/v1/api/core/base/dynamic_config#basedynamicconfig" }, { "type_str": "_AsyncBaseDynamicConfigT", @@ -967,7 +967,7 @@ Constructs the call response. 
"type_str": "_BaseCallParamsT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/call_params#basecallparams" + "doc_url": "/docs/v1/api/core/base/call_params#basecallparams" }, { "type_str": "_ResponseT", @@ -997,7 +997,7 @@ Constructs the call response. "type_str": "_BaseToolT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" } ] } @@ -1027,13 +1027,13 @@ Constructs the call response. "type_str": "_BaseCallResponseChunkT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/call_response_chunk#basecallresponsechunk" + "doc_url": "/docs/v1/api/core/base/call_response_chunk#basecallresponsechunk" }, { "type_str": "_BaseToolT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" } ] } @@ -1061,13 +1061,13 @@ Constructs the call response. "type_str": "_BaseCallResponseChunkT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/call_response_chunk#basecallresponsechunk" + "doc_url": "/docs/v1/api/core/base/call_response_chunk#basecallresponsechunk" }, { "type_str": "_BaseToolT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" } ] } diff --git a/cloud/content/docs/v1/api/core/base/structured_stream.mdx b/cloud/content/docs/v1/api/core/base/structured_stream.mdx index 2890ff05c..215e69c0e 100644 --- a/cloud/content/docs/v1/api/core/base/structured_stream.mdx +++ b/cloud/content/docs/v1/api/core/base/structured_stream.mdx @@ -25,7 +25,7 @@ A base class for streaming structured outputs from LLMs. "type_str": "BaseStream", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/stream#basestream" + "doc_url": "/docs/v1/api/core/base/stream#basestream" } }, { @@ -93,7 +93,7 @@ A base class for streaming structured outputs from LLMs. "type_str": "_BaseCallResponseT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/call_response#basecallresponse" + "doc_url": "/docs/v1/api/core/base/call_response#basecallresponse" } ] } @@ -115,7 +115,7 @@ A base class for streaming structured outputs from LLMs. "type_str": "_BaseCallResponseChunkT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/call_response_chunk#basecallresponsechunk" + "doc_url": "/docs/v1/api/core/base/call_response_chunk#basecallresponsechunk" } ] } @@ -137,7 +137,7 @@ A base class for streaming structured outputs from LLMs. "type_str": "BaseStream", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/stream#basestream" + "doc_url": "/docs/v1/api/core/base/stream#basestream" } ] } @@ -159,7 +159,7 @@ A base class for streaming structured outputs from LLMs. "type_str": "_BaseToolT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" } ] } @@ -198,7 +198,7 @@ A base class for streaming structured outputs from LLMs. 
"type_str": "_BaseDynamicConfigT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/dynamic_config#basedynamicconfig" + "doc_url": "/docs/v1/api/core/base/dynamic_config#basedynamicconfig" }, { "type_str": "_AsyncBaseDynamicConfigT", @@ -210,7 +210,7 @@ A base class for streaming structured outputs from LLMs. "type_str": "_BaseCallParamsT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/call_params#basecallparams" + "doc_url": "/docs/v1/api/core/base/call_params#basecallparams" }, { "type_str": "_ResponseT", @@ -240,7 +240,7 @@ A base class for streaming structured outputs from LLMs. "type_str": "_BaseToolT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" } ] }, @@ -271,7 +271,7 @@ A base class for streaming structured outputs from LLMs. "type_str": "_BaseDynamicConfigT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/dynamic_config#basedynamicconfig" + "doc_url": "/docs/v1/api/core/base/dynamic_config#basedynamicconfig" }, { "type_str": "_AsyncBaseDynamicConfigT", @@ -283,7 +283,7 @@ A base class for streaming structured outputs from LLMs. "type_str": "_BaseCallParamsT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/call_params#basecallparams" + "doc_url": "/docs/v1/api/core/base/call_params#basecallparams" }, { "type_str": "_ResponseT", @@ -313,7 +313,7 @@ A base class for streaming structured outputs from LLMs. "type_str": "_BaseToolT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" } ] } @@ -337,7 +337,7 @@ A base class for streaming structured outputs from LLMs. "type_str": "_BaseCallResponseChunkT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/call_response_chunk#basecallresponsechunk" + "doc_url": "/docs/v1/api/core/base/call_response_chunk#basecallresponsechunk" } ] } diff --git a/cloud/content/docs/v1/api/core/base/tool.mdx b/cloud/content/docs/v1/api/core/base/tool.mdx index 53e048602..eff0ac07e 100644 --- a/cloud/content/docs/v1/api/core/base/tool.mdx +++ b/cloud/content/docs/v1/api/core/base/tool.mdx @@ -12,7 +12,7 @@ This module defines the base class for tools used in LLM calls. -[Tools](/docs/mirascope/learn/tools) +[Tools](/docs/v1/learn/tools) @@ -109,7 +109,7 @@ class FormatBook(BaseTool): "type_str": "ToolConfig", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#toolconfig" + "doc_url": "/docs/v1/api/core/base/tool#toolconfig" } }, { @@ -267,7 +267,7 @@ Returns this tool type converted from a function. "type_str": "_BaseToolT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" } ] } @@ -302,7 +302,7 @@ Returns this tool type converted from a function. "type_str": "_BaseToolT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" } ] } @@ -333,7 +333,7 @@ Returns this tool type converted from a given base tool type. 
"type_str": "_BaseToolT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" } ] } @@ -381,7 +381,7 @@ Returns this tool type converted from a given base tool type. "type_str": "_BaseToolT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" } ] } @@ -412,7 +412,7 @@ Returns this tool type converted from a base type. "type_str": "_BaseToolT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" } ] } @@ -460,7 +460,7 @@ Returns this tool type converted from a base type. "type_str": "_BaseToolT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" } ] } diff --git a/cloud/content/docs/v1/api/core/base/toolkit.mdx b/cloud/content/docs/v1/api/core/base/toolkit.mdx index 397431fa7..8b3bb9ee4 100644 --- a/cloud/content/docs/v1/api/core/base/toolkit.mdx +++ b/cloud/content/docs/v1/api/core/base/toolkit.mdx @@ -12,7 +12,7 @@ The module for defining the toolkit class for LLM call tools. -[Tools](/docs/mirascope/learn/tools#toolkit) +[Tools](/docs/v1/learn/tools#toolkit) @@ -67,7 +67,7 @@ The module for defining the toolkit class for LLM call tools. "type_str": "BaseTool", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" } ] } @@ -261,7 +261,7 @@ The method to create the tools. "type_str": "BaseTool", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" } ] } @@ -315,13 +315,13 @@ The method to create the tools. "type_str": "_BaseToolKitT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/toolkit#basetoolkit" + "doc_url": "/docs/v1/api/core/base/toolkit#basetoolkit" }, { "type_str": "P", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/toolkit#p" + "doc_url": "/docs/v1/api/core/base/toolkit#p" } ] }, @@ -348,7 +348,7 @@ The method to create the tools. "type_str": "_BaseToolT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" } ] } @@ -397,13 +397,13 @@ The method to create the tools. "type_str": "_BaseToolKitT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/toolkit#basetoolkit" + "doc_url": "/docs/v1/api/core/base/toolkit#basetoolkit" }, { "type_str": "P", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/toolkit#p" + "doc_url": "/docs/v1/api/core/base/toolkit#p" } ] }, @@ -430,7 +430,7 @@ The method to create the tools. 
"type_str": "_BaseToolT", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" } ] } diff --git a/cloud/content/docs/v1/api/core/base/types.mdx b/cloud/content/docs/v1/api/core/base/types.mdx index 50e5e227e..ff53f23f4 100644 --- a/cloud/content/docs/v1/api/core/base/types.mdx +++ b/cloud/content/docs/v1/api/core/base/types.mdx @@ -75,7 +75,7 @@ description: API documentation for mirascope.core.base.types "type_str": "AudioSegment", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/types#audiosegment" + "doc_url": "/docs/v1/api/core/base/types#audiosegment" } }} /> @@ -112,7 +112,7 @@ description: API documentation for mirascope.core.base.types "type_str": "AudioSegment", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/types#audiosegment" + "doc_url": "/docs/v1/api/core/base/types#audiosegment" } }} /> @@ -149,7 +149,7 @@ description: API documentation for mirascope.core.base.types "type_str": "AudioSegment", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/types#audiosegment" + "doc_url": "/docs/v1/api/core/base/types#audiosegment" } }} /> diff --git a/cloud/content/docs/v1/api/core/bedrock/call.mdx b/cloud/content/docs/v1/api/core/bedrock/call.mdx index dbb94d44a..2b74bd1ca 100644 --- a/cloud/content/docs/v1/api/core/bedrock/call.mdx +++ b/cloud/content/docs/v1/api/core/bedrock/call.mdx @@ -12,7 +12,7 @@ A decorator for calling the Bedrock API with a typed function. -[Calls](/docs/mirascope/learn/calls) +[Calls](/docs/v1/learn/calls) @@ -87,7 +87,7 @@ print(response.content) "type_str": "BaseTool", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" }, { "type_str": "Callable", @@ -169,7 +169,7 @@ print(response.content) "type_str": "BedrockCallResponse", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/bedrock/call_response#bedrockcallresponse" + "doc_url": "/docs/v1/api/core/bedrock/call_response#bedrockcallresponse" }, { "type_str": "ResponseModelT", @@ -217,7 +217,7 @@ print(response.content) "type_str": "BedrockCallParams", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/bedrock/call_params#bedrockcallparams" + "doc_url": "/docs/v1/api/core/bedrock/call_params#bedrockcallparams" }, "description": "The `BedrockCallParams` call parameters to use in the\nAPI call." } diff --git a/cloud/content/docs/v1/api/core/bedrock/call_params.mdx b/cloud/content/docs/v1/api/core/bedrock/call_params.mdx index 74d9db460..2da4591b9 100644 --- a/cloud/content/docs/v1/api/core/bedrock/call_params.mdx +++ b/cloud/content/docs/v1/api/core/bedrock/call_params.mdx @@ -8,7 +8,7 @@ description: API documentation for mirascope.core.bedrock.call_params -[Calls](/docs/mirascope/learn/calls#provider-specific-parameters) +[Calls](/docs/v1/learn/calls#provider-specific-parameters) @@ -19,7 +19,7 @@ The parameters to use when calling the Bedrock API. 
[Bedrock converse API Reference](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-runtime/client/converse.html) **Bases:** - + -[Calls](/docs/mirascope/learn/calls#handling-responses) +[Calls](/docs/v1/learn/calls#handling-responses) @@ -40,7 +40,7 @@ print(response.content) ``` **Bases:** - + -[Streams](/docs/mirascope/learn/streams#handling-streamed-responses) +[Streams](/docs/v1/learn/streams#handling-streamed-responses) @@ -41,7 +41,7 @@ for chunk, _ in stream: ``` **Bases:** - + AsyncBedrockDynamicConfig -**Type:** +**Type:** ## BedrockDynamicConfig -**Type:** +**Type:** The function return type for functions wrapped with the `bedrock_call` decorator. diff --git a/cloud/content/docs/v1/api/core/bedrock/stream.mdx b/cloud/content/docs/v1/api/core/bedrock/stream.mdx index d721946d3..1043eb2c0 100644 --- a/cloud/content/docs/v1/api/core/bedrock/stream.mdx +++ b/cloud/content/docs/v1/api/core/bedrock/stream.mdx @@ -10,7 +10,7 @@ The `BedrockStream` class for convenience around streaming LLM calls. -[Streams](/docs/mirascope/learn/streams) +[Streams](/docs/v1/learn/streams) @@ -37,7 +37,7 @@ for chunk, _ in stream: ``` **Bases:** - + diff --git a/cloud/content/docs/v1/api/core/bedrock/tool.mdx b/cloud/content/docs/v1/api/core/bedrock/tool.mdx index 713b8f1b1..a6044815d 100644 --- a/cloud/content/docs/v1/api/core/bedrock/tool.mdx +++ b/cloud/content/docs/v1/api/core/bedrock/tool.mdx @@ -12,14 +12,14 @@ The `BedrockTool` class for easy tool usage with Bedrock LLM calls. -[Tools](/docs/mirascope/learn/tools) +[Tools](/docs/v1/learn/tools) ## GenerateBedrockStrictToolJsonSchema **Bases:** - + ## BedrockToolConfig @@ -27,7 +27,7 @@ The `BedrockTool` class for easy tool usage with Bedrock LLM calls. A tool configuration for Bedrock-specific features. **Bases:** - + ## BedrockTool @@ -57,7 +57,7 @@ if tool := response.tool: # returns an `BedrockTool` instance ``` **Bases:** - + diff --git a/cloud/content/docs/v1/api/core/cohere/call.mdx b/cloud/content/docs/v1/api/core/cohere/call.mdx index 93cd3df2c..fb12a8fd3 100644 --- a/cloud/content/docs/v1/api/core/cohere/call.mdx +++ b/cloud/content/docs/v1/api/core/cohere/call.mdx @@ -12,7 +12,7 @@ A decorator for calling the Cohere API with a typed function. -[Calls](/docs/mirascope/learn/calls) +[Calls](/docs/v1/learn/calls) @@ -86,7 +86,7 @@ print(response.content) "type_str": "BaseTool", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" }, { "type_str": "Callable", @@ -168,7 +168,7 @@ print(response.content) "type_str": "CohereCallResponse", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/cohere/call_response#coherecallresponse" + "doc_url": "/docs/v1/api/core/cohere/call_response#coherecallresponse" }, { "type_str": "ResponseModelT", @@ -216,7 +216,7 @@ print(response.content) "type_str": "CohereCallParams", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/cohere/call_params#coherecallparams" + "doc_url": "/docs/v1/api/core/cohere/call_params#coherecallparams" }, "description": "The `CohereCallParams` call parameters to use in the\nAPI call." 
} diff --git a/cloud/content/docs/v1/api/core/cohere/call_params.mdx b/cloud/content/docs/v1/api/core/cohere/call_params.mdx index 81dcfe94f..37e78a66d 100644 --- a/cloud/content/docs/v1/api/core/cohere/call_params.mdx +++ b/cloud/content/docs/v1/api/core/cohere/call_params.mdx @@ -8,7 +8,7 @@ description: API documentation for mirascope.core.cohere.call_params -[Calls](/docs/mirascope/learn/calls#provider-specific-parameters) +[Calls](/docs/v1/learn/calls#provider-specific-parameters) @@ -19,7 +19,7 @@ The parameters to use when calling the Cohere API. [Cohere API Reference](https://docs.cohere.com/reference/chat) **Bases:** - + -[Calls](/docs/mirascope/learn/calls#handling-responses) +[Calls](/docs/v1/learn/calls#handling-responses) @@ -39,7 +39,7 @@ print(response.content) ``` **Bases:** - + -[Streams](/docs/mirascope/learn/streams#handling-streamed-responses) +[Streams](/docs/v1/learn/streams#handling-streamed-responses) @@ -40,7 +40,7 @@ for chunk, _ in stream: ``` **Bases:** - + AsyncCohereDynamicConfig -**Type:** +**Type:** ## CohereDynamicConfig -**Type:** +**Type:** The function return type for functions wrapped with the `cohere_call` decorator. diff --git a/cloud/content/docs/v1/api/core/cohere/stream.mdx b/cloud/content/docs/v1/api/core/cohere/stream.mdx index 72d70ded9..673bdae13 100644 --- a/cloud/content/docs/v1/api/core/cohere/stream.mdx +++ b/cloud/content/docs/v1/api/core/cohere/stream.mdx @@ -10,7 +10,7 @@ The `CohereStream` class for convenience around streaming LLM calls. -[Streams](/docs/mirascope/learn/streams) +[Streams](/docs/v1/learn/streams) @@ -36,7 +36,7 @@ for chunk, _ in stream: ``` **Bases:** - + diff --git a/cloud/content/docs/v1/api/core/cohere/tool.mdx b/cloud/content/docs/v1/api/core/cohere/tool.mdx index fa756ee86..1e6ef97c7 100644 --- a/cloud/content/docs/v1/api/core/cohere/tool.mdx +++ b/cloud/content/docs/v1/api/core/cohere/tool.mdx @@ -10,7 +10,7 @@ The `CohereTool` class for easy tool usage with Cohere LLM calls. -[Tools](/docs/mirascope/learn/tools) +[Tools](/docs/v1/learn/tools) @@ -40,7 +40,7 @@ if tool := response.tool: # returns an `CohereTool` instance ``` **Bases:** - + @@ -159,7 +159,7 @@ Constructs an `CohereTool` instance from a `tool_call`. "type_str": "CohereTool", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/cohere/tool#coheretool" + "doc_url": "/docs/v1/api/core/cohere/tool#coheretool" } }} /> diff --git a/cloud/content/docs/v1/api/core/costs/calculate_cost.mdx b/cloud/content/docs/v1/api/core/costs/calculate_cost.mdx index 0df4a077e..e6ecd3370 100644 --- a/cloud/content/docs/v1/api/core/costs/calculate_cost.mdx +++ b/cloud/content/docs/v1/api/core/costs/calculate_cost.mdx @@ -23,7 +23,7 @@ preserving existing behavior while providing a unified interface. "type_str": "Provider", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/types#provider" + "doc_url": "/docs/v1/api/core/base/types#provider" }, "description": "The LLM provider (e.g., \"openai\", \"anthropic\")" }, @@ -54,7 +54,7 @@ preserving existing behavior while providing a unified interface. 
"type_str": "CostMetadata", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/types#costmetadata" + "doc_url": "/docs/v1/api/core/base/types#costmetadata" }, { "type_str": "None", diff --git a/cloud/content/docs/v1/api/core/google/call.mdx b/cloud/content/docs/v1/api/core/google/call.mdx index f781a943b..2eea88ba8 100644 --- a/cloud/content/docs/v1/api/core/google/call.mdx +++ b/cloud/content/docs/v1/api/core/google/call.mdx @@ -12,7 +12,7 @@ A decorator for calling the Google API with a typed function. -[Calls](/docs/mirascope/learn/calls) +[Calls](/docs/v1/learn/calls) @@ -86,7 +86,7 @@ print(response.content) "type_str": "BaseTool", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" }, { "type_str": "Callable", @@ -168,7 +168,7 @@ print(response.content) "type_str": "GoogleCallResponse", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/google/call_response#googlecallresponse" + "doc_url": "/docs/v1/api/core/google/call_response#googlecallresponse" }, { "type_str": "ResponseModelT", @@ -216,7 +216,7 @@ print(response.content) "type_str": "GoogleCallParams", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/google/call_params#googlecallparams" + "doc_url": "/docs/v1/api/core/google/call_params#googlecallparams" }, "description": "The `GoogleCallParams` call parameters to use in the\nAPI call." } diff --git a/cloud/content/docs/v1/api/core/google/call_params.mdx b/cloud/content/docs/v1/api/core/google/call_params.mdx index 898d55395..2b18afaf0 100644 --- a/cloud/content/docs/v1/api/core/google/call_params.mdx +++ b/cloud/content/docs/v1/api/core/google/call_params.mdx @@ -8,7 +8,7 @@ description: API documentation for mirascope.core.google.call_params -[Calls](/docs/mirascope/learn/calls#provider-specific-parameters) +[Calls](/docs/v1/learn/calls#provider-specific-parameters) @@ -19,7 +19,7 @@ The parameters to use when calling the Google API. [Google API Reference](https://ai.google.dev/google-api/docs/text-generation?lang=python) **Bases:** - + -[Calls](/docs/mirascope/learn/calls#handling-responses) +[Calls](/docs/v1/learn/calls#handling-responses) @@ -40,7 +40,7 @@ print(response.content) ``` **Bases:** - + -[Streams](/docs/mirascope/learn/streams#handling-streamed-responses) +[Streams](/docs/v1/learn/streams#handling-streamed-responses) @@ -39,7 +39,7 @@ for chunk, _ in stream: ``` **Bases:** - + GoogleDynamicConfig -**Type:** +**Type:** The function return type for functions wrapped with the `google_call` decorator. diff --git a/cloud/content/docs/v1/api/core/google/stream.mdx b/cloud/content/docs/v1/api/core/google/stream.mdx index 514a9f27b..43a9bc22c 100644 --- a/cloud/content/docs/v1/api/core/google/stream.mdx +++ b/cloud/content/docs/v1/api/core/google/stream.mdx @@ -10,7 +10,7 @@ The `GoogleStream` class for convenience around streaming LLM calls. -[Streams](/docs/mirascope/learn/streams) +[Streams](/docs/v1/learn/streams) @@ -35,7 +35,7 @@ for chunk, _ in stream: ``` **Bases:** - + diff --git a/cloud/content/docs/v1/api/core/google/tool.mdx b/cloud/content/docs/v1/api/core/google/tool.mdx index c27e51805..3c936671d 100644 --- a/cloud/content/docs/v1/api/core/google/tool.mdx +++ b/cloud/content/docs/v1/api/core/google/tool.mdx @@ -10,7 +10,7 @@ The `GoogleTool` class for easy tool usage with Google's Google LLM calls. 
-[Tools](/docs/mirascope/learn/tools) +[Tools](/docs/v1/learn/tools) @@ -40,7 +40,7 @@ if tool := response.tool: # returns an `GoogleTool` instance ``` **Bases:** - + @@ -146,7 +146,7 @@ Constructs an `GoogleTool` instance from a `tool_call`. "type_str": "GoogleTool", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/google/tool#googletool" + "doc_url": "/docs/v1/api/core/google/tool#googletool" } }} /> diff --git a/cloud/content/docs/v1/api/core/groq/call.mdx b/cloud/content/docs/v1/api/core/groq/call.mdx index 6de1500b6..8f0ed1e42 100644 --- a/cloud/content/docs/v1/api/core/groq/call.mdx +++ b/cloud/content/docs/v1/api/core/groq/call.mdx @@ -12,7 +12,7 @@ A decorator for calling the Groq API with a typed function. -[Calls](/docs/mirascope/learn/calls) +[Calls](/docs/v1/learn/calls) @@ -86,7 +86,7 @@ print(response.content) "type_str": "BaseTool", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" }, { "type_str": "Callable", @@ -168,7 +168,7 @@ print(response.content) "type_str": "CohereCallResponse", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/cohere/call_response#coherecallresponse" + "doc_url": "/docs/v1/api/core/cohere/call_response#coherecallresponse" }, { "type_str": "ResponseModelT", @@ -216,7 +216,7 @@ print(response.content) "type_str": "GroqCallParams", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/groq/call_params#groqcallparams" + "doc_url": "/docs/v1/api/core/groq/call_params#groqcallparams" }, "description": "The `GroqCallParams` call parameters to use in the API\ncall." } diff --git a/cloud/content/docs/v1/api/core/groq/call_params.mdx b/cloud/content/docs/v1/api/core/groq/call_params.mdx index 945fce0de..ee9033dcc 100644 --- a/cloud/content/docs/v1/api/core/groq/call_params.mdx +++ b/cloud/content/docs/v1/api/core/groq/call_params.mdx @@ -8,7 +8,7 @@ description: API documentation for mirascope.core.groq.call_params -[Calls](/docs/mirascope/learn/calls#provider-specific-parameters) +[Calls](/docs/v1/learn/calls#provider-specific-parameters) @@ -19,7 +19,7 @@ The parameters to use when calling the Groq API. [Groq API Reference](https://console.groq.com/docs/api-reference#chat-create) **Bases:** - + -[Calls](/docs/mirascope/learn/calls#handling-responses) +[Calls](/docs/v1/learn/calls#handling-responses) @@ -38,7 +38,7 @@ print(response.content) ``` **Bases:** - + -[Streams](/docs/mirascope/learn/streams#handling-streamed-responses) +[Streams](/docs/v1/learn/streams#handling-streamed-responses) @@ -46,7 +46,7 @@ for chunk, _ in stream: ``` **Bases:** - + AsyncGroqDynamicConfig -**Type:** +**Type:** ## GroqDynamicConfig -**Type:** +**Type:** The function return type for functions wrapped with the `groq_call` decorator. diff --git a/cloud/content/docs/v1/api/core/groq/stream.mdx b/cloud/content/docs/v1/api/core/groq/stream.mdx index 04fc9f6d8..80bc87bee 100644 --- a/cloud/content/docs/v1/api/core/groq/stream.mdx +++ b/cloud/content/docs/v1/api/core/groq/stream.mdx @@ -12,7 +12,7 @@ The `GroqStream` class for convenience around streaming LLM calls. 
-[Streams](/docs/mirascope/learn/streams) +[Streams](/docs/v1/learn/streams) @@ -42,7 +42,7 @@ for chunk, _ in stream: ``` **Bases:** - + diff --git a/cloud/content/docs/v1/api/core/groq/tool.mdx b/cloud/content/docs/v1/api/core/groq/tool.mdx index 17e2454f6..ac99d1495 100644 --- a/cloud/content/docs/v1/api/core/groq/tool.mdx +++ b/cloud/content/docs/v1/api/core/groq/tool.mdx @@ -10,7 +10,7 @@ The `GroqTool` class for easy tool usage with Groq LLM calls. -[Tools](/docs/mirascope/learn/tools) +[Tools](/docs/v1/learn/tools) @@ -39,7 +39,7 @@ if tool := response.tool: # returns an `GroqTool` instance ``` **Bases:** - + diff --git a/cloud/content/docs/v1/api/core/litellm/call.mdx b/cloud/content/docs/v1/api/core/litellm/call.mdx index 7da3e3405..c49be1ff0 100644 --- a/cloud/content/docs/v1/api/core/litellm/call.mdx +++ b/cloud/content/docs/v1/api/core/litellm/call.mdx @@ -12,7 +12,7 @@ A decorator for calling the LiteLLM API with a typed function. -[Calls](/docs/mirascope/learn/calls) +[Calls](/docs/v1/learn/calls) @@ -86,7 +86,7 @@ print(response.content) "type_str": "BaseTool", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" }, { "type_str": "Callable", @@ -168,7 +168,7 @@ print(response.content) "type_str": "OpenAICallResponse", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/openai/call_response#openaicallresponse" + "doc_url": "/docs/v1/api/core/openai/call_response#openaicallresponse" }, { "type_str": "ResponseModelT", @@ -216,7 +216,7 @@ print(response.content) "type_str": "OpenAICallParams", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/openai/call_params#openaicallparams" + "doc_url": "/docs/v1/api/core/openai/call_params#openaicallparams" }, "description": "The `OpenAICallParams` call parameters to use in the\nAPI call." } diff --git a/cloud/content/docs/v1/api/core/litellm/call_params.mdx b/cloud/content/docs/v1/api/core/litellm/call_params.mdx index 99d53f49a..1318ca2f2 100644 --- a/cloud/content/docs/v1/api/core/litellm/call_params.mdx +++ b/cloud/content/docs/v1/api/core/litellm/call_params.mdx @@ -8,7 +8,7 @@ description: API documentation for mirascope.core.litellm.call_params -[Calls](/docs/mirascope/learn/calls#provider-specific-parameters) +[Calls](/docs/v1/learn/calls#provider-specific-parameters) @@ -19,6 +19,6 @@ A simple wrapper around `OpenAICallParams.` Since LiteLLM uses the OpenAI spec, we change nothing here. **Bases:** - + diff --git a/cloud/content/docs/v1/api/core/litellm/call_response.mdx b/cloud/content/docs/v1/api/core/litellm/call_response.mdx index f659310c1..957be04e2 100644 --- a/cloud/content/docs/v1/api/core/litellm/call_response.mdx +++ b/cloud/content/docs/v1/api/core/litellm/call_response.mdx @@ -10,7 +10,7 @@ This module contains the `LiteLLMCallResponse` class. -[Calls](/docs/mirascope/learn/calls#handling-responses) +[Calls](/docs/v1/learn/calls#handling-responses) @@ -22,7 +22,7 @@ Everything is the same except the `cost` property, which has been updated to use LiteLLM's cost calculations so that cost tracking works for non-OpenAI models. **Bases:** - + -[Streams](/docs/mirascope/learn/streams#handling-streamed-responses) +[Streams](/docs/v1/learn/streams#handling-streamed-responses) @@ -22,6 +22,6 @@ Everything is the same except the `cost` property, which has been updated to use LiteLLM's cost calculations so that cost tracking works for non-OpenAI models. 
**Bases:** - + diff --git a/cloud/content/docs/v1/api/core/litellm/stream.mdx b/cloud/content/docs/v1/api/core/litellm/stream.mdx index 6a6b3c369..aa85f9de2 100644 --- a/cloud/content/docs/v1/api/core/litellm/stream.mdx +++ b/cloud/content/docs/v1/api/core/litellm/stream.mdx @@ -10,7 +10,7 @@ The `LiteLLMStream` class for convenience around streaming LLM calls. -[Streams](/docs/mirascope/learn/streams) +[Streams](/docs/v1/learn/streams) @@ -23,7 +23,7 @@ the `cost` property so that cost is properly calculated using LiteLLM's cost calculation method. This ensures cost calculation works for non-OpenAI models. **Bases:** - + diff --git a/cloud/content/docs/v1/api/core/litellm/tool.mdx b/cloud/content/docs/v1/api/core/litellm/tool.mdx index 2103e9fa1..1057dfd68 100644 --- a/cloud/content/docs/v1/api/core/litellm/tool.mdx +++ b/cloud/content/docs/v1/api/core/litellm/tool.mdx @@ -10,7 +10,7 @@ The `LiteLLMTool` class for easy tool usage with LiteLLM LLM calls. -[Tools](/docs/mirascope/learn/tools) +[Tools](/docs/v1/learn/tools) @@ -21,6 +21,6 @@ A simple wrapper around `OpenAITool`. Since LiteLLM uses the OpenAI spec, we change nothing here. **Bases:** - + diff --git a/cloud/content/docs/v1/api/core/mistral/call.mdx b/cloud/content/docs/v1/api/core/mistral/call.mdx index e91dedba8..e2b2424d4 100644 --- a/cloud/content/docs/v1/api/core/mistral/call.mdx +++ b/cloud/content/docs/v1/api/core/mistral/call.mdx @@ -12,7 +12,7 @@ A decorator for calling the Mistral API with a typed function. -[Calls](/docs/mirascope/learn/calls) +[Calls](/docs/v1/learn/calls) @@ -86,7 +86,7 @@ print(response.content) "type_str": "BaseTool", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" }, { "type_str": "Callable", @@ -168,7 +168,7 @@ print(response.content) "type_str": "MistralCallResponse", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/mistral/call_response#mistralcallresponse" + "doc_url": "/docs/v1/api/core/mistral/call_response#mistralcallresponse" }, { "type_str": "ResponseModelT", @@ -216,7 +216,7 @@ print(response.content) "type_str": "MistralCallParams", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/mistral/call_params#mistralcallparams" + "doc_url": "/docs/v1/api/core/mistral/call_params#mistralcallparams" }, "description": "The `MistralCallParams` call parameters to use in\nthe API call." } diff --git a/cloud/content/docs/v1/api/core/mistral/call_params.mdx b/cloud/content/docs/v1/api/core/mistral/call_params.mdx index dc370d7db..84dd3b0be 100644 --- a/cloud/content/docs/v1/api/core/mistral/call_params.mdx +++ b/cloud/content/docs/v1/api/core/mistral/call_params.mdx @@ -8,7 +8,7 @@ description: API documentation for mirascope.core.mistral.call_params -[Calls](/docs/mirascope/learn/calls#provider-specific-parameters) +[Calls](/docs/v1/learn/calls#provider-specific-parameters) @@ -19,7 +19,7 @@ The parameters to use when calling the Mistral API. 
[Mistral API Reference](https://docs.mistral.ai/api/) **Bases:** - + -[Calls](/docs/mirascope/learn/calls#handling-responses) +[Calls](/docs/v1/learn/calls#handling-responses) @@ -38,7 +38,7 @@ print(response.content) ``` **Bases:** - + -[Streams](/docs/mirascope/learn/streams#handling-streamed-responses) +[Streams](/docs/v1/learn/streams#handling-streamed-responses) @@ -40,7 +40,7 @@ for chunk, _ in stream: ``` **Bases:** - + MistralDynamicConfig -**Type:** +**Type:** The function return type for functions wrapped with the `mistral_call` decorator. diff --git a/cloud/content/docs/v1/api/core/mistral/stream.mdx b/cloud/content/docs/v1/api/core/mistral/stream.mdx index f91517ec6..981ed8073 100644 --- a/cloud/content/docs/v1/api/core/mistral/stream.mdx +++ b/cloud/content/docs/v1/api/core/mistral/stream.mdx @@ -10,7 +10,7 @@ The `MistralStream` class for convenience around streaming LLM calls. -[Streams](/docs/mirascope/learn/streams) +[Streams](/docs/v1/learn/streams) @@ -36,7 +36,7 @@ for chunk, _ in stream: ``` **Bases:** - + diff --git a/cloud/content/docs/v1/api/core/mistral/tool.mdx b/cloud/content/docs/v1/api/core/mistral/tool.mdx index 8fe6e1532..6d2c796f7 100644 --- a/cloud/content/docs/v1/api/core/mistral/tool.mdx +++ b/cloud/content/docs/v1/api/core/mistral/tool.mdx @@ -10,7 +10,7 @@ The `MistralTool` class for easy tool usage with Mistral LLM calls. -[Tools](/docs/mirascope/learn/tools) +[Tools](/docs/v1/learn/tools) @@ -39,7 +39,7 @@ if tool := response.tool: # returns a `MistralTool` instance ``` **Bases:** - + diff --git a/cloud/content/docs/v1/api/core/openai/call.mdx b/cloud/content/docs/v1/api/core/openai/call.mdx index 0ec2f93b0..89f40f329 100644 --- a/cloud/content/docs/v1/api/core/openai/call.mdx +++ b/cloud/content/docs/v1/api/core/openai/call.mdx @@ -12,7 +12,7 @@ A decorator for calling the OpenAI API with a typed function. -[Calls](/docs/mirascope/learn/calls) +[Calls](/docs/v1/learn/calls) @@ -86,7 +86,7 @@ print(response.content) "type_str": "BaseTool", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" }, { "type_str": "Callable", @@ -168,7 +168,7 @@ print(response.content) "type_str": "OpenAICallResponse", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/openai/call_response#openaicallresponse" + "doc_url": "/docs/v1/api/core/openai/call_response#openaicallresponse" }, { "type_str": "ResponseModelT", @@ -216,7 +216,7 @@ print(response.content) "type_str": "OpenAICallParams", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/openai/call_params#openaicallparams" + "doc_url": "/docs/v1/api/core/openai/call_params#openaicallparams" }, "description": "The `OpenAICallParams` call parameters to use in the\nAPI call." } diff --git a/cloud/content/docs/v1/api/core/openai/call_params.mdx b/cloud/content/docs/v1/api/core/openai/call_params.mdx index 781a44ec6..2668fda3d 100644 --- a/cloud/content/docs/v1/api/core/openai/call_params.mdx +++ b/cloud/content/docs/v1/api/core/openai/call_params.mdx @@ -10,7 +10,7 @@ description: API documentation for mirascope.core.openai.call_params -[Calls](/docs/mirascope/learn/calls#provider-specific-parameters) +[Calls](/docs/v1/learn/calls#provider-specific-parameters) @@ -30,7 +30,7 @@ The parameters to use when calling the OpenAI API. 
[OpenAI API Reference](https://platform.openai.com/docs/api-reference/chat/create) **Bases:** - + -[Calls](/docs/mirascope/learn/calls#handling-responses) +[Calls](/docs/v1/learn/calls#handling-responses) @@ -67,7 +67,7 @@ print(response.content) ``` **Bases:** - + -[Streams](/docs/mirascope/learn/streams#handling-streamed-responses) +[Streams](/docs/v1/learn/streams#handling-streamed-responses) @@ -47,7 +47,7 @@ for chunk, _ in stream: ``` **Bases:** - + AsyncOpenAIDynamicConfig -**Type:** +**Type:** ## OpenAIDynamicConfig -**Type:** +**Type:** The function return type for functions wrapped with the `openai_call` decorator. diff --git a/cloud/content/docs/v1/api/core/openai/stream.mdx b/cloud/content/docs/v1/api/core/openai/stream.mdx index 77787ce0f..6702087a2 100644 --- a/cloud/content/docs/v1/api/core/openai/stream.mdx +++ b/cloud/content/docs/v1/api/core/openai/stream.mdx @@ -12,7 +12,7 @@ The `OpenAIStream` class for convenience around streaming LLM calls. -[Streams](/docs/mirascope/learn/streams) +[Streams](/docs/v1/learn/streams) @@ -43,7 +43,7 @@ for chunk, _ in stream: ``` **Bases:** - + diff --git a/cloud/content/docs/v1/api/core/openai/tool.mdx b/cloud/content/docs/v1/api/core/openai/tool.mdx index 19c0d29c0..c04470713 100644 --- a/cloud/content/docs/v1/api/core/openai/tool.mdx +++ b/cloud/content/docs/v1/api/core/openai/tool.mdx @@ -12,14 +12,14 @@ The `OpenAITool` class for easy tool usage with OpenAI LLM calls. -[Tools](/docs/mirascope/learn/tools) +[Tools](/docs/v1/learn/tools) ## GenerateOpenAIStrictToolJsonSchema **Bases:** - + ## OpenAIToolConfig @@ -27,7 +27,7 @@ The `OpenAITool` class for easy tool usage with OpenAI LLM calls. A tool configuration for OpenAI-specific features. **Bases:** - + + diff --git a/cloud/content/docs/v1/api/core/xai/call.mdx b/cloud/content/docs/v1/api/core/xai/call.mdx index 739e60d18..9372923ea 100644 --- a/cloud/content/docs/v1/api/core/xai/call.mdx +++ b/cloud/content/docs/v1/api/core/xai/call.mdx @@ -12,7 +12,7 @@ A decorator for calling the xAI API with a typed function. -[Calls](/docs/mirascope/learn/calls) +[Calls](/docs/v1/learn/calls) @@ -86,7 +86,7 @@ print(response.content) "type_str": "BaseTool", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" }, { "type_str": "Callable", @@ -168,7 +168,7 @@ print(response.content) "type_str": "OpenAICallResponse", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/openai/call_response#openaicallresponse" + "doc_url": "/docs/v1/api/core/openai/call_response#openaicallresponse" }, { "type_str": "ResponseModelT", @@ -216,7 +216,7 @@ print(response.content) "type_str": "OpenAICallParams", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/openai/call_params#openaicallparams" + "doc_url": "/docs/v1/api/core/openai/call_params#openaicallparams" }, "description": "The `OpenAICallParams` call parameters to use in the\nAPI call." 
} diff --git a/cloud/content/docs/v1/api/core/xai/call_params.mdx b/cloud/content/docs/v1/api/core/xai/call_params.mdx index a29f73a64..a794137e5 100644 --- a/cloud/content/docs/v1/api/core/xai/call_params.mdx +++ b/cloud/content/docs/v1/api/core/xai/call_params.mdx @@ -8,7 +8,7 @@ description: API documentation for mirascope.core.xai.call_params -[Calls](/docs/mirascope/learn/calls#provider-specific-parameters) +[Calls](/docs/v1/learn/calls#provider-specific-parameters) @@ -19,6 +19,6 @@ A simple wrapper around `OpenAICallParams.` Since xAI supports the OpenAI spec, we change nothing here. **Bases:** - + diff --git a/cloud/content/docs/v1/api/core/xai/call_response.mdx b/cloud/content/docs/v1/api/core/xai/call_response.mdx index 9254b8835..04335387b 100644 --- a/cloud/content/docs/v1/api/core/xai/call_response.mdx +++ b/cloud/content/docs/v1/api/core/xai/call_response.mdx @@ -10,7 +10,7 @@ This module contains the `XAICallResponse` class. -[Calls](/docs/mirascope/learn/calls#handling-responses) +[Calls](/docs/v1/learn/calls#handling-responses) @@ -22,6 +22,6 @@ Everything is the same except the `cost` property, which has been updated to use xAI's cost calculations so that cost tracking works for non-OpenAI models. **Bases:** - + diff --git a/cloud/content/docs/v1/api/core/xai/call_response_chunk.mdx b/cloud/content/docs/v1/api/core/xai/call_response_chunk.mdx index 1bc8e1402..12cfb22ac 100644 --- a/cloud/content/docs/v1/api/core/xai/call_response_chunk.mdx +++ b/cloud/content/docs/v1/api/core/xai/call_response_chunk.mdx @@ -10,7 +10,7 @@ This module contains the `XAICallResponseChunk` class. -[Streams](/docs/mirascope/learn/streams#handling-streamed-responses) +[Streams](/docs/v1/learn/streams#handling-streamed-responses) @@ -22,6 +22,6 @@ Everything is the same except the `cost` property, which has been updated to use xAI's cost calculations so that cost tracking works for non-OpenAI models. **Bases:** - + diff --git a/cloud/content/docs/v1/api/core/xai/stream.mdx b/cloud/content/docs/v1/api/core/xai/stream.mdx index 7f036e634..7896c08f1 100644 --- a/cloud/content/docs/v1/api/core/xai/stream.mdx +++ b/cloud/content/docs/v1/api/core/xai/stream.mdx @@ -10,7 +10,7 @@ The `XAIStream` class for convenience around streaming xAI LLM calls. -[Streams](/docs/mirascope/learn/streams) +[Streams](/docs/v1/learn/streams) @@ -23,7 +23,7 @@ the `cost` property so that cost is properly calculated using xAI's cost calculation method. This ensures cost calculation works for non-OpenAI models. **Bases:** - + diff --git a/cloud/content/docs/v1/api/core/xai/tool.mdx b/cloud/content/docs/v1/api/core/xai/tool.mdx index 03a883fbe..648f5f390 100644 --- a/cloud/content/docs/v1/api/core/xai/tool.mdx +++ b/cloud/content/docs/v1/api/core/xai/tool.mdx @@ -10,7 +10,7 @@ The `XAITool` class for easy tool usage with xAI LLM calls. -[Tools](/docs/mirascope/learn/tools) +[Tools](/docs/v1/learn/tools) @@ -21,6 +21,6 @@ A simple wrapper around `OpenAITool`. Since xAI supports the OpenAI spec, we change nothing here. **Bases:** - + diff --git a/cloud/content/docs/v1/api/llm/call.mdx b/cloud/content/docs/v1/api/llm/call.mdx index a5de3bea4..c67b1df1f 100644 --- a/cloud/content/docs/v1/api/llm/call.mdx +++ b/cloud/content/docs/v1/api/llm/call.mdx @@ -12,7 +12,7 @@ A decorator for making provider-agnostic LLM API calls with a typed function. 
-[Calls](/docs/mirascope/learn/calls) +[Calls](/docs/v1/learn/calls) @@ -56,13 +56,13 @@ print(response.content) "type_str": "Provider", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/types#provider" + "doc_url": "/docs/v1/api/core/base/types#provider" }, { "type_str": "LocalProvider", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/types#localprovider" + "doc_url": "/docs/v1/api/core/base/types#localprovider" } ] }, @@ -116,7 +116,7 @@ print(response.content) "type_str": "BaseTool", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" }, { "type_str": "Callable", @@ -198,7 +198,7 @@ print(response.content) "type_str": "CallResponse", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/llm/call_response#callresponse" + "doc_url": "/docs/v1/api/llm/call_response#callresponse" }, { "type_str": "ResponseModelT", diff --git a/cloud/content/docs/v1/api/llm/call_response.mdx b/cloud/content/docs/v1/api/llm/call_response.mdx index 1d73ecde9..cc1e945a9 100644 --- a/cloud/content/docs/v1/api/llm/call_response.mdx +++ b/cloud/content/docs/v1/api/llm/call_response.mdx @@ -15,7 +15,7 @@ A provider-agnostic CallResponse class. We rely on _response having `common_` methods or properties for normalization. **Bases:** - + CallResponseChunk **Bases:** - + + @@ -119,13 +119,13 @@ Returns the tool message parameters for tool call results. "type_str": "Tool", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/llm/tool#tool" + "doc_url": "/docs/v1/api/llm/tool#tool" }, { "type_str": "JsonableType", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/types#jsonabletype" + "doc_url": "/docs/v1/api/core/base/types#jsonabletype" } ] } @@ -153,7 +153,7 @@ Returns the tool message parameters for tool call results. "type_str": "BaseMessageParam", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/message_param#basemessageparam" + "doc_url": "/docs/v1/api/core/base/message_param#basemessageparam" } ] } diff --git a/cloud/content/docs/v1/api/llm/tool.mdx b/cloud/content/docs/v1/api/llm/tool.mdx index b84a69f30..64bdd23e5 100644 --- a/cloud/content/docs/v1/api/llm/tool.mdx +++ b/cloud/content/docs/v1/api/llm/tool.mdx @@ -15,6 +15,6 @@ A provider-agnostic Tool class. - Relies on _response having `common_` methods/properties if needed. **Bases:** - + diff --git a/cloud/content/docs/v1/api/mcp/client.mdx b/cloud/content/docs/v1/api/mcp/client.mdx index 09ccd9894..ae51e29ed 100644 --- a/cloud/content/docs/v1/api/mcp/client.mdx +++ b/cloud/content/docs/v1/api/mcp/client.mdx @@ -138,7 +138,7 @@ Read a resource from the MCP server. "type_str": "TextPart", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/message_param#textpart" + "doc_url": "/docs/v1/api/core/base/message_param#textpart" }, { "type_str": "BlobResourceContents", @@ -272,7 +272,7 @@ Get a prompt template from the MCP server. "type_str": "BaseMessageParam", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/message_param#basemessageparam" + "doc_url": "/docs/v1/api/core/base/message_param#basemessageparam" } ] } @@ -331,7 +331,7 @@ List all tools available on the MCP server. 
"type_str": "BaseTool", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/tool#basetool" + "doc_url": "/docs/v1/api/core/base/tool#basetool" } ] } @@ -492,7 +492,7 @@ List all tools available on the MCP server. "type_str": "MCPClient", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/mcp/client#mcpclient" + "doc_url": "/docs/v1/api/mcp/client#mcpclient" } ] } @@ -598,7 +598,7 @@ Returns: "type_str": "MCPClient", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/mcp/client#mcpclient" + "doc_url": "/docs/v1/api/mcp/client#mcpclient" } ] } diff --git a/cloud/content/docs/v1/api/retries/fallback.mdx b/cloud/content/docs/v1/api/retries/fallback.mdx index 801e16945..69c7f7aa2 100644 --- a/cloud/content/docs/v1/api/retries/fallback.mdx +++ b/cloud/content/docs/v1/api/retries/fallback.mdx @@ -122,7 +122,7 @@ The override arguments to use for this fallback attempt. "type_str": "Provider", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/core/base/types#provider" + "doc_url": "/docs/v1/api/core/base/types#provider" } ] } @@ -298,7 +298,7 @@ This must use the provider-agnostic `llm.call` decorator. "type_str": "Fallback", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/retries/fallback#fallback" + "doc_url": "/docs/v1/api/retries/fallback#fallback" } ] } @@ -312,7 +312,7 @@ This must use the provider-agnostic `llm.call` decorator. "type_str": "FallbackDecorator", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/retries/fallback#fallbackdecorator" + "doc_url": "/docs/v1/api/retries/fallback#fallbackdecorator" }, "description": "The decorated function." }} diff --git a/cloud/content/docs/v1/api/tools/system/docker_operation.mdx b/cloud/content/docs/v1/api/tools/system/docker_operation.mdx index 466fb0c46..9ecd5d23d 100644 --- a/cloud/content/docs/v1/api/tools/system/docker_operation.mdx +++ b/cloud/content/docs/v1/api/tools/system/docker_operation.mdx @@ -53,7 +53,7 @@ Configuration for `DockerOperationToolKit` toolkit Base class for Docker operations. **Bases:** -, +, ## DockerContainer @@ -69,7 +69,7 @@ Base class for Docker operations. "type_str": "DockerOperationToolKitConfig", "description": null, "kind": "simple", - "doc_url": "/docs/mirascope/api/tools/system/docker_operation#dockeroperationtoolkitconfig" + "doc_url": "/docs/v1/api/tools/system/docker_operation#dockeroperationtoolkitconfig" } }, { @@ -90,7 +90,7 @@ Base class for Docker operations. ToolKit for executing Python code and shell commands in a Docker container. 
**Bases:** - + + + , +, + + + + + + + - + - + diff --git a/cloud/content/docs/v1/guides/agents/blog-writing-agent.mdx b/cloud/content/docs/v1/guides/agents/blog-writing-agent.mdx index 81c2084cc..c8780a263 100644 --- a/cloud/content/docs/v1/guides/agents/blog-writing-agent.mdx +++ b/cloud/content/docs/v1/guides/agents/blog-writing-agent.mdx @@ -9,13 +9,13 @@ This recipe demonstrates how to build an Agent Executor using Mirascope to autom diff --git a/cloud/content/docs/v1/guides/agents/documentation-agent.mdx b/cloud/content/docs/v1/guides/agents/documentation-agent.mdx index 76d0ccd2c..59cbf0e9c 100644 --- a/cloud/content/docs/v1/guides/agents/documentation-agent.mdx +++ b/cloud/content/docs/v1/guides/agents/documentation-agent.mdx @@ -5,16 +5,16 @@ description: Create an AI assistant that answers questions about documentation b # Documentation Agent -In this recipe, we will be building a `DocumentationAgent` that has access to some documentation. We will be using Mirascope documentation in this example, but this should work on all types of documents. This is implemented using `OpenAI`, see [Local Chat with Codebase](/docs/mirascope/guides/agents/local-chat-with-codebase) for the Llama3.1 implementation. +In this recipe, we will be building a `DocumentationAgent` that has access to some documentation. We will be using Mirascope documentation in this example, but this should work on all types of documents. This is implemented using `OpenAI`, see [Local Chat with Codebase](/docs/v1/guides/agents/local-chat-with-codebase) for the Llama3.1 implementation. diff --git a/cloud/content/docs/v1/guides/agents/local-chat-with-codebase.mdx b/cloud/content/docs/v1/guides/agents/local-chat-with-codebase.mdx index 3a005c852..44e6ec5be 100644 --- a/cloud/content/docs/v1/guides/agents/local-chat-with-codebase.mdx +++ b/cloud/content/docs/v1/guides/agents/local-chat-with-codebase.mdx @@ -9,9 +9,9 @@ In this recipe, we will be using all Open Source Software to build a local ChatB diff --git a/cloud/content/docs/v1/guides/agents/localized-agent.mdx b/cloud/content/docs/v1/guides/agents/localized-agent.mdx index 7ce6f9217..e04cd6910 100644 --- a/cloud/content/docs/v1/guides/agents/localized-agent.mdx +++ b/cloud/content/docs/v1/guides/agents/localized-agent.mdx @@ -9,10 +9,10 @@ This recipe will show you how to use Nimble to make a simple Q&A ChatBot based o diff --git a/cloud/content/docs/v1/guides/agents/qwant-search-agent-with-sources.mdx b/cloud/content/docs/v1/guides/agents/qwant-search-agent-with-sources.mdx index 027480e59..9d73a49fe 100644 --- a/cloud/content/docs/v1/guides/agents/qwant-search-agent-with-sources.mdx +++ b/cloud/content/docs/v1/guides/agents/qwant-search-agent-with-sources.mdx @@ -9,11 +9,11 @@ This notebook tutorial walks through the implementation of a web agent that uses diff --git a/cloud/content/docs/v1/guides/agents/sql-agent.mdx b/cloud/content/docs/v1/guides/agents/sql-agent.mdx index d413ffdb1..c9ba174ee 100644 --- a/cloud/content/docs/v1/guides/agents/sql-agent.mdx +++ b/cloud/content/docs/v1/guides/agents/sql-agent.mdx @@ -9,11 +9,11 @@ In this recipe, we will be using OpenAI GPT-4o-mini to act as a co-pilot for a D @@ -251,7 +251,7 @@ await librarian.run() Note that the SQL statements in the dialogue are there for development purposes. -Having established that we can have a quality conversation with our `Librarian`, we can now enhance our prompt. However, we must ensure that these improvements don't compromise the Librarian's core functionality. 
Check out [Evaluating SQL Agent](/docs/mirascope/guides/evals/evaluating-sql-agent) for an in-depth guide on how we evaluate the quality of our prompt. +Having established that we can have a quality conversation with our `Librarian`, we can now enhance our prompt. However, we must ensure that these improvements don't compromise the Librarian's core functionality. Check out [Evaluating SQL Agent](/docs/v1/guides/evals/evaluating-sql-agent) for an in-depth guide on how we evaluate the quality of our prompt.
    diff --git a/cloud/content/docs/v1/guides/agents/web-search-agent.mdx b/cloud/content/docs/v1/guides/agents/web-search-agent.mdx index faaa7b5e5..e42c73338 100644 --- a/cloud/content/docs/v1/guides/agents/web-search-agent.mdx +++ b/cloud/content/docs/v1/guides/agents/web-search-agent.mdx @@ -9,9 +9,9 @@ In this recipe, we'll explore using Mirascope to enhance our Large Language Mode @@ -45,7 +45,7 @@ We'll use two pre-made tools from `mirascope.tools`: The `DuckDuckGoSearch` tool provides search results with URLs while `ParseURLContent` handles the content extraction. We save our search results into `search_history` to provide context for future searches. -For a full list of available pre-made tools and their capabilities, check out the Pre-made Tools documentation. +For a full list of available pre-made tools and their capabilities, check out the Pre-made Tools documentation. ## Add Q&A Functionality @@ -219,7 +219,7 @@ await web_assistant.run() Note that by giving the LLM the current date, it'll automatically search for the most up-to-date information. -Check out [Evaluating Web Search Agent](/docs/mirascope/guides/evals/evaluating-web-search-agent) for an in-depth guide on how we evaluate the quality of our agent. +Check out [Evaluating Web Search Agent](/docs/v1/guides/evals/evaluating-web-search-agent) for an in-depth guide on how we evaluate the quality of our agent.
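Since the recipe's code is elided by these hunks, here is a minimal sketch of wiring the two pre-made tools named above into a call (a sketch under assumptions: the optional tools extra is installed, and the prompt and model are illustrative, not the tutorial's exact code):

```python
from mirascope.core import openai
from mirascope.tools import DuckDuckGoSearch, ParseURLContent


@openai.call("gpt-4o-mini", tools=[DuckDuckGoSearch, ParseURLContent])
def web_search(question: str) -> str:
    return f"Search the web to answer: {question}"


response = web_search("What is Mirascope?")
if tool := response.tool:
    print(tool.call())  # execute the tool the model requested
```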
diff --git a/cloud/content/docs/v1/guides/evals/evaluating-documentation-agent.mdx b/cloud/content/docs/v1/guides/evals/evaluating-documentation-agent.mdx index 4e91750e5..484e3f7cd 100644 --- a/cloud/content/docs/v1/guides/evals/evaluating-documentation-agent.mdx +++ b/cloud/content/docs/v1/guides/evals/evaluating-documentation-agent.mdx @@ -5,19 +5,19 @@ description: Implement testing and evaluation strategies for a documentation age # Evaluating Documentation Agent -In this recipe, we will be using taking our [Documentation Agent](/docs/mirascope/guides/agents/documentation-agent) example and running evaluations on the LLM call. We will be exploring various different evaluations we can run to ensure quality and expected behavior. +In this recipe, we will be taking our [Documentation Agent](/docs/v1/guides/agents/documentation-agent) example and running evaluations on the LLM call. We will be exploring various evaluations we can run to ensure quality and expected behavior. -We will be using our `DocumentationAgent` for our evaluations. For a detailed explanation regarding this code snippet, refer to the [Documentation Agent Tutorial](/docs/mirascope/guides/agents/documentation-agent). +We will be using our `DocumentationAgent` for our evaluations. For a detailed explanation regarding this code snippet, refer to the [Documentation Agent Tutorial](/docs/v1/guides/agents/documentation-agent). ## Setup diff --git a/cloud/content/docs/v1/guides/evals/evaluating-sql-agent.mdx b/cloud/content/docs/v1/guides/evals/evaluating-sql-agent.mdx index 50e40afe7..eb158a572 100644 --- a/cloud/content/docs/v1/guides/evals/evaluating-sql-agent.mdx +++ b/cloud/content/docs/v1/guides/evals/evaluating-sql-agent.mdx @@ -5,20 +5,20 @@ description: Techniques for evaluating SQL agent performance. This guide demonst # Evaluating Generating SQL with LLM -In this recipe, we will be using taking our [SQL Agent](/docs/mirascope/guides/agents/sql-agent) example and running evaluations on LLM call. We will be exploring various different evaluations we can run to ensure quality and expected behavior. +In this recipe, we will be taking our [SQL Agent](/docs/v1/guides/agents/sql-agent) example and running evaluations on the LLM call. We will be exploring various evaluations we can run to ensure quality and expected behavior. -We will be using our `LibrarianAgent` for our evaluations. For a detailed explanation regarding this code snippet, refer to the [SQL Agent Tutorial](/docs/mirascope/guides/agents/sql-agent). +We will be using our `LibrarianAgent` for our evaluations. For a detailed explanation regarding this code snippet, refer to the [SQL Agent Tutorial](/docs/v1/guides/agents/sql-agent). ## Setup diff --git a/cloud/content/docs/v1/guides/evals/evaluating-web-search-agent.mdx b/cloud/content/docs/v1/guides/evals/evaluating-web-search-agent.mdx index c37579cbf..e06bc9199 100644 --- a/cloud/content/docs/v1/guides/evals/evaluating-web-search-agent.mdx +++ b/cloud/content/docs/v1/guides/evals/evaluating-web-search-agent.mdx @@ -5,20 +5,20 @@ description: Learn how to evaluate and improve the quality of a web search agent # Evaluating Web Search Agent with LLM -In this recipe, we will be using taking our [Web Search Agent Tutorial](/docs/mirascope/guides/agents/web-search-agent) and running evaluations on the LLM call. We will be exploring writing a context relevance test since that is one of the most important aspects of web search.
+In this recipe, we will be taking our [Web Search Agent Tutorial](/docs/v1/guides/agents/web-search-agent) and running evaluations on the LLM call. We will be writing a context relevance test since that is one of the most important aspects of web search. -We will be using our `WebAssistantAgent` for our evaluations. For a detailed explanation regarding this code snippet, refer to the [Web Search Agent Tutorial](/docs/mirascope/guides/agents/web-search-agent). +We will be using our `WebAssistantAgent` for our evaluations. For a detailed explanation regarding this code snippet, refer to the [Web Search Agent Tutorial](/docs/v1/guides/agents/web-search-agent). ## Setup diff --git a/cloud/content/docs/v1/guides/getting-started/dynamic-configuration-and-chaining.mdx b/cloud/content/docs/v1/guides/getting-started/dynamic-configuration-and-chaining.mdx index fe0c68029..b9ae0b732 100644 --- a/cloud/content/docs/v1/guides/getting-started/dynamic-configuration-and-chaining.mdx +++ b/cloud/content/docs/v1/guides/getting-started/dynamic-configuration-and-chaining.mdx @@ -9,7 +9,7 @@ This notebook provides a detailed introduction to using Dynamic Configuration an ## Setup -First, let's install Mirascope and set up our environment. We'll use OpenAI for our examples, but you can adapt these to other providers supported by Mirascope. For more information on supported providers, see the [Calls documentation](/docs/mirascope/learn/calls). +First, let's install Mirascope and set up our environment. We'll use OpenAI for our examples, but you can adapt these to other providers supported by Mirascope. For more information on supported providers, see the [Calls documentation](/docs/v1/learn/calls). ```python @@ -80,7 +80,7 @@ print(response.content) I recommend **"An Ember in the Ashes"** by Sabaa Tahir. This young adult fantasy novel is set in a brutal, ancient Rome-inspired world where the oppressed must fight against a tyrannical regime. The story follows Laia, a young woman who becomes a spy for the resistance to save her brother, and Elias, a soldier who wants to escape the oppressive society. The book is rich in world-building, features complex characters, and explores themes of freedom, loyalty, and sacrifice. It's an engaging read for young adults looking for an exciting fantasy adventure! ``` -When using string templates, you can use computed fields to dynamically generate or modify template variables used in your prompt. For more information on prompt templates, see the [Prompts documentation](/docs/mirascope/learn/prompts). +When using string templates, you can use computed fields to dynamically generate or modify template variables used in your prompt. For more information on prompt templates, see the [Prompts documentation](/docs/v1/learn/prompts). ```python @@ -136,7 +136,7 @@ else: The Girl with the Dragon Tattoo by Stieg Larsson (Mystery) ``` -For more advanced usage of tools, including the `BaseToolKit` class, please refer to the [Tools documentation](/docs/mirascope/learn/tools) and the [Tools and Agents Tutorial](/docs/mirascope/guides/getting-started/tools-and-agents). +For more advanced usage of tools, including the `BaseToolKit` class, please refer to the [Tools documentation](/docs/v1/learn/tools) and the [Tools and Agents Tutorial](/docs/v1/guides/getting-started/tools-and-agents).
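A minimal sketch of the computed-fields pattern the hunk above refers to (the template, fields, and model are illustrative, not the notebook's exact code):

```python
from mirascope.core import BaseDynamicConfig, openai, prompt_template


@openai.call("gpt-4o-mini")
@prompt_template("Recommend a {genre} book suitable for a {reading_level} reader")
def recommend_book(genre: str, age: int) -> BaseDynamicConfig:
    # `reading_level` is computed at runtime and injected into the template.
    reading_level = "beginner" if age < 12 else "advanced"
    return {"computed_fields": {"reading_level": reading_level}}


print(recommend_book("fantasy", age=10).content)
```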
## Chaining diff --git a/cloud/content/docs/v1/guides/getting-started/structured-outputs.mdx b/cloud/content/docs/v1/guides/getting-started/structured-outputs.mdx index a97d31057..d23ab0e63 100644 --- a/cloud/content/docs/v1/guides/getting-started/structured-outputs.mdx +++ b/cloud/content/docs/v1/guides/getting-started/structured-outputs.mdx @@ -7,7 +7,7 @@ description: Learn techniques for extracting structured data from LLM outputs us Large Language Models (LLMs) generate unstructured text data by default. Structured outputs are essential for building reliable and efficient AI applications, and this notebook demonstrates various techniques for structuring LLM outputs using Mirascope. -These methods help ensure consistency, type safety, and easier integration of LLM responses into your application. For more detailed information on structured outputs in Mirascope, refer to the [Response Models](/docs/mirascope/learn/response_models) documentation, [JSON Mode](/docs/mirascope/learn/json_mode) documentation, and [Output Parser](/docs/mirascope/learn/output_parsers) documentation. +These methods help ensure consistency, type safety, and easier integration of LLM responses into your application. For more detailed information on structured outputs in Mirascope, refer to the [Response Models](/docs/v1/learn/response_models) documentation, [JSON Mode](/docs/v1/learn/json_mode) documentation, and [Output Parser](/docs/v1/learn/output_parsers) documentation. ## Setup @@ -319,6 +319,6 @@ These techniques provide a solid foundation for structuring outputs in your LLM If you like what you've seen so far, [give us a star](https://github.com/Mirascope/mirascope) and [join our community](https://mirascope.com/discord-invite). -For more advanced topics and best practices, refer to the Mirascope [Response Models](/docs/mirascope/learn/response_models) documentation, [JSON Mode](/docs/mirascope/learn/json_mode) documentation, and [Output Parsers](/docs/mirascope/learn/output_parsers) documentation. +For more advanced topics and best practices, refer to the Mirascope [Response Models](/docs/v1/learn/response_models) documentation, [JSON Mode](/docs/v1/learn/json_mode) documentation, and [Output Parsers](/docs/v1/learn/output_parsers) documentation. -We also recommend taking a look at our [Tenacity Integration](/docs/mirascope/learn/retries) to learn how to build more robust pipelines that automatically re-insert validation errors into a subsequent call, enabling the LLM to learn from its mistakes and (hopefully) output the correct answer on the next attempt. +We also recommend taking a look at our [Tenacity Integration](/docs/v1/learn/retries) to learn how to build more robust pipelines that automatically re-insert validation errors into a subsequent call, enabling the LLM to learn from its mistakes and (hopefully) output the correct answer on the next attempt. 
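To ground the structured-outputs links above, a minimal `response_model` sketch (the schema and prompt are illustrative):

```python
from pydantic import BaseModel

from mirascope.core import openai


class Book(BaseModel):
    title: str
    author: str


@openai.call("gpt-4o-mini", response_model=Book)
def extract_book(text: str) -> str:
    return f"Extract the book from this text: {text}"


book = extract_book("The Name of the Wind by Patrick Rothfuss")
print(book.title, "by", book.author)  # a validated Book instance, not raw text
```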
diff --git a/cloud/content/docs/v1/guides/getting-started/tools-and-agents.mdx b/cloud/content/docs/v1/guides/getting-started/tools-and-agents.mdx index a7db87b01..2c57d4732 100644 --- a/cloud/content/docs/v1/guides/getting-started/tools-and-agents.mdx +++ b/cloud/content/docs/v1/guides/getting-started/tools-and-agents.mdx @@ -14,8 +14,8 @@ In this notebook we'll implement a `WebSearchAgent` to demonstrate how to built For more detailed information on these concepts, refer to the following Mirascope documentation: -- [Tools documentation](/docs/mirascope/learn/tools) -- [Agents documentation](/docs/mirascope/learn/agents) +- [Tools documentation](/docs/v1/learn/tools) +- [Agents documentation](/docs/v1/learn/agents) ## Setting Up the Environment @@ -33,7 +33,7 @@ import os os.environ["OPENAI_API_KEY"] = "your-api-key-here" ``` -For more information on setting up Mirascope and its dependencies, see the [Mirascope getting started guide](/docs/mirascope/guides/getting-started/quickstart). +For more information on setting up Mirascope and its dependencies, see the [Mirascope getting started guide](/docs/v1/guides/getting-started/quickstart). ## Building a Basic Chatbot @@ -168,7 +168,7 @@ print(tool_type.tool_schema()) This schema provides the LLM with information about the tool's name, description, and expected input parameters. The LLM uses this information to determine when and how to use the tool during its reasoning process. -For more details on implementing and using Tools in Mirascope, see the [Tools documentation](/docs/mirascope/learn/tools). +For more details on implementing and using Tools in Mirascope, see the [Tools documentation](/docs/v1/learn/tools). ## Tools With Access To Agent State @@ -395,10 +395,10 @@ For more advanced usage, you can explore concepts like: For more techniques and best practices in using Mirascope, refer to the following documentation: -- [Tools](/docs/mirascope/learn/tools) -- [Agents](/docs/mirascope/learn/agents) -- [Streams](/docs/mirascope/learn/streams) -- [Chaining](/docs/mirascope/learn/chaining) +- [Tools](/docs/v1/learn/tools) +- [Agents](/docs/v1/learn/agents) +- [Streams](/docs/v1/learn/streams) +- [Chaining](/docs/v1/learn/chaining) ## 9. Conclusion diff --git a/cloud/content/docs/v1/guides/more-advanced/code-generation-and-execution.mdx b/cloud/content/docs/v1/guides/more-advanced/code-generation-and-execution.mdx index aca42623c..dc16288e0 100644 --- a/cloud/content/docs/v1/guides/more-advanced/code-generation-and-execution.mdx +++ b/cloud/content/docs/v1/guides/more-advanced/code-generation-and-execution.mdx @@ -9,10 +9,10 @@ In this recipe, we will be using OpenAI GPT-4o-mini to use write code to solve p diff --git a/cloud/content/docs/v1/guides/more-advanced/document-segmentation.mdx b/cloud/content/docs/v1/guides/more-advanced/document-segmentation.mdx index a1771f84a..ef6b9dcc9 100644 --- a/cloud/content/docs/v1/guides/more-advanced/document-segmentation.mdx +++ b/cloud/content/docs/v1/guides/more-advanced/document-segmentation.mdx @@ -9,9 +9,9 @@ In this recipe, we go over how to do semantic document segmentation. 
Topics and diff --git a/cloud/content/docs/v1/guides/more-advanced/extract-from-pdf.mdx b/cloud/content/docs/v1/guides/more-advanced/extract-from-pdf.mdx index 299123f56..6ef451424 100644 --- a/cloud/content/docs/v1/guides/more-advanced/extract-from-pdf.mdx +++ b/cloud/content/docs/v1/guides/more-advanced/extract-from-pdf.mdx @@ -9,9 +9,9 @@ This recipe demonstrates how to leverage Large Language Models (LLMs) -- specifi diff --git a/cloud/content/docs/v1/guides/more-advanced/extraction-using-vision.mdx b/cloud/content/docs/v1/guides/more-advanced/extraction-using-vision.mdx index 895ee5129..4776fcfd4 100644 --- a/cloud/content/docs/v1/guides/more-advanced/extraction-using-vision.mdx +++ b/cloud/content/docs/v1/guides/more-advanced/extraction-using-vision.mdx @@ -9,9 +9,9 @@ This recipe shows how to use LLMs — in this case, OpenAI GPT-4o and Anthropic diff --git a/cloud/content/docs/v1/guides/more-advanced/generating-captions.mdx b/cloud/content/docs/v1/guides/more-advanced/generating-captions.mdx index dd7d1e3e4..3550704f7 100644 --- a/cloud/content/docs/v1/guides/more-advanced/generating-captions.mdx +++ b/cloud/content/docs/v1/guides/more-advanced/generating-captions.mdx @@ -9,9 +9,9 @@ In this recipe, we go over how to use LLMs to generate a descriptive caption set diff --git a/cloud/content/docs/v1/guides/more-advanced/generating-synthetic-data.mdx b/cloud/content/docs/v1/guides/more-advanced/generating-synthetic-data.mdx index 8212b1407..2a4357aee 100644 --- a/cloud/content/docs/v1/guides/more-advanced/generating-synthetic-data.mdx +++ b/cloud/content/docs/v1/guides/more-advanced/generating-synthetic-data.mdx @@ -17,9 +17,9 @@ and more, LLMs are far easier to use and yield better (or the only feasible) res diff --git a/cloud/content/docs/v1/guides/more-advanced/knowledge-graph.mdx b/cloud/content/docs/v1/guides/more-advanced/knowledge-graph.mdx index ef8432feb..f4b6fec63 100644 --- a/cloud/content/docs/v1/guides/more-advanced/knowledge-graph.mdx +++ b/cloud/content/docs/v1/guides/more-advanced/knowledge-graph.mdx @@ -9,9 +9,9 @@ Often times, data is messy and not always stored in a structured manner ready fo diff --git a/cloud/content/docs/v1/guides/more-advanced/llm-validation-with-retries.mdx b/cloud/content/docs/v1/guides/more-advanced/llm-validation-with-retries.mdx index 1acc2ea37..4be8c90c0 100644 --- a/cloud/content/docs/v1/guides/more-advanced/llm-validation-with-retries.mdx +++ b/cloud/content/docs/v1/guides/more-advanced/llm-validation-with-retries.mdx @@ -9,10 +9,10 @@ This recipe demonstrates how to leverage Large Language Models (LLMs) -- specifi diff --git a/cloud/content/docs/v1/guides/more-advanced/named-entity-recognition.mdx b/cloud/content/docs/v1/guides/more-advanced/named-entity-recognition.mdx index 7e046ba61..15532e00f 100644 --- a/cloud/content/docs/v1/guides/more-advanced/named-entity-recognition.mdx +++ b/cloud/content/docs/v1/guides/more-advanced/named-entity-recognition.mdx @@ -9,9 +9,9 @@ This guide demonstrates techniques to perform Named Entity Recognition (NER) usi diff --git a/cloud/content/docs/v1/guides/more-advanced/o1-style-thinking.mdx b/cloud/content/docs/v1/guides/more-advanced/o1-style-thinking.mdx index 96fc246a2..9c01d2983 100644 --- a/cloud/content/docs/v1/guides/more-advanced/o1-style-thinking.mdx +++ b/cloud/content/docs/v1/guides/more-advanced/o1-style-thinking.mdx @@ -10,9 +10,9 @@ This makes LLMs to breakdown the task in multiple steps and generate a coherent diff --git a/cloud/content/docs/v1/guides/more-advanced/pii-scrubbing.mdx 
b/cloud/content/docs/v1/guides/more-advanced/pii-scrubbing.mdx index 54ce3fe3b..9318de37e 100644 --- a/cloud/content/docs/v1/guides/more-advanced/pii-scrubbing.mdx +++ b/cloud/content/docs/v1/guides/more-advanced/pii-scrubbing.mdx @@ -9,9 +9,9 @@ In this recipe, we go over how to detect Personal Identifiable Information, or P diff --git a/cloud/content/docs/v1/guides/more-advanced/query-plan.mdx b/cloud/content/docs/v1/guides/more-advanced/query-plan.mdx index cf3444620..aa86b1ed4 100644 --- a/cloud/content/docs/v1/guides/more-advanced/query-plan.mdx +++ b/cloud/content/docs/v1/guides/more-advanced/query-plan.mdx @@ -9,12 +9,12 @@ This recipe shows how to use LLMs — in this case, Anthropic’s Claude 3.5 Son diff --git a/cloud/content/docs/v1/guides/more-advanced/removing-semantic-duplicates.mdx b/cloud/content/docs/v1/guides/more-advanced/removing-semantic-duplicates.mdx index 364c0db1e..dd67086c6 100644 --- a/cloud/content/docs/v1/guides/more-advanced/removing-semantic-duplicates.mdx +++ b/cloud/content/docs/v1/guides/more-advanced/removing-semantic-duplicates.mdx @@ -9,9 +9,9 @@ description: Learn how to use LLMs to identify and remove semantically equivalen diff --git a/cloud/content/docs/v1/guides/more-advanced/search-with-sources.mdx b/cloud/content/docs/v1/guides/more-advanced/search-with-sources.mdx index 9715e4fb7..110d80113 100644 --- a/cloud/content/docs/v1/guides/more-advanced/search-with-sources.mdx +++ b/cloud/content/docs/v1/guides/more-advanced/search-with-sources.mdx @@ -9,11 +9,11 @@ This recipe shows how to use LLMs — in this case, GPT 4o mini — to answer qu diff --git a/cloud/content/docs/v1/guides/more-advanced/speech-transcription.mdx b/cloud/content/docs/v1/guides/more-advanced/speech-transcription.mdx index c7320c897..cb22057f5 100644 --- a/cloud/content/docs/v1/guides/more-advanced/speech-transcription.mdx +++ b/cloud/content/docs/v1/guides/more-advanced/speech-transcription.mdx @@ -9,9 +9,9 @@ In this recipe, we go over how to transcribe the speech from an audio file using diff --git a/cloud/content/docs/v1/guides/more-advanced/support-ticket-routing.mdx b/cloud/content/docs/v1/guides/more-advanced/support-ticket-routing.mdx index fc071255e..248122f40 100644 --- a/cloud/content/docs/v1/guides/more-advanced/support-ticket-routing.mdx +++ b/cloud/content/docs/v1/guides/more-advanced/support-ticket-routing.mdx @@ -9,11 +9,11 @@ This recipe shows how to take an incoming support ticket/call transcript then us diff --git a/cloud/content/docs/v1/guides/more-advanced/text-classification.mdx b/cloud/content/docs/v1/guides/more-advanced/text-classification.mdx index bf46bba14..3ca4f3bb8 100644 --- a/cloud/content/docs/v1/guides/more-advanced/text-classification.mdx +++ b/cloud/content/docs/v1/guides/more-advanced/text-classification.mdx @@ -10,9 +10,9 @@ In this recipe we’ll explore using Mirascope to implement binary classificatio diff --git a/cloud/content/docs/v1/guides/more-advanced/text-summarization.mdx b/cloud/content/docs/v1/guides/more-advanced/text-summarization.mdx index 9523d3ff9..d477e909b 100644 --- a/cloud/content/docs/v1/guides/more-advanced/text-summarization.mdx +++ b/cloud/content/docs/v1/guides/more-advanced/text-summarization.mdx @@ -9,9 +9,9 @@ In this recipe, we show some techniques to improve an LLM’s ability to summari diff --git a/cloud/content/docs/v1/guides/more-advanced/text-translation.mdx b/cloud/content/docs/v1/guides/more-advanced/text-translation.mdx index 235809890..599bd71ca 100644 --- 
a/cloud/content/docs/v1/guides/more-advanced/text-translation.mdx +++ b/cloud/content/docs/v1/guides/more-advanced/text-translation.mdx @@ -17,9 +17,9 @@ These techniques can be applied to various LLMs, including OpenAI's GPT-4, Anthr diff --git a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/chain-of-verification.mdx b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/chain-of-verification.mdx index 791d5c708..3967bb01b 100644 --- a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/chain-of-verification.mdx +++ b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/chain-of-verification.mdx @@ -9,9 +9,9 @@ This recipe demonstrates how to implement the Chain of Verification technique us diff --git a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/decomposed-prompting.mdx b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/decomposed-prompting.mdx index b8a287f54..ab7896594 100644 --- a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/decomposed-prompting.mdx +++ b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/decomposed-prompting.mdx @@ -9,10 +9,10 @@ This recipe demonstrates how to implement the Decomposed Prompting (DECOMP) tech diff --git a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/diverse.mdx b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/diverse.mdx index 93983baac..37f774800 100644 --- a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/diverse.mdx +++ b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/diverse.mdx @@ -9,9 +9,9 @@ This recipe demonstrates how to implement the DiVeRSe (Diverse Verifier on Reaso diff --git a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/least-to-most.mdx b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/least-to-most.mdx index 46f098e24..1a8a7b98f 100644 --- a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/least-to-most.mdx +++ b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/least-to-most.mdx @@ -9,9 +9,9 @@ This recipe demonstrates how to implement the Least to Most technique using Larg diff --git a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/mixture-of-reasoning.mdx b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/mixture-of-reasoning.mdx index 3b2bbdf82..2d4a13e96 100644 --- a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/mixture-of-reasoning.mdx +++ b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/mixture-of-reasoning.mdx @@ -9,9 +9,9 @@ Mixture of Reasoning is a prompt engineering technique where you set up multiple @@ -113,9 +113,9 @@ print(best_response.reasoning) This implementation consists of several key components: 1. Three different prompt engineering techniques: - - `cot_call`: [Chain of Thought reasoning](/docs/mirascope/guides/prompt-engineering/text-based/chain-of-thought) - - `emotion_prompting_call`: [Emotion prompting](/docs/mirascope/guides/prompt-engineering/text-based/emotion-prompting) - - `rar_call`: [Rephrase and Respond](/docs/mirascope/guides/prompt-engineering/text-based/rephrase-and-respond) + - `cot_call`: [Chain of Thought reasoning](/docs/v1/guides/prompt-engineering/text-based/chain-of-thought) + - `emotion_prompting_call`: [Emotion prompting](/docs/v1/guides/prompt-engineering/text-based/emotion-prompting) + - `rar_call`: [Rephrase and Respond](/docs/v1/guides/prompt-engineering/text-based/rephrase-and-respond) 2. 
A `BestResponse` model to structure the output of the final evaluation. diff --git a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/prompt-paraphrasing.mdx b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/prompt-paraphrasing.mdx index 609e0f368..770c9ffc0 100644 --- a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/prompt-paraphrasing.mdx +++ b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/prompt-paraphrasing.mdx @@ -9,9 +9,9 @@ description: Explore Prompt Paraphrasing, a technique for creating ensembles of diff --git a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/reverse-chain-of-thought.mdx b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/reverse-chain-of-thought.mdx index 68eaea333..642da01cf 100644 --- a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/reverse-chain-of-thought.mdx +++ b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/reverse-chain-of-thought.mdx @@ -9,9 +9,9 @@ This recipe demonstrates how to implement the Reverse Chain of Thought technique diff --git a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/self-consistency.mdx b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/self-consistency.mdx index a8b8f340b..ea3de45cc 100644 --- a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/self-consistency.mdx +++ b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/self-consistency.mdx @@ -9,9 +9,9 @@ This recipe demonstrates how to implement the Self-Consistency technique using L diff --git a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/self-refine.mdx b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/self-refine.mdx index f2bc8aa14..3ebfdeace 100644 --- a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/self-refine.mdx +++ b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/self-refine.mdx @@ -9,8 +9,8 @@ This recipe demonstrates how to implement the Self-Refine technique using Large diff --git a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/sim-to-m.mdx b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/sim-to-m.mdx index d408e4a61..18362e075 100644 --- a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/sim-to-m.mdx +++ b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/sim-to-m.mdx @@ -9,8 +9,8 @@ This recipe demonstrates how to implement the Sim to M (Simulation Theory of Min diff --git a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/skeleton-of-thought.mdx b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/skeleton-of-thought.mdx index 84cc7ce65..6c19b92a3 100644 --- a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/skeleton-of-thought.mdx +++ b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/skeleton-of-thought.mdx @@ -11,9 +11,9 @@ This recipe demonstrates how to implement the Skeleton of Thought technique usin diff --git a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/step-back.mdx b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/step-back.mdx index c6369b0c3..cc0c46afb 100644 --- a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/step-back.mdx +++ b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/step-back.mdx @@ -9,8 +9,8 @@ This recipe demonstrates how to implement the Step-back prompting technique usin diff --git 
a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/system-to-attention.mdx b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/system-to-attention.mdx index f950b7e1f..4e346c434 100644 --- a/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/system-to-attention.mdx +++ b/cloud/content/docs/v1/guides/prompt-engineering/chaining-based/system-to-attention.mdx @@ -9,9 +9,9 @@ This recipe demonstrates how to implement the System to Attention (S2A) techniqu diff --git a/cloud/content/docs/v1/guides/prompt-engineering/text-based/chain-of-thought.mdx b/cloud/content/docs/v1/guides/prompt-engineering/text-based/chain-of-thought.mdx index 2e27d9582..4a8e7138f 100644 --- a/cloud/content/docs/v1/guides/prompt-engineering/text-based/chain-of-thought.mdx +++ b/cloud/content/docs/v1/guides/prompt-engineering/text-based/chain-of-thought.mdx @@ -9,8 +9,8 @@ description: Implement chain-of-thought prompting to improve LLM reasoning. This diff --git a/cloud/content/docs/v1/guides/prompt-engineering/text-based/common-phrases.mdx b/cloud/content/docs/v1/guides/prompt-engineering/text-based/common-phrases.mdx index d626c6b00..eb7cca4dd 100644 --- a/cloud/content/docs/v1/guides/prompt-engineering/text-based/common-phrases.mdx +++ b/cloud/content/docs/v1/guides/prompt-engineering/text-based/common-phrases.mdx @@ -9,8 +9,8 @@ Sometimes, an LLM can appear to know more or less about a topic depending on the diff --git a/cloud/content/docs/v1/guides/prompt-engineering/text-based/contrastive-chain-of-thought.mdx b/cloud/content/docs/v1/guides/prompt-engineering/text-based/contrastive-chain-of-thought.mdx index 9c3e00167..a0c23cc8a 100644 --- a/cloud/content/docs/v1/guides/prompt-engineering/text-based/contrastive-chain-of-thought.mdx +++ b/cloud/content/docs/v1/guides/prompt-engineering/text-based/contrastive-chain-of-thought.mdx @@ -9,8 +9,8 @@ description: Learn to apply Contrastive Chain of Thought by providing both corre diff --git a/cloud/content/docs/v1/guides/prompt-engineering/text-based/emotion-prompting.mdx b/cloud/content/docs/v1/guides/prompt-engineering/text-based/emotion-prompting.mdx index fb6f4f999..af4bf21a4 100644 --- a/cloud/content/docs/v1/guides/prompt-engineering/text-based/emotion-prompting.mdx +++ b/cloud/content/docs/v1/guides/prompt-engineering/text-based/emotion-prompting.mdx @@ -9,8 +9,8 @@ description: Enhance LLM responses by adding emotionally significant phrases to diff --git a/cloud/content/docs/v1/guides/prompt-engineering/text-based/plan-and-solve.mdx b/cloud/content/docs/v1/guides/prompt-engineering/text-based/plan-and-solve.mdx index 134a2ecaa..6790a501a 100644 --- a/cloud/content/docs/v1/guides/prompt-engineering/text-based/plan-and-solve.mdx +++ b/cloud/content/docs/v1/guides/prompt-engineering/text-based/plan-and-solve.mdx @@ -9,8 +9,8 @@ description: Implement the Plan and Solve technique, a variation of Chain of Tho diff --git a/cloud/content/docs/v1/guides/prompt-engineering/text-based/rephrase-and-respond.mdx b/cloud/content/docs/v1/guides/prompt-engineering/text-based/rephrase-and-respond.mdx index 6f10c40ac..c9e95112b 100644 --- a/cloud/content/docs/v1/guides/prompt-engineering/text-based/rephrase-and-respond.mdx +++ b/cloud/content/docs/v1/guides/prompt-engineering/text-based/rephrase-and-respond.mdx @@ -9,8 +9,8 @@ description: Learn to implement the Rephrase and Respond technique to improve LL diff --git a/cloud/content/docs/v1/guides/prompt-engineering/text-based/rereading.mdx 
b/cloud/content/docs/v1/guides/prompt-engineering/text-based/rereading.mdx index cda26d174..b7ea3f5ef 100644 --- a/cloud/content/docs/v1/guides/prompt-engineering/text-based/rereading.mdx +++ b/cloud/content/docs/v1/guides/prompt-engineering/text-based/rereading.mdx @@ -13,8 +13,8 @@ description: Explore the Rereading technique that improves LLM performance by as diff --git a/cloud/content/docs/v1/guides/prompt-engineering/text-based/role-prompting.mdx b/cloud/content/docs/v1/guides/prompt-engineering/text-based/role-prompting.mdx index fd8f2562e..309ff4c39 100644 --- a/cloud/content/docs/v1/guides/prompt-engineering/text-based/role-prompting.mdx +++ b/cloud/content/docs/v1/guides/prompt-engineering/text-based/role-prompting.mdx @@ -9,8 +9,8 @@ description: Implement role-based prompting to improve LLM responses. This guide diff --git a/cloud/content/docs/v1/guides/prompt-engineering/text-based/tabular-chain-of-thought.mdx b/cloud/content/docs/v1/guides/prompt-engineering/text-based/tabular-chain-of-thought.mdx index 31bdc1e0e..bf3bb1f3a 100644 --- a/cloud/content/docs/v1/guides/prompt-engineering/text-based/tabular-chain-of-thought.mdx +++ b/cloud/content/docs/v1/guides/prompt-engineering/text-based/tabular-chain-of-thought.mdx @@ -9,8 +9,8 @@ description: Implement the Tabular Chain of Thought technique to structure LLM r diff --git a/cloud/content/docs/v1/guides/prompt-engineering/text-based/thread-of-thought.mdx b/cloud/content/docs/v1/guides/prompt-engineering/text-based/thread-of-thought.mdx index 96b5acc8d..86fd882ab 100644 --- a/cloud/content/docs/v1/guides/prompt-engineering/text-based/thread-of-thought.mdx +++ b/cloud/content/docs/v1/guides/prompt-engineering/text-based/thread-of-thought.mdx @@ -5,12 +5,12 @@ description: Explore Thread of Thought (THoT), an extension of Chain of Thought # Thread of Thought -[Thread of Thought](https://arxiv.org/pdf/2311.08734) (THoT) is an extension of zero-shot [Chain of Thought](/docs/mirascope/guides/prompt-engineering/text-based/chain-of-thought) where the request to walk through the reasoning steps is improved. The paper tests the results of various phrases, but finds the best to be "Walk me through this context in manageable parts step by step, summarizing and analyzing as we go." It is applicable to reasoning and mathematical tasks just like CoT, but is most useful for tasks with retrieval / large amounts of context and Q and A on this context. +[Thread of Thought](https://arxiv.org/pdf/2311.08734) (THoT) is an extension of zero-shot [Chain of Thought](/docs/v1/guides/prompt-engineering/text-based/chain-of-thought) where the request to walk through the reasoning steps is improved. The paper tests the results of various phrases, but finds the best to be "Walk me through this context in manageable parts step by step, summarizing and analyzing as we go." It is applicable to reasoning and mathematical tasks just like CoT, but is most useful for tasks with retrieval / large amounts of context and Q and A on this context. diff --git a/cloud/content/docs/v1/index.mdx b/cloud/content/docs/v1/index.mdx index 4927ec650..f99756f8c 100644 --- a/cloud/content/docs/v1/index.mdx +++ b/cloud/content/docs/v1/index.mdx @@ -18,7 +18,7 @@ Mirascope is a powerful, flexible, and user-friendly library that simplifies the Whether you're generating text, extracting structured information, or developing complex AI-driven agent systems, Mirascope provides the tools you need to streamline your development process and create powerful, robust applications.
      - + @@ -108,6 +108,6 @@ For comparison, here's how you would achieve the same result using the provider' /> -If you'd like a more in-depth guide to getting started with Mirascope, check out our [quickstart guide](/docs/mirascope/guides/getting-started/quickstart/) +If you'd like a more in-depth guide to getting started with Mirascope, check out our [quickstart guide](/docs/v1/guides/getting-started/quickstart/) We're excited to see what you'll build with Mirascope, and we're here to help! Don't hesitate to reach out :) diff --git a/cloud/content/docs/v1/learn/agents.mdx b/cloud/content/docs/v1/learn/agents.mdx index 93642c14f..5bd540179 100644 --- a/cloud/content/docs/v1/learn/agents.mdx +++ b/cloud/content/docs/v1/learn/agents.mdx @@ -12,7 +12,7 @@ When working with Large Language Models (LLMs), an "agent" refers to an autonomo In this section we will implement a toy `Librarian` agent to demonstrate key concepts in Mirascope that will help you build agents. - If you haven't already, we recommend first reading the section on [Tools](/docs/mirascope/learn/tools) + If you haven't already, we recommend first reading the section on [Tools](/docs/v1/learn/tools) @@ -631,4 +631,4 @@ Librarian().run() This section is just the tip of the iceberg when it comes to building agents, implementing just one type of simple agent flow. It's important to remember that "agent" is quite a general term and can mean different things for different use-cases. Mirascope's various features make building agents easier, but it will be up to you to determine the architecture that best suits your goals. -Next, we recommend taking a look at our [Agent Tutorials](/docs/mirascope/guides/agents/web-search-agent) to see examples of more complex, real-world agents. \ No newline at end of file +Next, we recommend taking a look at our [Agent Tutorials](/docs/v1/guides/agents/web-search-agent) to see examples of more complex, real-world agents. \ No newline at end of file diff --git a/cloud/content/docs/v1/learn/async.mdx b/cloud/content/docs/v1/learn/async.mdx index 9d1892f01..a10ceaa34 100644 --- a/cloud/content/docs/v1/learn/async.mdx +++ b/cloud/content/docs/v1/learn/async.mdx @@ -44,7 +44,7 @@ Asynchronous programming is a crucial concept when building applications with LL ## Basic Usage and Syntax -If you haven't already, we recommend first reading the section on [Calls](/docs/mirascope/learn/calls) +If you haven't already, we recommend first reading the section on [Calls](/docs/v1/learn/calls) To use async in Mirascope, simply define the function as async and use the `await` keyword when calling it. 
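The "basic example" referenced just below is elided by the hunk, so here is a minimal sketch of the async pattern being described (the model name and prompt are illustrative):

```python
import asyncio

from mirascope.core import openai


@openai.call("gpt-4o-mini")
async def recommend_book(genre: str) -> str:
    return f"Recommend a {genre} book"


async def main() -> None:
    response = await recommend_book("fantasy")
    print(response.content)


asyncio.run(main())
```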
Here's a basic example: @@ -158,7 +158,7 @@ We are using `asyncio.gather` to run and await multiple asynchronous tasks concu ## Async Streaming -If you haven't already, we recommend first reading the section on [Streams](/docs/mirascope/learn/streams) +If you haven't already, we recommend first reading the section on [Streams](/docs/v1/learn/streams) Streaming with async works similarly to synchronous streaming, but you use `async for` instead of a regular `for` loop: @@ -211,7 +211,7 @@ asyncio.run(main()) ## Async Tools -If you haven't already, we recommend first reading the section on [Tools](/docs/mirascope/learn/tools) +If you haven't already, we recommend first reading the section on [Tools](/docs/v1/learn/tools) When using tools asynchronously, you can make the `call` method of a tool async: @@ -327,4 +327,4 @@ By leveraging these async features in Mirascope, you can build more efficient an This section concludes the core functionality Mirascope supports. If you haven't already, we recommend taking a look at any previous sections you've missed to learn about what you can do with Mirascope. -You can also check out the section on [Provider-Specific Features](/docs/mirascope/learn/provider-specific/openai) to learn about how to use features that only certain providers support, such as OpenAI's structured outputs. \ No newline at end of file +You can also check out the section on [Provider-Specific Features](/docs/v1/learn/provider-specific/openai) to learn about how to use features that only certain providers support, such as OpenAI's structured outputs. \ No newline at end of file diff --git a/cloud/content/docs/v1/learn/calls.mdx b/cloud/content/docs/v1/learn/calls.mdx index 12957a853..fad0f1a89 100644 --- a/cloud/content/docs/v1/learn/calls.mdx +++ b/cloud/content/docs/v1/learn/calls.mdx @@ -6,7 +6,7 @@ description: Learn how to make API calls to various LLM providers using Mirascop # Calls - If you haven't already, we recommend first reading the section on writing [Prompts](/docs/mirascope/learn/prompts) + If you haven't already, we recommend first reading the section on writing [Prompts](/docs/v1/learn/prompts) When working with Large Language Model (LLM) APIs in Mirascope, a "call" refers to making a request to a LLM provider's API with a particular setting and prompt. @@ -18,7 +18,7 @@ We currently support [OpenAI](https://openai.com/), [Anthropic](https://www.anth If there are any providers we don't yet support that you'd like to see supported, let us know! - [`mirascope.llm.call`](/docs/mirascope/api/llm/call) + [`mirascope.llm.call`](/docs/v1/api/llm/call) ## Basic Usage and Syntax @@ -134,10 +134,10 @@ print(override_response.content) ### Common Response Properties and Methods - [`mirascope.core.base.call_response`](/docs/mirascope/api/core/base/call_response) + [`mirascope.core.base.call_response`](/docs/v1/api/core/base/call_response) -All [`BaseCallResponse`](/docs/mirascope/api) objects share these common properties: +All [`BaseCallResponse`](/docs/v1/api) objects share these common properties: - `content`: The main text content of the response. If no content is present, this will be the empty string. - `finish_reasons`: A list of reasons why the generation finished (e.g., "stop", "length"). These will be typed specifically for the provider used. If no finish reasons are present, this will be `None`. @@ -148,8 +148,8 @@ All [`BaseCallResponse`](/docs/mirascope/api) objects share these common propert - `output_tokens`: The number of output tokens generated if available. 
Otherwise this will be `None`. - `cost`: An estimated cost of the API call if available. Otherwise this will be `None`. - `message_param`: The assistant's response formatted as a message parameter. -- `tools`: A list of provider-specific tools used in the response, if any. Otherwise this will be `None`. Check out the [`Tools`](/docs/mirascope/learn/tools) documentation for more details. -- `tool`: The first tool used in the response, if any. Otherwise this will be `None`. Check out the [`Tools`](/docs/mirascope/learn/tools) documentation for more details. +- `tools`: A list of provider-specific tools used in the response, if any. Otherwise this will be `None`. Check out the [`Tools`](/docs/v1/learn/tools) documentation for more details. +- `tool`: The first tool used in the response, if any. Otherwise this will be `None`. Check out the [`Tools`](/docs/v1/learn/tools) documentation for more details. - `tool_types`: A list of tool types used in the call, if any. Otherwise this will be `None`. - `prompt_template`: The prompt template used for the call. - `fn_args`: The arguments passed to the function. @@ -165,7 +165,7 @@ All [`BaseCallResponse`](/docs/mirascope/api) objects share these common propert There are also two common methods: - `__str__`: Returns the `content` property of the response for easy printing. -- `tool_message_params`: Creates message parameters for tool call results. Check out the [`Tools`](/docs/mirascope/learn/tools) documentation for more information. +- `tool_message_params`: Creates message parameters for tool call results. Check out the [`Tools`](/docs/v1/learn/tools) documentation for more information. ## Multi-Modal Outputs @@ -218,11 +218,11 @@ When using models that support audio outputs, you'll have access to: There are several common parameters that you'll find across all providers when using the `call` decorator. These parameters allow you to control various aspects of the LLM call: - `model`: The only required parameter for all providers, which may be passed in as a standard argument (whereas all others are optional and must be provided as keyword arguments). It specifies which language model to use for the generation. Each provider has its own set of available models. -- `stream`: A boolean that determines whether the response should be streamed or returned as a complete response. We cover this in more detail in the [`Streams`](/docs/mirascope/learn/streams) documentation. -- `response_model`: A Pydantic `BaseModel` type that defines how to structure the response. We cover this in more detail in the [`Response Models`](/docs/mirascope/learn/response_models) documentation. -- `output_parser`: A function for parsing the response output. We cover this in more detail in the [`Output Parsers`](/docs/mirascope/learn/output_parsers) documentation. -- `json_mode`: A boolean that deterines whether to use JSON mode or not. We cover this in more detail in the [`JSON Mode`](/docs/mirascope/learn/json_mode) documentation. -- `tools`: A list of tools that the model may request to use in its response. We cover this in more detail in the [`Tools`](/docs/mirascope/learn/tools) documentation. +- `stream`: A boolean that determines whether the response should be streamed or returned as a complete response. We cover this in more detail in the [`Streams`](/docs/v1/learn/streams) documentation. +- `response_model`: A Pydantic `BaseModel` type that defines how to structure the response. 
We cover this in more detail in the [`Response Models`](/docs/v1/learn/response_models) documentation. +- `output_parser`: A function for parsing the response output. We cover this in more detail in the [`Output Parsers`](/docs/v1/learn/output_parsers) documentation. +- `json_mode`: A boolean that determines whether to use JSON mode or not. We cover this in more detail in the [`JSON Mode`](/docs/v1/learn/json_mode) documentation. +- `tools`: A list of tools that the model may request to use in its response. We cover this in more detail in the [`Tools`](/docs/v1/learn/tools) documentation. - `client`: A custom client to use when making the call to the LLM. We cover this in more detail in the [`Custom Client`](#custom-client) section below. - `call_params`: The provider-specific parameters to use when making the call to that provider's API. We cover this in more detail in the [`Provider-Specific Usage`](#provider-specific-usage) section below. @@ -354,16 +354,16 @@ print(response.content) For details on provider-specific modules, see the API documentation for each provider: - - [`mirascope.core.openai.call`](/docs/mirascope/api/core/openai/call) - - [`mirascope.core.anthropic.call`](/docs/mirascope/api/core/anthropic/call) - - [`mirascope.core.mistral.call`](/docs/mirascope/api/core/mistral/call) - - [`mirascope.core.google.call`](/docs/mirascope/api/core/google/call) - - [`mirascope.core.azure.call`](/docs/mirascope/api/core/azure/call) - - [`mirascope.core.cohere.call`](/docs/mirascope/api/core/cohere/call) - - [`mirascope.core.groq.call`](/docs/mirascope/api/core/groq/call) - - [`mirascope.core.xai.call`](/docs/mirascope/api/core/xai/call) - - [`mirascope.core.bedrock.call`](/docs/mirascope/api/core/bedrock/call) - - [`mirascope.core.litellm.call`](/docs/mirascope/api/core/litellm/call) + - [`mirascope.core.openai.call`](/docs/v1/api/core/openai/call) + - [`mirascope.core.anthropic.call`](/docs/v1/api/core/anthropic/call) + - [`mirascope.core.mistral.call`](/docs/v1/api/core/mistral/call) + - [`mirascope.core.google.call`](/docs/v1/api/core/google/call) + - [`mirascope.core.azure.call`](/docs/v1/api/core/azure/call) + - [`mirascope.core.cohere.call`](/docs/v1/api/core/cohere/call) + - [`mirascope.core.groq.call`](/docs/v1/api/core/groq/call) + - [`mirascope.core.xai.call`](/docs/v1/api/core/xai/call) + - [`mirascope.core.bedrock.call`](/docs/v1/api/core/bedrock/call) + - [`mirascope.core.litellm.call`](/docs/v1/api/core/litellm/call) While Mirascope provides a consistent interface across different LLM providers, you can also use provider-specific modules with refined typing for an individual provider. @@ -422,7 +422,7 @@ You can also configure the client dynamically at runtime through the dynamic con - A common mistake is to use the synchronous client with async calls. Read the section on [Async Custom Client](/docs/mirascope/learn/async#custom-client) to see how to use a custom client with asynchronous calls. + A common mistake is to use the synchronous client with async calls. Read the section on [Async Custom Client](/docs/v1/learn/async#custom-client) to see how to use a custom client with asynchronous calls. ## Error Handling @@ -474,10 +474,10 @@ By mastering calls in Mirascope, you'll be well-equipped to build robust, flexib Next, we recommend choosing one of: -- [Streams](/docs/mirascope/learn/streams) to see how to stream call responses for a more real-time interaction. -- [Chaining](/docs/mirascope/learn/chaining) to see how to chain calls together.
-- [Response Models](/docs/mirascope/learn/response_models) to see how to generate structured outputs. -- [Tools](/docs/mirascope/learn/tools) to see how to give LLMs access to custom tools to extend their capabilities. -- [Async](/docs/mirascope/learn/async) to see how to better take advantage of asynchronous programming and parallelization for improved performance. +- [Streams](/docs/v1/learn/streams) to see how to stream call responses for a more real-time interaction. +- [Chaining](/docs/v1/learn/chaining) to see how to chain calls together. +- [Response Models](/docs/v1/learn/response_models) to see how to generate structured outputs. +- [Tools](/docs/v1/learn/tools) to see how to give LLMs access to custom tools to extend their capabilities. +- [Async](/docs/v1/learn/async) to see how to better take advantage of asynchronous programming and parallelization for improved performance. Pick whichever path aligns best with what you're hoping to get from Mirascope. \ No newline at end of file diff --git a/cloud/content/docs/v1/learn/chaining.mdx b/cloud/content/docs/v1/learn/chaining.mdx index 317c37c7c..7fff4e773 100644 --- a/cloud/content/docs/v1/learn/chaining.mdx +++ b/cloud/content/docs/v1/learn/chaining.mdx @@ -6,7 +6,7 @@ description: Learn how to combine multiple LLM calls in sequence to solve comple # Chaining - If you haven't already, we recommend first reading the section on [Calls](/docs/mirascope/learn/calls) + If you haven't already, we recommend first reading the section on [Calls](/docs/v1/learn/calls) Chaining in Mirascope allows you to combine multiple LLM calls or operations in a sequence to solve complex tasks. This approach is particularly useful for breaking down complex problems into smaller, manageable steps. @@ -346,10 +346,10 @@ print(f"Rewritten Summary: {rewritten_summary}") -[Response Models](/docs/mirascope/learn/response_models) are a great way to add more structure to your chains, and [parallel async calls](/docs/mirascope/learn/async#parallel-async-calls) can be particularly powerful for making your chains more efficient. +[Response Models](/docs/v1/learn/response_models) are a great way to add more structure to your chains, and [parallel async calls](/docs/v1/learn/async#parallel-async-calls) can be particularly powerful for making your chains more efficient. ## Next Steps By mastering Mirascope's chaining techniques, you can create sophisticated LLM-powered applications that tackle complex, multi-step problems with greater accuracy, control, and observability. -Next, we recommend taking a look at the [Response Models](/docs/mirascope/learn/response_models) documentation, which shows you how to generate structured outputs. \ No newline at end of file +Next, we recommend taking a look at the [Response Models](/docs/v1/learn/response_models) documentation, which shows you how to generate structured outputs. 
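A minimal sketch of the sequential chaining pattern this file covers (the prompts and model are illustrative):

```python
from mirascope.core import openai


@openai.call("gpt-4o-mini")
def summarize(text: str) -> str:
    return f"Summarize this text: {text}"


@openai.call("gpt-4o-mini")
def translate(text: str, language: str) -> str:
    return f"Translate this text to {language}: {text}"


# Each step's output feeds the next call in the chain.
summary = summarize("A long passage to condense...")
translation = translate(summary.content, "French")
print(translation.content)
```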
\ No newline at end of file diff --git a/cloud/content/docs/v1/learn/evals.mdx b/cloud/content/docs/v1/learn/evals.mdx index b31dc6a76..f8dfc6860 100644 --- a/cloud/content/docs/v1/learn/evals.mdx +++ b/cloud/content/docs/v1/learn/evals.mdx @@ -6,7 +6,7 @@ description: Learn how to evaluate LLM outputs using multiple approaches includi # Evals: Evaluating LLM Outputs -If you haven't already, we recommend first reading the section on [Response Models](/docs/mirascope/learn/response_models) +If you haven't already, we recommend first reading the section on [Response Models](/docs/v1/learn/response_models) Evaluating the outputs of Large Language Models (LLMs) is a crucial step in developing robust and reliable AI applications. This section covers various approaches to evaluating LLM outputs, including using LLMs as evaluators as well as implementing hardcoded evaluation criteria. @@ -284,10 +284,10 @@ for evaluation in evaluations: -We are taking advantage of [provider-agnostic prompts](/docs/mirascope/learn/calls#provider-agnostic-usage) in this example to easily call multiple providers with the same prompt. Of course, you can always engineer each judge specifically for a given provider instead. +We are taking advantage of [provider-agnostic prompts](/docs/v1/learn/calls#provider-agnostic-usage) in this example to easily call multiple providers with the same prompt. Of course, you can always engineer each judge specifically for a given provider instead. - We highly recommend using [parallel asynchronous calls](/docs/mirascope/learn/async#parallel-async-calls) to run your evaluations more quickly since each call can (and should) be run in parallel. + We highly recommend using [parallel asynchronous calls](/docs/v1/learn/async#parallel-async-calls) to run your evaluations more quickly since each call can (and should) be run in parallel. ## Hardcoded Evaluation Criteria diff --git a/cloud/content/docs/v1/learn/extensions/custom_provider.mdx b/cloud/content/docs/v1/learn/extensions/custom_provider.mdx index 67a8bafc2..2a1364d59 100644 --- a/cloud/content/docs/v1/learn/extensions/custom_provider.mdx +++ b/cloud/content/docs/v1/learn/extensions/custom_provider.mdx @@ -5,7 +5,7 @@ description: Learn how to implement a custom LLM provider for Mirascope by creat # Implementing a Custom Provider -This guide explains how to implement a custom provider for Mirascope using the `call_factory` method. Before proceeding, ensure you're familiar with Mirascope's core concepts as covered in the [Learn section](/docs/mirascope/learn) of the documentation. +This guide explains how to implement a custom provider for Mirascope using the `call_factory` method. Before proceeding, ensure you're familiar with Mirascope's core concepts as covered in the [Learn section](/docs/v1/learn) of the documentation. ## Overview diff --git a/cloud/content/docs/v1/learn/index.mdx b/cloud/content/docs/v1/learn/index.mdx index e01ec79af..621e5c491 100644 --- a/cloud/content/docs/v1/learn/index.mdx +++ b/cloud/content/docs/v1/learn/index.mdx @@ -9,7 +9,7 @@ This section is designed to help you master Mirascope, a toolkit for building AI Our documentation is tailored for developers who have at least some experience with Python and LLMs. Whether you're coming from other development tool libraries or have worked directly with provider SDKs and APIs, Mirascope offers a familiar but enhanced experience. 
-If you haven't already, we recommend checking out [Getting Started](/docs/mirascope/guides/getting-started/quickstart) and [Why Use Mirascope](/docs/mirascope/getting-started/why). +If you haven't already, we recommend checking out [Getting Started](/docs/v1/guides/getting-started/quickstart) and [Why Use Mirascope](/docs/v1/getting-started/why). ## Key Features and Benefits @@ -46,79 +46,79 @@ We encourage you to dive into each component's documentation to gain a deeper un

      Prompts

      Learn how to create and manage prompts effectively

      - Read more → + Read more →

      Calls

      Understand how to make calls to LLMs using Mirascope

      - Read more → + Read more →

      Streams

      Explore streaming responses for real-time applications

      - Read more → + Read more →

      Chaining

      Understand the art of chaining multiple LLM calls for complex tasks

      - Read more → + Read more →

      Response Models

      Define and use structured output models with automatic validation

      - Read more → + Read more →

      JSON Mode

      Work with structured JSON data responses from LLMs

      - Read more → + Read more →

      Output Parsers

      Process and transform custom LLM output structures effectively

      - Read more → + Read more →

      Tools

      Discover how to extend LLM capabilities with custom tools

      - Read more → + Read more →

      Agents

      Put everything together to build advanced AI agents using Mirascope

      - Read more → + Read more →

      Evals

      Apply core components to build evaluation strategies for your LLM applications

      - Read more → + Read more →

      Async

      Maximize efficiency with asynchronous programming

      - Read more → + Read more →

      Retries

      Understand how to automatically retry failed API calls

      - Read more → + Read more →

      Local Models

      Learn how to use Mirascope with locally deployed LLMs

      - Read more → + Read more →
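Every section linked in the grid above builds on the same core pattern: a plain Python function decorated into an LLM call. A minimal orientation sketch, assuming the `mirascope.core.openai` module, the `gpt-4o-mini` model, and an `OPENAI_API_KEY` in the environment:

```python
from mirascope.core import openai


@openai.call("gpt-4o-mini")
def recommend_book(genre: str) -> str:
    # The returned string becomes the user message sent to the model.
    return f"Recommend a {genre} book"


response = recommend_book("fantasy")
print(response.content)  # the model's raw text response
```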
      diff --git a/cloud/content/docs/v1/learn/json_mode.mdx b/cloud/content/docs/v1/learn/json_mode.mdx index 88272cde7..792d4252c 100644 --- a/cloud/content/docs/v1/learn/json_mode.mdx +++ b/cloud/content/docs/v1/learn/json_mode.mdx @@ -6,7 +6,7 @@ description: Learn how to request structured JSON outputs from LLMs with Mirasco # JSON Mode - If you haven't already, we recommend first reading the section on [Calls](/docs/mirascope/learn/calls) + If you haven't already, we recommend first reading the section on [Calls](/docs/v1/learn/calls) JSON Mode is a feature in Mirascope that allows you to request structured JSON output from Large Language Models (LLMs). This mode is particularly useful when you need to extract structured information from the model's responses, making it easier to parse and use the data in your applications. @@ -124,11 +124,11 @@ except json.JSONDecodeError: # [!code highlight] While this example catches errors for invalid JSON, there's always a chance that the LLM returns valid JSON that doesn't conform to your expected schema (such as missing fields or incorrect types). - For more robust validation, we recommend using [Response Models](/docs/mirascope/learn/response_models) for easier structuring and validation of LLM outputs. + For more robust validation, we recommend using [Response Models](/docs/v1/learn/response_models) for easier structuring and validation of LLM outputs. ## Next Steps By leveraging JSON Mode, you can create more robust and data-driven applications that efficiently process and utilize LLM outputs. This approach allows for easy integration with databases, APIs, or user interfaces, demonstrating the power of JSON Mode in creating robust, data-driven applications. -Next, we recommend reading the section on [Output Parsers](/docs/mirascope/learn/output_parsers) to see how to engineer prompts with specific output structures and parse the outputs automatically on every call. \ No newline at end of file +Next, we recommend reading the section on [Output Parsers](/docs/v1/learn/output_parsers) to see how to engineer prompts with specific output structures and parse the outputs automatically on every call. \ No newline at end of file diff --git a/cloud/content/docs/v1/learn/local_models.mdx b/cloud/content/docs/v1/learn/local_models.mdx index 3e507113f..8ed3890ce 100644 --- a/cloud/content/docs/v1/learn/local_models.mdx +++ b/cloud/content/docs/v1/learn/local_models.mdx @@ -5,7 +5,7 @@ description: Learn how to use Mirascope with locally hosted open-source models t # Local (Open-Source) Models -You can use the [`llm.call`](/docs/mirascope/api) decorator to interact with models running with [Ollama](https://github.com/ollama/ollama) or [vLLM](https://github.com/vllm-project/vllm): +You can use the [`llm.call`](/docs/v1/api) decorator to interact with models running with [Ollama](https://github.com/ollama/ollama) or [vLLM](https://github.com/vllm-project/vllm): @@ -87,7 +87,7 @@ print(book) ## OpenAI Compatibility -When hosting (fine-tuned) open-source LLMs yourself locally or in your own cloud with tools that have OpenAI compatibility, you can use the [`openai.call`](/docs/mirascope/api) decorator with a [custom client](/docs/mirascope/learn/calls#custom-client) to interact with your model using all of Mirascope's various features. 
+When hosting (fine-tuned) open-source LLMs yourself locally or in your own cloud with tools that have OpenAI compatibility, you can use the [`openai.call`](/docs/v1/api) decorator with a [custom client](/docs/v1/learn/calls#custom-client) to interact with your model using all of Mirascope's various features. diff --git a/cloud/content/docs/v1/learn/mcp/client.mdx b/cloud/content/docs/v1/learn/mcp/client.mdx index d7711b0a5..a6ac77bc9 100644 --- a/cloud/content/docs/v1/learn/mcp/client.mdx +++ b/cloud/content/docs/v1/learn/mcp/client.mdx @@ -157,4 +157,4 @@ This enables full editor support and type checking when using server components. ## Next Steps -By using the MCP client with Mirascope's standard features like [Calls](/docs/mirascope/learn/calls), [Tools](/docs/mirascope/learn/tools), and [Prompts](/docs/mirascope/learn/prompts), you can build powerful applications that leverage local services through MCP servers. \ No newline at end of file +By using the MCP client with Mirascope's standard features like [Calls](/docs/v1/learn/calls), [Tools](/docs/v1/learn/tools), and [Prompts](/docs/v1/learn/prompts), you can build powerful applications that leverage local services through MCP servers. \ No newline at end of file diff --git a/cloud/content/docs/v1/learn/output_parsers.mdx b/cloud/content/docs/v1/learn/output_parsers.mdx index 1ae12afd1..2261a7376 100644 --- a/cloud/content/docs/v1/learn/output_parsers.mdx +++ b/cloud/content/docs/v1/learn/output_parsers.mdx @@ -6,7 +6,7 @@ description: Learn how to process and structure raw LLM outputs into usable form # Output Parsers - If you haven't already, we recommend first reading the section on [Calls](/docs/mirascope/learn/calls) + If you haven't already, we recommend first reading the section on [Calls](/docs/v1/learn/calls) Output Parsers in Mirascope provide a flexible way to process and structure the raw output from Large Language Models (LLMs). They allow you to transform the LLM's response into a more usable format, enabling easier integration with your application logic and improving the overall reliability of your LLM-powered features. @@ -14,7 +14,7 @@ Output Parsers in Mirascope provide a flexible way to process and structure the ## Basic Usage and Syntax - [`mirascope.llm.call.output_parser`](/docs/mirascope/api/llm/call) + [`mirascope.llm.call.output_parser`](/docs/v1/api/llm/call) Output Parsers are functions that take the call response object as input and return an output of a specified type. When you supply an output parser to a `call` decorator, it modifies the return type of the decorated function to match the output type of the parser. @@ -194,4 +194,4 @@ print(json.loads(json_response)) By leveraging Output Parsers effectively, you can create more robust and reliable LLM-powered applications, ensuring that the raw model outputs are transformed into structured data that's easy to work with in your application logic. -Next, we recommend taking a look at the section on [Tools](/docs/mirascope/learn/tools) to learn how to extend the capabilities of LLMs with custom functions. \ No newline at end of file +Next, we recommend taking a look at the section on [Tools](/docs/v1/learn/tools) to learn how to extend the capabilities of LLMs with custom functions. 
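To make the output-parser links above concrete: a parser is any function that takes the call response and returns a value of your chosen type, which then becomes the decorated function's return type. A hedged sketch assuming the OpenAI provider; `parse_title` is an illustrative helper, not a library function:

```python
from mirascope.core import openai


def parse_title(response: openai.OpenAICallResponse) -> str:
    # Illustrative parser: extract the text following "Title:" in the output.
    for line in response.content.splitlines():
        if line.startswith("Title:"):
            return line.removeprefix("Title:").strip()
    return response.content.strip()


@openai.call("gpt-4o-mini", output_parser=parse_title)
def recommend_book(genre: str) -> str:
    return f"Recommend a {genre} book. Respond with exactly one line: Title: <title>"


title = recommend_book("mystery")  # now returns a plain `str`
print(title)
```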
\ No newline at end of file diff --git a/cloud/content/docs/v1/learn/prompts.mdx b/cloud/content/docs/v1/learn/prompts.mdx index 6e3cb4149..384ca72d3 100644 --- a/cloud/content/docs/v1/learn/prompts.mdx +++ b/cloud/content/docs/v1/learn/prompts.mdx @@ -7,7 +7,7 @@ description: Master the art of creating effective prompts for LLMs using Mirasco -[`mirascope.core.base.message_param.BaseMessageParam`](/docs/mirascope/api/core/base/message_param#basemessageparam) +[`mirascope.core.base.message_param.BaseMessageParam`](/docs/v1/api/core/base/message_param#basemessageparam) @@ -17,7 +17,7 @@ When working with Large Language Model (LLM) APIs, the "prompt" is generally a l Let's look at how we can write prompts using Mirascope in a reusable, modular, and provider-agnostic way. -For the following explanations we will be talking *only* about the messages aspect of prompt engineering and will discuss calling the API later in the [Calls](/docs/mirascope/learn/calls) documentation. +For the following explanations we will be talking *only* about the messages aspect of prompt engineering and will discuss calling the API later in the [Calls](/docs/v1/learn/calls) documentation. In that section we will show how to use these provider-agnostic prompts to actually call a provider's API as well as how to engineer and tie a prompt to a specific call. @@ -127,7 +127,7 @@ The return type `Messages.Type` accepts all shorthand methods as well as `BaseMe Mirascope prompt templates currently support the `system`, `user`, and `assistant` roles. When using string templates, the roles are parsed by their corresponding all caps keyword (e.g. SYSTEM). -For messages with the `tool` role, see how Mirascope automatically generates these messages for you in the [Tools](/docs/mirascope/learn/tools) and [Agents](/docs/mirascope/learn/agents) sections. +For messages with the `tool` role, see how Mirascope automatically generates these messages for you in the [Tools](/docs/v1/learn/tools) and [Agents](/docs/v1/learn/agents) sections. ## Multi-Line Prompts @@ -583,7 +583,7 @@ print(recommend_book_prompt(book)) -It's worth noting that this also works with `self` when using prompt templates inside of a class, which is particularly important when building [Agents](/docs/mirascope/learn/agents). +It's worth noting that this also works with `self` when using prompt templates inside of a class, which is particularly important when building [Agents](/docs/v1/learn/agents). ## Format Specifiers @@ -795,4 +795,4 @@ There are various other parts of an LLM API call that we may want to configure d By mastering prompts in Mirascope, you'll be well-equipped to build robust, flexible, and reusable LLM applications. -Next, we recommend taking a look at the [Calls](/docs/mirascope/learn/calls) documentation, which shows you how to use your prompt templates to actually call LLM APIs and generate a response. \ No newline at end of file +Next, we recommend taking a look at the [Calls](/docs/v1/learn/calls) documentation, which shows you how to use your prompt templates to actually call LLM APIs and generate a response. 
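For reference alongside the prompt-template link updates above, string templates parse all-caps role keywords such as SYSTEM and USER. A minimal sketch, assuming `prompt_template` from `mirascope.core` and the OpenAI provider:

```python
from mirascope.core import openai, prompt_template


@openai.call("gpt-4o-mini")
@prompt_template(
    """
    SYSTEM: You are a librarian with deep knowledge of {genre} books.
    USER: Recommend a {genre} book.
    """
)
def recommend_book(genre: str): ...


response = recommend_book("fantasy")
print(response.content)
```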
\ No newline at end of file diff --git a/cloud/content/docs/v1/learn/provider-specific/openai.mdx b/cloud/content/docs/v1/learn/provider-specific/openai.mdx index fc7c44e1f..bedfd5dc9 100644 --- a/cloud/content/docs/v1/learn/provider-specific/openai.mdx +++ b/cloud/content/docs/v1/learn/provider-specific/openai.mdx @@ -13,7 +13,7 @@ This feature can be extremely useful when extracting structured information or u ### Tools -To use structured outputs with tools, use the `OpenAIToolConfig` and set `strict=True`. You can then use the tool as described in our [Tools documentation](/docs/mirascope/learn/tools): +To use structured outputs with tools, use the `OpenAIToolConfig` and set `strict=True`. You can then use the tool as described in our [Tools documentation](/docs/v1/learn/tools): ```python from mirascope.core import BaseTool, openai diff --git a/cloud/content/docs/v1/learn/response_models.mdx b/cloud/content/docs/v1/learn/response_models.mdx index d3c5099fe..cc8494c34 100644 --- a/cloud/content/docs/v1/learn/response_models.mdx +++ b/cloud/content/docs/v1/learn/response_models.mdx @@ -6,7 +6,7 @@ description: Learn how to structure and validate LLM outputs using Pydantic mode # Response Models - If you haven't already, we recommend first reading the section on [Calls](/docs/mirascope/learn/calls) + If you haven't already, we recommend first reading the section on [Calls](/docs/v1/learn/calls) Response Models in Mirascope provide a powerful way to structure and validate the output from Large Language Models (LLMs). By leveraging Pydantic's [`BaseModel`](https://docs.pydantic.dev/latest/usage/models/), Response Models offer type safety, automatic validation, and easier data manipulation for your LLM responses. While we cover some details in this documentation, we highly recommend reading through Pydantic's documentation for a deeper, comprehensive dive into everything you can do with Pydantic's `BaseModel`. @@ -78,9 +78,9 @@ print(book) Notice how Mirascope makes generating structured outputs significantly simpler than the official SDKs. It also greatly reduces boilerplate and standardizes the interaction across all supported LLM providers. - By default, `response_model` will use [Tools](/docs/mirascope/learn/tools) under the hood, forcing to the LLM to call that specific tool and constructing the response model from the tool's arguments. + By default, `response_model` will use [Tools](/docs/v1/learn/tools) under the hood, forcing the LLM to call that specific tool and constructing the response model from the tool's arguments. - We default to using tools because all supported providers support tools. You can also optionally set `json_mode=True` to use [JSON Mode](/docs/mirascope/learn/json_mode) instead, which we cover in [more detail below](#json-mode). + We default to using tools because all supported providers support tools. You can also optionally set `json_mode=True` to use [JSON Mode](/docs/v1/learn/json_mode) instead, which we cover in [more detail below](#json-mode). ### Accessing Original Call Response @@ -307,7 +307,7 @@ except ValidationError as e: # [!code highlight] Without additional prompt engineering, this call will fail every single time. It's important to engineer your prompts to reduce errors, but LLMs are far from perfect, so always remember to catch and handle validation errors gracefully.
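The graceful handling recommended here is an ordinary `try`/`except` around the decorated call, since `response_model` validation surfaces as Pydantic's `ValidationError`. A minimal sketch, assuming the OpenAI provider and the `gpt-4o-mini` model:

```python
from mirascope.core import openai
from pydantic import BaseModel, ValidationError


class Book(BaseModel):
    title: str
    author: str


@openai.call("gpt-4o-mini", response_model=Book)
def extract_book(text: str) -> str:
    return f"Extract the book from this text: {text}"


try:
    book = extract_book("The Name of the Wind by Patrick Rothfuss")
    print(book)
except ValidationError as e:
    # Log, retry, or surface a fallback instead of crashing.
    print(f"Validation failed: {e}")
```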
-We highly recommend taking a look at our section on [retries](/docs/mirascope/learn/retries) to learn more about automatically retrying and re-inserting validation errors, which enables retrying the call such that the LLM can learn from its previous mistakes. +We highly recommend taking a look at our section on [retries](/docs/v1/learn/retries) to learn more about automatically retrying and re-inserting validation errors, which enables retrying the call such that the LLM can learn from its previous mistakes. ### Accessing Original Call Response On Error @@ -388,7 +388,7 @@ This allows you to gracefully handle errors as well as inspect the original LLM ## JSON Mode -By default, `response_model` uses [Tools](/docs/mirascope/learn/tools) under the hood. You can instead use [JSON Mode](/docs/mirascope/learn/json_mode) in conjunction with `response_model` by setting `json_mode=True`: +By default, `response_model` uses [Tools](/docs/v1/learn/tools) under the hood. You can instead use [JSON Mode](/docs/v1/learn/json_mode) in conjunction with `response_model` by setting `json_mode=True`: @@ -583,7 +583,7 @@ for partial_book in book_stream: # [!code highlight] Once exhausted, you can access the final, full response model through the `constructed_response_model` property of the structured stream. Note that this will also give you access to the [`._response` property](#accessing-original-call-response) that every `BaseModel` receives. -You can also use the `stream` property to access the `BaseStream` instance and [all of it's properties](/docs/mirascope/learn/streams#common-stream-properties-and-methods). +You can also use the `stream` property to access the `BaseStream` instance and [all of its properties](/docs/v1/learn/streams#common-stream-properties-and-methods). ## FromCallArgs @@ -692,5 +692,5 @@ By following these best practices and leveraging Response Models effectively, yo Next, we recommend taking a look at one of: -- [JSON Mode](/docs/mirascope/learn/json_mode) to see an alternate way to generate structured outputs where using Pydantic to validate outputs is optional. -- [Evals](/docs/mirascope/learn/evals) to see how to use `response_model` to evaluate your prompts. \ No newline at end of file +- [JSON Mode](/docs/v1/learn/json_mode) to see an alternate way to generate structured outputs where using Pydantic to validate outputs is optional. +- [Evals](/docs/v1/learn/evals) to see how to use `response_model` to evaluate your prompts. \ No newline at end of file diff --git a/cloud/content/docs/v1/learn/streams.mdx b/cloud/content/docs/v1/learn/streams.mdx index e5eee70b6..2fe8777da 100644 --- a/cloud/content/docs/v1/learn/streams.mdx +++ b/cloud/content/docs/v1/learn/streams.mdx @@ -6,7 +6,7 @@ description: Learn how to process LLM responses in real-time as they are generat # Streams - If you haven't already, we recommend first reading the section on [Calls](/docs/mirascope/learn/calls) + If you haven't already, we recommend first reading the section on [Calls](/docs/v1/learn/calls) Streaming is a powerful feature when using LLMs that allows you to process chunks of an LLM response in real-time as they are generated. This can be particularly useful for long-running tasks, providing immediate feedback to users, or implementing more responsive applications. @@ -43,12 +43,12 @@ This approach offers several benefits: 5. **Early termination**: If the desired information is found early in the response, processing can be stopped without waiting for the full generation.
- [`mirascope.core.base.stream`](/docs/mirascope/api/core/base/stream) + [`mirascope.core.base.stream`](/docs/v1/api/core/base/stream) ## Basic Usage and Syntax -To use streaming, simply set the `stream` parameter to `True` in your [`call`](/docs/mirascope/learn/calls) decorator: +To use streaming, simply set the `stream` parameter to `True` in your [`call`](/docs/v1/learn/calls) decorator: @@ -98,13 +98,13 @@ In this example: ## Handling Streamed Responses - [`mirascope.core.base.call_response_chunk`](/docs/mirascope/api/core/base/call_response_chunk) + [`mirascope.core.base.call_response_chunk`](/docs/v1/api/core/base/call_response_chunk) -When streaming, the initial response will be a provider-specific [`BaseStream`](/docs/mirascope/api) instance (e.g. `OpenAIStream`), which is a generator that yields tuples `(chunk, tool)` where `chunk` is a provider-specific [`BaseCallResponseChunk`](/docs/mirascope/api) (e.g. `OpenAICallResponseChunk`) that wraps the original chunk in the provider's response. These objects provide a consistent interface across providers while still allowing access to provider-specific details. +When streaming, the initial response will be a provider-specific [`BaseStream`](/docs/v1/api) instance (e.g. `OpenAIStream`), which is a generator that yields tuples `(chunk, tool)` where `chunk` is a provider-specific [`BaseCallResponseChunk`](/docs/v1/api) (e.g. `OpenAICallResponseChunk`) that wraps the original chunk in the provider's response. These objects provide a consistent interface across providers while still allowing access to provider-specific details. - You'll notice in the above example that we ignore the `tool` in each tuple. If no tools are set in the call, then `tool` will always be `None` and can be safely ignored. For more details, check out the documentation on [streaming tools](/docs/mirascope/learn/tools#streaming-tools) + You'll notice in the above example that we ignore the `tool` in each tuple. If no tools are set in the call, then `tool` will always be `None` and can be safely ignored. For more details, check out the documentation on [streaming tools](/docs/v1/learn/tools#streaming-tools) ### Common Chunk Properties and Methods @@ -125,7 +125,7 @@ All `BaseCallResponseChunk` objects share these common properties: To access these properties, you must first exhaust the stream by iterating through it. -Once exhausted, all `BaseStream` objects share the [same common properties and methods as `BaseCallResponse`](/docs/mirascope/learn/calls#common-response-properties-and-methods), except for `usage`, `tools`, `tool`, and `__str__`. +Once exhausted, all `BaseStream` objects share the [same common properties and methods as `BaseCallResponse`](/docs/v1/learn/calls#common-response-properties-and-methods), except for `usage`, `tools`, `tool`, and `__str__`. @@ -319,4 +319,4 @@ Note how we wrap the iteration loop in a try/except block to catch any errors th By leveraging streaming effectively, you can create more responsive and efficient LLM-powered applications with Mirascope's streaming capabilities. -Next, we recommend taking a look at the [Chaining](/docs/mirascope/learn/chaining) documentation, which shows you how to break tasks down into smaller, more directed calls and chain them together. \ No newline at end of file +Next, we recommend taking a look at the [Chaining](/docs/v1/learn/chaining) documentation, which shows you how to break tasks down into smaller, more directed calls and chain them together. 
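As a compact companion to the streams hunks above: with `stream=True`, iterating the returned stream yields `(chunk, tool)` tuples, and `tool` is safely `None` when no tools are configured. A minimal sketch, assuming the OpenAI provider and the `gpt-4o-mini` model:

```python
from mirascope.core import openai


@openai.call("gpt-4o-mini", stream=True)
def recommend_book(genre: str) -> str:
    return f"Recommend a {genre} book"


stream = recommend_book("fantasy")
for chunk, _ in stream:  # no tools configured, so the tool element is None
    print(chunk.content, end="", flush=True)
```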
\ No newline at end of file diff --git a/cloud/content/docs/v1/learn/tools.mdx b/cloud/content/docs/v1/learn/tools.mdx index c285f4e36..967a3c862 100644 --- a/cloud/content/docs/v1/learn/tools.mdx +++ b/cloud/content/docs/v1/learn/tools.mdx @@ -6,7 +6,7 @@ description: Learn how to define, use, and chain together LLM-powered tools in M # Tools - If you haven't already, we recommend first reading the section on [Calls](/docs/mirascope/learn/calls) + If you haven't already, we recommend first reading the section on [Calls](/docs/v1/learn/calls) Tools are user-defined functions that an LLM (Large Language Model) can ask the user to invoke on its behalf. This greatly enhances the capabilities of LLMs by enabling them to perform specific tasks, access external data, interact with other systems, and more. @@ -14,7 +14,7 @@ Tools are user-defined functions that an LLM (Large Language Model) can ask the Mirascope enables defining tools in a provider-agnostic way, which can be used across all supported LLM providers without modification. - When an LLM decides to use a tool, it indicates the tool name and argument values in its response. It's important to note that the LLM doesn't actually execute the function; instead, you are responsible for calling the tool and (optionally) providing the output back to the LLM in a subsequent interaction. For more details on such iterative tool-use flows, check out the [Tool Message Parameters](#tool-message-parameters) section below as well as the section on [Agents](/docs/mirascope/learn/agents). + When an LLM decides to use a tool, it indicates the tool name and argument values in its response. It's important to note that the LLM doesn't actually execute the function; instead, you are responsible for calling the tool and (optionally) providing the output back to the LLM in a subsequent interaction. For more details on such iterative tool-use flows, check out the [Tool Message Parameters](#tool-message-parameters) section below as well as the section on [Agents](/docs/v1/learn/agents). ```mermaid sequenceDiagram @@ -35,7 +35,7 @@ Mirascope enables defining tools in a provider-agnostic way, which can be used a ## Basic Usage and Syntax - [`mirascope.llm.tool`](/docs/mirascope/api) + [`mirascope.llm.tool`](/docs/v1/api) There are two ways of defining tools in Mirascope: `BaseTool` and functions. @@ -203,7 +203,7 @@ The core idea to understand here is that the LLM is asking us to call the tool o This is particularly important for building applications with access to live information and external systems. -For the purposes of this example we are showing just a single tool call. Generally, you would then give the tool call's output back to the LLM and make another call so the LLM can generate a response based on the output of the tool. We cover this in more detail in the section on [Agents](/docs/mirascope/learn/agents) +For the purposes of this example we are showing just a single tool call. Generally, you would then give the tool call's output back to the LLM and make another call so the LLM can generate a response based on the output of the tool. We cover this in more detail in the section on [Agents](/docs/v1/learn/agents) ### Accessing Original Tool Call @@ -375,7 +375,7 @@ While Mirascope provides a consistent interface, type support varies among provi *Legend: ✓ (Supported), — (Not Supported)* -Consider provider-specific capabilities when working with advanced type structures.
Even for supported types, LLM outputs may sometimes be incorrect or of the wrong type. In such cases, prompt engineering or error handling (like [retries](/docs/mirascope/learn/retries) and [reinserting validation errors](/docs/mirascope/learn/retries#error-reinsertion)) may be necessary. +Consider provider-specific capabilities when working with advanced type structures. Even for supported types, LLM outputs may sometimes be incorrect or of the wrong type. In such cases, prompt engineering or error handling (like [retries](/docs/v1/learn/retries) and [reinserting validation errors](/docs/v1/learn/retries#error-reinsertion)) may be necessary. ## Parallel Tool Calls @@ -526,7 +526,7 @@ else: -If your tool calls are I/O-bound, it's often worth writing [async tools](/docs/mirascope/learn/async#async-tools) so that you can run all of the tools calls [in parallel](/docs/mirascope/learn/async#parallel-async-calls) for better efficiency. +If your tool calls are I/O-bound, it's often worth writing [async tools](/docs/v1/learn/async#async-tools) so that you can run all of the tool calls [in parallel](/docs/v1/learn/async#parallel-async-calls) for better efficiency. ## Streaming Tools @@ -863,7 +863,7 @@ for chunk, tool in stream: ## Tool Message Parameters - Calling tools and inserting their outputs into subsequent LLM API calls in a loop in the most basic form of an agent. While we cover this briefly here, we recommend reading the section on [Agents](/docs/mirascope/learn/agents) for more details and examples. + Calling tools and inserting their outputs into subsequent LLM API calls in a loop is the most basic form of an agent. While we cover this briefly here, we recommend reading the section on [Agents](/docs/v1/learn/agents) for more details and examples. Generally the next step after the LLM returns a tool call is for you to call the tool on its behalf and supply the output in a subsequent call. @@ -1243,7 +1243,7 @@ In this example we've added additional validation, but it's important that you s ## Few-Shot Examples -Just like with [Response Models](/docs/mirascope/learn/response_models#few-shot-examples), you can add few-shot examples to your tools: +Just like with [Response Models](/docs/v1/learn/response_models#few-shot-examples), you can add few-shot examples to your tools: @@ -1439,7 +1439,7 @@ Both approaches will result in the same tool schema with examples included.
The ## ToolKit - [`mirascope.llm.toolkit`](/docs/mirascope/api) + [`mirascope.llm.toolkit`](/docs/v1/api) The `BaseToolKit` class enables: @@ -1576,18 +1576,18 @@ Mirascope provides several pre-made tools and toolkits to help you get started q ### Pre-Made Tools - - [`mirascope.tools.web.DuckDuckGoSearch`](/docs/mirascope/api/tools/web/duckduckgo) - - [`mirascope.tools.web.HTTPX`](/docs/mirascope/api/tools/web/httpx) - - [`mirascope.tools.web.ParseURLContent`](/docs/mirascope/api/tools/web/parse_url_content) - - [`mirascope.tools.web.Requests`](/docs/mirascope/api/tools/web/requests) + - [`mirascope.tools.web.DuckDuckGoSearch`](/docs/v1/api/tools/web/duckduckgo) + - [`mirascope.tools.web.HTTPX`](/docs/v1/api/tools/web/httpx) + - [`mirascope.tools.web.ParseURLContent`](/docs/v1/api/tools/web/parse_url_content) + - [`mirascope.tools.web.Requests`](/docs/v1/api/tools/web/requests) | Tool | Primary Use | Dependencies | Key Features | Characteristics | |------ |------------- |-------------- |-------------- |----------------- | -| [`DuckDuckGoSearch`](/docs/mirascope/api/tools/web/duckduckgo) | Web Searching | [`duckduckgo-search`](https://pypi.org/project/duckduckgo-search/) | • Multiple query support
      • Title/URL/snippet extraction
      • Result count control
      • Automated formatting | • Privacy-focused search
      • Async support (AsyncDuckDuckGoSearch)
      • Automatic filtering
      • Structured results | -| [`HTTPX`](/docs/mirascope/api/tools/web/httpx) | Advanced HTTP Requests | [`httpx`](https://pypi.org/project/httpx/) | • Full HTTP method support (GET/POST/PUT/DELETE)
      • Custom header support
      • File upload/download
      • Form data handling | • Async support (AsyncHTTPX)
      • Configurable timeouts
      • Comprehensive error handling
      • Redirect control | -| [`ParseURLContent`](/docs/mirascope/api/tools/web/parse_url_content) | Web Content Extraction | [`beautifulsoup4`](https://pypi.org/project/beautifulsoup4/), [`httpx`](https://pypi.org/project/httpx/) | • HTML content fetching
      • Main content extraction
      • Element filtering
      • Text normalization | • Automatic cleaning
      • Configurable parser
      • Timeout settings
      • Error handling | -| [`Requests`](/docs/mirascope/api/tools/web/requests) | Simple HTTP Requests | [`requests`](https://pypi.org/project/requests/) | • Basic HTTP methods
      • Simple API
      • Response text retrieval
      • Basic authentication | • Minimal configuration
      • Intuitive interface
      • Basic error handling
      • Lightweight implementation | +| [`DuckDuckGoSearch`](/docs/v1/api/tools/web/duckduckgo) | Web Searching | [`duckduckgo-search`](https://pypi.org/project/duckduckgo-search/) | • Multiple query support
      • Title/URL/snippet extraction
      • Result count control
      • Automated formatting | • Privacy-focused search
      • Async support (AsyncDuckDuckGoSearch)
      • Automatic filtering
      • Structured results | +| [`HTTPX`](/docs/v1/api/tools/web/httpx) | Advanced HTTP Requests | [`httpx`](https://pypi.org/project/httpx/) | • Full HTTP method support (GET/POST/PUT/DELETE)
      • Custom header support
      • File upload/download
      • Form data handling | • Async support (AsyncHTTPX)
      • Configurable timeouts
      • Comprehensive error handling
      • Redirect control | +| [`ParseURLContent`](/docs/v1/api/tools/web/parse_url_content) | Web Content Extraction | [`beautifulsoup4`](https://pypi.org/project/beautifulsoup4/), [`httpx`](https://pypi.org/project/httpx/) | • HTML content fetching
      • Main content extraction
      • Element filtering
      • Text normalization | • Automatic cleaning
      • Configurable parser
      • Timeout settings
      • Error handling | +| [`Requests`](/docs/v1/api/tools/web/requests) | Simple HTTP Requests | [`requests`](https://pypi.org/project/requests/) | • Basic HTTP methods
      • Simple API
      • Response text retrieval
      • Basic authentication | • Minimal configuration
      • Intuitive interface
      • Basic error handling
      • Lightweight implementation | Example using DuckDuckGoSearch: @@ -1675,14 +1675,14 @@ if tool := response.tool: ### Pre-Made ToolKits - - [`mirascope.tools.system.FileSystemToolKit`](/docs/mirascope/api/tools/system/file_system) - - [`mirascope.tools.system.DockerOperationToolKit`](/docs/mirascope/api/tools/system/docker_operation) + - [`mirascope.tools.system.FileSystemToolKit`](/docs/v1/api/tools/system/file_system) + - [`mirascope.tools.system.DockerOperationToolKit`](/docs/v1/api/tools/system/docker_operation) | ToolKit | Primary Use | Dependencies | Tools and Features | Characteristics | |--------- |--------------------------|------------------------------------------------------------------- |------------------- |----------------- | -| [`FileSystemToolKit`](/docs/mirascope/api/tools/system/file_system) | File System Operations | None | • ReadFile: File content reading
      • WriteFile: Content writing
      • ListDirectory: Directory listing
      • CreateDirectory: Directory creation
      • DeleteFile: File deletion | • Path traversal protection
      • File size limits
      • Extension validation
      • Robust error handling
      • Base directory isolation | -| [`DockerOperationToolKit`](/docs/mirascope/api/tools/system/docker_operation) | Code & Command Execution | [`docker`](https://pypi.org/project/docker/), [`docker engine`](https://docs.docker.com/engine/install/) | • ExecutePython: Python code execution with optional package installation
      • ExecuteShell: Shell command execution | • Docker container isolation
      • Memory limits
      • Network control
      • Security restrictions
      • Resource cleanup | +| [`FileSystemToolKit`](/docs/v1/api/tools/system/file_system) | File System Operations | None | • ReadFile: File content reading
      • WriteFile: Content writing
      • ListDirectory: Directory listing
      • CreateDirectory: Directory creation
      • DeleteFile: File deletion | • Path traversal protection
      • File size limits
      • Extension validation
      • Robust error handling
      • Base directory isolation | +| [`DockerOperationToolKit`](/docs/v1/api/tools/system/docker_operation) | Code & Command Execution | [`docker`](https://pypi.org/project/docker/), [`docker engine`](https://docs.docker.com/engine/install/) | • ExecutePython: Python code execution with optional package installation
      • ExecuteShell: Shell command execution | • Docker container isolation
      • Memory limits
      • Network control
      • Security restrictions
      • Resource cleanup | Example using FileSystemToolKit: @@ -1751,4 +1751,4 @@ Tools can significantly extend LLM capabilities, enabling more interactive and d Mirascope hopes to provide a simple and clean interface that is both easy to learn and easy to use; however, we understand that LLM tools can be a difficult concept regardless of the supporting tooling. -Next, we recommend learning about how to build [Agents](/docs/mirascope/learn/agents) that take advantage of these tools. \ No newline at end of file +Next, we recommend learning about how to build [Agents](/docs/v1/learn/agents) that take advantage of these tools. \ No newline at end of file
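To close out the tools page, the basic flow its links describe is: define a plain function, pass it via `tools`, then execute whatever tool the model requests. A minimal sketch, assuming the OpenAI provider and the `gpt-4o-mini` model; `get_book_author` is an illustrative stub, not a library tool:

```python
from mirascope.core import openai


def get_book_author(title: str) -> str:
    """Returns the author of the book with the given title."""
    # Illustrative stub; a real tool might query a database or API.
    if title == "The Name of the Wind":
        return "Patrick Rothfuss"
    return "Unknown"


@openai.call("gpt-4o-mini", tools=[get_book_author])
def identify_author(book: str) -> str:
    return f"Who wrote {book}?"


response = identify_author("The Name of the Wind")
if tool := response.tool:
    # The model asked us to run the tool; execute it on its behalf.
    print(tool.call())
else:
    print(response.content)
```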