cloud/app/components/home-page.tsx (1 addition, 1 deletion)
@@ -244,7 +244,7 @@ export const MirascopeBlock = ({ onScrollToTop }: MirascopeBlockProps) => {

<div className="mt-2 flex w-full max-w-3xl flex-col items-center justify-center gap-4 sm:flex-row">
<ButtonLink
href="/docs/mirascope"
href="/docs/v1"
variant="default"
size="default"
className="box-shade w-full min-w-[200px] bg-mirple px-6 py-4 text-center font-handwriting font-medium text-white hover:bg-mirple-dark/90 sm:w-auto"
cloud/content/blog/advanced-prompt-engineering.mdx (5 additions, 5 deletions)
@@ -92,7 +92,7 @@ Let’s walk through it step-by-step:
Therefore, after both Bella and Carlos have toggled the switch, the light is once again in the "on" position.
```

-You can also see a version of this [RaR prompt in Python](/docs/mirascope/guides/prompt-engineering/text-based/rephrase-and-respond) in our code tutorials.
+You can also see a version of this [RaR prompt in Python](/docs/v1/guides/prompt-engineering/text-based/rephrase-and-respond) in our code tutorials.

### 3. Chain of Thought (CoT)

@@ -124,7 +124,7 @@ Marbles left = 40 - 28 = 12
So, Liam has 12 marbles left after giving marbles to his friends.
```

-You can also check out our code tutorial for creating both zero- and few-shot variants of [CoT prompts](/docs/mirascope/guides/prompt-engineering/text-based/chain-of-thought) using Mirascope.
+You can also check out our code tutorial for creating both zero- and few-shot variants of [CoT prompts](/docs/v1/guides/prompt-engineering/text-based/chain-of-thought) using Mirascope.

### 4. Self-Ask Prompt

@@ -151,7 +151,7 @@ Intermediate answer: Peak fall foliage in Colorado typically occurs from late Se
Final Answer: Peak fall foliage in the state where ZIP code 80302 is located (Colorado) typically happens from late September to early October.
```

-Our [self-ask prompt coding recipe](/docs/mirascope/guides/prompt-engineering/text-based/self-ask) shows you how to implement both basic and enhanced (with dynamic example selection) versions.
+Our [self-ask prompt coding recipe](/docs/v1/guides/prompt-engineering/text-based/self-ask) shows you how to implement both basic and enhanced (with dynamic example selection) versions.

### 5. Tree-of-Thought (ToT)

Expand Down Expand Up @@ -247,7 +247,7 @@ Verified List of Explorers and Their Achievements:
- Roald Amundsen – First to reach the South Pole (1911)
```

-See our [coding tutorial](/docs/mirascope/guides/prompt-engineering/chaining-based/chain-of-verification) for a complete example of building a CoV prompt using Python.
+See our [coding tutorial](/docs/v1/guides/prompt-engineering/chaining-based/chain-of-verification) for a complete example of building a CoV prompt using Python.

### 7. Self-Consistency

Expand Down Expand Up @@ -302,7 +302,7 @@ Conclusion:
The most frequent and correct answer is 17 marbles. Each method, whether direct calculation, step-by-step subtraction, algebraic reasoning, or estimation, correctly accounts for the 28 marbles Liam gave away. Therefore, 17 marbles is the final and accurate answer.
```

-Our coding tutorial shows you how to construct a basic version of the [self-consistency prompt](/docs/mirascope/guides/prompt-engineering/chaining-based/self-consistency), as well as a version that uses automated answer extraction.
+Our coding tutorial shows you how to construct a basic version of the [self-consistency prompt](/docs/v1/guides/prompt-engineering/chaining-based/self-consistency), as well as a version that uses automated answer extraction.

## Why Prompts Alone Aren’t Enough

cloud/content/blog/context-engineering-platform.mdx (2 additions, 2 deletions)
@@ -116,11 +116,11 @@ This provides a snapshot of all that directly influences an LLM call's outcome.

Both Lilypad and Mirascope (for which Lilypad gives first-class support) provide out-of-the-box Pythonic abstractions for managing the context fed into the prompt.

-For example, in the code above, [Mirascope’s `@llm.call` decorator](/docs/mirascope/learn/calls) turns the prompt function into a call with minimal boilerplate code.
+For example, in the code above, [Mirascope’s `@llm.call` decorator](/docs/v1/learn/calls) turns the prompt function into a call with minimal boilerplate code.

`@llm.call` provides a unified interface for working with model providers like OpenAI, Grok, Google (Gemini/Vertex), Anthropic, and many others, and you can change the provider by changing the values for model and provider in the decorator’s arguments.

-This decorator also provides an interface for [tool calling](/docs/mirascope/learn/tools), [structured outputs and schema](/docs/mirascope/learn/output_parsers), Pydantic-based input validation, [prompt chaining](/blog/prompt-chaining), type hints (integrated into your IDE), and more.
+This decorator also provides an interface for [tool calling](/docs/v1/learn/tools), [structured outputs and schema](/docs/v1/learn/output_parsers), Pydantic-based input validation, [prompt chaining](/blog/prompt-chaining), type hints (integrated into your IDE), and more.

You can work with prompts and their associated code within the Lilypad UI, a no-code environment that lets non-technical users test, edit, and evaluate prompts while keeping them tied to the exact versioned code that runs in production.

cloud/content/blog/engineers-should-handle-prompting-llms.mdx (9 additions, 9 deletions)
@@ -54,7 +54,7 @@ It was important to us to be able to just code in Python, without having to lear

For instance, we don’t make you implement directed acyclic graphs in the context of sequencing function calls. We provide code that’s eminently readable, lightweight, and maintainable.

-An example of this is our [`prompt_template`](/docs/mirascope/learn/prompts/#prompt-templates-messages) decorator, which encapsulates as much logic within the prompt as feasible.
+An example of this is our [`prompt_template`](/docs/v1/learn/prompts/#prompt-templates-messages) decorator, which encapsulates as much logic within the prompt as feasible.

Within a decorated function, the return value can be used as the prompt template. The following example requests book recommendations based on a particular genre:

@@ -293,7 +293,7 @@ except ValidationError as e:

```

-You can also validate data in ways that are difficult if not impossible to code successfully, but that LLMs excel at, such as analyzing sentiment. For instance, you can add Pydantic’s [`AfterValidator`](/docs/mirascope/learn/response_models/#validation-and-error-handling) annotation to Mirascope’s extracted output as shown below:
+You can also validate data in ways that are difficult if not impossible to code successfully, but that LLMs excel at, such as analyzing sentiment. For instance, you can add Pydantic’s [`AfterValidator`](/docs/v1/learn/response_models/#validation-and-error-handling) annotation to Mirascope’s extracted output as shown below:

```python
from typing import Annotated, Literal
@@ -384,7 +384,7 @@ print(response.content)

We also made sure that Mirascope works well with, or integrates directly with, tools such as Logfire, OpenTelemetry, HyperDX, Langfuse, and more for tracking machine learning experiments, visualizing data, and improving prompt effectiveness through automated refinement and testing. Together, these tools work with Mirascope as [an alternative to LangChain](/blog/langchain-alternatives).

-Beyond OpenAI, Mirascope currently supports these [other LLM providers](/docs/mirascope/learn/calls):
+Beyond OpenAI, Mirascope currently supports these [other LLM providers](/docs/v1/learn/calls):

- Anthropic
- Mistral
@@ -399,7 +399,7 @@ We've made sure that every example in our documentation shows how to use each pr

We consistently add support for new providers, so if there is a provider you want to see supported that isn't yet, let us know!

-If you want to switch to another model provider (like [Anthropic](/docs/mirascope/api/core/anthropic/call/) or [Mistral](/docs/mirascope/api/core/mistral/call/), for instance), you just need to change the decorator and the corresponding call parameters:
+If you want to switch to another model provider (like [Anthropic](/docs/v1/api/core/anthropic/call/) or [Mistral](/docs/v1/api/core/mistral/call/), for instance), you just need to change the decorator and the corresponding call parameters:

```python
from mirascope.core import anthropic
@@ -414,7 +414,7 @@ print(response.content)

### Expand LLM Capabilities with Tools

-Although LLMs are known mostly for text generation, you can provide them with specific tools (also known as [function calling](/docs/mirascope/learn/tools)) to extend their capabilities.
+Although LLMs are known mostly for text generation, you can provide them with specific tools (also known as [function calling](/docs/v1/learn/tools)) to extend their capabilities.

Examples of what you can do with tools include:

@@ -423,7 +423,7 @@ Examples of what you can do with tools include:
- Allowing access to the Google Cloud Natural Language API for evaluating customer feedback and reviews to determine sentiment and help businesses quickly identify areas for improvement.
- Providing a Machine Learning (ML) recommendation engine API for giving personalized content or product recommendations for an e-commerce website, based on natural language interactions with users.

-Mirascope lets you easily [define tools](/docs/mirascope/learn/tools/#basic-usage-and-syntax) by documenting any function using a docstring as shown below. It automatically converts this into a tool, saving you additional work.
+Mirascope lets you easily [define tools](/docs/v1/learn/tools/#basic-usage-and-syntax) by documenting any function using a docstring as shown below. It automatically converts this into a tool, saving you additional work.

```python
from typing import Literal
@@ -472,7 +472,7 @@ class GetWeather(BaseTool):

Tools allow you to dynamically generate prompts based on current or user-specified data, such as extracting current weather data for a given city before generating a prompt like, "Given the current weather conditions in Tokyo, what are fun outdoor activities?"

-See our [documentation](/docs/mirascope/learn/tools) for details on generating prompts in this way (for instance, by calling the `call` method).
+See our [documentation](/docs/v1/learn/tools) for details on generating prompts in this way (for instance, by calling the `call` method).

### Extract Structured Data from LLM-Generated Text

Expand All @@ -483,7 +483,7 @@ LLMs are great at producing conversations in text, which is unstructured informa
- Pulling out specific medical data such as symptoms, diagnoses, medication names, dosages, and patient history from clinical notes.
- Extracting financial metrics, stock data, company performance indicators, and market trends from financial reports and news articles.

-To handle such scenarios, we support extraction with the [`response_model`](/docs/mirascope/learn/response_models) argument in the decorator, which leverages tools (or optionally `json_mode=True`) to reliably extract structured data from the outputs of LLMs according to the schema defined in a Pydantic `BaseModel`. In the example below you can see how due dates, priorities, and descriptions are being extracted:
+To handle such scenarios, we support extraction with the [`response_model`](/docs/v1/learn/response_models) argument in the decorator, which leverages tools (or optionally `json_mode=True`) to reliably extract structured data from the outputs of LLMs according to the schema defined in a Pydantic `BaseModel`. In the example below you can see how due dates, priorities, and descriptions are being extracted:

```python
from typing import Literal
@@ -513,7 +513,7 @@ print(task_details)

```

-You can define schema parameters against which to extract data in Pydantic’s `BaseModel` class by setting certain attributes and fields in that class. Mirascope also lets you set the number of retries in case extraction fails. But you don’t have to use a detailed schema like `BaseModel` if you’re [extracting built-in types](/docs/mirascope/learn/response_models/#built-in-types) like strings, integers, booleans, etc. The code sample below shows how extracting a simple structure like a list of strings doesn’t need a full-fledged schema definition.
+You can define schema parameters against which to extract data in Pydantic’s `BaseModel` class by setting certain attributes and fields in that class. Mirascope also lets you set the number of retries in case extraction fails. But you don’t have to use a detailed schema like `BaseModel` if you’re [extracting built-in types](/docs/v1/learn/response_models/#built-in-types) like strings, integers, booleans, etc. The code sample below shows how extracting a simple structure like a list of strings doesn’t need a full-fledged schema definition.

```python
from mirascope.core import openai
cloud/content/blog/how-to-build-a-knowledge-graph.mdx (1 addition, 1 deletion)
@@ -166,7 +166,7 @@ We’ll also use Mirascope to [define and build](/blog/advanced-prompt-engineeri
* NetworkX and Matplotlib for visual rendering of the graph
* Neo4j at the end for improving graph storage and querying

-Our example is based on a Mirascope tutorial on [building a knowledge graph](/docs/mirascope/guides/more-advanced/knowledge-graph/).
+Our example is based on a Mirascope tutorial on [building a knowledge graph](/docs/v1/guides/more-advanced/knowledge-graph/).

### Set Up the Environment

cloud/content/blog/how-to-make-a-chatbot.mdx (2 additions, 2 deletions)
@@ -337,7 +337,7 @@ We split this tutorial into two parts:
* First, we show you how to build a simple chatbot in Python (without a knowledge base) featuring basic conversational interactions with users via a loop.
* We then give the bot a search tool to search the web autonomously (if needed).

-These steps are based on [Mirascope’s tutorial](/docs/mirascope/guides/langgraph-vs-mirascope/quickstart/) where we show you how to add even more advanced features to the chatbot.
+These steps are based on [Mirascope’s tutorial](/docs/v1/guides/langgraph-vs-mirascope/quickstart/) where we show you how to add even more advanced features to the chatbot.

### Set Up the Environment

@@ -635,4 +635,4 @@ Let me know if you want to dive deeper into any of these areas!

Build AI-driven solutions that engage users naturally, make you more productive, and provide accurate, real-time support. Our streamlined tools make it easy to create smarter interactions and improve user satisfaction.

-Want to learn more about Mirascope’s tools for building AI agents? You can find Mirascope code samples both in our [documentation](/docs/mirascope/) and our [GitHub repository](https://github.com/mirascope/mirascope).
+Want to learn more about Mirascope’s tools for building AI agents? You can find Mirascope code samples both in our [documentation](/docs/v1/) and our [GitHub repository](https://github.com/mirascope/mirascope).
cloud/content/blog/langchain-alternatives.mdx (1 addition, 1 deletion)
@@ -244,7 +244,7 @@ Response models also feature auto-complete for expected values:

If the LLM strays from the format, Mirascope will throw a clear `ValidationError` at runtime, so issues never go unnoticed. It’s response handling with guardrails, built for real-world dev work.

-Start building intelligent [LLM agents](/blog/llm-agents/) faster with Mirascope’s [developer-friendly toolkit](https://github.com/mirascope/mirascope). You can also find code samples and more information in our [documentation](/docs/mirascope/learn/).
+Start building intelligent [LLM agents](/blog/llm-agents/) faster with Mirascope’s [developer-friendly toolkit](https://github.com/mirascope/mirascope). You can also find code samples and more information in our [documentation](/docs/v1/learn/).

## 3. LlamaIndex: Empowers Developers to Build Data-Driven Apps

cloud/content/blog/langchain-prompt-template.mdx (2 additions, 2 deletions)
@@ -178,9 +178,9 @@ First, the `prompt_template` decorator wraps the function with a prompt string (
BaseMessageParam(role='user', content='Recommend a comedy movie')
```

-The model-agnostic [`llm.call` decorator](/docs/mirascope/learn/calls/) sends this structured message to the model, and the return value is a `CallResponse` object that includes generated content, a full list of input messages, model parameters, usage metrics, and any tool or modality-specific metadata.
+The model-agnostic [`llm.call` decorator](/docs/v1/learn/calls/) sends this structured message to the model, and the return value is a `CallResponse` object that includes generated content, a full list of input messages, model parameters, usage metrics, and any tool or modality-specific metadata.

-[`CallResponse`](/docs/mirascope/api/core/base/call_response/) is useful because whether you're calling OpenAI, Anthropic, Google, or another provider, the shape of the response remains the same. That means you can build [LLM tools](/blog/llm-tools/), logging, analytics, or debugging features once and reuse them everywhere.
+[`CallResponse`](/docs/v1/api/core/base/call_response/) is useful because whether you're calling OpenAI, Anthropic, Google, or another provider, the shape of the response remains the same. That means you can build [LLM tools](/blog/llm-tools/), logging, analytics, or debugging features once and reuse them everywhere.

It also includes fields like `.messages` for the exact prompt, `.usage` for token tracking, and `.model` to confirm which backend handled the call (among others).

cloud/content/blog/langchain-structured-output.mdx (2 additions, 2 deletions)
@@ -374,7 +374,7 @@ Below are a few of the most useful settings available for response models:

#### **Returning Outputs in Valid JSON**

-Setting `json_mode=True` in the call decorator will apply [JSON mode](/docs/mirascope/learn/json_mode/), if it’s supported by your LLM, rendering the outputs as valid JSON:
+Setting `json_mode=True` in the call decorator will apply [JSON mode](/docs/v1/learn/json_mode/), if it’s supported by your LLM, rendering the outputs as valid JSON:

```python
import json
@@ -426,7 +426,7 @@ print(destination)
# > name='PARIS' country='France'
```

-You can learn more about Mirascope’s response model [here](/docs/mirascope/learn/response_models).
+You can learn more about Mirascope’s response model [here](/docs/v1/learn/response_models).

## Versioning and Observability in LangChain (And How Lilypad Does It Better)

cloud/content/blog/langchain-sucks.mdx (1 addition, 1 deletion)
@@ -186,7 +186,7 @@ For instance, the method signature for `invoke` below indicates it expects `”t

![LangChain No Type Safety](/assets/blog/langchain-sucks/langchain-no-type-safety.webp)

-In contrast, Mirascope implements type safety via its Pydantic-based [response model](/docs/mirascope/learn/response_models/), enforcing what values functions return, and how LLM calls interact with those functions.
+In contrast, Mirascope implements type safety via its Pydantic-based [response model](/docs/v1/learn/response_models/), enforcing what values functions return, and how LLM calls interact with those functions.

We provide full linting and editor support, offering warnings, errors, and autocomplete as you code. This helps catch potential issues early and ensures code consistency.

Expand Down
4 changes: 2 additions & 2 deletions cloud/content/blog/langfuse-integration.mdx
Original file line number Diff line number Diff line change
Expand Up @@ -14,7 +14,7 @@ Mirascope integrates with Langfuse with a decorator `@with_langfuse`. This gives

### Call

-This is a basic call example that will work across all [Mirascope call function settings](/docs/mirascope/learn/calls), including [streams](/docs/mirascope/learn/streams), [async](/docs/mirascope/learn/async), and more.
+This is a basic call example that will work across all [Mirascope call function settings](/docs/v1/learn/calls), including [streams](/docs/v1/learn/streams), [async](/docs/v1/learn/async), and more.

```python
import os
@@ -41,7 +41,7 @@ And that’s it! Now your Mirascope class methods will be sent to Langfuse trace

### Response Models

-Mirascope's [`response_model`](/docs/mirascope/learn/response_models) argument enables extracting or generating structured outputs with LLMs. You can easily observe these structured outputs in Langfuse as well so you can assess the quality of your data and ensure your results are accurate.
+Mirascope's [`response_model`](/docs/v1/learn/response_models) argument enables extracting or generating structured outputs with LLMs. You can easily observe these structured outputs in Langfuse as well so you can assess the quality of your data and ensure your results are accurate.

```python
import os