4 changes: 4 additions & 0 deletions .env.example
@@ -11,6 +11,10 @@ AUTH_SECRET=****
AI_GATEWAY_API_KEY=****


# Optional: MiniMax API key for using MiniMax models directly
# Get your API key at https://platform.minimax.io
MINIMAX_API_KEY=****

# Instructions to create a Vercel Blob Store here: https://vercel.com/docs/vercel-blob
BLOB_READ_WRITE_TOKEN=****

13 changes: 11 additions & 2 deletions README.md
@@ -24,7 +24,7 @@
- [AI SDK](https://ai-sdk.dev/docs/introduction)
- Unified API for generating text, structured objects, and tool calls with LLMs
- Hooks for building dynamic chat and generative user interfaces
- Supports OpenAI, Anthropic, Google, xAI, and other model providers via AI Gateway
- Supports OpenAI, Anthropic, Google, xAI, MiniMax, and other model providers
- [shadcn/ui](https://ui.shadcn.com)
- Styling with [Tailwind CSS](https://tailwindcss.com)
- Component primitives from [Radix UI](https://radix-ui.com) for accessibility and flexibility
@@ -36,14 +36,23 @@

## Model Providers

This template uses the [Vercel AI Gateway](https://vercel.com/docs/ai-gateway) to access multiple AI models through a unified interface. The default model is [OpenAI](https://openai.com) GPT-4.1 Mini, with support for Anthropic, Google, and xAI models.
This template uses the [Vercel AI Gateway](https://vercel.com/docs/ai-gateway) to access multiple AI models through a unified interface. The default model is [OpenAI](https://openai.com) GPT-4.1 Mini, with support for Anthropic, Google, xAI, and [MiniMax](https://platform.minimax.io) models.

### AI Gateway Authentication

**For Vercel deployments**: Authentication is handled automatically via OIDC tokens.

**For non-Vercel deployments**: You need to provide an AI Gateway API key by setting the `AI_GATEWAY_API_KEY` environment variable in your `.env.local` file.

### MiniMax

[MiniMax](https://platform.minimax.io) models are accessed directly via the OpenAI-compatible API (not via AI Gateway). Set the `MINIMAX_API_KEY` environment variable to enable MiniMax models. Available models:

- **MiniMax M2.7** – Latest flagship model with enhanced reasoning and coding
- **MiniMax M2.7 Highspeed** – High-speed version of M2.7 for low-latency scenarios
- **MiniMax M2.5** – Peak performance with 204K context window
- **MiniMax M2.5 Highspeed** – Same performance, faster and more agile
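
The setup described above amounts to one line in `.env.local`. A minimal sketch, assuming a project root containing that file; the key value is a placeholder, not a real credential:

```shell
# Append the MiniMax key to .env.local (placeholder value shown;
# get a real key at https://platform.minimax.io).
cat >> .env.local <<'EOF'
# Optional: MiniMax API key for using MiniMax models directly
MINIMAX_API_KEY=your-minimax-api-key
EOF
```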

With the [AI SDK](https://ai-sdk.dev/docs/introduction), you can also switch to direct LLM providers like [OpenAI](https://openai.com), [Anthropic](https://anthropic.com), [Cohere](https://cohere.com/), and [many more](https://ai-sdk.dev/providers/ai-sdk-providers) with just a few lines of code.

## Deploy Your Own
1 change: 1 addition & 0 deletions components/multimodal-input.tsx
@@ -482,6 +482,7 @@ function PureModelSelectorCompact({
openai: "OpenAI",
google: "Google",
xai: "xAI",
minimax: "MiniMax",
reasoning: "Reasoning",
};

25 changes: 25 additions & 0 deletions lib/ai/models.ts
@@ -49,6 +49,31 @@ export const chatModels: ChatModel[] = [
provider: "xai",
description: "Fast with 30K context",
},
// MiniMax
{
id: "minimax/MiniMax-M2.7",
name: "MiniMax M2.7",
provider: "minimax",
description: "Latest flagship model with enhanced reasoning and coding",
},
{
id: "minimax/MiniMax-M2.7-highspeed",
name: "MiniMax M2.7 Highspeed",
provider: "minimax",
description: "High-speed version of M2.7 for low-latency scenarios",
},
{
id: "minimax/MiniMax-M2.5",
name: "MiniMax M2.5",
provider: "minimax",
description: "Peak performance with 204K context window",
},
{
id: "minimax/MiniMax-M2.5-highspeed",
name: "MiniMax M2.5 Highspeed",
provider: "minimax",
description: "Same performance, faster and more agile",
},
// Reasoning models (extended thinking)
{
id: "anthropic/claude-3.7-sonnet-thinking",
15 changes: 15 additions & 0 deletions lib/ai/providers.ts
@@ -1,11 +1,20 @@
import { gateway } from "@ai-sdk/gateway";
import { createOpenAI } from "@ai-sdk/openai";
import {
customProvider,
extractReasoningMiddleware,
wrapLanguageModel,
} from "ai";
import { isTestEnvironment } from "../constants";

function getMiniMaxProvider() {
return createOpenAI({
name: "minimax",
apiKey: process.env.MINIMAX_API_KEY,
baseURL: process.env.MINIMAX_BASE_URL ?? "https://api.minimax.io/v1",
});
}

const THINKING_SUFFIX_REGEX = /-thinking$/;

export const myProvider = isTestEnvironment
@@ -32,6 +41,12 @@ export function getLanguageModel(modelId: string) {
return myProvider.languageModel(modelId);
}

// MiniMax models use direct API (not via AI Gateway)
if (modelId.startsWith("minimax/")) {
const minimax = getMiniMaxProvider();
return minimax(modelId.replace("minimax/", ""));
}

const isReasoningModel =
modelId.endsWith("-thinking") ||
(modelId.includes("reasoning") && !modelId.includes("non-reasoning"));
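
The routing rule this hunk adds can be sketched in isolation. This is an illustrative standalone function, not the template's actual export: ids under the `minimax/` prefix bypass the AI Gateway, and the prefix is stripped before the bare model name is handed to the direct OpenAI-compatible provider.

```typescript
// Illustrative sketch of the prefix routing added above; the real
// getLanguageModel returns an AI SDK model, this only shows the id handling.
function resolveDirectMiniMaxId(modelId: string): string | null {
  // Only "minimax/"-prefixed ids go to the direct OpenAI-compatible API.
  if (!modelId.startsWith("minimax/")) {
    return null; // everything else stays on the AI Gateway path
  }
  // The provider expects the bare model name, e.g. "MiniMax-M2.5".
  return modelId.replace("minimax/", "");
}

console.log(resolveDirectMiniMaxId("minimax/MiniMax-M2.5")); // "MiniMax-M2.5"
console.log(resolveDirectMiniMaxId("openai/gpt-4.1-mini")); // null
```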
1 change: 1 addition & 0 deletions package.json
@@ -19,6 +19,7 @@
},
"dependencies": {
"@ai-sdk/gateway": "^3.0.15",
"@ai-sdk/openai": "^3.0.41",
"@ai-sdk/provider": "^3.0.3",
"@ai-sdk/react": "3.0.39",
"@codemirror/lang-javascript": "^6.2.2",