
Commit 93906aa

NiallJoeMaher and claude committed
docs: fix all chapter documentation to match actual code
- CHAPTER-0: Updated to show createUIMessageStream pattern, full useChat config with transport, correct systemPrompt signature
- CHAPTER-2: Fixed agent types (UIMessageStreamWriter), gateway.languageModel pattern, tutor params (depth/context), route handler structure
- CHAPTER-3: Rewrote quiz-master and planner to show artifact creation with dataStream.write(), correct models (artifact-model), DB save, error handling
- CHAPTER-4: Fixed CustomUIDataTypes, added focusAreas param, artifact-model
- CHAPTER-5: Added analyst.ts to file structure, fixed inputSchema usage, updated architecture diagrams, added analyst to orchestrator tools

All code snippets are now copy-paste ready and match the actual implementation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
1 parent aa6d61a commit 93906aa

File tree

5 files changed: +695 −389 lines


CHAPTER-0.md

Lines changed: 111 additions & 45 deletions
````diff
@@ -42,31 +42,51 @@ The heart of the application is `/app/(chat)/api/chat/route.ts`. This is where m
 ### Basic Chat Flow
 
 ```typescript
-// app/(chat)/api/chat/route.ts (simplified)
-import { streamText } from "ai";
+// app/(chat)/api/chat/route.ts (core streaming logic)
+import {
+  convertToModelMessages,
+  createUIMessageStream,
+  JsonToSseTransformStream,
+  smoothStream,
+  streamText,
+} from "ai";
+import { type RequestHints, systemPrompt } from "@/lib/ai/prompts";
 import { myProvider } from "@/lib/ai/providers";
-import { systemPrompt } from "@/lib/ai/prompts";
 
-export async function POST(request: Request) {
-  const { messages } = await request.json();
-
-  // Stream the AI response
-  const result = streamText({
-    model: myProvider.languageModel("chat-model"),
-    system: systemPrompt(),
-    messages,
-  });
-
-  return result.toDataStreamResponse();
-}
+// Inside POST handler, after authentication and message loading:
+const stream = createUIMessageStream({
+  execute: ({ writer: dataStream }) => {
+    const result = streamText({
+      model: myProvider.languageModel(selectedChatModel),
+      system: systemPrompt({ selectedChatModel, requestHints }),
+      messages: convertToModelMessages(uiMessages),
+      experimental_transform: smoothStream({ chunking: "word" }),
+    });
+
+    result.consumeStream();
+
+    dataStream.merge(
+      result.toUIMessageStream({
+        sendReasoning: true,
+      })
+    );
+  },
+  generateId: generateUUID,
+  onFinish: async ({ messages }) => {
+    // Save messages to database
+  },
+});
+
+return new Response(stream.pipeThrough(new JsonToSseTransformStream()));
 ```
 
 ### Key Concepts
 
-1. **`streamText`**: The AI SDK function that sends messages to the model and streams the response token by token.
-2. **`myProvider`**: Our configured AI provider (Claude Haiku via AI Gateway).
-3. **`systemPrompt`**: Instructions that tell the AI how to behave.
-4. **`toDataStreamResponse`**: Converts the stream into a format the frontend can consume.
+1. **`createUIMessageStream`**: Creates a stream that handles UI message updates with proper typing.
+2. **`streamText`**: The AI SDK function that sends messages to the model and streams the response.
+3. **`myProvider`**: Our configured AI provider (Claude Haiku via AI Gateway).
+4. **`systemPrompt`**: Function that builds instructions for the AI (takes model and location hints).
+5. **`JsonToSseTransformStream`**: Converts the stream into Server-Sent Events format for the frontend.
 
 ## How Streaming Works
````
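The updated route handler ends by piping the UI-message stream through `JsonToSseTransformStream`, which is provided by the `ai` package. As a rough, hypothetical sketch of what such a JSON-to-SSE stage does (the real implementation may differ), each JSON chunk is framed as one `data:` event:

```typescript
// Illustrative sketch only — the real JsonToSseTransformStream ships with the
// "ai" package. The idea: serialize each chunk as one Server-Sent Event of the
// form `data: <json>\n\n`, which the browser's EventSource-style parsing expects.
function toSseEvent(chunk: unknown): string {
  return `data: ${JSON.stringify(chunk)}\n\n`;
}

// The same mapping wrapped in a TransformStream (Web Streams API, a global in
// Node 18+), mirroring how the route pipes the stream through a transform.
const jsonToSseSketch = new TransformStream<unknown, string>({
  transform(chunk, controller) {
    controller.enqueue(toSseEvent(chunk));
  },
});
```

The double newline is what terminates each SSE event, which is why the frontend can recover chunk boundaries from a plain byte stream.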

````diff
@@ -90,36 +110,56 @@ When you send a message:
 
 ## The Frontend Chat Hook
 
-The frontend uses `useChat` from the AI SDK React package:
+The frontend uses `useChat` from the AI SDK React package with a custom transport configuration:
 
 ```typescript
-// Simplified usage in a chat component
+// components/chat.tsx (key parts)
 import { useChat } from "@ai-sdk/react";
+import { DefaultChatTransport } from "@ai-sdk/react/internal";
+
+export function Chat({ id, initialMessages, selectedChatModel }) {
+  const {
+    messages,
+    setMessages,
+    sendMessage,
+    status,
+    stop,
+    regenerate,
+    resumeStream,
+  } = useChat<ChatMessage>({
+    id,
+    messages: initialMessages,
+    experimental_throttle: 100,
+    generateId: generateUUID,
+    transport: new DefaultChatTransport({
+      api: "/api/chat",
+      fetch: fetchWithErrorHandlers,
+      prepareSendMessagesRequest(request) {
+        return {
+          ...request,
+          body: {
+            id,
+            message: request.messages[request.messages.length - 1],
+            selectedChatModel,
+            selectedVisibilityType: visibilityType,
+          },
+        };
+      },
+    }),
+    onFinish: () => {
+      mutate("/api/history");
+    },
+  });
 
-export function Chat() {
-  const { messages, input, handleSubmit, handleInputChange } = useChat();
-
-  return (
-    <div>
-      {messages.map((message) => (
-        <div key={message.id}>
-          <strong>{message.role}:</strong> {message.content}
-        </div>
-      ))}
-      <form onSubmit={handleSubmit}>
-        <input value={input} onChange={handleInputChange} />
-        <button type="submit">Send</button>
-      </form>
-    </div>
-  );
+  // ... component JSX
 }
 ```
 
 The `useChat` hook handles:
-- Managing message history
-- Sending messages to the API
-- Streaming response updates
-- Input state management
+- Managing message history with proper typing
+- Sending messages via custom transport
+- Streaming response updates with throttling
+- Request/response transformation
 
 ## Message Format
````
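One detail worth calling out in the new transport config: `prepareSendMessagesRequest` puts only the newest message (`request.messages[request.messages.length - 1]`) plus the chat `id` in the body, so the server reloads earlier history from the database rather than receiving the full transcript on every turn. The body-shaping logic, extracted as a standalone function (names and types here are illustrative, not part of the app):

```typescript
// Illustrative extraction of the body-shaping logic from the transport's
// prepareSendMessagesRequest. Only the latest message travels over the wire;
// the server looks up earlier turns by chat id.
type ChatUIMessage = { id: string; role: "user" | "assistant"; parts: unknown[] };

function buildChatRequestBody(params: {
  id: string;
  messages: ChatUIMessage[];
  selectedChatModel: string;
  selectedVisibilityType: "public" | "private";
}) {
  const { id, messages, selectedChatModel, selectedVisibilityType } = params;
  return {
    id,
    // Last element only — the rest of the history stays server-side.
    message: messages[messages.length - 1],
    selectedChatModel,
    selectedVisibilityType,
  };
}
```

Sending only the tail keeps request payloads constant-size as conversations grow, at the cost of a database read per turn on the server.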

````diff
@@ -137,14 +177,40 @@ type Message = {
 
 ## The System Prompt
 
-The system prompt shapes the AI's personality and behavior:
+The system prompt shapes the AI's personality and behavior. It takes the selected model and geolocation hints as parameters:
 
 ```typescript
 // lib/ai/prompts.ts
-export const systemPrompt = () => `
-You are a helpful AI assistant. Be concise and helpful.
-Today's date is ${new Date().toLocaleDateString()}.
+import type { Geo } from "@vercel/functions";
+
+export const regularPrompt =
+  "You are a friendly study buddy assistant! Keep your responses concise and helpful.";
+
+export type RequestHints = {
+  latitude: Geo["latitude"];
+  longitude: Geo["longitude"];
+  city: Geo["city"];
+  country: Geo["country"];
+};
+
+export const getRequestPromptFromHints = (requestHints: RequestHints) => `\
+About the origin of user's request:
+- lat: ${requestHints.latitude}
+- lon: ${requestHints.longitude}
+- city: ${requestHints.city}
+- country: ${requestHints.country}
 `;
+
+export const systemPrompt = ({
+  selectedChatModel,
+  requestHints,
+}: {
+  selectedChatModel: string;
+  requestHints: RequestHints;
+}) => {
+  const requestPrompt = getRequestPromptFromHints(requestHints);
+  return `${regularPrompt}\n\n${requestPrompt}`;
+};
 ```
 
 ## Exercise: Trace a Message
````
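Since the new `systemPrompt` is pure string composition, its behavior is easy to check in isolation. A dependency-free sketch (an assumption for illustration: the `Geo` fields are replaced with plain optional strings so the example runs without `@vercel/functions`, and the unused `selectedChatModel` parameter is omitted):

```typescript
// Dependency-free sketch mirroring the composition in lib/ai/prompts.ts.
// The real systemPrompt also receives selectedChatModel.
type RequestHints = {
  latitude?: string;
  longitude?: string;
  city?: string;
  country?: string;
};

const regularPrompt =
  "You are a friendly study buddy assistant! Keep your responses concise and helpful.";

const getRequestPromptFromHints = (hints: RequestHints) => `\
About the origin of user's request:
- lat: ${hints.latitude}
- lon: ${hints.longitude}
- city: ${hints.city}
- country: ${hints.country}
`;

const systemPrompt = ({ requestHints }: { requestHints: RequestHints }) =>
  `${regularPrompt}\n\n${getRequestPromptFromHints(requestHints)}`;
```

Composing the prompt from a fixed persona string plus per-request hints keeps the persona stable while letting each request carry its own geolocation context.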
