Commit f4b0e72

[cookbook] Add Streamdown markdown rendering and rebrand to Agentic Cookbook (#91)
* Add Streamdown to multi-turn chat
* Improve auto-scroll following of token stream
* Changed name from GenAI Cookbook to Agentic Cookbook
1 parent 490bc8f commit f4b0e72

File tree

9 files changed: +2395 −141 lines changed

genai-cookbook/README.md

Lines changed: 14 additions & 11 deletions
````diff
@@ -1,6 +1,6 @@
-# Modular GenAI Cookbook
+# Modular Agentic Cookbook
 
-The GenAI Cookbook is a collection of recipes demonstrating how to build modern fullstack web apps using Modular MAX, Next.js, and the Vercel AI SDK. Unlike other recipes in the MAX Recipes repo—which are Python-based—the GenAI Cookbook is written exclusively in TypeScript, providing production-ready patterns for building interactive AI experiences. Each recipe demonstrates an end-to-end workflow with both frontend and backend implementations, including detailed code comments.
+The Agentic Cookbook is a collection of recipes demonstrating how to build modern fullstack web apps using Modular MAX, Next.js, and the Vercel AI SDK. Unlike other recipes in the MAX Recipes repo—which are Python-based—the Agentic Cookbook is written exclusively in TypeScript, providing production-ready patterns for building interactive AI experiences. Each recipe demonstrates an end-to-end workflow with both frontend and backend implementations, including detailed code comments.
 
 <img src="https://github.com/user-attachments/assets/e2302038-a950-41a8-acec-47c0d9c09ed6" />
 
@@ -92,7 +92,7 @@ Create an intelligent image captioning system that generates natural language de
 
 ## Architecture
 
-The GenAI Cookbook follows a modern fullstack architecture optimized for AI applications, organized as a pnpm workspace monorepo:
+The Agentic Cookbook follows a modern fullstack architecture optimized for AI applications, organized as a pnpm workspace monorepo:
 
 ```
 genai-cookbook/
@@ -185,7 +185,7 @@ To use the cookbook with MAX:
 
 ## Running with Docker
 
-The GenAI Cookbook can be run entirely within a Docker container, including the MAX model server and web application. The container uses the universal MAX image with the nightly build, supporting both NVIDIA and AMD GPUs.
+The Agentic Cookbook can be run entirely within a Docker container, including the MAX model server and web application. The container uses the universal MAX image with the nightly build, supporting both NVIDIA and AMD GPUs.
 
 ### Building the Container
 
@@ -202,23 +202,25 @@ docker build --ulimit nofile=65535:65535 -t max-cookbook:latest .
 You can customize the Docker build using these arguments to reduce container size:
 
 - **MAX_GPU**: Selects the base image (default: `universal`)
-  - `universal` → `modular/max-full` (larger, supports all GPU types)
-  - `amd` → `modular/max-amd` (smaller, AMD-specific)
-  - `nvidia` → `modular/max-nvidia-full` (smaller, NVIDIA-specific)
+  - `universal` → `modular/max-full` (larger, supports all GPU types)
+  - `amd` → `modular/max-amd` (smaller, AMD-specific)
+  - `nvidia` → `modular/max-nvidia-full` (smaller, NVIDIA-specific)
 
 - **MAX_TAG**: Selects the image version (default: `latest`)
-  - `latest` → Latest stable release
-  - `nightly` → Nightly development builds
-  - Specific versions (e.g., `25.7.0`)
+  - `latest` → Latest stable release
+  - `nightly` → Nightly development builds
+  - Specific versions (e.g., `25.7.0`)
 
 **Examples:**
 
 Build smaller AMD-specific container:
+
 ```bash
 docker build --build-arg MAX_GPU=amd --ulimit nofile=65535:65535 -t max-cookbook:amd .
 ```
 
 Build smaller NVIDIA-specific container with nightly builds:
+
 ```bash
 docker build --build-arg MAX_GPU=nvidia --build-arg MAX_TAG=nightly --ulimit nofile=65535:65535 -t max-cookbook:nvidia-nightly .
 ```
@@ -256,8 +258,9 @@ docker run \
 ```
 
 **Configuration:**
+
 - **Port 8000**: MAX model serving endpoint
-- **Port 3000**: GenAI Cookbook web application
+- **Port 3000**: Agentic Cookbook web application
 - **HF_TOKEN**: Your HuggingFace token for downloading models
 - **MAX_MODEL**: The model to serve (e.g., `google/gemma-3-27b-it`)
 - **Volume mount**: Caches downloaded models in `~/.cache/huggingface`
````
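The README's full `docker run` command falls outside this diff's context window, so only its configuration list is visible above. A minimal sketch assembled from that list — the image tag, GPU flag, and container-side cache path are assumptions, and the README's actual command may differ:

```shell
# Hypothetical invocation assembled from the Configuration list above;
# the README's actual command is not shown in this diff.
docker run \
  --gpus all \
  -p 8000:8000 \
  -p 3000:3000 \
  -e HF_TOKEN="$HF_TOKEN" \
  -e MAX_MODEL="google/gemma-3-27b-it" \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  max-cookbook:latest
```

Port 8000 exposes the MAX serving endpoint, port 3000 the web application, and the volume mount keeps downloaded models cached between runs.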

genai-cookbook/apps/cookbook/app/layout.tsx

Lines changed: 1 addition & 1 deletion
```diff
@@ -9,7 +9,7 @@ import { CookbookProvider } from '@/context'
 import { endpointsRoute } from '@/utils/constants'
 
 export const metadata: Metadata = {
-  title: 'Modular GenAI Cookbook',
+  title: 'Modular Agentic Cookbook',
 }
 
 export default function RootLayout({ children }: { children: React.ReactNode }) {
```

genai-cookbook/apps/cookbook/components/Header.tsx

Lines changed: 1 addition & 1 deletion
```diff
@@ -37,7 +37,7 @@ export default function Header({
        </ActionIcon>
      </Group>
      <Title style={{ fontWeight: 'normal' }} order={5}>
-        <Link href={cookbookRoute()}>Modular GenAI Cookbook</Link>
+        <Link href={cookbookRoute()}>Modular Agentic Cookbook</Link>
      </Title>
      <ThemeToggle stroke={iconStroke} />
    </Flex>
```

genai-cookbook/apps/cookbook/package.json

Lines changed: 3 additions & 2 deletions
```diff
@@ -11,20 +11,21 @@
     "format:check": "prettier --check ."
   },
   "dependencies": {
-    "@modular/recipes": "workspace:*",
     "@ai-sdk/openai": "^2.0.23",
     "@ai-sdk/react": "^2.0.30",
     "@mantine/core": "^7.17.8",
     "@mantine/dropzone": "^7.17.8",
     "@mantine/hooks": "^7.17.8",
+    "@modular/recipes": "workspace:*",
     "@tabler/icons-react": "^3.34.1",
     "ai": "^5.0.28",
     "nanoid": "^5.1.5",
     "next": "^14",
     "openai": "^5.20.2",
     "react": "^18",
     "react-dom": "^18",
-    "react-syntax-highlighter": "^15.6.6"
+    "react-syntax-highlighter": "^15.6.6",
+    "streamdown": "^1.3.0"
   },
   "devDependencies": {
     "@types/node": "^20",
```

genai-cookbook/metadata.yaml

Lines changed: 11 additions & 11 deletions
```diff
@@ -1,15 +1,15 @@
 version: 1.0
-long_title: "Collection of recipes featuring MAX, Next.js, and Vercel AI SDK"
-short_title: "Modular GenAI Cookbook"
-author: "Bill Welense"
-author_image: "author/billw.jpg"
-author_url: "https://www.linkedin.com/in/welense/"
-github_repo: "https://github.com/modular/max-recipes/tree/main/genai-cookbook"
-date: "22-09-2025"
-difficulty: "beginner"
+long_title: 'Collection of recipes featuring MAX, Next.js, and Vercel AI SDK'
+short_title: 'Modular Agentic Cookbook'
+author: 'Bill Welense'
+author_image: 'author/billw.jpg'
+author_url: 'https://www.linkedin.com/in/welense/'
+github_repo: 'https://github.com/modular/max-recipes/tree/main/genai-cookbook'
+date: '22-09-2025'
+difficulty: 'beginner'
 tags:
-- max-serve
-- gui
+  - max-serve
+  - gui
 
 tasks:
-- pnpm dev
+  - pnpm dev
```

genai-cookbook/packages/recipes/package.json

Lines changed: 2 additions & 1 deletion
```diff
@@ -21,7 +21,8 @@
     "next": "^14",
     "openai": "^5.20.2",
     "react": "^18",
-    "react-dom": "^18"
+    "react-dom": "^18",
+    "streamdown": "^1.3.0"
   },
   "devDependencies": {
     "@types/node": "^20",
```

genai-cookbook/packages/recipes/src/multiturn-chat/ui.module.css

Lines changed: 0 additions & 12 deletions
This file was deleted.

genai-cookbook/packages/recipes/src/multiturn-chat/ui.tsx

Lines changed: 59 additions & 57 deletions
```diff
@@ -20,9 +20,8 @@
 import { useEffect, useRef, useState } from 'react'
 import { DefaultChatTransport } from 'ai'
 import { useChat } from '@ai-sdk/react'
-import { Box, ScrollArea } from '@mantine/core'
+import { Box, Paper, ScrollArea, Space, Stack, Text } from '@mantine/core'
 import { RecipeProps } from '../types'
-import styles from './ui.module.css'
 
 // ============================================================================
 // Chat surface component
@@ -61,30 +60,54 @@ export default function Recipe({ endpoint, model, pathname }: RecipeProps) {
   const [followStream, setFollowStream] = useState(true)
   const viewportRef = useRef<HTMLDivElement>(null)
   const bottomRef = useRef<HTMLDivElement>(null)
+  const isAutoScrolling = useRef(false)
 
   // Whenever a new message arrives, keep the latest tokens in view unless
   // we've intentionally scrolled upward to review earlier context.
   useEffect(() => {
     if (!followStream) return
+
+    // Mark that we're about to programmatically scroll
+    isAutoScrolling.current = true
     bottomRef.current?.scrollIntoView({ behavior: 'smooth', block: 'end' })
+
+    // Clear the flag after scroll animation starts (~100ms is enough)
+    const timer = setTimeout(() => {
+      isAutoScrolling.current = false
+    }, 100)
+
+    return () => clearTimeout(timer)
   }, [messages, followStream])
 
   return (
     <>
       <Box style={{ flex: 1, minHeight: 0 }}>
         <ScrollArea
-          // Mantine exposes scroll info through a forwarded ref so we can detect manual scrolling.
+          // Mantine exposes scroll info through a forwarded ref
+          // so we can detect manual scrolling.
           h="100%"
           type="auto"
           viewportRef={viewportRef}
           onScrollPositionChange={() => {
             const el = viewportRef.current
             if (!el) return
+
             const distanceToBottom =
               el.scrollHeight - el.scrollTop - el.clientHeight
-            const nearBottomThreshold = 4 // px
+            const nearBottomThreshold = 20 // px
             const nearBottom = distanceToBottom <= nearBottomThreshold
 
+            // If user has clearly scrolled away (>50px from bottom),
+            // they want to stop following - even during auto-scroll
+            if (isAutoScrolling.current && distanceToBottom > 50) {
+              isAutoScrolling.current = false
+              setFollowStream(false)
+              return
+            }
+
+            // Don't interfere with programmatic auto-scroll when close to bottom
+            if (isAutoScrolling.current) return
+
             // Pause auto-follow when the user scrolls up to read older content.
             setFollowStream(nearBottom)
           }}
@@ -105,7 +128,7 @@ export default function Recipe({ endpoint, model, pathname }: RecipeProps) {
 }
 
 // ============================================================================
-// Message panel types and components
+// Message panel
 // ============================================================================
 
 /*
@@ -116,6 +139,7 @@ export default function Recipe({ endpoint, model, pathname }: RecipeProps) {
 */
 import type { UIMessage } from 'ai'
 import type { RefObject } from 'react'
+import { Streamdown } from 'streamdown'
 
 /**
  * Shared props for our message history block.
@@ -127,62 +151,40 @@ interface MessagesPanelProps {
 }
 
 /**
- * Lists every chat exchange and injects a fade-in animation for readability.
+ * Displays chat messages using Streamdown, a part of the Vercel AI SDK.
  */
 function MessagesPanel({ messages, bottomRef }: MessagesPanelProps) {
   return (
-    <dl>
-      {messages.map((m) => (
-        // Pair the speaker label with the text body for each message in order.
-        <div key={m.id} className="pb-4">
-          <MessageRole message={m} />
-          <MessageContent message={m} />
-        </div>
+    <Stack align="flex-start" justify="flex-start" gap="sm">
+      {messages.map((message) => (
+        // The outer loop maps each message from the user or assistant
+        <Box key={message.id} w="100%">
+          <Text fw="bold" tt="capitalize">
+            {message.role}
+          </Text>
+          <Paper>
+            {message.parts
+              // The inner loop maps each message part, with support
+              // for streaming responses from the LLM
+              .filter((part) => part.type === 'text')
+              .map((part, index) => (
+                <Streamdown
+                  controls={false}
+                  shikiTheme={[
+                    'material-theme-lighter',
+                    'material-theme-darker',
+                  ]}
+                  key={index}
+                >
+                  {part.text}
+                </Streamdown>
+              ))}
+          </Paper>
+          <Space h="xs" />
+        </Box>
       ))}
-      <div ref={bottomRef} />
-    </dl>
-  )
-}
-
-/** Keeps type information consistent across role and content renderers. */
-interface MessageContentProps {
-  message: UIMessage
-}
-
-/**
- * Displays who said the message (user vs. assistant) with quick capitalization.
- */
-function MessageRole({ message }: MessageContentProps) {
-  return (
-    <dt className={`${styles.messageFade} font-bold`}>
-      {/* GenAI roles come through lowercase, so we prettify them for the UI. */}
-      {message.role.charAt(0).toUpperCase() + message.role.slice(1)}
-    </dt>
-  )
-}
-
-/**
- * Streams message text with simple formatting that respects whitespace.
- */
-function MessageContent({ message }: MessageContentProps) {
-  return (
-    <dd>
-      <pre
-        style={{ fontFamily: 'inherit' }}
-        className="whitespace-pre-wrap break-words"
-      >
-        {/* Only render text parts so other message types (like tool calls) can be added later. */}
-        {message.parts
-          .filter((p) => p.type === 'text')
-          .map((p, i) =>
-            'text' in p ? (
-              <span key={i} className={styles.messageFade}>
-                {p.text}
-              </span>
-            ) : null
-          )}
-      </pre>
-    </dd>
+      <Box ref={bottomRef} />
+    </Stack>
   )
 }
```
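The auto-scroll change in this file boils down to a small decision rule: keep following the token stream while the viewport stays within ~20px of the bottom, ignore scroll events fired by the programmatic smooth-scroll, and stop following as soon as the user drags more than 50px away, even mid-animation. A framework-free sketch of that rule — the thresholds come from the diff, but the extracted function and `Viewport` type are illustrative, not part of the commit:

```typescript
// Illustrative model of the onScrollPositionChange logic above.
// Not part of the committed file; names other than the thresholds are made up.
interface Viewport {
  scrollHeight: number
  scrollTop: number
  clientHeight: number
}

const NEAR_BOTTOM_THRESHOLD = 20 // px: still counts as "at the bottom"
const BREAKAWAY_THRESHOLD = 50 // px: user has clearly scrolled away

function nextFollowState(
  el: Viewport,
  isAutoScrolling: boolean,
  current: boolean
): boolean {
  const distanceToBottom = el.scrollHeight - el.scrollTop - el.clientHeight
  // A large gap means the user scrolled away on purpose, even while a
  // programmatic smooth-scroll is still in flight.
  if (isAutoScrolling && distanceToBottom > BREAKAWAY_THRESHOLD) return false
  // Otherwise, ignore scroll events caused by our own auto-scroll.
  if (isAutoScrolling) return current
  // Manual scrolling: follow only while near the bottom.
  return distanceToBottom <= NEAR_BOTTOM_THRESHOLD
}
```

The widened threshold (4px → 20px) makes the "near bottom" test tolerant of sub-pixel rounding during smooth scrolling, which is why the commit message calls this an improvement to stream following.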
