The current search implementation follows a sequential approach:
- Search API call (`/api/search`) - waits for completion
- AI Answer API call (`/api/ai-answer`) - starts only after search completes
This happens because the AI answer endpoint requires search results as input to generate contextual responses.
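That dependency can be seen in a minimal sketch of the sequential flow. The helper signatures here are hypothetical, with the two calls injected as parameters so the ordering is explicit:

```typescript
// Sketch of the current sequential flow (hypothetical helper types).
// searchFn and aiFn are injected so the ordering is easy to see and test.
type SearchFn = (query: string) => Promise<unknown[]>;
type AiFn = (query: string, results: unknown[]) => Promise<string>;

async function sequentialSearch(
  query: string,
  searchFn: SearchFn,
  aiFn: AiFn,
): Promise<{ results: unknown[]; aiAnswer: string }> {
  // Step 1: the search must finish first...
  const results = await searchFn(query);
  // Step 2: ...because the AI answer needs the results as context.
  const aiAnswer = await aiFn(query, results);
  return { results, aiAnswer };
}
```

Total latency is therefore the sum of both calls, which is what the concurrent approaches below avoid.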
To achieve Google-like behavior where search results and AI answers load independently, here are several architectural approaches:
Implementation Strategy:
- Start both API calls simultaneously
- Use Promise.allSettled() instead of sequential await
- Update UI components as each response completes
Code Changes in `/app/page.tsx`:
```typescript
const performSearch = useCallback(async (query: string) => {
  // Reset states
  setSearchState('loading');
  setAIAnswerState('loading');
  setCurrentQuery(query);
  setError('');
  setAIError('');
  setSearchResults(null);
  setAIAnswer('');

  log.info('Starting concurrent search and AI answer generation', { query });

  // Create both promises simultaneously; reject on HTTP errors so
  // Promise.allSettled can surface them per request
  const searchPromise = fetch('/api/search', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  }).then(response => {
    if (!response.ok) throw new Error(`Search failed: ${response.status}`);
    return response.json();
  });

  // Start the AI answer immediately with the query only - the API is
  // modified below to handle query-only input
  const aiAnswerPromise = fetch('/api/ai-answer-concurrent', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  }).then(response => {
    if (!response.ok) throw new Error(`AI answer failed: ${response.status}`);
    return response.json();
  });

  // Handle both promises concurrently
  Promise.allSettled([searchPromise, aiAnswerPromise]).then(([searchResult, aiResult]) => {
    // Handle search results
    if (searchResult.status === 'fulfilled' && searchResult.value) {
      setSearchResults(searchResult.value);
      setSearchState('success');
    } else {
      const reason = searchResult.status === 'rejected' ? searchResult.reason : undefined;
      setError(reason?.message || 'Search failed');
      setSearchState('error');
    }
    // Handle AI answer results
    if (aiResult.status === 'fulfilled' && aiResult.value) {
      setAIAnswer(aiResult.value.aiAnswer);
      setAIAnswerState('success');
    } else {
      const reason = aiResult.status === 'rejected' ? aiResult.reason : undefined;
      setAIError(reason?.message || 'AI answer failed');
      setAIAnswerState('error');
    }
  });
}, []);
```

Create a new API endpoint, `/api/concurrent-search`, that handles both operations internally:
```typescript
// /app/api/concurrent-search/route.ts
import { NextRequest } from 'next/server';

export async function POST(request: NextRequest) {
  const { query } = await request.json();

  // Start both operations simultaneously
  const searchPromise = performSearch(query);
  const aiAnswerPromise = generateAIAnswerFromQuery(query); // New function

  // Return as server-sent events for real-time updates
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    start(controller) {
      // Handle search completion
      searchPromise.then(searchResults => {
        controller.enqueue(encoder.encode(
          `data: ${JSON.stringify({ type: 'search', data: searchResults })}\n\n`
        ));
      });
      // Handle AI answer completion
      aiAnswerPromise.then(aiAnswer => {
        controller.enqueue(encoder.encode(
          `data: ${JSON.stringify({ type: 'ai-answer', data: aiAnswer })}\n\n`
        ));
      });
      // Close only after both events have been sent, regardless of
      // which operation finishes first
      Promise.allSettled([searchPromise, aiAnswerPromise]).then(() => controller.close());
    }
  });

  return new Response(stream, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      'Connection': 'keep-alive',
    },
  });
}
```

Modify the AI answer API to work with partial or streaming search results:
- Phase 1: Generate initial AI response based on query alone
- Phase 2: Enhance response when search results become available
```typescript
// New endpoint: /app/api/ai-answer-concurrent/route.ts
import { NextRequest, NextResponse } from 'next/server';

export async function POST(request: NextRequest) {
  const { query, results } = await request.json();

  if (!results || results.length === 0) {
    // Phase 1: generate an initial response based on the query alone
    return NextResponse.json({ aiAnswer: await generateInitialAIResponse(query) });
  }
  // Phase 2: generate an enhanced response grounded in search results
  return NextResponse.json({ aiAnswer: await generateComprehensiveAnswer(query, results) });
}
```
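`generateComprehensiveAnswer` is referenced above but not defined; the grounding prompt it would need can be sketched as a small pure helper. The result fields (`title`, `snippet`) are assumptions about the search API's response shape:

```typescript
// Hypothetical sketch: a grounding-prompt builder for the phase-2 helper
// generateComprehensiveAnswer. The title/snippet fields are assumed.
interface SearchResult {
  title: string;
  snippet: string;
}

function buildGroundedPrompt(query: string, results: SearchResult[]): string {
  // Number each source so the model can cite them as [1], [2], ...
  const context = results
    .map((r, i) => `[${i + 1}] ${r.title}: ${r.snippet}`)
    .join('\n');
  return `Answer the query "${query}" using only the sources below.\n${context}`;
}
```

The resulting prompt would then be passed to the same `cohere.chat` call used by `generateInitialAIResponse` below.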
```typescript
async function generateInitialAIResponse(query: string): Promise<string> {
  const prompt = `Based on the query "${query}", provide a helpful initial response.
Explain what this topic is about and what information would typically be relevant.
Mention that more specific details will be provided once search results are analyzed.`;

  // Use the Cohere API to generate the initial response
  const response = await cohere.chat({
    model: 'command-a-03-2025',
    messages: [{ role: 'user', content: prompt }],
  });
  return response.message?.content?.[0]?.text || 'Searching for information...';
}
```

Implement real-time updates using WebSocket or SSE:
```typescript
// Client-side EventSource implementation
const performSearch = useCallback((query: string) => {
  const eventSource = new EventSource(`/api/stream-search?query=${encodeURIComponent(query)}`);
  let received = 0;

  eventSource.onmessage = (event) => {
    const data = JSON.parse(event.data);
    if (data.type === 'search') {
      setSearchResults(data.results);
      setSearchState('success');
    } else if (data.type === 'ai-answer') {
      setAIAnswer(data.answer);
      setAIAnswerState('success');
    }
    // Close the connection once both updates have arrived
    if (++received === 2) eventSource.close();
  };

  eventSource.onerror = () => {
    eventSource.close();
    setSearchState('error');
    setAIAnswerState('error');
  };
}, []);
```

Option 1 (Modified Client-Side Concurrent) is the most practical approach because:
- Minimal Backend Changes: Only requires creating a concurrent AI answer endpoint
- Better User Experience: Both sections load independently
- Fallback Support: Can gracefully handle failures in either system
- Progressive Enhancement: AI answer can start with basic response and enhance with search results
Implementation Steps:
- Create `/api/ai-answer-concurrent` endpoint that can work with query-only input
- Modify client-side `performSearch` to use `Promise.allSettled()`
- Update UI to handle independent loading states
- Add error handling for partial failures
- Implement progressive enhancement where AI answer improves with search context
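The last step, progressive enhancement, is not shown in the code above. A minimal client-side sketch, assuming the `/api/ai-answer-concurrent` endpoint accepts a `results` field as described earlier (helper names here are hypothetical):

```typescript
// Decide whether an enhanced AI answer request is worthwhile.
// Kept pure so the policy is easy to test in isolation.
function shouldEnhance(currentAnswer: string, results: unknown[] | null): boolean {
  // Re-request only when an initial answer was already shown and the
  // search actually produced results to ground the enhanced answer in.
  return currentAnswer.length > 0 && results !== null && results.length > 0;
}

// Client-side follow-up: once search results arrive, re-POST with full
// context so the AI answer can be upgraded in place.
async function enhanceAnswer(
  query: string,
  currentAnswer: string,
  results: unknown[] | null,
): Promise<string> {
  if (!shouldEnhance(currentAnswer, results)) return currentAnswer;
  const res = await fetch('/api/ai-answer-concurrent', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, results }),
  });
  if (!res.ok) return currentAnswer; // keep the initial answer on failure
  const data = await res.json();
  return data.aiAnswer ?? currentAnswer;
}
```

Calling `setAIAnswer(await enhanceAnswer(...))` from the search-success branch keeps the initial answer visible until the grounded one is ready.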
This approach mirrors Google's behavior where search results and AI answers appear independently, providing a much better user experience with faster perceived loading times.