
## Current Implementation Analysis

The current search implementation follows a sequential approach:

  1. **Search API call** (`/api/search`) - waits for completion
  2. **AI Answer API call** (`/api/ai-answer`) - starts only after the search completes

This happens because the AI answer endpoint requires search results as input to generate contextual responses.
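The cost of this dependency can be sketched in isolation. The mock helpers and timings below are illustrative stand-ins for the real `/api/search` and `/api/ai-answer` calls, not the actual implementation:

```typescript
// Minimal sketch of the current sequential flow. The mocks and 100ms
// latencies are illustrative; they stand in for the real API calls.
const delay = (ms: number) => new Promise<void>(res => setTimeout(res, ms));

async function mockSearch(query: string): Promise<string[]> {
  await delay(100); // simulated search latency
  return [`result for ${query}`];
}

async function mockAIAnswer(query: string, results: string[]): Promise<string> {
  await delay(100); // simulated AI-generation latency
  return `answer for ${query} using ${results.length} result(s)`;
}

// Sequential: the AI answer cannot start until search resolves, so total
// latency is roughly the SUM of both calls rather than the max.
async function sequentialSearch(query: string): Promise<string> {
  const results = await mockSearch(query); // wait for search first
  return mockAIAnswer(query, results);     // only then generate the answer
}
```

With concurrent execution the two 100ms calls would overlap, which is exactly the gap the options below try to close.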

## Solution Approach: Concurrent Search and AI Answer Generation

To achieve Google-like behavior where search results and AI answers load independently, here are several architectural approaches:

### Option 1: Modified Client-Side Concurrent Approach (Recommended)

**Implementation Strategy:**

  1. Start both API calls simultaneously
  2. Use `Promise.allSettled()` instead of sequential `await`
  3. Update UI components as each response completes

**Code Changes in `/app/page.tsx`:**

```typescript
const performSearch = useCallback(async (query: string) => {
  const searchStartTime = Date.now();

  // Reset states
  setSearchState('loading');
  setAIAnswerState('loading');
  setCurrentQuery(query);
  setError('');
  setAIError('');
  setSearchResults(null);
  setAIAnswer('');

  log.info('Starting concurrent search and AI answer generation', { query });

  // Helper: reject on non-2xx responses so Promise.allSettled surfaces
  // HTTP errors as rejections instead of fulfilled error payloads
  const postJSON = (url: string, body: object) =>
    fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(body),
    }).then(response => {
      if (!response.ok) throw new Error(`${url} failed: ${response.status}`);
      return response.json();
    });

  // Create both promises simultaneously
  const searchPromise = postJSON('/api/search', { query });

  // Start the AI answer immediately with the query only - the API is
  // modified to handle query-only input (see Option 3)
  const aiAnswerPromise = postJSON('/api/ai-answer-concurrent', { query });

  // Handle both promises concurrently
  Promise.allSettled([searchPromise, aiAnswerPromise]).then(([searchResult, aiResult]) => {
    // Handle search results
    if (searchResult.status === 'fulfilled' && searchResult.value) {
      setSearchResults(searchResult.value);
      setSearchState('success');
    } else {
      const message = searchResult.status === 'rejected' ? searchResult.reason?.message : undefined;
      setError(message || 'Search failed');
      setSearchState('error');
    }

    // Handle AI answer results
    if (aiResult.status === 'fulfilled' && aiResult.value) {
      setAIAnswer(aiResult.value.aiAnswer);
      setAIAnswerState('success');
    } else {
      const message = aiResult.status === 'rejected' ? aiResult.reason?.message : undefined;
      setAIError(message || 'AI answer failed');
      setAIAnswerState('error');
    }
  });
}, []);
```

### Option 2: Server-Side Concurrent Processing

Create a new API endpoint `/api/concurrent-search` that handles both operations internally:

```typescript
// /app/api/concurrent-search/route.ts
import { NextRequest } from 'next/server';

export async function POST(request: NextRequest) {
  const { query } = await request.json();

  // Start both operations simultaneously
  const searchPromise = performSearch(query);
  const aiAnswerPromise = generateAIAnswerFromQuery(query); // New function

  // Stream each result to the client as a server-sent event when it completes
  const encoder = new TextEncoder();

  const stream = new ReadableStream({
    start(controller) {
      const send = (payload: object) =>
        controller.enqueue(encoder.encode(`data: ${JSON.stringify(payload)}\n\n`));

      // Emit each result as soon as it arrives; report failures as events
      const search = searchPromise
        .then(data => send({ type: 'search', data }))
        .catch(error => send({ type: 'search-error', message: String(error) }));

      const answer = aiAnswerPromise
        .then(data => send({ type: 'ai-answer', data }))
        .catch(error => send({ type: 'ai-answer-error', message: String(error) }));

      // Close the stream only after BOTH operations have settled - closing in
      // the AI handler alone would drop the search event if search finished later
      Promise.allSettled([search, answer]).then(() => controller.close());
    }
  });

  return new Response(stream, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      'Connection': 'keep-alive',
    },
  });
}
```

### Option 3: Smart AI Answer Generation (Hybrid Approach)

Modify the AI answer API to work with partial or streaming search results:

  1. **Phase 1:** Generate an initial AI response based on the query alone
  2. **Phase 2:** Enhance the response when search results become available

```typescript
// New endpoint: /app/api/ai-answer-concurrent/route.ts
import { NextRequest, NextResponse } from 'next/server';

export async function POST(request: NextRequest) {
  const { query, results } = await request.json();

  if (!results || results.length === 0) {
    // Phase 1: generate an initial response based on the query only
    const aiAnswer = await generateInitialAIResponse(query);
    return NextResponse.json({ aiAnswer });
  } else {
    // Phase 2: generate an enhanced response with search results
    const aiAnswer = await generateComprehensiveAnswer(query, results);
    return NextResponse.json({ aiAnswer });
  }
}

async function generateInitialAIResponse(query: string): Promise<string> {
  const prompt = `Based on the query "${query}", provide a helpful initial response.
  Explain what this topic is about and what information would typically be relevant.
  Mention that more specific details will be provided once search results are analyzed.`;

  // Use the Cohere API to generate the initial response
  const response = await cohere.chat({
    model: 'command-a-03-2025',
    messages: [{ role: 'user', content: prompt }],
  });

  return response.message?.content?.[0]?.text || 'Searching for information...';
}
```

### Option 4: WebSocket or Server-Sent Events

Implement real-time updates using WebSockets or Server-Sent Events (SSE):

```typescript
// Client-side EventSource implementation
const performSearch = useCallback((query: string) => {
  const eventSource = new EventSource(`/api/stream-search?query=${encodeURIComponent(query)}`);
  let received = 0;

  eventSource.onmessage = (event) => {
    const data = JSON.parse(event.data);

    if (data.type === 'search') {
      setSearchResults(data.results);
      setSearchState('success');
      received++;
    } else if (data.type === 'ai-answer') {
      setAIAnswer(data.answer);
      setAIAnswerState('success');
      received++;
    }

    // Close the connection once both payloads have arrived,
    // otherwise the browser keeps the SSE connection open
    if (received === 2) eventSource.close();
  };

  eventSource.onerror = () => {
    eventSource.close();
    setSearchState('error');
    setAIAnswerState('error');
  };
}, []);
```
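Note that `EventSource` can only issue GET requests, so the `/api/stream-search` endpoint referenced above would need a GET handler rather than the POST handler shown in Option 2. A minimal sketch, using stub helpers in place of the real `performSearch` and `generateAIAnswerFromQuery` server functions:

```typescript
// Hypothetical GET handler for /api/stream-search. The two stubs below stand
// in for the real server-side helpers and exist only so the flow is concrete.
async function performSearch(query: string): Promise<string[]> {
  return [`result for ${query}`]; // stub
}
async function generateAIAnswerFromQuery(query: string): Promise<string> {
  return `answer for ${query}`; // stub
}

export async function GET(request: Request): Promise<Response> {
  const query = new URL(request.url).searchParams.get('query') ?? '';
  const encoder = new TextEncoder();

  const stream = new ReadableStream({
    start(controller) {
      const send = (payload: object) =>
        controller.enqueue(encoder.encode(`data: ${JSON.stringify(payload)}\n\n`));

      // Emit each payload as it resolves, then close once both have settled
      const search = performSearch(query)
        .then(results => send({ type: 'search', results }))
        .catch(() => send({ type: 'search-error' }));
      const answer = generateAIAnswerFromQuery(query)
        .then(a => send({ type: 'ai-answer', answer: a }))
        .catch(() => send({ type: 'ai-answer-error' }));

      Promise.allSettled([search, answer]).then(() => controller.close());
    },
  });

  return new Response(stream, {
    headers: { 'Content-Type': 'text/event-stream', 'Cache-Control': 'no-cache' },
  });
}
```

The event shapes (`type`, `results`, `answer`) match what the client-side `onmessage` handler above expects.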

## Recommended Implementation

**Option 1 (Modified Client-Side Concurrent)** is the most practical approach because:

  1. **Minimal Backend Changes:** Only requires creating a concurrent AI answer endpoint
  2. **Better User Experience:** Both sections load independently
  3. **Fallback Support:** Can gracefully handle failures in either system
  4. **Progressive Enhancement:** The AI answer can start with a basic response and be enhanced with search results

## Implementation Steps

  1. Create an `/api/ai-answer-concurrent` endpoint that can work with query-only input
  2. Modify the client-side `performSearch` to use `Promise.allSettled()`
  3. Update the UI to handle independent loading states
  4. Add error handling for partial failures
  5. Implement progressive enhancement so the AI answer improves with search context
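Step 5 can be sketched as a two-phase client helper. The `postJSON` parameter and the request shape are illustrative assumptions; in the real app it would wrap a `fetch` to the `/api/ai-answer-concurrent` endpoint from Option 3:

```typescript
// Sketch of progressive enhancement: show a quick query-only answer first,
// then replace it with an enhanced answer once search results arrive.
// postJSON is a hypothetical stand-in for fetch; error handling is elided.
type AIRequest = { query: string; results?: string[] };

async function progressiveAnswer(
  query: string,
  searchPromise: Promise<string[]>,
  postJSON: (body: AIRequest) => Promise<string>,
  onAnswer: (answer: string) => void,
): Promise<void> {
  // Phase 1: initial answer from the query alone (no search context yet)
  onAnswer(await postJSON({ query }));

  // Phase 2: once search results arrive, request an enhanced answer
  const results = await searchPromise;
  onAnswer(await postJSON({ query, results }));
}
```

In the UI, `onAnswer` would simply call `setAIAnswer`, so the enhanced answer overwrites the initial one in place.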

This approach mirrors Google's behavior where search results and AI answers appear independently, providing a much better user experience with faster perceived loading times.