Judge Magnus is an AI-powered startup idea validation platform that acts as a ruthless VC/Shark Tank investor. It uses a multi-agent LangGraph workflow to stress-test business ideas through automated due diligence, market analysis, competitor intelligence, and voice interrogation before generating a refined pitch deck.
Judge Magnus helps entrepreneurs and founders validate their startup ideas by subjecting them to rigorous automated analysis across multiple dimensions:
- Solution Architecture: Generates detailed technical blueprints from problem statements
- Market Intelligence: Analyzes competitors and market positioning
- Economic Validation: Evaluates business viability and unit economics
- Voice Interrogation: Simulates investor grilling sessions via AI voice calls
- Pivot Engine: Generates refined, stress-tested solutions
- Pitch Deck Generation: Creates investor-ready pitch materials
The system uses a sequential workflow where each agent builds upon the previous one's analysis, culminating in a comprehensive business validation report.
Judge Magnus consists of two main components: a Python backend and a React frontend.

Backend:
- Framework: FastAPI with async support
- Orchestration: LangGraph for multi-agent workflows
- LLM Integration: Supports Ollama (llama3), OpenAI, and Google Gemini
- APIs Used:
  - Tavily Search API: Market research and competitor analysis
  - Bland AI: Voice interrogation calls with founders
  - World Bank API: Economic and market data
- Web Scraping: BeautifulSoup for competitor website analysis
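As an illustration of the scraping step, a minimal BeautifulSoup sketch (not the project's actual code; the function and sample HTML below are invented for the example) for pulling positioning signals out of an already-fetched competitor page might look like:

```python
from bs4 import BeautifulSoup

# Illustrative only: parse a competitor landing page that has already been
# fetched (e.g. with requests) and pull out headline/description text.
def extract_competitor_signals(html: str) -> dict:
    soup = BeautifulSoup(html, "html.parser")
    title = soup.title.get_text(strip=True) if soup.title else ""
    # The meta description often summarizes a product's positioning
    meta = soup.find("meta", attrs={"name": "description"})
    description = meta["content"].strip() if meta and meta.get("content") else ""
    headings = [h.get_text(strip=True) for h in soup.find_all(["h1", "h2"])]
    return {"title": title, "description": description, "headings": headings}

sample = """
<html><head><title>Acme Inventory</title>
<meta name="description" content="AI demand forecasting for restaurants.">
</head><body><h1>Cut food waste</h1><h2>Pricing</h2></body></html>
"""
signals = extract_competitor_signals(sample)
```

The agent would then feed text like this into the LLM for differentiation analysis.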
Frontend:
- Framework: React 19 with TypeScript
- Build Tool: Vite
- Styling: TailwindCSS
- Animations: Framer Motion
- HTTP Client: Axios
```
┌─────────────────────────┐
│    Problem Statement    │
└───────────┬─────────────┘
            │
            ▼
┌─────────────────────────┐
│   Solution Architect    │ ← Generates technical blueprint
└───────────┬─────────────┘
            │
            ▼
┌─────────────────────────┐
│     Competitor Spy      │ ← Market intelligence & competitor analysis
└───────────┬─────────────┘
            │
            ▼
┌─────────────────────────┐
│       Inquisitor        │ ← Economic viability check
└───────────┬─────────────┘
            │
            ▼
┌─────────────────────────┐
│    Consensus Engine     │ ← Multi-agent synthesis
└───────────┬─────────────┘
            │
            ▼
┌─────────────────────────┐
│   Voice Interrogation   │ ← AI phone call stress test
└───────────┬─────────────┘
            │
            ▼
┌─────────────────────────┐
│      Pivot Engine       │ ← Solution refinement
└───────────┬─────────────┘
            │
            ▼
┌─────────────────────────┐
│  Pitch Deck Generator   │ ← Final deliverable
└─────────────────────────┘
```
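The project wires these stages as LangGraph nodes, but the sequential hand-off in the diagram can be sketched as a plain Python pipeline where each stage reads and extends a shared state dict. The function names and placeholder outputs below are illustrative, not the project's real node implementations:

```python
# Illustrative sketch of the sequential agent hand-off. Each stage receives
# the shared state and returns an extended copy, mirroring the diagram above.
# (Voice interrogation is omitted here since it is optional.)

def solution_architect(state: dict) -> dict:
    return {**state, "blueprint": f"blueprint for: {state['problem_statement']}"}

def competitor_spy(state: dict) -> dict:
    return {**state, "competitors": ["(market research results)"]}

def inquisitor(state: dict) -> dict:
    return {**state, "economics": "(viability analysis)"}

def consensus_engine(state: dict) -> dict:
    return {**state, "verdict": "(synthesized verdict)"}

def pivot_engine(state: dict) -> dict:
    return {**state, "pivot": "(refined solution)"}

def pitch_deck_generator(state: dict) -> dict:
    return {**state, "pitch_deck": "(final deliverable)"}

PIPELINE = [solution_architect, competitor_spy, inquisitor,
            consensus_engine, pivot_engine, pitch_deck_generator]

def run_pipeline(problem_statement: str) -> dict:
    state = {"problem_statement": problem_statement}
    for stage in PIPELINE:
        state = stage(state)  # each agent builds on the previous one's output
    return state
```

LangGraph adds persistence, streaming of intermediate state, and conditional edges on top of this basic pattern, which is what lets the frontend show per-phase progress.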
- Python: 3.8 or higher
- Node.js: 18.x or higher
- npm or yarn: Latest stable version
- Ollama (Recommended): For local LLM inference
  - Install from ollama.ai
  - Pull the llama3 model: `ollama pull llama3`
- Tavily API Key (Required)
  - Sign up at tavily.com
  - Used for market research and competitor analysis
- Bland AI API Key (Optional)
  - Sign up at bland.ai
  - Required only for voice interrogation features
  - The system works without it, skipping the voice call phase
- Alternative LLM Providers (Optional)
  - OpenAI API key for GPT models
  - Google API key for Gemini models
  - Configure in the backend code if not using Ollama
```bash
git clone https://github.com/saaj376/Judge-Magnus.git
cd Judge-Magnus
cd backend

# Create and activate virtual environment (recommended)
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r ../requirements.txt

# Create .env file
cat > .env << EOF
TAVILY_API_KEY=your_tavily_api_key_here
BLANDAI_API_KEY=your_bland_api_key_here
EOF
```

Important environment variables:
- `TAVILY_API_KEY`: Required for market research
- `BLANDAI_API_KEY`: Optional, enables voice features
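A sketch of how such keys are typically read at startup, with the optional Bland AI key degrading gracefully (illustrative pattern only; the project's actual loading code may differ):

```python
import os

def load_config() -> dict:
    """Read API keys from the environment; Tavily is required, Bland AI optional."""
    tavily_key = os.getenv("TAVILY_API_KEY")
    if not tavily_key:
        # Market research cannot run without Tavily, so fail fast
        raise RuntimeError("TAVILY_API_KEY not set")
    # Voice interrogation is simply skipped when the Bland AI key is absent
    bland_key = os.getenv("BLANDAI_API_KEY")
    return {
        "tavily_api_key": tavily_key,
        "blandai_api_key": bland_key,
        "voice_enabled": bland_key is not None,
    }
```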
```bash
cd ../frontend

# Install dependencies
npm install

# The frontend expects the backend at http://localhost:8000
# Update the API URL in src/services/api.ts if different
```

Make sure Ollama is installed and running:

```bash
ollama --version

# Pull the llama3 model
ollama pull llama3

# Ollama should be running in the background
# Default endpoint: http://localhost:11434
```

Start the backend:

```bash
cd backend
source venv/bin/activate  # If not already activated

# Option 1: Using Python directly
python main.py

# Option 2: Using Uvicorn
uvicorn main:app --reload --host 0.0.0.0 --port 8000
```

The backend will be available at http://localhost:8000.
Start the frontend:

```bash
cd frontend

# Development mode with hot reload
npm run dev
```

The frontend will be available at http://localhost:5173.

For a production build:

```bash
cd frontend

# Build optimized production bundle
npm run build

# Preview production build
npm run preview
```
1. Open the Application: Navigate to http://localhost:5173 in your browser.
2. Enter Problem Statement: In the "Concept Input" section, describe your startup idea. Example: "Small restaurants struggle with food waste management. We're building an AI-powered inventory system that predicts demand and optimizes ordering to reduce waste by 30%."
3. Initialize Analysis: Click the "Initialize Analysis" button.
4. Monitor Progress: Watch as each agent processes your idea:
   - Solution Architecture (blueprint generation)
   - Market Intelligence (competitor research)
   - Market Validation (economic analysis)
   - Verdict Consensus (synthesis)
   - Stress Test (voice interrogation, if configured)
   - Pivot Strategy (solution refinement)
   - Pitch Packet (final deliverable)
5. Review Results: Once complete, view:
   - Detailed blueprint with features and architecture
   - Competitor analysis and differentiation strategy
   - Economic projections (CAC, LTV, churn rates)
   - Pivot recommendations
   - Investor-ready pitch deck
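For reference, the unit-economics figures mentioned above (CAC, LTV, churn) relate in a standard way. A back-of-envelope sketch using the textbook definitions, not the project's exact model:

```python
def unit_economics(monthly_revenue_per_user: float,
                   gross_margin: float,
                   monthly_churn: float,
                   cac: float) -> dict:
    """Textbook unit economics: LTV = ARPU * margin / churn; healthy LTV/CAC >= 3."""
    avg_lifetime_months = 1 / monthly_churn  # expected customer lifetime
    ltv = monthly_revenue_per_user * gross_margin * avg_lifetime_months
    return {
        "ltv": ltv,
        "ltv_to_cac": ltv / cac,
        "cac_payback_months": cac / (monthly_revenue_per_user * gross_margin),
    }

# Example: $50/month ARPU, 80% gross margin, 5% monthly churn, $200 CAC
# gives LTV ≈ $800, LTV/CAC ≈ 4, CAC payback ≈ 5 months
metrics = unit_economics(50.0, 0.8, 0.05, 200.0)
```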
The backend can be configured by modifying these files:

`backend/main.py`:
- CORS settings
- Server host/port
- Session management

`backend/app/graph.py`:
- Agent workflow sequence
- Node connections
- Entry points
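For reference, CORS settings in FastAPI are typically adjusted via `CORSMiddleware`. A generic sketch of the pattern (not necessarily the project's exact origins or options):

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Allow the Vite dev server origin; tighten allow_origins for production
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:5173"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
```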
Agent-Specific Settings (`backend/app/agents/*.py`):

```python
# Modify LLM settings in each agent file
def get_llm():
    return ChatOllama(
        model="llama3",    # Change model
        temperature=0.2,   # Adjust creativity (0.0-1.0)
        format="json",     # Output format
        keep_alive="3m",   # Keep model loaded in memory
        num_ctx=4096,      # Context window size (adjust based on model)
        num_gpu=99,        # GPU layers (99 = offload all layers to GPU)
    )
```

API Endpoint (`frontend/src/services/api.ts`):

```typescript
const API_BASE_URL = "http://localhost:8000";
```

Tailwind/Styling (`frontend/tailwind.config.js`):
- Customize colors, fonts, and themes
- Start a new analysis session
  - Request body: `{ "problem_statement": "Your startup idea description" }`
  - Response: `{ "session_id": "uuid-string" }`
- Get current processing status and partial results
  - Response: Full state object including current phase
- Get final analysis results
  - Response: Complete ShredderState with all analysis data
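A minimal client for this start-then-poll flow might look like the sketch below. Note that the route names (`/analyze`, `/status/<id>`, `/results/<id>`) and the `current_phase` final-phase marker are illustrative placeholders; check `backend/main.py` for the actual paths and state fields.

```python
import json
import time
import urllib.request

BASE_URL = "http://localhost:8000"

def build_start_request(problem_statement: str) -> urllib.request.Request:
    """Build the POST request that starts a new analysis session.
    NOTE: "/analyze" is a placeholder route name, not confirmed from the code."""
    body = json.dumps({"problem_statement": problem_statement}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/analyze",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def poll_until_done(session_id: str, interval_s: float = 2.0,
                    timeout_s: float = 600.0) -> dict:
    """Poll the status endpoint until the final phase, then fetch results.
    Route names and the "pitch_packet" phase marker are assumptions."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        with urllib.request.urlopen(f"{BASE_URL}/status/{session_id}") as resp:
            state = json.load(resp)
        if state.get("current_phase") == "pitch_packet":  # assumed final phase
            break
        time.sleep(interval_s)
    with urllib.request.urlopen(f"{BASE_URL}/results/{session_id}") as resp:
        return json.load(resp)
```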
```bash
cd backend

# Run all tests
pytest

# Run specific test file
pytest test_graph.py

# Run with coverage
pytest --cov=app
```

```bash
cd frontend

# Lint code
npm run lint
```
- "TAVILY_API_KEY not set" Error
  - Ensure the `.env` file exists in the `backend/` directory
  - Verify the API key is valid
  - Restart the backend server after adding the key
- "Model not found" Error with Ollama

  ```bash
  # Pull the required model
  ollama pull llama3

  # Verify the model is available
  ollama list
  ```

- Frontend Can't Connect to Backend
  - Verify the backend is running on port 8000
  - Check CORS settings in `backend/main.py`
  - Ensure `API_BASE_URL` in the frontend matches the backend URL
- Slow Response Times
  - LLM inference can be slow on CPU
  - Consider using GPU-accelerated Ollama
  - Increase timeout values if needed
  - Use the OpenAI/Gemini APIs for faster responses
- Voice Interrogation Fails Silently
  - This is expected if `BLANDAI_API_KEY` is not set; the system continues without voice analysis
  - Check the logs for "[Voice Agent] BLANDAI_API_KEY not set"
- Module Import Errors
  - Ensure the virtual environment is activated
  - Run `pip install -r requirements.txt` again
  - Check Python version compatibility
Backend:
- FastAPI: Web framework
- LangGraph: Agent orchestration
- LangChain: LLM integration
- Ollama/OpenAI/Gemini: LLM providers
- Tavily: Web search API
- Bland AI: Voice call API
- BeautifulSoup4: Web scraping
- Pydantic: Data validation
- Uvicorn: ASGI server
Frontend:
- React: UI framework
- TypeScript: Type safety
- Vite: Build tool
- TailwindCSS: Styling
- Framer Motion: Animations
- Axios: HTTP client
- Lucide React: Icons
Contributions are welcome! Please follow these guidelines:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
- Follow PEP 8 for Python code
- Use TypeScript best practices for frontend
- Add tests for new features
- Update documentation as needed
- Run linters before committing
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
- LangChain/LangGraph: For the multi-agent orchestration framework
- Tavily: For powerful web search capabilities
- Bland AI: For voice AI integration
- Ollama: For local LLM inference
- FastAPI: For the robust backend framework
- React Team: For the excellent frontend framework
- Issues: GitHub Issues
- Repository: github.com/saaj376/Judge-Magnus
- YouTube: https://www.youtube.com/watch?v=8bY20eWTlmU
- Support for multiple LLM providers (UI selector)
- Persistent storage with database integration
- User authentication and session management
- Export results to PDF/PowerPoint
- Real-time collaborative features
- Integration with pitch deck design tools
- Historical analysis tracking and comparison
- Custom agent configuration UI
- Webhook support for external integrations