Fed up of losing hackathons? At least get roasted as many times as possible, in the most brutal way, to refine your solution in the best possible way.

Judge Magnus


Judge Magnus is an AI-powered startup idea validation platform that acts as a ruthless VC/Shark Tank investor. It uses a multi-agent LangGraph workflow to stress-test business ideas through automated due diligence, market analysis, competitor intelligence, and voice interrogation before generating a refined pitch deck.


What is Judge Magnus?

Judge Magnus helps entrepreneurs and founders validate their startup ideas by subjecting them to rigorous automated analysis across multiple dimensions:

  • Solution Architecture: Generates detailed technical blueprints from problem statements
  • Market Intelligence: Analyzes competitors and market positioning
  • Economic Validation: Evaluates business viability and unit economics
  • Voice Interrogation: Simulates investor grilling sessions via AI voice calls
  • Pivot Engine: Generates refined, stress-tested solutions
  • Pitch Deck Generation: Creates investor-ready pitch materials

The system uses a sequential workflow where each agent builds upon the previous one's analysis, culminating in a comprehensive business validation report.


Architecture

Judge Magnus consists of two main components:

Backend (Python/FastAPI)

  • Framework: FastAPI with async support
  • Orchestration: LangGraph for multi-agent workflows
  • LLM Integration: Supports Ollama (llama3), OpenAI, and Google Gemini
  • APIs Used:
    • Tavily Search API: Market research and competitor analysis
    • Bland AI: Voice interrogation calls with founders
    • World Bank API: Economic and market data
    • Web Scraping: BeautifulSoup for competitor website analysis

Frontend (React/TypeScript)

  • Framework: React 19 with TypeScript
  • Build Tool: Vite
  • Styling: TailwindCSS
  • Animations: Framer Motion
  • HTTP Client: Axios

Agent Workflow

┌─────────────────────────┐
│  Problem Statement      │
└───────────┬─────────────┘
            │
            ▼
┌─────────────────────────┐
│  Solution Architect     │ ← Generates technical blueprint
└───────────┬─────────────┘
            │
            ▼
┌─────────────────────────┐
│  Competitor Spy         │ ← Market intelligence & competitor analysis
└───────────┬─────────────┘
            │
            ▼
┌─────────────────────────┐
│  Inquisitor             │ ← Economic viability check
└───────────┬─────────────┘
            │
            ▼
┌─────────────────────────┐
│  Consensus Engine       │ ← Multi-agent synthesis
└───────────┬─────────────┘
            │
            ▼
┌─────────────────────────┐
│  Voice Interrogation    │ ← AI phone call stress test
└───────────┬─────────────┘
            │
            ▼
┌─────────────────────────┐
│  Pivot Engine           │ ← Solution refinement
└───────────┬─────────────┘
            │
            ▼
┌─────────────────────────┐
│  Pitch Deck Generator   │ ← Final deliverable
└─────────────────────────┘
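The strictly sequential flow in the diagram can be sketched as a pipeline of stages passing a shared state dict. This is a plain-Python illustration of what the LangGraph graph in backend/app/graph.py wires together, not the real implementation; the stage bodies and output keys here are placeholders:

```python
# Sketch of the sequential agent pipeline: each stage reads the shared
# state and adds its own output before handing it to the next stage.
def solution_architect(state):
    state["blueprint"] = f"blueprint for: {state['problem_statement']}"
    return state

def competitor_spy(state):
    state["competitors"] = ["example-competitor"]
    return state

def pitch_deck_generator(state):
    state["pitch_deck"] = "deck built from " + state["blueprint"]
    return state

# In the real project these are LangGraph nodes joined by edges; a plain
# ordered list captures the same strictly sequential topology.
PIPELINE = [solution_architect, competitor_spy, pitch_deck_generator]

def run_pipeline(problem_statement: str) -> dict:
    state = {"problem_statement": problem_statement}
    for stage in PIPELINE:
        state = stage(state)
    return state
```

Because every stage receives the accumulated state, later agents (the Pivot Engine, the Pitch Deck Generator) can draw on everything produced upstream.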

Prerequisites

Required Software

  • Python: 3.8 or higher
  • Node.js: 18.x or higher
  • npm or yarn: Latest stable version
  • Ollama (Recommended): For local LLM inference
    • Install from ollama.ai
    • Pull llama3 model: ollama pull llama3

Required API Keys

  1. Tavily API Key (Required)

    • Sign up at tavily.com
    • Used for market research and competitor analysis
  2. Bland AI API Key (Optional)

    • Sign up at bland.ai
    • Required only for voice interrogation features
    • System works without it, skipping the voice call phase
  3. Alternative LLM Providers (Optional)

    • OpenAI API Key for GPT models
    • Google API Key for Gemini models
    • Configure in backend code if not using Ollama
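One way the provider choice could be driven by which keys are configured; this is a hypothetical sketch (function name and precedence order are assumptions, and the real langchain constructor calls, which need the corresponding third-party packages, are shown only as comments):

```python
import os

def pick_llm_provider() -> str:
    """Hypothetical: pick an LLM provider based on available API keys."""
    if os.environ.get("OPENAI_API_KEY"):
        # from langchain_openai import ChatOpenAI
        # return ChatOpenAI(model="gpt-4o", temperature=0.2)
        return "openai"
    if os.environ.get("GOOGLE_API_KEY"):
        # from langchain_google_genai import ChatGoogleGenerativeAI
        return "gemini"
    # Default: local Ollama, which needs no API key
    return "ollama"
```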

Installation & Setup

1. Clone the Repository

git clone https://github.com/saaj376/Judge-Magnus.git
cd Judge-Magnus

2. Backend Setup

cd backend

# Create and activate virtual environment (recommended)
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r ../requirements.txt

# Create .env file
cat > .env << EOF
TAVILY_API_KEY=your_tavily_api_key_here
BLANDAI_API_KEY=your_bland_api_key_here  # Optional
EOF

Important Environment Variables:

  • TAVILY_API_KEY: Required for market research
  • BLANDAI_API_KEY: Optional for voice features
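A minimal startup check for these variables might look like the following stdlib-only sketch (the backend's actual validation may differ; the message strings here are illustrative):

```python
def check_env(env: dict) -> list:
    """Return a list of warnings for missing API keys (illustrative sketch)."""
    problems = []
    if not env.get("TAVILY_API_KEY"):
        # Hard requirement: market research cannot run without it
        problems.append("TAVILY_API_KEY not set (required for market research)")
    if not env.get("BLANDAI_API_KEY"):
        # Soft requirement: the voice phase is simply skipped
        problems.append("BLANDAI_API_KEY not set (voice interrogation will be skipped)")
    return problems
```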

3. Frontend Setup

cd ../frontend

# Install dependencies
npm install

# The frontend expects backend at http://localhost:8000
# Update API URL in src/services/api.ts if different

4. Start Ollama (If Using Local LLM)

# Make sure Ollama is installed
ollama --version

# Pull the llama3 model
ollama pull llama3

# Ollama should be running in the background
# Default endpoint: http://localhost:11434

Running the Application

Start Backend Server

cd backend
source venv/bin/activate  # If not already activated

# Option 1: Using Python directly
python main.py

# Option 2: Using Uvicorn
uvicorn main:app --reload --host 0.0.0.0 --port 8000

Backend will be available at: http://localhost:8000

Start Frontend Development Server

cd frontend

# Development mode with hot reload
npm run dev

Frontend will be available at: http://localhost:5173

Build Frontend for Production

cd frontend

# Build optimized production bundle
npm run build

# Preview production build
npm run preview

Usage

  1. Open the Application: Navigate to http://localhost:5173 in your browser

  2. Enter Problem Statement: In the "Concept Input" section, describe your startup idea:

    Example: "Small restaurants struggle with food waste management. 
    We're building an AI-powered inventory system that predicts demand 
    and optimizes ordering to reduce waste by 30%."
    
  3. Initialize Analysis: Click "Initialize Analysis" button

  4. Monitor Progress: Watch as each agent processes your idea:

    • Solution Architecture (Blueprint generation)
    • Market Intelligence (Competitor research)
    • Market Validation (Economic analysis)
    • Verdict Consensus (Synthesis)
    • Stress Test (Voice interrogation - if configured)
    • Pivot Strategy (Solution refinement)
    • Pitch Packet (Final deliverable)
  5. Review Results: Once complete, view:

    • Detailed blueprint with features and architecture
    • Competitor analysis and differentiation strategy
    • Economic projections (CAC, LTV, churn rates)
    • Pivot recommendations
    • Investor-ready pitch deck

Configuration

Backend Configuration

The backend can be configured by modifying these files:

backend/main.py:

  • CORS settings
  • Server host/port
  • Session management

backend/app/graph.py:

  • Agent workflow sequence
  • Node connections
  • Entry points

Agent-Specific Settings (backend/app/agents/*.py):

# Modify LLM settings in each agent file
def get_llm():
    return ChatOllama(
        model="llama3",           # Change model
        temperature=0.2,          # Adjust creativity (0.0-1.0)
        format="json",            # Output format
        keep_alive="3m",          # Keep model loaded in memory
        num_ctx=4096,             # Context window size (adjust based on model)
        num_gpu=99,               # GPU layers (99 = offload all layers to GPU)
    )

Frontend Configuration

API Endpoint (frontend/src/services/api.ts):

const API_BASE_URL = "http://localhost:8000";

Tailwind/Styling (frontend/tailwind.config.js):

  • Customize colors, fonts, and themes

API Endpoints

Backend REST API

POST /shred

Start a new analysis session

  • Request Body:
    {
      "problem_statement": "Your startup idea description"
    }
  • Response:
    {
      "session_id": "uuid-string"
    }

GET /status/{session_id}

Get current processing status and partial results

  • Response: Full state object including current phase

GET /results/{session_id}

Get final analysis results

  • Response: Complete ShredderState with all analysis data
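A typical client starts a session with POST /shred, then polls GET /status/{session_id} until the analysis finishes. A hedged sketch of the polling loop, with the HTTP call injected as a callable so the logic stands alone (the `phase` field and `"complete"` value are assumptions beyond what is documented above):

```python
import time

def poll_results(fetch_status, session_id, poll_interval=2.0, max_polls=300,
                 sleep=time.sleep):
    """Poll the status endpoint until the state reports completion.

    `fetch_status` is any callable returning the parsed JSON state dict;
    a real client would wrap requests/urllib around GET /status/{id}.
    """
    for _ in range(max_polls):
        state = fetch_status(session_id)
        if state.get("phase") == "complete":
            return state
        sleep(poll_interval)
    raise TimeoutError(f"session {session_id} did not finish in time")
```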

Testing

Backend Tests

cd backend

# Run all tests
pytest

# Run specific test file
pytest test_graph.py

# Run with coverage
pytest --cov=app

Frontend Tests

cd frontend

# Lint code
npm run lint

Troubleshooting

Common Issues

  1. "TAVILY_API_KEY not set" Error

    • Ensure .env file exists in backend/ directory
    • Verify API key is valid
    • Restart backend server after adding key
  2. "Model not found" Error with Ollama

    # Pull the required model
    ollama pull llama3
    
    # Verify model is available
    ollama list
  3. Frontend Can't Connect to Backend

    • Verify backend is running on port 8000
    • Check CORS settings in backend/main.py
    • Ensure API_BASE_URL in frontend matches backend URL
  4. Slow Response Times

    • LLM inference can be slow on CPU
    • Consider using GPU-accelerated Ollama
    • Increase timeout values if needed
    • Use OpenAI/Gemini APIs for faster responses
  5. Voice Interrogation Fails Silently

    • This is expected if BLANDAI_API_KEY is not set
    • The system continues without voice analysis
    • Check logs for "[Voice Agent] BLANDAI_API_KEY not set"
  6. Module Import Errors

    • Ensure virtual environment is activated
    • Run pip install -r requirements.txt again
    • Check Python version compatibility

Dependencies

Backend Key Dependencies

  • FastAPI: Web framework
  • LangGraph: Agent orchestration
  • LangChain: LLM integration
  • Ollama/OpenAI/Gemini: LLM providers
  • Tavily: Web search API
  • Bland AI: Voice call API
  • BeautifulSoup4: Web scraping
  • Pydantic: Data validation
  • Uvicorn: ASGI server

Frontend Key Dependencies

  • React: UI framework
  • TypeScript: Type safety
  • Vite: Build tool
  • TailwindCSS: Styling
  • Framer Motion: Animations
  • Axios: HTTP client
  • Lucide React: Icons

Contributing

Contributions are welcome! Please follow these guidelines:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

Development Guidelines

  • Follow PEP 8 for Python code
  • Use TypeScript best practices for frontend
  • Add tests for new features
  • Update documentation as needed
  • Run linters before committing

License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.


Acknowledgments

  • LangChain/LangGraph: For the multi-agent orchestration framework
  • Tavily: For powerful web search capabilities
  • Bland AI: For voice AI integration
  • Ollama: For local LLM inference
  • FastAPI: For the robust backend framework
  • React Team: For the excellent frontend framework

Contact & Support


Watch Magnus roast me


Future Enhancements

  • Support for multiple LLM providers (UI selector)
  • Persistent storage with database integration
  • User authentication and session management
  • Export results to PDF/PowerPoint
  • Real-time collaborative features
  • Integration with pitch deck design tools
  • Historical analysis tracking and comparison
  • Custom agent configuration UI
  • Webhook support for external integrations
