MCPify is a powerful tool that automatically detects APIs in existing projects and transforms them into Model Context Protocol (MCP) servers. This enables seamless integration of your existing command-line tools, web APIs, and applications with AI assistants and other MCP-compatible clients.
- Intelligent API Detection: Multiple advanced detection strategies
- OpenAI Detection: Use GPT-4 for intelligent API analysis and tool extraction
- Camel-AI Detection: Leverage Camel-AI's ChatAgent framework for comprehensive analysis
- AST Detection: Static code analysis using Abstract Syntax Trees
- Auto-Selection: Automatically choose the best available detection strategy
- Multiple Project Types: Support for various project architectures
- CLI Tools: Detect argparse, click, typer-based command-line interfaces
- Web APIs: Support for Flask, Django, and FastAPI applications with route detection
- Interactive Commands: Identify command-based interactive applications
- Python Modules: Extract callable functions and methods
- Flexible MCP Server: Multiple ways to start and control MCP servers
- Multiple Backend Support: Works with command-line tools, HTTP APIs, Python modules, and more
- Configuration Validation: Built-in validation system to ensure correct configurations
- Parameter Detection: Automatically extract route parameters, query parameters, and CLI arguments
- Zero Code Changes: Transform existing projects without modifying their source code
- Professional Architecture: Clean separation between detection, configuration, and server execution
MCPify now includes a powerful Streamlit-based web interface that makes repository analysis and MCP server configuration generation intuitive and interactive!
# Install UI dependencies
pip install 'mcpify[ui]'
# Start the interactive web interface
python -m mcpify.ui
# Or use the convenience function
python -c "from mcpify.ui import start_ui; start_ui()"Then navigate to http://localhost:8501 in your browser.
- GitIngest-style Interface: Clean, intuitive repository input with drag-and-drop support
- Smart Examples: Pre-configured example repositories to try instantly
- Advanced Options: Configurable exclude patterns, file size limits, and detection strategies
- Real-time Progress: Visual progress indicators for each analysis phase
- Multiple Input Types: Support for GitHub URLs, local directories, and Git repositories
- Conversational API Discovery: Describe what you need in natural language
- Smart Recommendations: AI suggests relevant APIs and tools based on your requirements
- Interactive Configuration: Build MCP configurations through guided conversations
- Context-Aware Suggestions: Leverages repository analysis for targeted recommendations
The UI provides a 5-phase intelligent workflow:
- Input Phase: Repository selection with examples and advanced options
- Analysis Phase: GitIngest processing with real-time progress tracking
- Chat Phase: AI-powered conversation to understand your needs
- Confirmation Phase: Review and confirm detected APIs and tools
- Complete Phase: Download configurations and get deployment instructions
- Session Management: Save and restore analysis sessions
- Configuration Validation: Real-time validation with detailed error reporting
- Export Options: Download configurations in multiple formats
- Server Testing: Built-in MCP server testing and validation
- History Tracking: Keep track of all your analysis sessions
┌──────────────────────────────────────────────────────────
│ MCPify
│ Turn repositories into MCP servers
│
│ Repository Input
│ [ https://github.com/user/repo ]              [ Analyze ]
│
│ Advanced Options
│ • Exclude patterns: *.md, __pycache__/, *.pyc
│ • Max file size: 50 KB
│ • Detection strategy: auto
│
│ Try these examples:
│ [FastAPI Todo] [Flask Example] [CLI Tool] [API Client]
└──────────────────────────────────────────────────────────
┌──────────────────────────────────────────────────────────
│ Analysis Progress
│ ████████████████████░░░░░  80%
│
│ Validating Configuration
│ Checking configuration validity...
│
│ GitIngest ✓   Detect APIs ✓   Validate …   Complete
└──────────────────────────────────────────────────────────
┌──────────────────────────────────────────────────────────
│ Analysis Complete
│
│ Repository: my-fastapi-app        Files: 45
│ Language: Python                  Framework: FastAPI
│ Time: 12.3s                       Analyzed: 32
│
│ Summary | Configuration | Validation | Code
│
│ Generated 8 API tools with FastAPI backend
│ [ Download Configuration ]
└──────────────────────────────────────────────────────────
# Start the UI
python -m mcpify.ui
# In the browser:
# 1. Enter: https://github.com/tiangolo/fastapi
# 2. Click "Analyze"
# 3. Wait for analysis completion
# 4. Download the generated configuration

# Start UI with custom settings
python -m mcpify.ui
# Configure advanced options:
# • Exclude patterns: "*.md, tests/, docs/"
# • Max file size: 100 KB
# • Detection strategy: openai
# • Include private repos: Yes

The UI can be customized through environment variables:
# Custom port
export STREAMLIT_SERVER_PORT=8502
python -m mcpify.ui
# Custom host
export STREAMLIT_SERVER_ADDRESS=0.0.0.0
python -m mcpify.ui
# Enable debug mode
export STREAMLIT_LOGGER_LEVEL=debug
python -m mcpify.ui
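The same settings can be applied from Python before launching the UI. A minimal sketch, using only the documented environment variables and module entry point:

```python
# Minimal sketch: launch the MCPify UI from Python with custom Streamlit
# settings, using the documented environment variables and module entry point.
import os
import subprocess
import sys

os.environ.setdefault("STREAMLIT_SERVER_PORT", "8502")        # custom port
os.environ.setdefault("STREAMLIT_SERVER_ADDRESS", "0.0.0.0")  # custom host

subprocess.run([sys.executable, "-m", "mcpify.ui"], check=True)
```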
The UI module is organized as follows:

mcpify/ui/
├── __init__.py              # UI module exports
├── main.py                  # UI entry point
├── app.py                   # Main Streamlit application
├── models.py                # Data models for UI
├── session_manager.py       # Session and history management
├── components/              # Reusable UI components
│   ├── __init__.py
│   ├── chat_interface.py    # AI chat components
│   ├── sidebar.py           # Navigation sidebar
│   └── detection_results.py # Results display
└── pages/                   # Individual page implementations
    ├── __init__.py
    └── repository_analyzer.py # Main analyzer page
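The components and pages are ordinary Streamlit code. As a rough illustration of the style (not the actual mcpify implementation), a minimal repository-input form might look like this:

```python
# Illustrative sketch of a repository-input form built with standard
# Streamlit widgets (not the actual mcpify UI code).
import streamlit as st

st.title("MCPify")
st.caption("Turn repositories into MCP servers")

repo_url = st.text_input("Repository Input", placeholder="https://github.com/user/repo")

with st.expander("Advanced Options"):
    exclude = st.text_input("Exclude patterns", value="*.md, __pycache__/, *.pyc")
    max_kb = st.number_input("Max file size (KB)", value=50, min_value=1)
    strategy = st.selectbox("Detection strategy", ["auto", "openai", "camel", "ast"])

if st.button("Analyze") and repo_url:
    st.info(f"Analyzing {repo_url} with strategy '{strategy}' ...")
```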
Want to contribute to the UI? Here's how to get started:
# Install UI development dependencies
pip install 'mcpify[ui,dev]'
# Run the UI in development mode
streamlit run mcpify/ui/app.py --server.runOnSave true
# Run UI tests
python -m pytest tests/test_ui_*.py -v

Install from PyPI:

pip install mcpify

Or install from source:

git clone https://github.com/your-username/mcpify.git
cd mcpify
pip install -e .

For enhanced detection capabilities:
# For OpenAI-powered detection
pip install openai
export OPENAI_API_KEY="your-api-key"
# For Camel-AI powered detection
pip install camel-ai

Project structure:

mcpify/
├── mcpify/                    # Core package
│   ├── cli.py                 # CLI interface with detection commands
│   ├── __main__.py            # Module entry point
│   ├── wrapper.py             # MCP protocol wrapper
│   ├── backend.py             # Backend adapters
│   ├── detect/                # Detection module
│   │   ├── __init__.py        # Module exports
│   │   ├── base.py            # Base detector class
│   │   ├── ast.py             # AST-based detection
│   │   ├── openai.py          # OpenAI-powered detection
│   │   ├── camel.py           # Camel-AI detection
│   │   ├── factory.py         # Detector factory
│   │   └── types.py           # Type definitions
│   └── validate.py            # Configuration validation
├── examples/                  # Example projects
├── docs/                      # Documentation
└── tests/                     # Test suite
MCPify offers multiple detection strategies. Use the best one for your needs:
# Auto-detection (recommended): Automatically selects the best available strategy
mcpify detect /path/to/your/project --output config.json
# OpenAI-powered detection: Most intelligent, requires API key
mcpify openai-detect /path/to/your/project --output config.json
# Camel-AI detection: Advanced agent-based analysis
mcpify camel-detect /path/to/your/project --output config.json
# AST detection: Fast, no API key required
mcpify ast-detect /path/to/your/project --output config.json

# View and validate the generated configuration
mcpify view config.json
mcpify validate config.json

# Method 1: Using mcpify CLI (recommended)
mcpify serve config.json
# Method 2: Direct module invocation
python -m mcpify serve config.json
# HTTP mode for web integration
mcpify serve config.json --mode streamable-http --port 8080

The auto-detect command intelligently selects the best available strategy:
mcpify detect /path/to/project

Selection Priority (a code sketch follows the list):
1. Camel-AI (if installed) - Most comprehensive analysis
2. OpenAI (if API key available) - Intelligent LLM-based detection
3. AST (always available) - Reliable static analysis fallback
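The selection logic is conceptually simple. Here is an illustrative sketch of the priority check described above; the names below are assumptions, not mcpify's actual internals:

```python
# Illustrative sketch of the auto-selection priority (not mcpify's factory code).
import importlib.util
import os


def pick_strategy() -> str:
    """Return the best available detection strategy name."""
    if importlib.util.find_spec("camel") is not None:  # Camel-AI installed
        return "camel"
    if os.environ.get("OPENAI_API_KEY"):               # OpenAI key available
        return "openai"
    return "ast"                                       # always-available fallback
```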
Uses GPT-4 for intelligent project analysis:
# With API key parameter
mcpify openai-detect /path/to/project --openai-key YOUR_API_KEY
# Using environment variable
export OPENAI_API_KEY="your-api-key"
mcpify openai-detect /path/to/project

Advantages:
- Understands complex code patterns and context
- Generates detailed descriptions and parameter information
- Excellent at identifying non-obvious API endpoints
- Handles multiple programming languages
Uses Camel-AI's ChatAgent framework for comprehensive analysis:
# Install camel-ai first
pip install camel-ai
# Set OpenAI API key (required by Camel-AI)
export OPENAI_API_KEY="your-api-key"
# Run detection
mcpify camel-detect /path/to/project --model-name gpt-4

Advantages:
- Advanced agent-based reasoning
- Deep project structure understanding
- Excellent for complex multi-file projects
- Sophisticated parameter extraction
Fast, reliable static code analysis:
mcpify ast-detect /path/to/project

Advantages (a simplified sketch of the approach follows this list):
- No API key required
- Fast execution
- Reliable for standard patterns (argparse, Flask routes)
- Works offline
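As a concrete illustration of the approach, the standard library's ast module can find argparse flags statically. This is a simplified sketch, not mcpify's actual detector:

```python
# Simplified sketch of AST-based argparse detection (not mcpify's implementation).
import ast


def find_argparse_flags(source: str) -> list[str]:
    """Return flag names passed to add_argument() calls in the given source."""
    flags = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "add_argument"):
            for arg in node.args:
                if isinstance(arg, ast.Constant) and isinstance(arg.value, str):
                    flags.append(arg.value)
    return flags


source = "import argparse\np = argparse.ArgumentParser()\np.add_argument('--input')"
print(find_argparse_flags(source))  # ['--input']
```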
# Detect and test your APIs with different strategies
mcpify detect my-project --output my-project.json # Auto-select best
mcpify openai-detect my-project --output my-project-ai.json # AI-powered
mcpify ast-detect my-project --output my-project-ast.json # Static analysis
# Compare results
mcpify view my-project.json
mcpify serve my-project.json

# Use OpenAI for intelligent analysis
export OPENAI_API_KEY="your-key"
mcpify openai-detect complex-project --output smart-config.json
# Use Camel-AI for advanced agent analysis
pip install camel-ai
mcpify camel-detect complex-project --output agent-config.json

# Generate configuration with best available strategy
mcpify detect production-app --output prod-config.json
# Deploy as HTTP server
mcpify serve prod-config.json --mode streamable-http --host 0.0.0.0 --port 8080

Example configuration for a FastAPI backend:

{
"name": "my-web-api",
"description": "Web API server",
"backend": {
"type": "fastapi",
"base_url": "http://localhost:8000"
},
"tools": [
{
"name": "get_user",
"description": "Get user information",
"endpoint": "/users/{user_id}",
"method": "GET",
"parameters": [
{
"name": "user_id",
"type": "string",
"description": "User ID"
}
]
}
]
}
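For context, the get_user tool above maps naturally onto a FastAPI route like the following minimal app (illustrative only; the route and parameter names come from the example config):

```python
# Illustrative FastAPI app matching the example configuration above.
from fastapi import FastAPI

app = FastAPI()


@app.get("/users/{user_id}")
def get_user(user_id: str) -> dict:
    """Return user information for the given user ID."""
    return {"user_id": user_id, "name": "example"}
```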
"name": "my-python-tools",
"description": "Python module backend",
"backend": {
"type": "python",
"module_path": "./my_module.py"
},
"tools": [
{
"name": "calculate",
"description": "Perform calculation",
"function": "calculate",
"parameters": [
{
"name": "expression",
"type": "string",
"description": "Mathematical expression"
}
]
}
]
}
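A matching my_module.py could expose the calculate function roughly like this (a sketch, assuming mcpify invokes the named function with the declared parameters):

```python
# my_module.py: illustrative module matching the example configuration above.
def calculate(expression: str) -> float:
    """Evaluate a basic arithmetic expression such as '2 + 3 * 4'."""
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        raise ValueError("unsupported characters in expression")
    # Restricted eval: no builtins, arithmetic characters only.
    return float(eval(expression, {"__builtins__": {}}, {}))
```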
"name": "my-cli-tool",
"description": "Command line tool backend",
"backend": {
"type": "commandline",
"config": {
"command": "python3",
"args": ["./my_script.py"],
"cwd": "."
}
},
"tools": [
{
"name": "process_data",
"description": "Process data with CLI tool",
"args": ["--process", "{input_file}"],
"parameters": [
{
"name": "input_file",
"type": "string",
"description": "Input file path"
}
]
}
]
}
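The referenced my_script.py could be an ordinary argparse tool whose --process flag matches the args template above (illustrative sketch):

```python
# my_script.py: illustrative CLI tool matching the example configuration above.
import argparse


def main() -> None:
    parser = argparse.ArgumentParser(description="Process data files")
    parser.add_argument("--process", metavar="INPUT_FILE",
                        help="Path of the input file to process")
    args = parser.parse_args()
    if args.process:
        print(f"Processing {args.process}")


if __name__ == "__main__":
    main()
```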
# Auto-detection with strategy selection
mcpify detect <project_path> [--output <file>] [--openai-key <key>]
# Specific detection strategies
mcpify openai-detect <project_path> [--output <file>] [--openai-key <key>]
mcpify camel-detect <project_path> [--output <file>] [--model-name <model>]
mcpify ast-detect <project_path> [--output <file>]
# Configuration management
mcpify view <config_file> [--verbose]
mcpify validate <config_file> [--verbose]
mcpify serve <config_file> [--mode <mode>] [--host <host>] [--port <port>]

Supported backend types:
- fastapi: FastAPI web applications
- flask: Flask web applications
- python: Python modules and functions
- commandline: Command-line tools and scripts
- external: External programs and services
Server modes:
- stdio: Standard input/output (default MCP mode)
- streamable-http: HTTP Server-Sent Events mode
Supported parameter types: string, integer, number, boolean, array
- Automatic type detection from source code (illustrated below)
- Custom validation rules
- Enhanced type inference with AI detection
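As a rough illustration of how annotations in source code can map onto these parameter types (not mcpify's actual inference logic):

```python
# Illustrative mapping from Python annotations to the parameter types above
# (not mcpify's actual type-inference code).
from typing import get_type_hints

TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean", list: "array"}


def infer_parameter_types(func) -> dict[str, str]:
    """Map a function's annotated parameters to MCP-style type names."""
    hints = get_type_hints(func)
    hints.pop("return", None)
    return {name: TYPE_MAP.get(hint, "string") for name, hint in hints.items()}


def add_todo(title: str, priority: int, done: bool) -> None: ...


print(infer_parameter_types(add_todo))
# {'title': 'string', 'priority': 'integer', 'done': 'boolean'}
```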
# Basic usage
mcpify serve config.json
# Specify server mode
mcpify serve config.json --mode stdio # Default mode
mcpify serve config.json --mode streamable-http # HTTP mode
# Configure host and port (HTTP mode only)
mcpify serve config.json --mode streamable-http --host localhost --port 8080
mcpify serve config.json --mode streamable-http --host 0.0.0.0 --port 9999
# Real examples with provided configurations
mcpify serve examples/python-server-project/server.json
mcpify serve examples/python-server-project/server.json --mode streamable-http --port 8888
mcpify serve examples/python-cmd-tool/cmd-tool.json --mode stdio

STDIO mode:
- Uses standard input/output for communication
- Best for local MCP clients and development
- No network configuration needed
mcpify serve config.json
# or explicitly
mcpify serve config.json --mode stdio

Streamable HTTP mode:
- Uses HTTP with Server-Sent Events
- Best for web integration and remote clients
- Requires host and port configuration
# Local development
mcpify serve config.json --mode streamable-http --port 8080
# Production deployment
mcpify serve config.json --mode streamable-http --host 0.0.0.0 --port 8080

Explore the examples/ directory for ready-to-use configurations:
# Try different detection strategies on examples
mcpify detect examples/python-server-project --output server-auto.json
mcpify openai-detect examples/python-cmd-tool --output cmd-openai.json
mcpify ast-detect examples/python-server-project --output server-ast.json
# View example configurations
mcpify view examples/python-server-project/server.json
mcpify view examples/python-cmd-tool/cmd-tool.json
# Test with examples - STDIO mode (default)
mcpify serve examples/python-server-project/server.json
mcpify serve examples/python-cmd-tool/cmd-tool.json
# Test with examples - HTTP mode
mcpify serve examples/python-server-project/server.json --mode streamable-http --port 8888
mcpify serve examples/python-cmd-tool/cmd-tool.json --mode streamable-http --port 9999

# Run all tests
python -m pytest tests/ -v
# Run with coverage
python -m pytest tests/ --cov=mcpify --cov-report=html
# Run specific tests
python -m pytest tests/test_detect.py -v

# Development setup
git clone https://github.com/your-username/mcpify.git
cd mcpify
pip install -e ".[dev]"
# Install optional dependencies for full functionality
pip install openai camel-ai
python -m pytest tests/ -v

# Detection commands
mcpify detect <project_path> [--output <file>] [--openai-key <key>]
mcpify openai-detect <project_path> [--output <file>] [--openai-key <key>]
mcpify camel-detect <project_path> [--output <file>] [--model-name <model>]
mcpify ast-detect <project_path> [--output <file>]
# Configuration commands
mcpify view <config_file> [--verbose]
mcpify validate <config_file> [--verbose]
# Server commands
mcpify serve <config_file> [--mode <mode>] [--host <host>] [--port <port>]

pip install mcpify
# Use mcpify serve for all scenarios

# Run as Python module
python -m mcpify serve config.json
python -m mcpify serve config.json --mode streamable-http --port 8080

# Example Dockerfile
FROM python:3.10-slim
COPY . /app
WORKDIR /app
RUN pip install .
# Optional: Install AI detection dependencies
# RUN pip install openai camel-ai
CMD ["mcpify", "serve", "config.json", "--mode", "streamable-http", "--host", "0.0.0.0", "--port", "8080"]# Start HTTP server for production
mcpify serve config.json --mode streamable-http --host 0.0.0.0 --port 8080
# With custom configuration
mcpify serve config.json --mode streamable-http --host 127.0.0.1 --port 9999

We welcome contributions! Please see the development setup above, then:
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Submit a pull request
# Linting and formatting
ruff check mcpify/
ruff format mcpify/
# Type checking
mypy mcpify/

This project is licensed under the MIT License - see the LICENSE file for details.
- Model Context Protocol - The protocol specification
- MCP Python SDK - Official Python implementation
- OpenAI API - For AI-powered detection
- Camel-AI - Multi-agent framework for advanced detection
- Documentation: See docs/usage.md for detailed usage instructions
- Examples: Check the examples/ directory for configuration templates
- Issues: GitHub Issues
- Discussions: GitHub Discussions