ColomboAI Integration Summary for DeepCode-MC1

Overview

ColomboAI's OpenAI-compatible API has been integrated with DeepCode-MC1. The main compatibility fix handles API responses that omit the usage field.

Changes Made

1. Configuration Files ✅

mcp_agent.secrets.yaml

openai:
  api_key: "sk-..."  # redacted; use your own ColomboAI API key
  base_url: "https://mc1.colomboai.com/v1"

mcp_agent.config.yaml

llm_provider: "openai"

openai:
  base_max_tokens: 40000
  default_model: "qwen-3"
  planning_model: "qwen-3"
  implementation_model: "qwen-3"
  reasoning_effort: low
  max_tokens_policy: adaptive
  retry_max_tokens: 32768

2. Patch File Created ✅

File: patch_openai_llm.py

This patch handles the missing usage field in ColomboAI API responses by:

  • Monkey-patching the OpenAIAugmentedLLM.generate method
  • Intercepting the executor.execute calls
  • Adding default usage values (0 tokens) when the field is missing
  • Preventing 'NoneType' object has no attribute 'prompt_tokens' errors
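The core idea of the patch can be sketched as a small normalization helper. This is a minimal illustration, not the actual contents of patch_openai_llm.py; the simulated response object is a stand-in for whatever the ColomboAI API returns:

```python
from types import SimpleNamespace

def ensure_usage(response):
    """Attach a zero-token usage object when the API response omits the field."""
    if getattr(response, "usage", None) is None:
        response.usage = SimpleNamespace(
            prompt_tokens=0, completion_tokens=0, total_tokens=0
        )
    return response

# Simulated ColomboAI response with no usage data
resp = SimpleNamespace(choices=[], usage=None)
ensure_usage(resp)
print(resp.usage.prompt_tokens)  # 0
```

Because the helper only fills in missing values, responses that do include usage data pass through untouched.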

3. Entry Points Updated ✅

The patch is automatically imported at startup in all entry points:

Backend (new_ui/backend/main.py)

# Apply patch for ColomboAI API compatibility (handles missing usage field)
try:
    import patch_openai_llm
except Exception as e:
    print(f"⚠️ Could not load OpenAI LLM patch: {e}")

CLI (cli/main_cli.py)

# Apply patch for ColomboAI API compatibility (handles missing usage field)
try:
    import patch_openai_llm
except Exception as e:
    print(f"⚠️ Could not load OpenAI LLM patch: {e}")

Streamlit UI (ui/streamlit_app.py)

# Apply patch for ColomboAI API compatibility (handles missing usage field)
try:
    import patch_openai_llm
except Exception as e:
    print(f"⚠️ Could not load OpenAI LLM patch: {e}")

4. Logs Directory Created ✅

Created logs/ directory to prevent file writing errors.
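Creating the directory can be done idempotently at startup so that repeated runs never fail. A minimal sketch (the real entry points may handle this differently):

```python
from pathlib import Path

# Create logs/ if it does not exist; parents=True and exist_ok=True
# make this safe to call on every startup.
Path("logs").mkdir(parents=True, exist_ok=True)
print(Path("logs").is_dir())  # True
```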

How It Works

  1. Startup: When DeepCode starts (via any entry point), the patch is automatically imported
  2. Patching: The patch modifies the OpenAI LLM's generate method to intercept API responses
  3. Default Values: If response.usage is None, it creates a mock usage object with 0 tokens
  4. Transparent: The rest of the code continues to work without modifications
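The wrapping pattern behind steps 2–3 can be sketched as follows. DemoLLM is a hypothetical stand-in for OpenAIAugmentedLLM (whose real generate signature differs); only the monkey-patching technique is illustrated:

```python
from types import SimpleNamespace

class DemoLLM:
    """Stand-in for the real LLM class; returns a response with no usage field."""
    def generate(self, prompt):
        return SimpleNamespace(text="ok", usage=None)

# Keep a reference to the original method, then replace it with a wrapper
_original_generate = DemoLLM.generate

def patched_generate(self, prompt):
    response = _original_generate(self, prompt)
    if getattr(response, "usage", None) is None:
        response.usage = SimpleNamespace(
            prompt_tokens=0, completion_tokens=0, total_tokens=0
        )
    return response

DemoLLM.generate = patched_generate  # applied once, at import time

resp = DemoLLM().generate("hello")
print(resp.usage.total_tokens)  # 0
```

Since the patch replaces the method on the class itself, every instance created afterwards (and every existing one) picks up the fix without any caller changes.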

Testing

To test the integration:

# Backend (New UI)
cd /home/InfoVerse/Documents/ColomboAI/DeepCode/DeepCode-MC1
python new_ui/backend/main.py

# CLI
python cli/main_cli.py

# Streamlit UI
streamlit run ui/streamlit_app.py

Expected Behavior

✅ API calls to ColomboAI succeed (HTTP 200 OK)
✅ No more 'NoneType' object has no attribute 'prompt_tokens' errors
✅ Token counts show as 0 (since ColomboAI doesn't provide them)
✅ All workflows execute normally

Notes

  • The patch is applied globally and affects all OpenAI LLM instances
  • Token tracking will show 0 tokens since ColomboAI doesn't provide usage data
  • The patch is safe and only adds default values when the field is missing
  • No changes to the core mcp_agent package are required

Files Modified

  1. /home/InfoVerse/Documents/ColomboAI/DeepCode/DeepCode-MC1/patch_openai_llm.py (NEW)
  2. /home/InfoVerse/Documents/ColomboAI/DeepCode/DeepCode-MC1/new_ui/backend/main.py
  3. /home/InfoVerse/Documents/ColomboAI/DeepCode/DeepCode-MC1/cli/main_cli.py
  4. /home/InfoVerse/Documents/ColomboAI/DeepCode/DeepCode-MC1/ui/streamlit_app.py
  5. /home/InfoVerse/Documents/ColomboAI/DeepCode/DeepCode-MC1/mcp_agent.config.yaml
  6. /home/InfoVerse/Documents/ColomboAI/DeepCode/DeepCode-MC1/mcp_agent.secrets.yaml
  7. /home/InfoVerse/Documents/ColomboAI/DeepCode/DeepCode-MC1/logs/ (NEW directory)

Troubleshooting

If you still encounter issues:

  1. Check API Key: Ensure the API key in mcp_agent.secrets.yaml is correct
  2. Check Base URL: Verify https://mc1.colomboai.com/v1 is accessible
  3. Check Model Name: Ensure "qwen-3" is the correct model name for your API
  4. Check Logs: Look in logs/ directory for detailed error messages
  5. Verify Patch: Ensure you see "✅ OpenAI LLM patch applied successfully" at startup

Next Steps

🚀 Restart DeepCode and test with your ColomboAI endpoint!

The integration is complete and ready to use. All entry points (Backend, CLI, Streamlit UI) will automatically apply the patch on startup.