Successfully integrated ColomboAI API with DeepCode-MC1 to handle missing usage field in API responses.
Configuration in `mcp_agent.secrets.yaml`:

```yaml
openai:
  api_key: "sk-lI-ORkumAFzFW7Tz-9vO0Uw19gGwQT5AlhvUOXUAD_c"
  base_url: "https://mc1.colomboai.com/v1"
```

Configuration in `mcp_agent.config.yaml`:

```yaml
llm_provider: "openai"
openai:
  base_max_tokens: 40000
  default_model: "qwen-3"
  planning_model: "qwen-3"
  implementation_model: "qwen-3"
  reasoning_effort: low
  max_tokens_policy: adaptive
  retry_max_tokens: 32768
```

File: `patch_openai_llm.py`
This patch handles the missing usage field in ColomboAI API responses by:
- Monkey-patching the `OpenAIAugmentedLLM.generate` method
- Intercepting the `executor.execute` calls
- Adding default usage values (0 tokens) when the field is missing
- Preventing `'NoneType' object has no attribute 'prompt_tokens'` errors
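The patching idea can be sketched as follows. This is a hypothetical illustration, not the shipped `patch_openai_llm.py`: the real `OpenAIAugmentedLLM` class lives in the `mcp_agent` package and its `generate` method may be async, so treat the class and wrapper below as stand-ins for the technique.

```python
from types import SimpleNamespace


def _zero_usage():
    # Mock usage object with 0 tokens, used when the API omits the field
    return SimpleNamespace(prompt_tokens=0, completion_tokens=0, total_tokens=0)


def patch_generate(llm_cls):
    """Monkey-patch llm_cls.generate so every response carries a usage attribute."""
    original = llm_cls.generate

    def generate(self, *args, **kwargs):
        response = original(self, *args, **kwargs)
        if getattr(response, "usage", None) is None:
            response.usage = _zero_usage()  # fill in default usage values
        return response

    llm_cls.generate = generate
```

Because the wrapper only fires when `usage` is missing, responses from providers that do report token counts pass through untouched.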
The patch is automatically imported at startup in all three entry points (backend, CLI, and Streamlit UI), each using the same guard:

```python
# Apply patch for ColomboAI API compatibility (handles missing usage field)
try:
    import patch_openai_llm
except Exception as e:
    print(f"⚠️ Could not load OpenAI LLM patch: {e}")
```

Created the `logs/` directory to prevent file-writing errors.
- Startup: When DeepCode starts (via any entry point), the patch is automatically imported
- Patching: The patch modifies the OpenAI LLM's generate method to intercept API responses
- Default Values: If `response.usage` is None, the patch creates a mock usage object with 0 tokens
- Transparent: The rest of the code continues to work without modifications
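As an illustration of the "Default Values" step (object shapes here are simulated, not the real SDK types): a response whose `usage` is None gets a zeroed stand-in, so downstream token accounting reads 0 instead of raising an AttributeError.

```python
from types import SimpleNamespace

# Simulate a ColomboAI response that omits the usage field entirely
response = SimpleNamespace(choices=["..."], usage=None)

# What the patch does: substitute a zeroed mock usage object
if getattr(response, "usage", None) is None:
    response.usage = SimpleNamespace(
        prompt_tokens=0, completion_tokens=0, total_tokens=0
    )

# Downstream accounting code now works unchanged
total_tokens = response.usage.prompt_tokens + response.usage.completion_tokens
```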
- Base URL: https://mc1.colomboai.com/v1
- Model: qwen-3
- Provider: openai (OpenAI-compatible API)
- Max Tokens: 40000 (base), 32768 (retry)
To test the integration:
```bash
# Backend (New UI)
cd /home/InfoVerse/Documents/ColomboAI/DeepCode/DeepCode-MC1
python new_ui/backend/main.py

# CLI
python cli/main_cli.py

# Streamlit UI
streamlit run ui/streamlit_app.py
```

Expected results:
✅ API calls to ColomboAI should succeed (HTTP 200 OK)
✅ No more 'NoneType' object has no attribute 'prompt_tokens' errors
✅ Token counts will show as 0 (since ColomboAI doesn't provide them)
✅ All workflows should execute normally
- The patch is applied globally and affects all OpenAI LLM instances
- Token tracking will show 0 tokens since ColomboAI doesn't provide usage data
- The patch is safe and only adds default values when the field is missing
- No changes to the core mcp_agent package are required
Files modified or created:
- /home/InfoVerse/Documents/ColomboAI/DeepCode/DeepCode-MC1/patch_openai_llm.py (NEW)
- /home/InfoVerse/Documents/ColomboAI/DeepCode/DeepCode-MC1/new_ui/backend/main.py
- /home/InfoVerse/Documents/ColomboAI/DeepCode/DeepCode-MC1/cli/main_cli.py
- /home/InfoVerse/Documents/ColomboAI/DeepCode/DeepCode-MC1/ui/streamlit_app.py
- /home/InfoVerse/Documents/ColomboAI/DeepCode/DeepCode-MC1/mcp_agent.config.yaml
- /home/InfoVerse/Documents/ColomboAI/DeepCode/DeepCode-MC1/mcp_agent.secrets.yaml
- /home/InfoVerse/Documents/ColomboAI/DeepCode/DeepCode-MC1/logs/ (NEW directory)
If you still encounter issues:
- Check API Key: Ensure the API key in `mcp_agent.secrets.yaml` is correct
- Check Base URL: Verify https://mc1.colomboai.com/v1 is accessible
- Check Model Name: Ensure "qwen-3" is the correct model name for your API
- Check Logs: Look in the `logs/` directory for detailed error messages
- Verify Patch: Ensure you see "✅ OpenAI LLM patch applied successfully" at startup
🚀 Restart DeepCode and test with your ColomboAI endpoint!
The integration is complete and ready to use. All entry points (Backend, CLI, Streamlit UI) will automatically apply the patch on startup.