A simple, elegant open-source framework for production-ready AI agents
This is the core principle that drives every design decision in ConnectOnion.
```python
from connectonion import Agent

agent = Agent(name="assistant")
agent.input("Hello!")  # That's it!
```

```python
def search(query: str) -> str:
    """Search for information."""
    return f"Results for {query}"

agent = Agent(name="assistant", tools=[search])
agent.input("Search for Python tutorials")
```

```python
agent = Agent(name="assistant", tools=[search])
agent.auto_debug()  # Interactive debugging session
```

```python
agent = Agent(
    name="production",
    model="gpt-5",                     # Latest models
    tools=[search, analyze, execute],  # Your functions as tools
    system_prompt=company_prompt,      # Custom behavior
    max_iterations=10,                 # Safety controls
    trust="prompt"                     # Multi-agent ready
)
agent.input("Complex production task")
```

```python
from connectonion import host

host(agent)  # HTTP server + P2P relay: other agents can now discover and call this agent
```
- Just one `pip install connectonion` to get started.
- Battle-tested with GPT-5, Gemini 2.5, and Claude Opus 4.1. Logging, debugging, and a trust system are built in.
- MIT licensed. Community-driven. Join 1000+ developers building the future of AI agents.
- Add reflection & reasoning to any agent in one line. Extensible and composable.
- Pause at breakpoints with `@xray` to inspect and edit live agent state.
- Scale from prototypes to production systems without rewriting code.
- Get help, share agents, and discuss with 1000+ builders in our active community.
```bash
pip install connectonion
```

```bash
# Create a new agent project with one command
co create my-agent

# Navigate and run
cd my-agent
python agent.py
```

The CLI guides you through API key setup automatically. No manual `.env` editing needed!
```python
import os
from connectonion import Agent

# Set your OpenAI API key
os.environ["OPENAI_API_KEY"] = "your-api-key-here"

# 1. Define tools as simple functions
def search(query: str) -> str:
    """Search for information."""
    return f"Found information about {query}"

def calculate(expression: str) -> float:
    """Perform mathematical calculations."""
    return eval(expression)  # Caution: eval runs arbitrary code; use a safe parser in production

# 2. Create an agent with tools and personality
agent = Agent(
    name="my_assistant",
    system_prompt="You are a helpful and friendly assistant.",
    tools=[search, calculate]
    # max_iterations=10 is the default - the agent will try up to 10 tool calls per task
)

# 3. Use the agent
result = agent.input("What is 25 * 4?")
print(result)  # Agent will use the calculate function

result = agent.input("Search for Python tutorials")
print(result)  # Agent will use the search function

# 4. View behavior history (automatic!)
print(agent.history.summary())
```

Debug your agents like you debug code: pause at breakpoints, inspect variables, and test edge cases:
```python
from connectonion import Agent
from connectonion.decorators import xray

# Mark tools you want to debug with @xray
@xray
def search_database(query: str) -> str:
    """Search for information."""
    return f"Found 3 results for '{query}'"

@xray
def send_email(to: str, subject: str) -> str:
    """Send an email."""
    return f"Email sent to {to}"

# Create agent with @xray tools
agent = Agent(
    name="debug_demo",
    tools=[search_database, send_email]
)

# Launch an interactive debugging session
agent.auto_debug()

# Or debug a specific task
agent.auto_debug("Search for Python tutorials and email the results")
```

What happens at each @xray breakpoint:
```
──────────────────────────────────────────────
@xray BREAKPOINT: search_database

Local Variables:
  query = "Python tutorials"
  result = "Found 3 results for 'Python tutorials'"

What do you want to do?
❯ Continue execution [c or Enter]
  Edit values [e]
  Quit debugging [q]

💡 Use arrow keys to navigate or type shortcuts
>
```
Key features:
- Pause at breakpoints: Tools decorated with `@xray` pause execution
- Inspect state: See all local variables and execution context
- Edit variables: Modify results to test "what if" scenarios
- Full Python REPL: Run any code to explore agent behavior
- See next action: Preview what the LLM plans to do next

Perfect for:
- Understanding why agents make certain decisions
- Testing edge cases without modifying code
- Exploring agent behavior interactively
- Debugging complex multi-tool workflows
Learn more in the auto_debug guide
Package reusable capabilities as plugins and use them across multiple agents:
```python
from connectonion import Agent, after_tools, llm_do

# Define a reflection plugin
def add_reflection(agent):
    trace = agent.current_session['trace'][-1]
    if trace['type'] == 'tool_execution' and trace['status'] == 'success':
        result = trace['result']
        reflection = llm_do(
            f"Result: {result[:200]}\n\nWhat did we learn?",
            system_prompt="Be concise.",
            temperature=0.3
        )
        agent.current_session['messages'].append({
            'role': 'assistant',
            'content': f"🤔 {reflection}"
        })

# A plugin is just a list of event handlers
reflection = [after_tools(add_reflection)]  # after_tools fires once after all tools

# Use it across multiple agents
researcher = Agent("researcher", tools=[search], plugins=[reflection])
analyst = Agent("analyst", tools=[analyze], plugins=[reflection])
```

What plugins provide:
- Reusable capabilities: Package event handlers into bundles
- Simple pattern: A plugin is just a list of event handlers
- Easy composition: Combine multiple plugins together
- Built-in plugins: `re_act`, `eval`, `system_reminder`, `image_result_formatter`, and more
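Because a plugin is just a Python list of event handlers, composing plugins is ordinary list manipulation. A minimal sketch (the handler functions below are placeholders, not real ConnectOnion handlers):

```python
# A plugin is just a list of event handlers, so plugins compose like lists.
# These handlers are illustrative stand-ins for real event handlers.
def add_reflection(agent):
    pass  # e.g. append a reflection message after tool calls

def audit_tools(agent):
    pass  # e.g. record each tool execution for later review

reflection_plugin = [add_reflection]
audit_plugin = [audit_tools]

# Combine two plugins into one by concatenation
combined = reflection_plugin + audit_plugin
print([h.__name__ for h in combined])  # ['add_reflection', 'audit_tools']
```

You could then pass either the individual plugins or the combined list via `plugins=[...]` on the `Agent` constructor, as in the example above.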
Built-in plugins are ready to use:
```python
from connectonion.useful_plugins import re_act, system_reminder

agent = Agent("assistant", tools=[search], plugins=[re_act, system_reminder])
```

Learn more about plugins | Built-in plugins
The main class that orchestrates LLM calls and tool usage. Each agent:
- Has a unique name for tracking purposes
- Can be given a custom personality via `system_prompt`
- Automatically converts functions to tools
- Records all behavior to JSON files
NEW: Just write regular Python functions! ConnectOnion automatically converts them to tools:
```python
def my_tool(param: str, optional_param: int = 10) -> str:
    """This docstring becomes the tool description."""
    return f"Processed {param} with value {optional_param}"

# Use it directly - no wrapping needed!
agent = Agent("assistant", tools=[my_tool])
```

Key features:
- Automatic Schema Generation: Type hints become OpenAI function schemas
- Docstring Integration: First line becomes tool description
- Parameter Handling: Supports required and optional parameters
- Type Conversion: Handles different return types automatically
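To make the schema-generation idea concrete, here is an illustrative sketch of how type hints and a docstring can map to an OpenAI-style function schema. This is my approximation of the concept, not ConnectOnion's actual implementation:

```python
import inspect
import typing

# Rough mapping from Python annotations to JSON Schema types (illustrative)
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def to_schema(fn):
    """Build an OpenAI-style function schema from a plain Python function."""
    sig = inspect.signature(fn)
    hints = typing.get_type_hints(fn)
    props, required = {}, []
    for name, param in sig.parameters.items():
        props[name] = {"type": PY_TO_JSON.get(hints.get(name, str), "string")}
        # Parameters without a default value become required
        if param.default is inspect.Parameter.empty:
            required.append(name)
    doc = (fn.__doc__ or "").strip().splitlines()
    return {
        "name": fn.__name__,
        "description": doc[0] if doc else "",  # first docstring line
        "parameters": {"type": "object", "properties": props, "required": required},
    }

def my_tool(param: str, optional_param: int = 10) -> str:
    """This docstring becomes the tool description."""
    return f"Processed {param} with value {optional_param}"

print(to_schema(my_tool))  # 'required' contains only 'param'
```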
Define your agent's personality and behavior with flexible input options:
```python
# 1. Direct string prompt
agent = Agent(
    name="helpful_tutor",
    system_prompt="You are an enthusiastic teacher who loves to educate.",
    tools=[my_tools]
)

# 2. Load from file (any text file, no extension restrictions)
agent = Agent(
    name="support_agent",
    system_prompt="prompts/customer_support.md"  # Automatically loads file content
)

# 3. Using a Path object
from pathlib import Path
agent = Agent(
    name="coder",
    system_prompt=Path("prompts") / "senior_developer.txt"
)

# 4. None for the default prompt
agent = Agent("basic_agent")  # Uses default: "You are a helpful assistant..."
```

Example prompt file (`prompts/customer_support.md`):
```markdown
# Customer Support Agent

You are a senior customer support specialist with expertise in:
- Empathetic communication
- Problem-solving
- Technical troubleshooting

## Guidelines
- Always acknowledge the customer's concern first
- Look for root causes, not just symptoms
- Provide clear, actionable solutions
```

Automatic logging of all agent activities, including:
- User inputs and agent responses
- LLM calls with timing
- Tool executions with parameters and results
- Default storage in `.co/logs/{name}.log` (human-readable format)
You can still use the traditional Tool class approach, but the new functional approach is much simpler:
```python
from connectonion.tools import Calculator, CurrentTime, ReadFile

agent = Agent("assistant", tools=[Calculator(), CurrentTime(), ReadFile()])
```

```python
def calculate(expression: str) -> float:
    """Perform mathematical calculations."""
    return eval(expression)  # Caution: eval runs arbitrary code; use a safe parser in production

def get_time(format: str = "%Y-%m-%d %H:%M:%S") -> str:
    """Get current date and time."""
    from datetime import datetime
    return datetime.now().strftime(format)

def read_file(filepath: str) -> str:
    """Read contents of a text file."""
    with open(filepath, 'r') as f:
        return f.read()

# Use them directly!
agent = Agent("assistant", tools=[calculate, get_time, read_file])
```

The function-based approach is simpler, more Pythonic, and easier to test!
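Since `eval` executes arbitrary code, a production `calculate` tool should restrict input to arithmetic. One way to do that is to walk the expression's AST and allow only numeric operations; this safe-parse approach is my suggestion, not part of ConnectOnion itself:

```python
import ast
import operator

# Allowed arithmetic operations only; anything else is rejected
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_calculate(expression: str) -> float:
    """Evaluate a pure-arithmetic expression without running arbitrary code."""
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        # Function calls, names, attribute access, etc. all land here
        raise ValueError(f"Unsupported expression: {expression!r}")
    return walk(ast.parse(expression, mode="eval").body)

print(safe_calculate("25 * 4"))  # 100
```

A tool like this drops straight into `tools=[safe_calculate]` in place of the `eval`-based version.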
ConnectOnion CLI provides templates to get you started quickly:
```bash
# Create a minimal agent (default)
co create my-agent

# Create with a specific template
co create my-playwright-bot --template playwright

# Initialize in an existing directory
co init                        # Adds .co folder only
co init --template playwright  # Adds full template
```

Available Templates:
- `minimal` (default) - Simple agent starter
- `playwright` - Web automation with browser tools
- `meta-agent` - Development assistant with docs search
- `web-research` - Web research and data extraction
Each template includes:
- Pre-configured agent ready to run
- Automatic API key setup
- Embedded ConnectOnion documentation
- Git-ready `.gitignore`
Learn more in the CLI Documentation and Templates Guide.
The simplest way is to use functions (recommended):
```python
def weather(city: str) -> str:
    """Get current weather for a city."""
    # Your weather API logic here
    return f"Weather in {city}: Sunny, 22°C"

# That's it! Use it directly
agent = Agent(name="weather_agent", tools=[weather])
```

Or use the Tool class for more control:

```python
from connectonion.tools import Tool

class WeatherTool(Tool):
    def __init__(self):
        super().__init__(
            name="weather",
            description="Get current weather for a city"
        )

    def run(self, city: str) -> str:
        return f"Weather in {city}: Sunny, 22°C"

    def get_parameters_schema(self):
        return {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"}
            },
            "required": ["city"]
        }

agent = Agent(name="weather_agent", tools=[WeatherTool()])
```

```
connectonion/
├── connectonion/
│   ├── __init__.py       # Main exports
│   ├── agent.py          # Agent class
│   ├── tools.py          # Tool interface and built-ins
│   ├── llm.py            # LLM interface and OpenAI implementation
│   ├── console.py        # Terminal output and logging
│   └── cli/              # CLI module
│       ├── main.py       # CLI commands
│       ├── docs.md       # Embedded documentation
│       └── templates/    # Agent templates
│           ├── basic_agent.py
│           ├── chat_agent.py
│           ├── data_agent.py
│           └── *.md      # Prompt templates
├── docs/                 # Documentation
│   ├── quickstart.md
│   ├── concepts/         # Core concepts
│   ├── cli/              # CLI commands
│   ├── templates/        # Project templates
│   └── ...
├── examples/
│   └── basic_example.py
├── tests/
│   └── test_agent.py
└── pyproject.toml
```
```bash
python -m pytest tests/
```

Or run individual test files:

```bash
python -m unittest tests.test_agent
```

All agent activities are automatically logged to:
```
.co/logs/{agent_name}.log  # Default location
```
Each log entry includes:
- Timestamp
- User input
- LLM calls with timing
- Tool executions with parameters and results
- Final responses
Control logging behavior:

```python
# Default: logs to .co/logs/assistant.log
agent = Agent("assistant")

# Log to the current directory
agent = Agent("assistant", log=True)  # → assistant.log

# Disable logging
agent = Agent("assistant", log=False)

# Custom log file
agent = Agent("assistant", log="my_logs/custom.log")
```

Set your API key via an environment variable:

```bash
export OPENAI_API_KEY="your-api-key-here"
```

Or pass it directly to the agent:

```python
agent = Agent(name="test", api_key="your-api-key-here")
```

```python
agent = Agent(name="test", model="gpt-5")  # Default: gpt-5-mini
```

Control how many tool-calling iterations an agent can perform:
```python
# Default: 10 iterations (good for most tasks)
agent = Agent(name="assistant", tools=[...])

# Complex tasks may need more iterations
research_agent = Agent(
    name="researcher",
    tools=[search, analyze, summarize, write_file],
    max_iterations=25  # Allow more steps for complex workflows
)

# Simple agents can use fewer iterations for safety
calculator = Agent(
    name="calc",
    tools=[calculate],
    max_iterations=5  # Prevent runaway calculations
)

# Per-request override for specific complex tasks
result = agent.input(
    "Analyze all project files and generate comprehensive report",
    max_iterations=50  # Override for this specific task
)
```

When an agent reaches its iteration limit, it returns:

```
"Task incomplete: Maximum iterations (10) reached."
```
Choosing the Right Limit:
- Simple tasks (1-3 tools): 5-10 iterations
- Standard workflows: 10-15 iterations (default: 10)
- Complex analysis: 20-30 iterations
- Research/multi-step: 30+ iterations
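One way to act on these limits is to escalate the budget when a task comes back incomplete. A sketch of that pattern, where the `FakeAgent` stub stands in for a real ConnectOnion agent (the incomplete-task message format is taken from above):

```python
class FakeAgent:
    """Illustrative stand-in: succeeds only when given a big enough budget."""
    def __init__(self, needed: int):
        self.needed = needed

    def input(self, task: str, max_iterations: int = 10) -> str:
        if max_iterations >= self.needed:
            return f"Done: {task}"
        return f"Task incomplete: Maximum iterations ({max_iterations}) reached."

def input_with_retry(agent, task, budgets=(10, 25, 50)):
    """Re-run a task with a larger max_iterations each time it comes back incomplete."""
    result = ""
    for n in budgets:
        result = agent.input(task, max_iterations=n)
        if not result.startswith("Task incomplete"):
            break  # task finished within this budget
    return result

print(input_with_retry(FakeAgent(needed=20), "summarize repo"))  # Done: summarize repo
```

With a real agent you would pass the agent instance instead of `FakeAgent`; the wrapper itself only relies on `agent.input(task, max_iterations=...)` as documented above.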
Agents can chain multiple tool calls automatically:

```python
result = agent.input(
    "Calculate 15 * 8, then tell me what time you did this calculation"
)
# Agent will use the calculator first, then the current_time tool
```

```python
from connectonion.llm import LLM

class CustomLLM(LLM):
    def complete(self, messages, tools=None):
        # Your custom LLM implementation
        pass

agent = Agent(name="test", llm=CustomLLM())
```

Current Focus:
- Multi-agent networking (serve/connect)
- Trust system for agent collaboration
- `co deploy` for one-command deployment

Recently Completed:
- Multiple LLM providers (OpenAI, Anthropic, Gemini, Groq, Grok, OpenRouter)
- Managed API keys (`co/` prefix)
- Plugin system
- Google OAuth integration
- Interactive debugging (`@xray`, `auto_debug`)
See full roadmap for details.
- 💬 Discord: Join our community - Get help, share ideas, meet other developers
- 📖 Documentation: docs.connectonion.com - Comprehensive guides and examples
- ⭐ GitHub: Star the repo - Show your support
- 🐛 Issues: Report bugs - We respond quickly
If ConnectOnion helps you build better agents, give it a star! ⭐
It helps others discover the framework and motivates us to keep improving it.
We welcome contributions! ConnectOnion is open source and community-driven.
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Submit a pull request
See our Contributing Guide for more details.
MIT License - Use it anywhere, even commercially. See LICENSE file for details.
Built with ❤️ by the open-source community

⭐ Star this repo • 💬 Join Discord • 📖 Read Docs • ⬆ Back to top