
🧅 ConnectOnion

Production Ready License: MIT Python 3.10+ PyPI Downloads GitHub stars Contributors Discord Documentation

A simple, elegant open-source framework for production-ready AI agents

📚 Documentation • 💬 Discord • ⭐ Star Us


🌟 Philosophy: "Keep simple things simple, make complicated things possible"

This is the core principle that drives every design decision in ConnectOnion.

🎯 Living Our Philosophy

Step 1: Simple - Create and Use

from connectonion import Agent

agent = Agent(name="assistant")
agent.input("Hello!")  # That's it!

Step 2: Add Your Tools

def search(query: str) -> str:
    """Search for information."""
    return f"Results for {query}"

agent = Agent(name="assistant", tools=[search])
agent.input("Search for Python tutorials")

Step 3: Debug Your Agent

agent = Agent(name="assistant", tools=[search])
agent.auto_debug()  # Interactive debugging session

Step 4: Production Ready

agent = Agent(
    name="production",
    model="gpt-5",                    # Latest models
    tools=[search, analyze, execute], # Your functions as tools
    system_prompt=company_prompt,     # Custom behavior
    max_iterations=10,                # Safety controls
    trust="prompt"                    # Multi-agent ready
)
agent.input("Complex production task")

Step 5: Multi-Agent - Make it Remotely Callable

from connectonion import host
host(agent)  # HTTP server + P2P relay - other agents can now discover and call this agent

✨ Why ConnectOnion?

🎯 Simple API

Just one Agent class. Your functions become tools automatically. Start in 2 lines, not 200.

🚀 Production Ready

Battle-tested with GPT-5, Gemini 2.5, Claude Opus 4.1. Logging, debugging, trust system built-in.

🌍 Open Source

MIT licensed. Community-driven. Join 1000+ developers building the future of AI agents.

🔌 Plugin System

Add reflection & reasoning to any agent in one line. Extensible and composable.

🔍 Interactive Debugging

Pause at breakpoints with @xray, inspect state, modify variables on-the-fly.

⚡ No Boilerplate

Scale from prototypes to production systems without rewriting code.


💬 Join the Community

Discord

Get help, share agents, and discuss with 1000+ builders in our active community.


🚀 Quick Start

Installation

pip install connectonion

Quickest Start - Use the CLI

# Create a new agent project with one command
co create my-agent

# Navigate and run
cd my-agent
python agent.py

The CLI guides you through API key setup automatically. No manual .env editing needed!

Manual Usage

import os  
from connectonion import Agent

# Set your OpenAI API key
os.environ["OPENAI_API_KEY"] = "your-api-key-here"

# 1. Define tools as simple functions
def search(query: str) -> str:
    """Search for information."""
    return f"Found information about {query}"

def calculate(expression: str) -> float:
    """Perform mathematical calculations."""
    return eval(expression)  # caution: eval runs arbitrary code; never pass untrusted input

# 2. Create an agent with tools and personality
agent = Agent(
    name="my_assistant",
    system_prompt="You are a helpful and friendly assistant.",
    tools=[search, calculate]
    # max_iterations=10 is the default - agent will try up to 10 tool calls per task
)

# 3. Use the agent
result = agent.input("What is 25 * 4?")
print(result)  # Agent will use the calculate function

result = agent.input("Search for Python tutorials") 
print(result)  # Agent will use the search function

# 4. View behavior history (automatic!)
print(agent.history.summary())
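The `calculate` tool above uses `eval`, which executes arbitrary Python. For anything production-facing, a small evaluator built on the standard library `ast` module is a drop-in replacement (an illustrative sketch, not part of ConnectOnion; `safe_calculate` is a hypothetical name):

```python
import ast
import operator

# Whitelist of arithmetic operators the evaluator will accept.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_calculate(expression: str) -> float:
    """Perform mathematical calculations (eval-free)."""
    def _eval(node: ast.AST) -> float:
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"Unsupported expression: {expression!r}")
    return float(_eval(ast.parse(expression, mode="eval").body))
```

Because ConnectOnion converts plain functions to tools, `safe_calculate` can be passed in `tools=[...]` exactly like `calculate`.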

πŸ” Interactive Debugging with @xray

Debug your agents like you debug code - pause at breakpoints, inspect variables, and test edge cases:

from connectonion import Agent
from connectonion.decorators import xray

# Mark tools you want to debug with @xray
@xray
def search_database(query: str) -> str:
    """Search for information."""
    return f"Found 3 results for '{query}'"

@xray
def send_email(to: str, subject: str) -> str:
    """Send an email."""
    return f"Email sent to {to}"

# Create agent with @xray tools
agent = Agent(
    name="debug_demo",
    tools=[search_database, send_email]
)

# Launch interactive debugging session
agent.auto_debug()

# Or debug a specific task
agent.auto_debug("Search for Python tutorials and email the results")

What happens at each @xray breakpoint:

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
@xray BREAKPOINT: search_database

Local Variables:
  query = "Python tutorials"
  result = "Found 3 results for 'Python tutorials'"

What do you want to do?
  → Continue execution 🚀       [c or Enter]
    Edit values 🔍             [e]
    Quit debugging 🚫          [q]

💡 Use arrow keys to navigate or type shortcuts
>

Key features:

  • Pause at breakpoints: Tools decorated with @xray pause execution
  • Inspect state: See all local variables and execution context
  • Edit variables: Modify results to test "what if" scenarios
  • Full Python REPL: Run any code to explore agent behavior
  • See next action: Preview what the LLM plans to do next

Perfect for:

  • Understanding why agents make certain decisions
  • Testing edge cases without modifying code
  • Exploring agent behavior interactively
  • Debugging complex multi-tool workflows

Learn more in the auto_debug guide

🔌 Plugin System

Package reusable capabilities as plugins and use them across multiple agents:

from connectonion import Agent, after_tools, llm_do

# Define a reflection plugin
def add_reflection(agent):
    trace = agent.current_session['trace'][-1]
    if trace['type'] == 'tool_execution' and trace['status'] == 'success':
        result = trace['result']
        reflection = llm_do(
            f"Result: {result[:200]}\n\nWhat did we learn?",
            system_prompt="Be concise.",
            temperature=0.3
        )
        agent.current_session['messages'].append({
            'role': 'assistant',
            'content': f"🤔 {reflection}"
        })

# Plugin is just a list of event handlers
reflection = [after_tools(add_reflection)]  # after_tools fires once after all tools

# Use across multiple agents
researcher = Agent("researcher", tools=[search], plugins=[reflection])
analyst = Agent("analyst", tools=[analyze], plugins=[reflection])

What plugins provide:

  • Reusable capabilities: Package event handlers into bundles
  • Simple pattern: A plugin is just a list of event handlers
  • Easy composition: Combine multiple plugins together
  • Built-in plugins: re_act, eval, system_reminder, image_result_formatter, and more

Built-in plugins are ready to use:

from connectonion.useful_plugins import re_act, system_reminder

agent = Agent("assistant", tools=[search], plugins=[re_act, system_reminder])

Learn more about plugins | Built-in plugins
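The "a plugin is just a list of event handlers" pattern is easy to see in plain Python. The sketch below is framework-independent; `Host`, `fire`, and this `after_tools` are illustrative stand-ins, not ConnectOnion internals:

```python
from dataclasses import dataclass, field
from typing import Callable

def after_tools(handler: Callable) -> Callable:
    """Tag a handler with the event it subscribes to (illustrative)."""
    handler.event = "after_tools"
    return handler

@dataclass
class Host:
    """Stand-in for an agent that fires lifecycle events."""
    plugins: list = field(default_factory=list)
    log: list = field(default_factory=list)

    def fire(self, event: str) -> None:
        # A plugin is just a list of handlers; run those tagged for `event`.
        for plugin in self.plugins:
            for handler in plugin:
                if getattr(handler, "event", None) == event:
                    handler(self)

def add_reflection(host: Host) -> None:
    host.log.append("reflected")

reflection = [after_tools(add_reflection)]     # the plugin: a list of handlers
host = Host(plugins=[reflection])
host.fire("after_tools")
```

Composition then falls out for free: passing several such lists in `plugins=[...]` runs all of their handlers.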

🔧 Core Concepts

Agent

The main class that orchestrates LLM calls and tool usage. Each agent:

  • Has a unique name for tracking purposes
  • Can be given a custom personality via system_prompt
  • Automatically converts functions to tools
  • Records all behavior to JSON files

Function-Based Tools

NEW: Just write regular Python functions! ConnectOnion automatically converts them to tools:

def my_tool(param: str, optional_param: int = 10) -> str:
    """This docstring becomes the tool description."""
    return f"Processed {param} with value {optional_param}"

# Use it directly - no wrapping needed!
agent = Agent("assistant", tools=[my_tool])

Key features:

  • Automatic Schema Generation: Type hints become OpenAI function schemas
  • Docstring Integration: First line becomes tool description
  • Parameter Handling: Supports required and optional parameters
  • Type Conversion: Handles different return types automatically
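To make the schema generation concrete, here is a minimal sketch of how type hints and docstrings can map to an OpenAI-style function schema using the standard `inspect` module (an illustration built on stated assumptions, not ConnectOnion's actual implementation):

```python
import inspect

# Minimal mapping from Python annotations to JSON Schema types.
_JSON_TYPES = {str: "string", int: "integer", float: "number", bool: "boolean"}

def function_to_schema(func) -> dict:
    """Build an OpenAI-style tool schema from a plain function."""
    properties, required = {}, []
    for name, param in inspect.signature(func).parameters.items():
        properties[name] = {"type": _JSON_TYPES.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default value => required parameter
    doc = inspect.getdoc(func) or ""
    return {
        "name": func.__name__,
        "description": doc.splitlines()[0] if doc else "",
        "parameters": {"type": "object", "properties": properties, "required": required},
    }

def my_tool(param: str, optional_param: int = 10) -> str:
    """This docstring becomes the tool description."""
    return f"Processed {param} with value {optional_param}"

schema = function_to_schema(my_tool)
```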

System Prompts

Define your agent's personality and behavior with flexible input options:

# 1. Direct string prompt
agent = Agent(
    name="helpful_tutor",
    system_prompt="You are an enthusiastic teacher who loves to educate.",
    tools=[my_tools]
)

# 2. Load from file (any text file, no extension restrictions)
agent = Agent(
    name="support_agent",
    system_prompt="prompts/customer_support.md"  # Automatically loads file content
)

# 3. Using Path object
from pathlib import Path
agent = Agent(
    name="coder",
    system_prompt=Path("prompts") / "senior_developer.txt"
)

# 4. None for default prompt
agent = Agent("basic_agent")  # Uses default: "You are a helpful assistant..."

Example prompt file (prompts/customer_support.md):

# Customer Support Agent

You are a senior customer support specialist with expertise in:
- Empathetic communication
- Problem-solving
- Technical troubleshooting

## Guidelines
- Always acknowledge the customer's concern first
- Look for root causes, not just symptoms
- Provide clear, actionable solutions
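One plausible way such flexible prompt loading can work is to treat the argument as a file path when it names an existing file, and as literal text otherwise. This is a hedged sketch; `resolve_system_prompt` is a hypothetical helper, not ConnectOnion's code:

```python
from pathlib import Path

def resolve_system_prompt(prompt) -> str:
    """Return prompt text, loading it from disk when given a file path."""
    if prompt is None:
        return "You are a helpful assistant."
    try:
        path = Path(prompt)
        if path.is_file():              # string or Path naming an existing file
            return path.read_text()
    except (OSError, ValueError):       # e.g. the prompt is too long to be a path
        pass
    return str(prompt)                  # otherwise: treat as a literal prompt
```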

Logging

Automatic logging of all agent activities including:

  • User inputs and agent responses
  • LLM calls with timing
  • Tool executions with parameters and results
  • Default storage in .co/logs/{name}.log (human-readable format)

🎯 Example Tools

You can still use the traditional Tool class approach, but the new functional approach is much simpler:

Traditional Tool Classes (Still Supported)

from connectonion.tools import Calculator, CurrentTime, ReadFile

agent = Agent("assistant", tools=[Calculator(), CurrentTime(), ReadFile()])

New Function-Based Approach (Recommended)

def calculate(expression: str) -> float:
    """Perform mathematical calculations."""
    return eval(expression)  # caution: eval runs arbitrary code; never pass untrusted input

def get_time(format: str = "%Y-%m-%d %H:%M:%S") -> str:
    """Get current date and time."""
    from datetime import datetime
    return datetime.now().strftime(format)

def read_file(filepath: str) -> str:
    """Read contents of a text file."""
    with open(filepath, 'r') as f:
        return f.read()

# Use them directly!
agent = Agent("assistant", tools=[calculate, get_time, read_file])

The function-based approach is simpler, more Pythonic, and easier to test!

🎨 CLI Templates

ConnectOnion CLI provides templates to get you started quickly:

# Create a minimal agent (default)
co create my-agent

# Create with specific template
co create my-playwright-bot --template playwright

# Initialize in existing directory
co init  # Adds .co folder only
co init --template playwright  # Adds full template

Available Templates:

  • minimal (default) - Simple agent starter
  • playwright - Web automation with browser tools
  • meta-agent - Development assistant with docs search
  • web-research - Web research and data extraction

Each template includes:

  • Pre-configured agent ready to run
  • Automatic API key setup
  • Embedded ConnectOnion documentation
  • Git-ready .gitignore

Learn more in the CLI Documentation and Templates Guide.

🔨 Creating Custom Tools

The simplest way is to use functions (recommended):

def weather(city: str) -> str:
    """Get current weather for a city."""
    # Your weather API logic here
    return f"Weather in {city}: Sunny, 22°C"

# That's it! Use it directly
agent = Agent(name="weather_agent", tools=[weather])

Or use the Tool class for more control:

from connectonion.tools import Tool

class WeatherTool(Tool):
    def __init__(self):
        super().__init__(
            name="weather",
            description="Get current weather for a city"
        )
    
    def run(self, city: str) -> str:
        return f"Weather in {city}: Sunny, 22°C"
    
    def get_parameters_schema(self):
        return {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"}
            },
            "required": ["city"]
        }

agent = Agent(name="weather_agent", tools=[WeatherTool()])

πŸ“ Project Structure

connectonion/
β”œβ”€β”€ connectonion/
β”‚   β”œβ”€β”€ __init__.py         # Main exports
β”‚   β”œβ”€β”€ agent.py            # Agent class
β”‚   β”œβ”€β”€ tools.py            # Tool interface and built-ins
β”‚   β”œβ”€β”€ llm.py              # LLM interface and OpenAI implementation
β”‚   β”œβ”€β”€ console.py          # Terminal output and logging
β”‚   └── cli/                # CLI module
β”‚       β”œβ”€β”€ main.py         # CLI commands
β”‚       β”œβ”€β”€ docs.md         # Embedded documentation
β”‚       └── templates/      # Agent templates
β”‚           β”œβ”€β”€ basic_agent.py
β”‚           β”œβ”€β”€ chat_agent.py
β”‚           β”œβ”€β”€ data_agent.py
β”‚           └── *.md        # Prompt templates
β”œβ”€β”€ docs/                   # Documentation
β”‚   β”œβ”€β”€ quickstart.md
β”‚   β”œβ”€β”€ concepts/           # Core concepts
β”‚   β”œβ”€β”€ cli/                # CLI commands
β”‚   β”œβ”€β”€ templates/          # Project templates
β”‚   └── ...
β”œβ”€β”€ examples/
β”‚   └── basic_example.py
β”œβ”€β”€ tests/
β”‚   └── test_agent.py
└── pyproject.toml

🧪 Running Tests

python -m pytest tests/

Or run individual test files:

python -m unittest tests.test_agent

📊 Automatic Logging

All agent activities are automatically logged to:

.co/logs/{agent_name}.log  # Default location

Each log entry includes:

  • Timestamp
  • User input
  • LLM calls with timing
  • Tool executions with parameters and results
  • Final responses

Control logging behavior:

# Default: logs to .co/logs/assistant.log
agent = Agent("assistant")

# Log to current directory
agent = Agent("assistant", log=True)  # β†’ assistant.log

# Disable logging
agent = Agent("assistant", log=False)

# Custom log file
agent = Agent("assistant", log="my_logs/custom.log")

🔑 Configuration

OpenAI API Key

Set your API key via environment variable:

export OPENAI_API_KEY="your-api-key-here"

Or pass directly to agent:

agent = Agent(name="test", api_key="your-api-key-here")

Model Selection

agent = Agent(name="test", model="gpt-5")  # Default: gpt-5-mini

Iteration Control

Control how many tool calling iterations an agent can perform:

# Default: 10 iterations (good for most tasks)
agent = Agent(name="assistant", tools=[...])

# Complex tasks may need more iterations
research_agent = Agent(
    name="researcher", 
    tools=[search, analyze, summarize, write_file],
    max_iterations=25  # Allow more steps for complex workflows
)

# Simple agents can use fewer iterations for safety
calculator = Agent(
    name="calc", 
    tools=[calculate],
    max_iterations=5  # Prevent runaway calculations
)

# Per-request override for specific complex tasks
result = agent.input(
    "Analyze all project files and generate comprehensive report",
    max_iterations=50  # Override for this specific task
)

When an agent reaches its iteration limit, it returns:

"Task incomplete: Maximum iterations (10) reached."

Choosing the Right Limit:

  • Simple tasks (1-3 tools): 5-10 iterations
  • Standard workflows: 10-15 iterations (default: 10)
  • Complex analysis: 20-30 iterations
  • Research/multi-step: 30+ iterations
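Conceptually, `max_iterations` caps the agent's tool-calling loop. The sketch below simulates that control flow with a stubbed LLM step (illustrative only; `run_task` and `llm_step` are hypothetical names, not ConnectOnion's source):

```python
def run_task(llm_step, max_iterations: int = 10) -> str:
    """Drive a tool-calling loop until a final answer or the iteration cap.

    `llm_step` stands in for one LLM round trip. It returns either
    ("tool", name) to request a tool call or ("final", answer) to finish.
    """
    for _ in range(max_iterations):
        kind, payload = llm_step()
        if kind == "final":
            return payload
        # kind == "tool": run the tool, feed its result back, and loop again
    return f"Task incomplete: Maximum iterations ({max_iterations}) reached."

# Simulated run: two tool calls, then a final answer (fits within the cap).
steps = iter([("tool", "search"), ("tool", "calculate"), ("final", "42")])
result = run_task(lambda: next(steps), max_iterations=10)

# Simulated run that keeps requesting tools and hits the cap.
capped = iter([("tool", "search")] * 5)
result2 = run_task(lambda: next(capped), max_iterations=3)
```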

πŸ› οΈ Advanced Usage

Multiple Tool Calls

Agents can chain multiple tool calls automatically:

result = agent.input(
    "Calculate 15 * 8, then tell me what time you did this calculation"
)
# Agent will use calculator first, then current_time tool

Custom LLM Providers

from connectonion.llm import LLM

class CustomLLM(LLM):
    def complete(self, messages, tools=None):
        # Call your provider here: `messages` is the conversation history,
        # `tools` the available tool schemas (may be None). Return its response.
        pass

agent = Agent(name="test", llm=CustomLLM())

πŸ—ΊοΈ Roadmap

Current Focus:

  • Multi-agent networking (serve/connect)
  • Trust system for agent collaboration
  • co deploy for one-command deployment

Recently Completed:

  • Multiple LLM providers (OpenAI, Anthropic, Gemini, Groq, Grok, OpenRouter)
  • Managed API keys (co/ prefix)
  • Plugin system
  • Google OAuth integration
  • Interactive debugging (@xray, auto_debug)

See full roadmap for details.

🔗 Connect With Us

Discord GitHub Documentation


⭐ Show Your Support

If ConnectOnion helps you build better agents, give it a star! ⭐

It helps others discover the framework and motivates us to keep improving it.

⭐ Star on GitHub


🤝 Contributing

We welcome contributions! ConnectOnion is open source and community-driven.

  1. Fork the repository
  2. Create a feature branch
  3. Add tests for new functionality
  4. Submit a pull request

See our Contributing Guide for more details.


📄 License

MIT License - Use it anywhere, even commercially. See LICENSE file for details.


Built with ❤️ by the open-source community

⭐ Star this repo • 💬 Join Discord • 📖 Read Docs • ⬆ Back to top
