🚀 ConnectOnion TypeScript SDK

Connect to AI agents built in Python and use them from your TypeScript apps like local functions


✨ What is ConnectOnion?

ConnectOnion TypeScript SDK lets you connect to and use AI agents built with Python. Build your agents in Python (where the ecosystem is rich), then use them seamlessly from TypeScript/JavaScript applications.

// Connect to a Python agent and use it
import { connect } from 'connectonion';

// Connect to a remote agent by address
const agent = connect('0x3d4017c3e843895a92b70aa74d1b7ebc9c982ccf2ec4968cc0cd55f12af4660c');

// Use it like a local function
const result = await agent.input('Search for TypeScript tutorials');
console.log(result);

That's it. No server setup. No complex configuration. Just connect and use.

🎯 Why Use This?

🐍 Build Agents in Python

Python has the richest AI ecosystem - LangChain, LlamaIndex, transformers, and countless ML libraries. Build your agents where the tools are best.

📱 Use Agents in TypeScript

Your web apps, React frontends, Node.js backends, and Electron apps are in TypeScript. Now you can use powerful Python agents directly.

🌐 Zero Infrastructure

No servers to manage. No API endpoints to deploy. Agents connect peer-to-peer through the relay network.

🔒 Secure by Design

Ed25519 cryptographic addressing. No passwords. No auth tokens to leak. Just public/private key pairs.
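As a rough illustration (not the SDK's internal code, which manages keys for you), an Ed25519-based address can be derived by hex-encoding the raw 32-byte public key, here using Node's built-in crypto module:

```typescript
import { generateKeyPairSync } from 'crypto';

// Generate an Ed25519 key pair (illustrative only; the SDK handles this itself)
const { publicKey } = generateKeyPairSync('ed25519');

// The raw 32-byte public key is the tail of the DER (SPKI) export;
// hex-encoding it yields a 0x-prefixed address like the ones used above
const raw = publicKey.export({ type: 'spki', format: 'der' }).subarray(-32);
const address = '0x' + raw.toString('hex');

console.log(address);
```

Because the address is derived from the public key, anyone can verify messages signed by the agent without any shared secret.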

🚀 Quick Start (60 seconds)

1. Install

npm install connectonion
# or
yarn add connectonion
# or
pnpm add connectonion

2. Connect to a Python Agent

import { connect } from 'connectonion';

// Connect to a remote Python agent
const agent = connect('0x3d4017c3e843895a92b70aa74d1b7ebc9c982ccf2ec4968cc0cd55f12af4660c');

// Use it!
const response = await agent.input('Analyze this data and create a report');
console.log(response);

3. Create the Python Agent (Optional)

If you need to create your own agent in Python:

# pip install connectonion
from connectonion import Agent, announce

def analyze_data(data: str) -> str:
    """Analyze data and create a report"""
    # Your Python logic with pandas, numpy, etc.
    return f"Analysis: {data}"

agent = Agent(
    name="data-analyst",
    tools=[analyze_data]
)

# Announce to the network
announce(agent)
# Prints: Agent address: 0x3d401...

Then connect from TypeScript as shown above!

🎨 Real-World Examples

Example 1: Connect to ML Agent from React App

// React component using a Python ML agent
import { connect } from 'connectonion';
import { useMemo, useState } from 'react';

function DataAnalyzer() {
  const [result, setResult] = useState('');
  // Create the connection once, not on every render
  const agent = useMemo(() => connect('0xYourPythonMLAgent'), []);

  const analyze = async () => {
    // Python agent has pandas, scikit-learn, matplotlib, etc.
    const response = await agent.input(
      'Analyze sales data and predict next quarter trends'
    );
    setResult(response);
  };

  return (
    <div>
      <button onClick={analyze}>Analyze Data</button>
      <pre>{result}</pre>
    </div>
  );
}

Example 2: Node.js Backend Using Python Agent

// Express API using a Python agent for complex processing
import express from 'express';
import { connect } from 'connectonion';

const app = express();
app.use(express.json()); // needed to parse JSON request bodies
const pythonAgent = connect('0xYourPythonAgent');

app.post('/analyze', async (req, res) => {
  // Offload heavy processing to Python agent
  const result = await pythonAgent.input(req.body.query);
  res.json({ result });
});

app.listen(3000);

Example 3: Electron App with Python Backend

// Electron app using Python agent for system operations
import { connect } from 'connectonion';

const systemAgent = connect('0xYourSystemAgent');

async function handleFileOperation() {
  // Python agent has full system access and libraries
  const result = await systemAgent.input(
    'Find all PDFs in Downloads, extract text, and summarize'
  );
  return result;
}

🔧 Connection Options

Custom Relay URL

// Connect to local development relay
const agent = connect(
  '0xYourAgent',
  'ws://localhost:8000/ws/announce'
);

// Or use environment variable
process.env.RELAY_URL = 'ws://localhost:8000/ws/announce';
const agent = connect('0xYourAgent'); // uses RELAY_URL

Timeout Configuration

// Adjust timeout for long-running tasks
const result = await agent.input(
  'Process large dataset',
  60000 // 60 second timeout
);
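If the task exceeds the timeout, the call fails, so wrap long-running requests in try/catch. The sketch below uses a local stub in place of a connected agent and assumes the returned promise rejects on timeout (the exact error shape may differ by SDK version):

```typescript
// Stub standing in for a connected agent; the real one comes from connect().
// Assumption: input() rejects when the timeout elapses.
const agent = {
  input: async (_prompt: string, timeoutMs?: number): Promise<string> => {
    throw new Error(`timed out after ${timeoutMs} ms`); // simulated timeout
  },
};

let outcome = '';

async function main(): Promise<void> {
  try {
    outcome = await agent.input('Process large dataset', 60000);
  } catch (err) {
    outcome = 'timeout';
    console.error('Agent call failed or timed out:', (err as Error).message);
  }
}

main();
```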

Multiple Agents

// Connect to different specialized agents
const mlAgent = connect('0xMLAgent');
const nlpAgent = connect('0xNLPAgent');
const visionAgent = connect('0xVisionAgent');

// Use them in parallel
const [analysis, sentiment, objects] = await Promise.all([
  mlAgent.input('Analyze time series'),
  nlpAgent.input('Extract sentiment from reviews'),
  visionAgent.input('Detect objects in image')
]);
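Note that Promise.all rejects as a whole if any one agent fails. When agents may be offline, Promise.allSettled lets the healthy ones finish. A sketch with local stubs standing in for connect() results:

```typescript
// Stubs standing in for connect() results; real agents resolve over the relay
const mlAgent = { input: async (q: string) => `analysis of: ${q}` };
const nlpAgent = {
  input: async (_q: string): Promise<string> => {
    throw new Error('agent offline'); // simulate an unreachable agent
  },
};

async function runAll() {
  // allSettled never rejects: each result is 'fulfilled' or 'rejected'
  const results = await Promise.allSettled([
    mlAgent.input('Analyze time series'),
    nlpAgent.input('Extract sentiment from reviews'),
  ]);
  for (const r of results) {
    if (r.status === 'fulfilled') console.log('ok:', r.value);
    else console.log('failed:', (r.reason as Error).message);
  }
  return results;
}

runAll();
```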

📚 Documentation

Building Agents in TypeScript (Experimental)

While we recommend building agents in Python, you can also build simple agents directly in TypeScript.

Important Notes:

  • TypeScript agent features are experimental and may have bugs
  • Python agent features are well-tested and fully supported
  • For complex agents with ML, data processing, or extensive Python libraries, use Python and connect via connect()
  • Full TypeScript agent support planned for Q1 2026

If you encounter bugs building agents in TypeScript, please report them on GitHub.

πŸ—οΈ Architecture

System Overview

┌──────────────────────────────────────────────────────────────────┐
│                    ConnectOnion TypeScript SDK                   │
│                                                                  │
│  ┌──────────────┐  ┌──────────────┐  ┌────────────────────────┐  │
│  │    Agent     │  │   connect()  │  │      llmDo()           │  │
│  │  (local AI)  │  │  (remote)    │  │  (one-shot LLM call)   │  │
│  └──────┬───────┘  └──────┬───────┘  └───────────┬────────────┘  │
│         │                 │                      │               │
│         ▼                 ▼                      │               │
│  ┌─────────────────────────────┐                 │               │
│  │        LLM Factory          │◀────────────────┘               │
│  │      createLLM(model)       │                                 │
│  └──────────┬──────────────────┘                                 │
│     ┌───────┼──────────┬────────────┐                            │
│     ▼       ▼          ▼            ▼                            │
│  Anthropic  OpenAI   Gemini    OpenOnion                         │
│  (claude-*) (gpt-*)  (gemini-*) (co/*)                           │
│                                                                  │
│  ┌──────────────┐  ┌──────────┐  ┌───────────┐  ┌───────────┐    │
│  │  Tool System │  │  Trust   │  │  Console  │  │   Xray    │    │
│  │  func→schema │  │  Levels  │  │  Logging  │  │  Debugger │    │
│  └──────────────┘  └──────────┘  └───────────┘  └───────────┘    │
└──────────────────────────────────────────────────────────────────┘

Agent Execution Flow

  agent.input("What is 2+2?")
       │
       ▼
  ┌─────────────────┐
  │  Init messages  │  [system prompt] + [user message]
  └────────┬────────┘
           │
           ▼
  ┌──────────────────────────────────────┐
  │        Main Loop (max 10 iter)       │
  │                                      │
  │  LLM.complete(messages, tools)       │
  │       │                              │
  │       ├── No tool calls ──▶ EXIT     │
  │       │                              │
  │       └── Tool calls found:          │
  │            Promise.all(              │
  │              tool_1.run(args),       │
  │              tool_2.run(args)        │
  │            )                         │
  │            │                         │
  │            ▼                         │
  │       Append results → LOOP          │
  └──────────────────────────────────────┘
       │
       ▼
  Return final text response
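The loop above can be sketched in TypeScript. This is a simplified illustration with a fake LLM and made-up type names (ToolCall, LLMResponse); the SDK's real interfaces live in src/types.ts and src/core/agent.ts:

```typescript
// Illustrative types, not the SDK's real ones
interface ToolCall { name: string; args: Record<string, unknown> }
interface LLMResponse { content: string; toolCalls: ToolCall[] }
type Message = { role: string; content: string };

// Fake LLM: requests one tool call, then produces a final answer
function makeFakeLLM(): (msgs: Message[]) => Promise<LLMResponse> {
  let step = 0;
  return async () => {
    step += 1;
    if (step === 1) {
      return { content: '', toolCalls: [{ name: 'add', args: { a: 2, b: 2 } }] };
    }
    return { content: 'The answer is 4', toolCalls: [] };
  };
}

const tools: Record<string, (args: any) => Promise<string>> = {
  add: async ({ a, b }) => String(a + b),
};

async function runAgent(prompt: string): Promise<string> {
  const llm = makeFakeLLM();
  const messages: Message[] = [
    { role: 'system', content: 'You are a calculator.' }, // system prompt
    { role: 'user', content: prompt },                    // user message
  ];
  for (let i = 0; i < 10; i++) {                 // main loop (max 10 iter)
    const res = await llm(messages);
    if (res.toolCalls.length === 0) return res.content; // no tool calls -> exit
    const results = await Promise.all(           // run tool calls in parallel
      res.toolCalls.map((c) => tools[c.name](c.args))
    );
    results.forEach((r) => messages.push({ role: 'tool', content: r })); // append -> loop
  }
  return 'max iterations reached';
}

runAgent('What is 2+2?').then(console.log);
```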

Tool Conversion

  Your code                        SDK internals

  function add(a: number,          Tool {
    b: number): number {    ──▶      name: "add",
    return a + b;                    description: "...",
  }                                  run(args) → add(a, b),
                                     toFunctionSchema() → {
  class API {                          type: "object",
    search(q: string) {}    ──▶        properties: {a: {type: "number"}, ...}
    fetch(id: number) {}             }
  }                                }
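A minimal sketch of this wrapping step is below. The Tool shape and toTool helper are illustrative, not the SDK's real API (which lives in src/tools/tool-utils.ts); note that TypeScript parameter types are erased at runtime, so a real implementation must obtain the schema some other way:

```typescript
// Illustrative Tool shape; the SDK's real conversion lives in src/tools/tool-utils.ts
interface Tool {
  name: string;
  run: (args: Record<string, unknown>) => unknown;
  schema: { type: 'object'; properties: Record<string, { type: string }> };
}

// Wrap a plain function into a Tool; the parameter schema is passed explicitly
// because TS types do not exist at runtime
function toTool(
  fn: (...args: any[]) => unknown,
  properties: Record<string, { type: string }>
): Tool {
  return {
    name: fn.name,
    run: (args) => fn(...Object.values(args)), // spread named args positionally
    schema: { type: 'object', properties },
  };
}

function add(a: number, b: number): number {
  return a + b;
}

const addTool = toTool(add, { a: { type: 'number' }, b: { type: 'number' } });
console.log(addTool.name, addTool.run({ a: 2, b: 3 }));
```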

LLM Provider Routing

  createLLM(model)
       │
       ├── "co/*"      ──▶ OpenAI LLM + OpenOnion baseURL
       ├── "claude-*"  ──▶ Anthropic LLM (default)
       ├── "gpt-*"     ──▶ OpenAI LLM
       ├── "o*"        ──▶ OpenAI LLM
       ├── "gemini-*"  ──▶ Gemini LLM
       └── (unknown)   ──▶ Anthropic (fallback) or NoopLLM
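The routing is plain prefix matching, which can be sketched as below (the returned provider labels are illustrative strings, not real SDK exports; note "co/*" must be checked before "o*"):

```typescript
// Sketch of the prefix routing shown above
function routeModel(model: string): string {
  if (model.startsWith('co/')) return 'openai+openonion'; // OpenAI LLM + OpenOnion baseURL
  if (model.startsWith('claude-')) return 'anthropic';    // default provider
  if (model.startsWith('gpt-') || model.startsWith('o')) return 'openai';
  if (model.startsWith('gemini-')) return 'gemini';
  return 'anthropic-fallback';                            // or NoopLLM without config
}

console.log(routeModel('claude-sonnet-4'));
console.log(routeModel('co/gpt-4o'));
```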

Project Structure

your-project/
├── src/
│   ├── agents/        # Your agent definitions
│   ├── tools/         # Custom tool implementations
│   └── index.ts       # Main entry point
├── .env               # API keys (never commit!)
├── package.json
└── tsconfig.json

SDK Internal Structure

src/
├── core/
│   └── agent.ts            # Main Agent class (orchestrator)
├── llm/
│   ├── index.ts            # LLM factory (routes model names)
│   ├── anthropic.ts        # Anthropic Claude provider (default)
│   ├── openai.ts           # OpenAI GPT/O-series provider
│   ├── gemini.ts           # Google Gemini provider
│   ├── noop.ts             # Fallback for missing config
│   └── llm-do.ts           # One-shot llmDo() helper
├── tools/
│   ├── tool-utils.ts       # Function → Tool conversion
│   ├── tool-executor.ts    # Execution + trace recording
│   ├── xray.ts             # Debug context injection (@xray)
│   ├── replay.ts           # Replay decorator for debugging
│   └── email.ts            # Mock email tools for demos/tests
├── trust/
│   ├── index.ts            # Trust levels (open/careful/strict)
│   └── tools.ts            # Whitelist checks & verification
├── connect/
│   ├── index.ts            # connect() factory + re-exports
│   ├── types.ts            # ChatItem, Response, AgentStatus, ConnectOptions, etc.
│   ├── endpoint.ts         # resolveEndpoint, fetchAgentInfo, utils
│   └── remote-agent.ts     # RemoteAgent class
├── console.ts              # Dual logging (stderr + file)
├── types.ts                # Core TypeScript interfaces
└── index.ts                # Public API exports

💬 Join the Community

Discord

Get help, share agents, and discuss with 1000+ builders in our active community.


⭐ Show Your Support

If ConnectOnion helps you build better agents, give it a star! ⭐

It helps others discover the framework and motivates us to keep improving it.

⭐ Star on GitHub


🤝 Contributing

We love contributions! See CONTRIBUTING.md for guidelines.

Development Setup

# Clone the repo
git clone https://github.com/openonion/connectonion-ts
cd connectonion-ts

# Install dependencies
npm install

# Run tests
npm test

# Build
npm run build

📄 License

MIT © OpenOnion Team

🌟 Why This Architecture?

Python for Agents

  • Rich AI Ecosystem: LangChain, transformers, pandas, scikit-learn, PyTorch, TensorFlow
  • Data Processing: NumPy, SciPy, matplotlib for complex analysis
  • Mature Libraries: Decades of proven Python libraries
  • Simple Setup: pip install and you're ready

TypeScript for Apps

  • Web & Mobile: React, Next.js, React Native, Electron
  • Type Safety: Catch errors at compile time
  • IDE Support: Unmatched IntelliSense and auto-completion
  • NPM Ecosystem: Access to millions of UI/frontend packages

Best of Both Worlds

Build agents where the tools are rich (Python), use them where users are (TypeScript apps).


Built with ❤️ by developers, for developers

⭐ Star us on GitHub
