
LLMProc


Important

🚨 Project Status: No Longer Maintained

LLMProc has served its purpose and is no longer being maintained. Through building this project, I discovered the plugin pattern, which makes it easier to write your own agent framework. Read the new blog post for details: Agent-Environment Middleware (AEM).

The original LLMProc documentation follows below for reference.


Original LLMProc Documentation

LLMProc: a Unix-inspired runtime that treats LLMs as processes. Build production-ready LLM programs with fully customizable YAML/TOML files, or experiment with meta-tools (fork/spawn, goto, and more) via the Python SDK. Learn more at llmproc.com.

🔥 Check out our LLMProc GitHub Actions to see LLMProc automating code implementation, conflict resolution, and more!

📋 Latest Updates: See the v0.10.0 Release Notes for cost control features, enhanced callbacks, and more.


LLMProc GitHub Actions

Automate your development workflow with LLMProc-powered GitHub Actions:

  • @llmproc /resolve - Automatically resolve merge conflicts
  • @llmproc /ask <question> - Answer questions on issues/PRs
  • @llmproc /code <request> - Implement features from comments

Tip

Quick Setup: Run this command in your repository to automatically install workflows and get setup instructions:

uvx --from llmproc llmproc-install-actions

Why LLMProc over Claude Code?

| Feature | LLMProc | Claude Code |
| --- | --- | --- |
| License / openness | ✅ Apache-2.0 | ❌ Closed, minified JS |
| Token overhead | ✅ Zero; you send exactly what you want | ❌ 12-13k tokens (system prompt + built-in tools) |
| Custom system prompt | ✅ Yes | 🟡 Append-only (via CLAUDE.md) |
| Tool selection | ✅ Opt-in; pick only the tools you need | 🟡 Opt-out via --disallowedTools* |
| Tool schema override | ✅ Alias and description overrides | ❌ Not possible |
| Configuration | ✅ Single YAML/TOML "LLM Program" | 🟡 Limited config options |
| Scripting / SDK | ✅ Python SDK with function tools | ❌ JS-only CLI |

*--disallowedTools can remove built-in tools, but not MCP tools.

Installation

pip install llmproc

Run without installing

uvx llmproc

Important

You'll need an API key from your chosen provider (Anthropic, OpenAI, etc.). Set it as an environment variable:

export ANTHROPIC_API_KEY=your_key_here

Setup

For local development, run:

make setup
source .venv/bin/activate

Common tasks:

make test    # Run tests
make format  # Format and lint code

Quick Start

Python usage

# Full example: examples/multiply_example.py
import asyncio
from llmproc import LLMProgram  # Optional: import register_tool for advanced tool configuration


def multiply(a: float, b: float) -> dict:
    """Multiply two numbers and return the result."""
    return {"result": a * b}  # Expected: Ο€ * e = 8.539734222677128


async def main():
    program = LLMProgram(
        model_name="claude-3-7-sonnet-20250219",
        provider="anthropic",
        system_prompt="You're a helpful assistant.",
        parameters={"max_tokens": 1024},
        tools=[multiply],
    )
    process = await program.start()
    await process.run("Can you multiply 3.14159265359 by 2.71828182846?")

    print(process.get_last_message())


if __name__ == "__main__":
    asyncio.run(main())
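
The feature list below also mentions a synchronous entry point, program.start_sync(). A minimal sketch of how that might look; only start_sync itself is documented here, and the process methods are assumed to mirror the async example above:

# Synchronous variant (sketch): process methods are assumed to match the
# async example above, minus the await.
process = program.start_sync()
process.run("Can you multiply 3.14159265359 by 2.71828182846?")
print(process.get_last_message())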

Configuration

Note

LLMProc supports TOML, YAML, and dictionary-based configurations. Check out the examples directory for various configuration patterns and the YAML Configuration Schema for all available options.
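
As an illustration, the Quick Start program above might translate to a YAML file roughly like this. The keys simply mirror the Python constructor arguments; consult the YAML Configuration Schema for the actual layout:

# program.yaml - illustrative sketch, not the verified schema
model_name: claude-3-7-sonnet-20250219
provider: anthropic
system_prompt: "You're a helpful assistant."
parameters:
  max_tokens: 1024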

CLI Usage

  • llmproc - Execute an LLM program. Use --json mode to pipe output for automation (see GitHub Actions examples)
  • llmproc-demo - Interactive debugger for LLM programs/processes
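
For example (the positional config-file argument and the file name are assumptions for illustration; --json is the documented automation mode):

# One-shot run of a program file
llmproc ./program.yaml

# JSON output for piping into other tooling, e.g. in CI
llmproc ./program.yaml --json

# Step through the same program interactively
llmproc-demo ./program.yaml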

Flexible Callback Signatures

LLMProc uses Flask/pytest-style parameter injection for callbacks. Your callbacks only need to declare the parameters they actually use:

class MyCallbacks:
    def tool_start(self, tool_name):                    # Basic: just the tool name
        print(f"🔧 Starting {tool_name}")

    def tool_end(self, tool_name, result):              # Selective: name and result
        print(f"✅ {tool_name} completed")

    def response(self, content, process):               # Full context when needed
        tokens = process.count_tokens()
        print(f"💬 Response: {len(content)} chars, {tokens} tokens")

    def turn_end(self, response, tool_results):         # Mix and match freely
        print(f"🔄 Turn: {len(tool_results)} tools")

# Register callbacks
process.add_plugins(MyCallbacks())

Benefits:

  • Clean signatures - Declare only what you need
  • Performance - No unnecessary parameter processing
  • Compatibility - Legacy *, process signatures still work
  • Flexibility - Mix different styles freely

See flexible signatures cookbook for comprehensive examples.

Features

Production Ready

  • Claude 3.7/4 models with full tool calling support
  • Python SDK - Register functions as tools with automatic schema generation
  • Stateful tools - Keep state on class instances and register their instance methods as tools, rather than injecting runtime context (see the sketch after this list)
  • Async and sync APIs - Use await program.start() or program.start_sync()
  • TOML/YAML configuration - Define LLM programs declaratively
  • MCP protocol - Connect to external tool servers
  • Built-in tools - File operations, calculator, spawning processes
  • Tool customization - Aliases, description overrides, parameter descriptions
  • Automatic optimizations - Prompt caching, retry logic with exponential backoff
  • Streaming support - Use LLMPROC_USE_STREAMING=true to handle high max_tokens values
  • Flexible callback signatures - Flask/pytest-style parameter injection - callbacks only need parameters they actually use
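
As referenced in the stateful tools bullet above, here is a minimal sketch of keeping tool state on a class instance. It assumes bound methods can be passed in tools=[...] exactly like the plain function in the Quick Start; the Counter class is illustrative, not part of the library:

import asyncio
from llmproc import LLMProgram


class Counter:
    """State lives on the instance and persists across tool calls."""

    def __init__(self) -> None:
        self.count = 0

    def increment(self, amount: int = 1) -> dict:
        """Increase the counter and return the new value."""
        self.count += amount
        return {"count": self.count}


async def main():
    counter = Counter()
    program = LLMProgram(
        model_name="claude-3-7-sonnet-20250219",
        provider="anthropic",
        system_prompt="You're a helpful assistant.",
        parameters={"max_tokens": 1024},
        tools=[counter.increment],  # bound method carries the instance's state
    )
    process = await program.start()
    await process.run("Increment the counter twice, then report its value.")
    print(process.get_last_message())
    print(counter.count)  # state accumulated across tool calls


if __name__ == "__main__":
    asyncio.run(main())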

In Development

  • Gemini models - Basic support, tool calling not yet implemented
  • Streaming callbacks - Real-time token streaming callbacks via plugin system
  • Process persistence - Save/restore conversation state

Experimental Features

These cutting-edge features bring Unix-inspired process management to LLMs:

  • Process Forking - Create copies of running LLM processes with full conversation history, enabling parallel exploration of different solution paths

  • Program Linking - Connect multiple LLM programs together, allowing specialized models to collaborate (e.g., a coding expert delegating to a debugging specialist)

  • GOTO/Time Travel - Reset conversations to previous states, perfect for backtracking when the LLM goes down the wrong path or for exploring alternative approaches

  • File Descriptor System - Handle massive outputs elegantly with Unix-like pagination, reference IDs, and smart chunking - no more truncated responses

  • Tool Access Control - Fine-grained permissions (READ/WRITE/ADMIN) for multi-process environments, ensuring security when multiple LLMs collaborate

  • Meta-Tools - LLMs can modify their own runtime parameters! Create tools that let models adjust temperature, max_tokens, or other settings on the fly for adaptive behavior

Documentation

📚 Documentation Index - Comprehensive guides and API reference


Design Philosophy

LLMProc treats LLMs as processes in a Unix-inspired runtime framework:

  • LLMs function as processes that execute prompts and make tool calls
  • Tools operate at both user and kernel levels, with system tools able to modify process state
  • The Process abstraction naturally maps to Unix concepts like spawn, fork, goto, IPC, file descriptors, and more
  • This architecture provides a foundation for evolving toward a more complete LLM runtime

For in-depth explanations of these design decisions, see our API Design FAQ.

License

Apache License 2.0
