> [!IMPORTANT]
> LLMProc has served its purpose and is no longer being maintained. Through building this project, I discovered the plugin pattern, which makes it easier to write your own agent framework. Read the new blog post for details: Agent-Environment Middleware (AEM).
>
> The original LLMProc documentation follows below for reference.
LLMProc: Unix-inspired runtime that treats LLMs as processes. Build production-ready LLM programs with fully customizable YAML/TOML files, or experiment with meta-tools via the Python SDK: fork/spawn, goto, and more. Learn more at llmproc.com.
🔥 Check out our LLMProc GitHub Actions to see LLMProc successfully automating code implementation, conflict resolution, and more!

Latest Updates: See v0.10.0 Release Notes for cost control features, enhanced callbacks, and more.
- LLMProc GitHub Actions
- Why LLMProc over Claude Code?
- Installation
- Quick Start
- Features
- Documentation
- Design Philosophy
- License
Automate your development workflow with LLMProc-powered GitHub Actions:
- `@llmproc /resolve` - Automatically resolve merge conflicts
- `@llmproc /ask <question>` - Answer questions on issues/PRs
- `@llmproc /code <request>` - Implement features from comments
> [!TIP]
> Quick Setup: Run this command in your repository to automatically install workflows and get setup instructions:
>
> ```bash
> uvx --from llmproc llmproc-install-actions
> ```

| Feature | LLMProc | Claude Code |
|---|---|---|
| License / openness | ✅ Apache-2.0 | ❌ Closed, minified JS |
| Token overhead | ✅ Zero. You send exactly what you want | ❌ 12-13k tokens (system prompt + builtin tools) |
| Custom system prompt | ✅ Yes | 🟡 Append-only (via CLAUDE.md) |
| Tool selection | ✅ Opt-in; pick only the tools you need | 🟡 Opt-out via `--disallowedTools`* |
| Tool schema override | ✅ Supports alias, description overrides | ❌ Not possible |
| Configuration | ✅ Single YAML/TOML "LLM Program" | 🟡 Limited config options |
| Scripting / SDK | ✅ Python SDK with function tools | ❌ JS-only CLI |
\* `--disallowedTools` allows removing builtin tools, but not MCP tools.
```bash
pip install llmproc
```

Run without installing:

```bash
uvx llmproc
```

> [!IMPORTANT]
> You'll need an API key from your chosen provider (Anthropic, OpenAI, etc.). Set it as an environment variable:
>
> ```bash
> export ANTHROPIC_API_KEY=your_key_here
> ```

For local development, run:

```bash
make setup
source .venv/bin/activate
```

Common tasks:

```bash
make test    # Run tests
make format  # Format and lint code
```

```python
# Full example: examples/multiply_example.py
import asyncio

from llmproc import LLMProgram  # Optional: import register_tool for advanced tool configuration


def multiply(a: float, b: float) -> dict:
    """Multiply two numbers and return the result."""
    return {"result": a * b}  # Expected: π * e = 8.539734222677128


async def main():
    program = LLMProgram(
        model_name="claude-3-7-sonnet-20250219",
        provider="anthropic",
        system_prompt="You're a helpful assistant.",
        parameters={"max_tokens": 1024},
        tools=[multiply],
    )
    process = await program.start()
    await process.run("Can you multiply 3.14159265359 by 2.71828182846?")
    print(process.get_last_message())


if __name__ == "__main__":
    asyncio.run(main())
```

> [!NOTE]
> LLMProc supports TOML, YAML, and dictionary-based configurations. Check out the examples directory for various configuration patterns and the YAML Configuration Schema for all available options.
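As a hedged illustration of the dictionary-based style, the same constructor arguments used in the example above can be collected into a plain dict and unpacked. How LLMProc actually loads TOML/YAML files from disk may go through a dedicated helper, so treat this as a sketch rather than the canonical loading API:

```python
# Sketch only: reuses the LLMProgram constructor arguments shown above.
# Loading a TOML/YAML file from disk likely uses a dedicated helper; see
# the examples directory for the officially supported patterns.
from llmproc import LLMProgram

config = {
    "model_name": "claude-3-7-sonnet-20250219",
    "provider": "anthropic",
    "system_prompt": "You're a helpful assistant.",
    "parameters": {"max_tokens": 1024},
}
program = LLMProgram(**config, tools=[multiply])  # multiply defined in the example above
```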
- `llmproc` - Execute an LLM program. Use `--json` mode to pipe output for automation (see GitHub Actions examples)
- `llmproc-demo` - Interactive debugger for LLM programs/processes
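For automation outside of GitHub Actions, the `--json` output can be consumed from a script. The sketch below assumes the CLI accepts a program file path as its argument and prints a JSON payload to stdout; both are assumptions for illustration, so check the CLI's help output for the real interface:

```python
# Hedged sketch: drive the llmproc CLI from Python and parse its JSON output.
# The positional "program.yaml" argument and the payload structure are
# illustrative assumptions, not documented guarantees.
import json
import subprocess

result = subprocess.run(
    ["llmproc", "program.yaml", "--json"],
    capture_output=True,
    text=True,
    check=True,
)
payload = json.loads(result.stdout)
print(payload)
```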
LLMProc uses Flask/pytest-style parameter injection for callbacks. Your callbacks only need to declare the parameters they actually use:
```python
class MyCallbacks:
    def tool_start(self, tool_name):  # Basic: just the tool name
        print(f"🔧 Starting {tool_name}")

    def tool_end(self, tool_name, result):  # Selective: name and result
        print(f"✅ {tool_name} completed")

    def response(self, content, process):  # Full context when needed
        tokens = process.count_tokens()
        print(f"💬 Response: {len(content)} chars, {tokens} tokens")

    def turn_end(self, response, tool_results):  # Mix and match freely
        print(f"🔄 Turn: {len(tool_results)} tools")

# Register callbacks
process.add_plugins(MyCallbacks())
```

Benefits:
- Clean signatures - Declare only what you need
- Performance - No unnecessary parameter processing
- Compatibility - Legacy `*, process` signatures still work (as sketched below)
- Flexibility - Mix different styles freely
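A minimal sketch of what such a legacy-style signature might look like; the exact parameter names the plugin system injects are an assumption here, not taken from the documentation:

```python
class LegacyCallbacks:
    # Legacy style: accept the whole process via a keyword-only parameter
    # instead of declaring only the injected arguments you need. The "*"
    # keeps "process" keyword-only, mirroring the "*, process" form above.
    def tool_start(self, tool_name, *, process):
        # "process" gives full runtime context even when it goes unused
        print(f"Starting {tool_name}")
```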
See flexible signatures cookbook for comprehensive examples.
- Claude 3.7/4 models with full tool calling support
- Python SDK - Register functions as tools with automatic schema generation
- Stateful tools - Prefer class instances with instance method tools rather than injecting runtime context
- Async and sync APIs - Use `await program.start()` or `program.start_sync()` (see the sketch after this list)
- TOML/YAML configuration - Define LLM programs declaratively
- MCP protocol - Connect to external tool servers
- Built-in tools - File operations, calculator, spawning processes
- Tool customization - Aliases, description overrides, parameter descriptions
- Automatic optimizations - Prompt caching, retry logic with exponential backoff
- Streaming support - Use `LLMPROC_USE_STREAMING=true` to handle high max_tokens values
- Flexible callback signatures - Flask/pytest-style parameter injection - callbacks only need parameters they actually use
- Gemini models - Basic support, tool calling not yet implemented
- Streaming callbacks - Real-time token streaming callbacks via plugin system
- Process persistence - Save/restore conversation state
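For the synchronous API named above, here is a minimal sketch under the assumption that `start_sync()` returns a process whose `run()` and `get_last_message()` mirror the async example, blocking instead of awaiting:

```python
# Hedged sketch of the synchronous API; assumes the sync process exposes
# the same run()/get_last_message() surface as the async example above.
from llmproc import LLMProgram

program = LLMProgram(
    model_name="claude-3-7-sonnet-20250219",
    provider="anthropic",
    system_prompt="You're a helpful assistant.",
    parameters={"max_tokens": 1024},
)
process = program.start_sync()
process.run("Summarize this repository in one sentence.")
print(process.get_last_message())
```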
These cutting-edge features bring Unix-inspired process management to LLMs:
- Process Forking - Create copies of running LLM processes with full conversation history, enabling parallel exploration of different solution paths
- Program Linking - Connect multiple LLM programs together, allowing specialized models to collaborate (e.g., a coding expert delegating to a debugging specialist)
- GOTO/Time Travel - Reset conversations to previous states, perfect for backtracking when the LLM goes down the wrong path or for exploring alternative approaches
- File Descriptor System - Handle massive outputs elegantly with Unix-like pagination, reference IDs, and smart chunking - no more truncated responses
- Tool Access Control - Fine-grained permissions (READ/WRITE/ADMIN) for multi-process environments, ensuring security when multiple LLMs collaborate
- Meta-Tools - LLMs can modify their own runtime parameters! Create tools that let models adjust temperature, max_tokens, or other settings on the fly for adaptive behavior
Documentation Index - Comprehensive guides and API reference
- Python SDK Guide - Fluent API for building LLM applications
- YAML Configuration Schema - Complete configuration reference
- FAQ - Design rationales and common questions
- Examples - Sample configurations and tutorials
LLMProc treats LLMs as processes in a Unix-inspired runtime framework:
- LLMs function as processes that execute prompts and make tool calls
- Tools operate at both user and kernel levels, with system tools able to modify process state
- The Process abstraction naturally maps to Unix concepts like spawn, fork, goto, IPC, file descriptors, and more
- This architecture provides a foundation for evolving toward a more complete LLM runtime
For in-depth explanations of these design decisions, see our API Design FAQ.
Apache License 2.0
