| title | Multi-Agent Overview |
|---|---|
| description | Why and how to coordinate multiple AI agents to work together |
| weight | 90 |
Your customer support needs a technical expert, a refund specialist, and a billing analyst. Your research pipeline needs a searcher, an analyzer, and a writer. A single agent that does everything ends up mediocre at everything.
Multi-agent systems solve this: multiple specialized agents, each focused on what it does best, working together to solve complex problems.
Syrin gives you three ways to coordinate agents:
- Pipeline — Agents run in sequence, each passing results to the next
- Parallel — Multiple agents work on different parts simultaneously
- Dynamic — An LLM decides which agents to spawn and when
Each approach serves different needs. Let's see when to use which.
| Pattern | When to Use | Example |
|---|---|---|
| Pipeline | Fixed workflow, known steps | Research → Analyze → Write |
| Parallel | Independent tasks, speed matters | Gather data from multiple sources |
| Dynamic | Unknown requirements, LLM decides | Complex queries needing flexible approach |
```python
from syrin import Agent, Model, Pipeline

class Researcher(Agent):
    model = Model.OpenAI("gpt-4o-mini", api_key="your-key")
    system_prompt = "You find relevant information."

class Writer(Agent):
    model = Model.OpenAI("gpt-4o", api_key="your-key")
    system_prompt = "You write clear reports."

pipeline = Pipeline()

# Sequential: Researcher runs first, Writer gets the results
result = pipeline.run([
    (Researcher, "Research AI trends in healthcare"),
    (Writer, "Write a summary based on the research"),
])

print(result.content)  # Writer's final output
```

What just happened: The researcher gathered information, then passed it to the writer. The writer synthesized everything into a clean report.
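The Parallel pattern from the table above fans independent tasks out simultaneously instead of chaining them. Syrin's parallel API isn't shown on this page, but the fan-out shape can be sketched in plain Python with a thread pool (the `search_*` functions below are hypothetical stand-ins for specialized agents):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for specialized data-gathering agents
def search_news(topic):
    return f"news results for {topic}"

def search_papers(topic):
    return f"paper results for {topic}"

def search_forums(topic):
    return f"forum results for {topic}"

def run_parallel(topic):
    tasks = [search_news, search_papers, search_forums]
    # Fan out: each "agent" works on its part at the same time
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        futures = [pool.submit(task, topic) for task in tasks]
        # Collect results in submission order once all tasks finish
        return [f.result() for f in futures]

results = run_parallel("AI in healthcare")
print(results)
```

Because the sources are independent, total latency is roughly that of the slowest source rather than the sum of all three.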
All agents in a pipeline share the same budget:
```python
from syrin import Budget

pipeline = Pipeline(
    budget=Budget(max_cost=1.00),  # Shared $1 budget for all agents
)

result = pipeline.run([
    (Researcher, "Research AI"),
    (Writer, "Write report"),
])

print(f"Total spent: ${result.cost:.4f}")  # Combined cost from both agents
```

Beyond pipelines, agents can hand off tasks mid-conversation:
```python
class TriageAgent(Agent):
    model = Model.OpenAI("gpt-4o", api_key="your-key")
    system_prompt = "You route requests to the right specialist."

class BillingAgent(Agent):
    model = Model.OpenAI("gpt-4o-mini", api_key="your-key")
    system_prompt = "You handle billing questions."

triage = TriageAgent()

# Triage decides to hand off to Billing
result = triage.handoff(
    BillingAgent,
    "User asked about their invoice #12345"
)
```

Multi-agent shines when:
- Tasks are fundamentally different — Research vs. writing vs. coding
- Specialization matters — One model excels at search, another at creative writing
- Workflows are predictable — You know the steps, just need automation
- Cost needs control — Smaller models for simple tasks, larger for complex ones
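The cost-control point is worth making concrete: a common tactic is to default to a cheap model and escalate only when the request looks complex. A minimal sketch of such a routing heuristic (the thresholds and marker words are illustrative, not Syrin defaults):

```python
# Illustrative model tiering: cheap model by default,
# larger model only when the request looks complex.
CHEAP_MODEL = "gpt-4o-mini"
STRONG_MODEL = "gpt-4o"

def pick_model(prompt: str) -> str:
    complex_markers = ("analyze", "compare", "write a report", "multi-step")
    looks_complex = (
        len(prompt) > 500
        or any(marker in prompt.lower() for marker in complex_markers)
    )
    return STRONG_MODEL if looks_complex else CHEAP_MODEL

print(pick_model("What's my invoice total?"))
print(pick_model("Analyze Q3 churn and compare against Q2"))
```

In a real system the router itself could be a small, cheap model; the point is that simple questions never pay large-model prices.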
Don't reach for multi-agent when:
- Simple Q&A — One agent handles it fine
- Linear conversation — The task flows naturally in one exchange
- Low latency matters — Multiple agents add overhead
- You're prototyping — Start simple, add complexity when needed
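For the Dynamic pattern, an LLM (or any router) decides at runtime which agents to spawn. Syrin's dynamic-pipeline API is covered on its own page; as a generic illustration of the dispatch loop, here is a plain-Python sketch in which a keyword heuristic stands in for the LLM and the agent registry is hypothetical:

```python
# Hypothetical dynamic dispatch: a router picks which "agents" to run.
AGENT_REGISTRY = {
    "research": lambda task: f"researched: {task}",
    "write": lambda task: f"wrote: {task}",
    "code": lambda task: f"coded: {task}",
}

def route(query: str) -> list:
    # In a real system an LLM would plan; a keyword heuristic stands in here.
    plan = []
    if "find" in query or "research" in query:
        plan.append("research")
    if "report" in query or "summary" in query:
        plan.append("write")
    return plan or ["research"]  # fall back to a default agent

def run_dynamic(query: str) -> list:
    # Spawn only the agents the router chose, in the order it chose them
    return [AGENT_REGISTRY[name](query) for name in route(query)]

print(run_dynamic("research AI trends and write a summary"))
```

The key property is that the set of agents is not fixed up front: a different query produces a different plan, which is exactly what makes the Dynamic pattern suited to unknown requirements.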
- When to Use Multi-Agent — Decision guide for your use case
- Pipeline — Sequential agent execution
- Dynamic Pipeline — Let the LLM decide
- Agents: Overview — Single agent fundamentals
- Agents: Handoff — Transfer control between agents
- Core Concepts: Budget — Cost control across agents