Your AI Agency. One Conversation.
113 agents · 8 teams · 9 workflows · Any LLM
AGEX is a self-hosted AI agency platform. You talk to it. It deploys the right agents. You get professional output.
It's not a chatbot. It's not a workflow builder. It's a complete AI agency with 113 specialized agents across 8 teams that reads your message, figures out what needs to be done, and does it.
```
You:  "Write cold outreach emails for selling AI tools to CTOs"

AGEX: Sales Team activated.
      → ICP Analyst refining target persona
      → Cold Email Writer crafting the sequence
      → Follow-up Automator building the cadence

      Here are your 3 emails...
```
Works with any LLM — OpenAI, Anthropic, Google, Groq (free), OpenRouter, or local models via Ollama.
```sh
git clone https://github.com/z1fex/agex.git
cd agex
pnpm install
cd apps/web
npx next dev -p 3000
```

- Open http://localhost:3000
- Click "Start talking"
- Go to Settings in the sidebar
- Add your API key (Groq is free — get a key at console.groq.com)
- Go to Chat and talk to your agency
That's it. No Docker. No database setup. No configuration files.
The primary interface is a conversation. You don't navigate menus or configure pipelines. You describe what you need, and AGEX figures out which of its 113 agents to deploy.
- Real-time streaming responses
- Conversation history persists across sessions
- Run workflows directly from chat
- Context-aware — remembers previous messages
| Team | Agents | What They Do |
|---|---|---|
| Marketing | 60 | SEO, Social Media, Email, PPC, PR, Influencer, Events, Affiliate, Brand, Analytics, CRO, Community, Partnerships |
| Sales | 10 | Cold outreach, lead scoring, proposals, follow-ups, CRM, objection handling |
| Intelligence | 10 | Competitor monitoring, news, sentiment analysis, trend tracking |
| Research | 8 | Market sizing, competitor teardowns, customer research, tech scouting |
| Strategy | 7 | SWOT, positioning, pricing, go-to-market, market entry |
| Content | 10 | Blogs, social posts, newsletters, case studies, scripts, whitepapers |
| Direction | 4 | OKRs, roadmaps, quarterly planning, priorities |
| Managing | 4 | Project tracking, timelines, resource allocation |
| Workflow | Steps | What You Get |
|---|---|---|
| Content Month | 7 | Trend report, content calendar, 4 blog posts, 12 social posts |
| Product Launch | 11 | Market research, positioning, landing page, launch emails, PR brief |
| Competitor Report | 5 | Competitor profiles, SWOT analysis, strategic recommendations |
| Lead Generation | 6 | Lead magnets, email sequences, cold outreach scripts |
| Full Strategy | 10 | Market analysis, SWOT, positioning, go-to-market plan |
| Email Sequence | 5 | Welcome series, nurture emails, sales follow-ups |
| SEO Overhaul | 7 | SEO audit, keyword strategy, content briefs, meta tags |
| Social Media Blitz | 6 | Platform strategy, content calendar, 30 posts, hashtag sets |
| Brand Audit | 5 | Brand analysis, perception report, recommendations |
| Provider | Models | Notes |
|---|---|---|
| OpenAI | GPT-4o, GPT-4o-mini, o3-mini | |
| Anthropic | Claude Sonnet, Haiku, Opus | |
| Google | Gemini 2.5 Pro, Flash | |
| Groq | Llama 3.3 70B, Mixtral, Gemma | Free tier available |
| OpenRouter | 200+ models | One key, all models |
| Ollama | Llama, Mistral, DeepSeek, Phi | Free, runs locally |
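Under the hood, AGEX calls each provider's HTTP API directly (no SDK layer). As an illustrative sketch, not the actual AGEX source, provider routing can be a simple lookup from provider id to base URL:

```typescript
// Hypothetical provider router: maps a provider id to the base URL a chat
// route would call. Names and structure are illustrative, not AGEX's code.
type Provider = "openai" | "anthropic" | "google" | "groq" | "openrouter" | "ollama";

const BASE_URLS: Record<Provider, string> = {
  openai: "https://api.openai.com/v1",
  anthropic: "https://api.anthropic.com/v1",
  google: "https://generativelanguage.googleapis.com/v1beta",
  groq: "https://api.groq.com/openai/v1",
  openrouter: "https://openrouter.ai/api/v1",
  ollama: "http://localhost:11434",
};

function resolveBaseUrl(provider: string): string {
  const url = BASE_URLS[provider as Provider];
  // Unknown ids fail fast, matching the "Unknown provider" error AGEX reports.
  if (!url) throw new Error(`Unknown provider: ${provider}`);
  return url;
}
```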
- Pitch black theme with blinking stars
- Glassmorphism cards
- Animated page transitions (Framer Motion)
- Responsive sidebar with collapse
- Real-time streaming text in chat
```
┌─────────────────────────────────────────────┐
│                    AGEX                     │
├─────────────────────────────────────────────┤
│                                             │
│   User types a message in Chat              │
│                  │                          │
│                  ▼                          │
│   /api/chat (Next.js API Route)             │
│                  │                          │
│                  ▼                          │
│   Brain Builder                             │
│     ├── Reads 105 agent .md files           │
│     ├── Builds system prompt with:          │
│     │     ├── 8 team descriptions           │
│     │     ├── 113+ agent identities         │
│     │     └── 9 workflow definitions        │
│     └── Selects full/compact brain          │
│         based on model context window       │
│                  │                          │
│                  ▼                          │
│   LLM Router                                │
│     ├── OpenAI (api.openai.com)             │
│     ├── Anthropic (api.anthropic.com)       │
│     ├── Google (generativelanguage.google)  │
│     ├── Groq (api.groq.com)                 │
│     ├── OpenRouter (openrouter.ai)          │
│     └── Ollama (localhost:11434)            │
│                  │                          │
│                  ▼                          │
│   Streaming response back to Chat UI        │
│                                             │
└─────────────────────────────────────────────┘
```
AGEX doesn't use a framework like LangChain or CrewAI. The LLM itself IS the agency.
- On every chat message, the server reads all 105 agent markdown files from the `content/` directory
- It builds a system prompt that tells the LLM: "You are AGEX, an agency with 8 teams and 113 agents. Here are your teams and capabilities."
- The LLM receives this brain + the user's message
- The LLM determines which team/agents are relevant and executes the work directly
- Response streams back to the browser in real-time
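A minimal sketch of that brain-building step, with illustrative types and prompt wording rather than the real AGEX source:

```typescript
// Hypothetical brain builder: fold team/agent metadata into one system
// prompt. Interface names and prompt text are assumptions for illustration.
interface Team {
  name: string;
  description: string;
  agents: string[];
}

function buildBrain(teams: Team[]): string {
  const totalAgents = teams.reduce((n, t) => n + t.agents.length, 0);
  const lines = [
    `You are AGEX, an agency with ${teams.length} teams and ${totalAgents} agents.`,
    "Here are your teams and capabilities:",
  ];
  for (const team of teams) {
    lines.push(`## ${team.name}: ${team.description}`);
    lines.push(`Agents: ${team.agents.join(", ")}`);
  }
  return lines.join("\n");
}
```

In the real app this string would be assembled from the `content/` markdown files and sent as the system prompt on every request.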
Premium models (GPT-4o, Claude Sonnet, Gemini Pro) get the full brain with agent identities and detailed instructions. Free/small models (Groq free tier, small Ollama models) get a compact brain that fits within token limits.
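That selection can be sketched as a simple threshold check; the 16K-token cutoff below is an assumption for illustration, not AGEX's actual value:

```typescript
// Hypothetical full/compact brain selection by model context window.
// The 16K threshold is a made-up illustration value.
function selectBrain(contextWindowTokens: number): "full" | "compact" {
  const FULL_BRAIN_MIN_CONTEXT = 16_000;
  return contextWindowTokens >= FULL_BRAIN_MIN_CONTEXT ? "full" : "compact";
}
```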
| Layer | Technology |
|---|---|
| Frontend | Next.js 15, React 19, TypeScript |
| Styling | Tailwind CSS v4, Framer Motion |
| Components | Radix UI, class-variance-authority |
| State | Zustand (persisted to localStorage) |
| Backend | Next.js API Routes |
| LLM | Direct API calls (no LangChain, no LiteLLM) |
| Monorepo | Turborepo, pnpm workspaces |
| Database | SQLite + Drizzle ORM (schema ready, not yet wired) |
```
agex/
├── apps/
│   └── web/                 # Next.js 15 app
│       ├── app/
│       │   ├── page.tsx     # Landing page
│       │   ├── api/chat/    # LLM chat endpoint (streaming)
│       │   └── dashboard/
│       │       ├── chat/       # Chat interface (primary)
│       │       ├── agents/     # Agent team browser
│       │       ├── workflows/  # Workflow cards + detail
│       │       ├── clients/    # Client management
│       │       ├── vault/      # Markdown file browser
│       │       ├── outputs/    # Generated deliverables
│       │       ├── analytics/  # Cost & usage tracking
│       │       └── settings/   # LLM provider config
│       ├── src/
│       │   ├── components/  # UI, chat, layout, animations
│       │   ├── stores/      # Zustand stores (chat, sidebar)
│       │   └── lib/         # Utilities
│       └── package.json
│
├── packages/
│   ├── shared/              # Types, parsers, constants
│   ├── db/                  # Drizzle ORM schema (14 tables)
│   └── vault-engine/        # Vault read/write/index
│
├── content/                 # The brain — 105 agent/workflow .md files
│   ├── agents/              # 75 agent files across 8 teams
│   │   ├── marketing/       # 13 sub-team agents
│   │   ├── sales/           # 10 agents
│   │   ├── intelligence/    # 10 agents
│   │   ├── research/        # 8 agents
│   │   ├── strategy/        # 7 agents
│   │   ├── content/         # 10 agents
│   │   ├── direction/       # 4 agents
│   │   └── managing/        # 4 agents
│   ├── workflows/           # 9 workflow definitions
│   ├── templates/           # 12 output templates
│   ├── tools/               # 7 tool guides
│   ├── onboarding/          # Client interview flow
│   └── quality/             # QA checklist
│
├── vault/                   # Runtime vault (Obsidian-compatible)
├── output/                  # Generated deliverables
├── turbo.json
├── package.json
└── pnpm-workspace.yaml
```
| Page | URL | Description |
|---|---|---|
| Landing | `/` | AGEX branding, "Start talking" CTA |
| Chat | `/dashboard/chat` | Primary interface — talk to your agency |
| Dashboard | `/dashboard` | Overview stats, quick actions, recent activity |
| Agents | `/dashboard/agents` | Browse 8 teams, see all agents |
| Agent Team | `/dashboard/agents/[team]` | Individual agents with "Chat" buttons |
| Workflows | `/dashboard/workflows` | 9 workflow cards with step counts |
| Workflow Detail | `/dashboard/workflows/[slug]` | Step breakdown, "Run in Chat" button |
| Clients | `/dashboard/clients` | Client list, onboard via chat |
| Vault | `/dashboard/vault` | Obsidian-compatible file browser |
| Outputs | `/dashboard/outputs` | All generated deliverables |
| Analytics | `/dashboard/analytics` | Cost tracking, usage metrics |
| Settings | `/dashboard/settings` | API keys, model selection |
AGEX was tested with 3 simulations and 13 individual tests before release.
| Task | Team Used | Result |
|---|---|---|
| Strategic positioning vs Wiz/Orca/Lacework | Strategy | PASS — 4 sections, competitor-specific differentiators |
| 3-email cold outreach sequence for CISOs | Sales | PASS — all under word limits, subject lines included |
| Landing page copy with hero/features/social proof | Content | PASS — markdown formatted, mentions real compliance standards |
| Task | Team Used | Result |
|---|---|---|
| Competitor analysis of Everlane/Reformation/Patagonia | Intelligence | PASS — 5 specific messaging gaps identified |
| 400-word editorial blog in Kinfolk tone | Content | PASS — zero exclamation marks, sensory details, quiet tone |
| 4-week LinkedIn content calendar (12 posts) | Content | PASS — narrative arc across weeks, all fields present |
| Test | Result |
|---|---|
| Empty message | PASS — graceful response asking for context |
| No API key | PASS — clear error: "Go to Settings to add one" |
| Invalid provider | PASS — "Unknown provider" error |
| Wrong API key | PASS — "Invalid API Key" from provider |
| Missing model | PASS — "Provider and model required" |
| 3 concurrent requests | PASS — handled correctly |
| All 10 pages after stress | PASS — 10/10 return 200 |
The easiest way to start. Groq offers free API access with fast inference.
- Go to console.groq.com
- Create an account and copy your API key
- In AGEX Settings, select Groq and paste the key
- Model: `llama-3.3-70b-versatile`
- Go to platform.openai.com
- Create an API key
- In AGEX Settings, select OpenAI and paste the key
- Model: `gpt-4o` (recommended)
- Go to console.anthropic.com
- Create an API key
- In AGEX Settings, select Anthropic and paste the key
- Model: `claude-sonnet-4-20250514` (recommended)
- Go to aistudio.google.com
- Get an API key
- In AGEX Settings, select Google and paste the key
- Model: `gemini-2.5-flash` (recommended)
- Go to openrouter.ai
- Create an account and get an API key
- In AGEX Settings, select OpenRouter and paste the key
- Access 200+ models through one key
- Install Ollama
- Run `ollama pull llama3.1`
- In AGEX Settings, select Ollama
- No API key needed — runs on your machine
AGEX handles your LLM API keys. Here's how we protect them:
- API keys are stored in your browser's localStorage — never sent to any server except the LLM provider
- The `/api/chat` route runs on YOUR machine — keys are only used server-side in your Next.js instance
- No telemetry, no analytics, no external calls — AGEX only talks to the LLM provider you configure
- Vault files stay on your filesystem — nothing leaves your machine
For production deployment, see the Security section of the documentation.
```sh
node --version   # Must be 20+
pnpm --version   # Must be 9+
```

```sh
git clone https://github.com/z1fex/agex.git
cd agex
pnpm install
```

```sh
cd apps/web
npx next dev -p 3000
```

```sh
pnpm build
```

| Command | Description |
|---|---|
| `pnpm dev` | Start development server |
| `pnpm build` | Production build |
| `pnpm lint` | Run linting |
| `pnpm clean` | Clean build artifacts |
- Chat-first interface with real-time streaming
- 113 agent brain loaded from 75 markdown files
- 6 LLM providers (OpenAI, Anthropic, Google, Groq, OpenRouter, Ollama)
- Full brain (premium models) / compact brain (free models)
- 12 pages: chat, dashboard, agents, workflows, vault, clients, outputs, analytics, settings
- Chat history persists across navigation
- Execution engine — new `@agency/execution-engine` package
- Agent hierarchy — Dispatcher routes to specific agents with focused prompts
- Multi-call execution — each agent gets its own dedicated LLM call (~3-5K tokens, not 50K)
- Output saving — auto-save deliverables to vault + output directory with YAML frontmatter
- Vault API — real file tree + file preview from `/api/vault`
- Outputs API — list saved deliverables from `/api/outputs`
- Enhanced streaming — chat shows agent plan, step progress, completion markers
- Improved markdown rendering — headers, dividers, code blocks, lists, cost info
- Conversation threads — multiple chats, auto-titled, persistent
- Client onboarding — creates real vault files (profile, brand voice, ICP, goals, competitors)
- Client context injection — client data auto-loaded into dispatcher + agent prompts
- Cost tracking — per-call token counting, pricing table, real analytics dashboard
- Quality gate — automated scoring via fast LLM model
- 6 new agents — LinkedIn Ghostwriter, Pitch Deck Writer, Tone Adapter, Trend Predictor, Landing Page Optimizer, Objection Handler (119+ total)
- Analytics API — real cost data by provider, team, and day
- Clients API — create + list clients from vault directory
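The cost tracking listed above works by counting tokens per call and pricing them against a table. A minimal sketch of that lookup, with a made-up function name and placeholder prices rather than real provider rates:

```typescript
// Hypothetical per-call cost estimator. Pricing numbers are placeholders
// for illustration, not actual provider rates.
interface Pricing {
  inputPer1M: number;  // USD per 1M input tokens
  outputPer1M: number; // USD per 1M output tokens
}

const PRICING: Record<string, Pricing> = {
  "gpt-4o": { inputPer1M: 2.5, outputPer1M: 10 },
};

function estimateCost(model: string, inputTokens: number, outputTokens: number): number {
  const p = PRICING[model];
  if (!p) return 0; // unknown models are tracked as zero-cost
  return (inputTokens / 1e6) * p.inputPer1M + (outputTokens / 1e6) * p.outputPer1M;
}
```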
- Database activation (SQLite fully wired)
- Export outputs (markdown, PDF)
- Better small model support (improved compact brain)
- CLI version (`npx agex`)
- Docker one-click deployment
- Web search integration (Tavily / SearXNG)
- Custom agent builder (create agents from the UI)
- Workflow builder (drag-and-drop)
- Multi-user with authentication
- Scheduled recurring tasks
- Plugin system for community agent packs
- Agent performance metrics and A/B testing between models
Contributions are welcome. Here's how:
- Fork the repository
- Create your branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
- New agent prompt files in `content/agents/`
- New workflow definitions in `content/workflows/`
- Additional LLM provider support
- UI improvements and themes
- Documentation
Q: Is this free?
A: AGEX is free and open source (MIT license). You pay for LLM API usage based on your provider. Groq and Ollama offer free tiers.

Q: Do I need Docker?
A: No. Just Node.js and pnpm. Run `pnpm install` and start the dev server.
Q: How is this different from ChatGPT/Claude?
A: ChatGPT is a general assistant. AGEX is a specialized agency with 113 pre-built agent identities trained for marketing, sales, content, research, and strategy work. It knows which team to deploy for which task.
Q: Can I add my own agents?
A: Yes. Create a new `.md` file in `content/agents/[team]/` following the existing format. AGEX will automatically include it in the brain.
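For illustration, a hypothetical agent file might look like the sketch below. The "Webinar Host" agent and the exact frontmatter fields are made up, so mirror an existing file in `content/agents/` for the canonical format:

```markdown
---
name: Webinar Host
team: marketing
description: Plans and scripts live webinar sessions
---

# Webinar Host

You are the Webinar Host, a marketing agent who plans session agendas,
writes host scripts, and drafts audience Q&A prompts...
```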
Q: Can I use this commercially?
A: Yes. MIT license. Use it however you want.

Q: Does my data leave my machine?
A: Only your chat messages go to the LLM provider you choose. Everything else (vault files, settings, history) stays on your machine.
MIT License. See LICENSE for details.
AGEX — Stop managing agents. Start talking to your agency.