
akshayitzme/support-pilot


SupportPilot

AI-powered customer support agent that analyzes Linear tickets, searches internal documentation, and proposes actionable solutions. It completes a ticket analysis in under 5 seconds, reducing manual triage time and freeing support teams from repetitive classification work.

Built as a hobby project to experiment with AI integration, RAG, and tool calling, it also demonstrates multi-provider AI configuration, background job processing, and type-safe TypeScript architecture, making it easy to scale and extend.

Tech Stack

Category      Technology       Reason
Runtime       Bun              Fast startup and native TypeScript execution
Framework     Fastify          Minimal, type-safe, low-overhead HTTP server
Queue         BullMQ + Redis   Reliable background job processing with retries
Queue UI      Bull Board       Web dashboard for monitoring queues and job status
AI SDK        Vercel AI SDK    Unified interface for OpenAI, Gemini, Ollama
Validation    Zod              Runtime type safety for AI tool parameters
Linting       Biome            Fast, unified linting and formatting
Container     Docker           Reproducible worker deployment
Integration   Linear API       Polling-based ticket ingestion

What It Does

  • Polls Linear for new support tickets on a configurable interval
  • Classifies ticket intent using structured LLM output
  • Searches internal markdown knowledge base via keyword scoring
  • Generates root cause analysis and suggested fixes using RAG
  • Posts formatted AI response back to Linear as a comment
  • Switches between AI providers without code changes

Architecture Choices

Multi-provider AI configuration: Switch between OpenAI, Gemini, and local Ollama via a single configuration point. Makes it easy to test cost vs. performance tradeoffs. Enables local development without API costs.
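A single-point provider switch might look like the following sketch. The env variable names (AI_PROVIDER, AI_MODEL), default model IDs, and the Ollama provider package are illustrative assumptions, not the project's actual config:

```typescript
// Hypothetical single-point AI provider switch (names are assumptions).
import { createOpenAI } from "@ai-sdk/openai";
import { createGoogleGenerativeAI } from "@ai-sdk/google";
import { createOllama } from "ollama-ai-provider";

export function getModel() {
  const provider = process.env.AI_PROVIDER ?? "ollama";
  switch (provider) {
    case "openai":
      return createOpenAI()(process.env.AI_MODEL ?? "gpt-4o-mini");
    case "gemini":
      return createGoogleGenerativeAI()(process.env.AI_MODEL ?? "gemini-1.5-flash");
    default:
      // Local Ollama: no API key, no per-token cost.
      return createOllama()(process.env.AI_MODEL ?? "llama3.1");
  }
}
```

Everything downstream calls `getModel()`, so swapping providers is an environment change, not a code change.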

Polling over webhooks: Simpler for experimentation. No webhook signature verification or public endpoint exposure. Easier to test locally without tunneling tools.
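The core of a poll loop is deciding which fetched issues are new. Keeping that step pure makes it testable without touching the Linear API; the field names below are illustrative, not the project's actual types:

```typescript
// Sketch of the poller's filtering step (field names are assumptions).
interface PolledIssue {
  id: string;
  createdAt: string; // ISO 8601 timestamp
}

// Return issues created since the last poll, oldest first, plus the new cursor.
export function selectNewIssues(issues: PolledIssue[], lastPollIso: string) {
  const fresh = issues
    .filter((i) => i.createdAt > lastPollIso)
    .sort((a, b) => a.createdAt.localeCompare(b.createdAt));
  const cursor = fresh.length > 0 ? fresh[fresh.length - 1].createdAt : lastPollIso;
  return { fresh, cursor };
}
```

ISO 8601 timestamps sort lexicographically, so plain string comparison is enough here.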

Single queue, two-phase processing: Poller fetches and enqueues. Worker processes. Clear separation of concerns. Each part testable in isolation.
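The two-phase split might be wired up roughly like this; the queue name, job payload, and retry settings are assumptions based on the project structure, not verified code:

```typescript
// Sketch of the two-phase flow (names and options are illustrative).
import { Queue, Worker } from "bullmq";

interface TicketJob {
  issueId: string;
  title: string;
  description: string;
}

const connection = { host: "localhost", port: 6379 };
const queue = new Queue<TicketJob>("support-ticket", { connection });

// Phase 1: the poller only fetches and enqueues; it never calls the AI.
export async function enqueueTicket(job: TicketJob) {
  await queue.add("analyze", job, {
    attempts: 3,
    backoff: { type: "exponential", delay: 1000 },
  });
}

// Phase 2: the worker does all processing, with retries handled by BullMQ.
new Worker<TicketJob>(
  "support-ticket",
  async (job) => {
    // AI analysis + comment posting would live here.
    console.log(`processing ${job.data.issueId}`);
  },
  { connection },
);
```

Because the two phases only share the queue, either side can be restarted, scaled, or tested on its own.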

Vercel AI SDK over custom abstraction: Reduces boilerplate. Built-in tool calling and structured output parsing. Focuses effort on business logic instead of provider SDKs.
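Structured output with the SDK plus Zod might look like this sketch; the schema fields and the `getModel` helper are hypothetical, not the project's actual `detectIntent`:

```typescript
// Sketch of structured intent classification (schema fields are assumptions).
import { generateObject } from "ai";
import { z } from "zod";
import { getModel } from "./model"; // hypothetical provider selector

const IntentSchema = z.object({
  intent: z.enum(["bug", "question", "feature_request"]),
  severity: z.enum(["low", "medium", "high"]),
  summary: z.string(),
});

export async function detectIntent(ticketText: string) {
  const { object } = await generateObject({
    model: getModel(),
    schema: IntentSchema,
    prompt: `Classify this support ticket:\n\n${ticketText}`,
  });
  return object; // typed as z.infer<typeof IntentSchema>
}
```

The SDK validates the LLM's response against the Zod schema, so downstream code never sees malformed output regardless of which provider produced it.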

Services layer over inline logic: Business logic lives in src/services/. Worker and API are thin orchestrators. Easier to unit test and reuse.

Local knowledge base over managed vector DB: No external dependencies for MVP. Keyword and topic scoring is sufficient for experimentation. Swappable for vector stores later.
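A minimal version of keyword scoring over an in-memory KB can be sketched as a pure function (an illustration of the approach, not the project's actual `knowledge-base.ts`):

```typescript
// Minimal keyword-scoring search over an in-memory knowledge base.
interface KbDoc {
  slug: string;
  text: string;
}

function tokenize(s: string): string[] {
  return s.toLowerCase().match(/[a-z0-9]+/g) ?? [];
}

// Score = number of distinct query tokens that appear in the document.
export function searchKb(docs: KbDoc[], query: string, topK = 3): KbDoc[] {
  const terms = new Set(tokenize(query));
  return docs
    .map((doc) => {
      const words = new Set(tokenize(doc.text));
      let score = 0;
      for (const t of terms) if (words.has(t)) score++;
      return { doc, score };
    })
    .filter((r) => r.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map((r) => r.doc);
}
```

Because the interface is just "query in, ranked docs out", `searchKb` can later be swapped for a vector-store lookup without touching callers.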

Advantages & Real-World Impact

  • Faster Ticket Resolution: AI analysis completes in under 5 seconds per ticket, reducing manual triage time and helping support teams handle more requests efficiently.

  • Improved Accuracy: Structured LLM output and knowledge base retrieval help ensure recommendations are relevant, consistent, and actionable.

  • Seamless Multi-Provider AI Switching: Test and balance cost vs. performance without rewriting code. Enables local development with no API charges.

  • Reduced Cognitive Load: Support agents get root cause analysis, suggested fixes, and implementation steps automatically, freeing them to focus on complex issues.

  • Observability: BullMQ + Bull Board gives a clear view of queued jobs, failures, and retries, increasing operational confidence.

  • Extensible & Type-Safe: Written in TypeScript with Zod validation, making it easier to add new AI tools, workflows, or integrations safely.

Demo Instructions

  1. Start Redis: docker-compose up -d
  2. Run app: bun run src/index.ts
  3. Create a ticket in Linear
  4. Watch worker logs for processing output
  5. Verify AI comment appears in Linear ticket thread

Optional: Open the Bull Board dashboard at http://localhost:3000/admin/queues to monitor jobs.

Project Structure

.
├── biome.json              # Biome linting and formatting config
├── bun.lock                # Bun lockfile
├── docker-compose.yml      # Redis + worker container definitions
├── docs/
│   └── kb/                 # Knowledge base markdown files
│       └── api-authentication-401.md
├── knip.json               # Unused code detection config
├── Makefile                # Common development commands
├── package.json            # Dependencies and scripts
├── README.md               # This file
├── src/
│   ├── ai/
│   │   ├── helper.ts       # Pure AI functions: detectIntent, diagnoseIssue
│   │   ├── index.ts        # AI provider initialization
│   │   ├── prompts.ts      # Prompt templates
│   │   └── tools.ts        # Tool definitions for LLM
│   ├── api/
│   │   ├── health.ts       # Health check endpoint
│   │   └── index.ts        # Fastify server setup
│   ├── config/
│   │   ├── ai.ts           # Multi-provider AI config
│   │   ├── bull-board.ts   # BullMQ dashboard config
│   │   ├── bullmq.ts       # Queue connection config
│   │   ├── fastify.ts      # Fastify server config
│   │   ├── linear-poll.ts  # Polling interval config
│   │   └── redis.ts        # Redis connection config
│   ├── index.ts            # Application entry point (API)
│   ├── jobs/
│   │   └── poll-worker.ts  # Cron job: fetch Linear issues
│   ├── queues/
│   │   └── support-ticket.ts # Queue name constant
│   ├── services/
│   │   ├── knowledge-base.ts # Local KB search and indexing
│   │   ├── linear-client.ts  # Linear GraphQL calls
│   │   ├── redis-client.ts   # Redis connection singleton
│   │   └── ticket-processor.ts # AI analysis + comment posting
│   ├── types/
│   │   ├── ai.ts             # AI response types
│   │   ├── knowledge-service.ts # KB search types
│   │   ├── linear.ts         # Linear API types
│   │   └── queue.ts          # TicketJob interface
│   └── workers/
│       ├── index.ts          # Worker orchestrator
│       └── support-ticket.ts # BullMQ worker definition
└── tsconfig.json         # TypeScript config

License

MIT
