Zero-configuration AI context generation for any codebase. Analyze code quality, complexity, and technical debt across 17+ programming languages with extreme quality enforcement and Toyota Way standards.
📖 https://paiml.github.io/pmat-book/ - Complete documentation, tutorials, and guides
# Rust (recommended)
cargo install pmat
# macOS/Linux
brew install pmat
# Windows
choco install pmat
# npm (global)
npm install -g pmat-agent

# Analyze codebase and generate AI-ready context
pmat context
# Analyze complexity
pmat analyze complexity
# Grade technical debt (A+ through F)
pmat analyze tdg
# Score repository health (0-110 scale)
pmat repo-score . # Fast: scans HEAD only
pmat repo-score . --deep # Thorough: scans entire git history
# Find Self-Admitted Technical Debt
pmat analyze satd
# Test suite quality (mutation testing)
pmat mutate --target src/

Install pre-commit hooks for automatic quality enforcement:
# Install git hooks (bashrs quality, pmat-book validation)
pmat hooks install
# Check hook status
pmat hooks status
# Dry-run to see what would be checked
pmat hooks install --dry-run

Hooks enforce:
- Bash/Makefile safety (bashrs linting)
- pmat-book validation (multi-language examples)
- Documentation accuracy (zero hallucinations)
- 17+ Languages: Rust, TypeScript, Python, Go, Java, C/C++, Ruby, PHP, Swift, Kotlin, and more
- AI-Ready Context: Generate deep context for Claude, GPT, and other LLMs
- Technical Debt Grading (TDG): A+ through F scoring with 6 orthogonal metrics
- Repository Health Scoring ✨NEW: Quantitative assessment (0-110 scale) across 6 categories + bonus features
- Workflow Prompts ✨NEW: 11 pre-configured AI prompts enforcing EXTREME TDD and Toyota Way principles
- Git-Commit Correlation: Track TDG scores at specific commits for quality archaeology
- Semantic Code Search: Natural language code discovery with hybrid search
- Quality Gates: Pre-commit hooks, CI/CD integration, mutation testing
- MCP Integration: 19 tools for Claude Code, Cline, and other MCP clients
- Zero Configuration: Works out of the box on any codebase
📖 PMAT Book - Complete guide with tutorials
Key chapters:
- Installation and Setup
- Getting Started
- MCP Protocol
- Technical Debt Grading
- Workflow Prompts ✨NEW
- Repository Health Scoring ✨NEW
- Multi-Language Examples
# Generate context for Claude/GPT
pmat context --output context.md --format llm-optimized
# Analyze TypeScript project
pmat analyze complexity --language typescript
# Technical debt grading with components
pmat analyze tdg --include-components
# Semantic search (natural language)
pmat embed sync ./src
pmat semantic search "error handling patterns"
# Validate documentation for hallucinations
pmat validate-readme --targets README.md

Pre-configured AI workflow prompts that enforce EXTREME TDD and Toyota Way quality principles. Perfect for piping to Claude Code, ChatGPT, or other AI assistants.
# List all available prompts
pmat prompt --list
# Show specific prompt (YAML format)
pmat prompt code-coverage
# Get prompt as text for AI assistants
pmat prompt debug --format text | pbcopy
# JSON format for programmatic use
pmat prompt quality-enforcement --format json
# Customize for non-Rust projects
pmat prompt code-coverage \
--set TEST_CMD="pytest" \
--set COVERAGE_CMD="pytest --cov"
# Save to file
pmat prompt continue -o workflow.yaml

Available Prompts (11 total):
CRITICAL Priority:
- code-coverage - Enforce 85%+ coverage using EXTREME TDD
- debug - Five Whys root cause analysis
- quality-enforcement - Run all quality gates (12 gates)
- security-audit - Security analysis and fixes
HIGH Priority:
- continue - Continue next best step with EXTREME TDD
- assert-cmd-testing - Verify CLI test coverage
- mutation-testing - Run mutation testing on weak code
- performance-optimization - Speed up compilation and tests
- refactor-hotspots - Refactor high-TDG code
MEDIUM Priority:
- clean-repo-cruft - Remove temporary files
- documentation - Update and validate docs
Key Features:
- Toyota Way principles (Jidoka, Andon Cord, Five Whys)
- Variable substitution for multi-language support
- Quality gates and time constraints
- Zero-tolerance policies
- Short alias: pmat p --list
Documentation: Workflow Prompts Guide
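For example, a prompt can be tailored to a non-Rust project and dropped straight onto the clipboard for any AI assistant (a sketch; it assumes --set and --format text combine as the individual examples above suggest, and pbcopy is macOS-specific, so use xclip or wl-copy on Linux):

```bash
# Tailor the coverage prompt to a Python project and copy it to the
# clipboard, ready to paste into Claude Code, ChatGPT, etc.
pmat prompt code-coverage \
  --set TEST_CMD="pytest" \
  --set COVERAGE_CMD="pytest --cov" \
  --format text | pbcopy
```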
Evaluate test suite quality by introducing code mutations and checking if tests detect them.
# Basic mutation testing
pmat mutate --target src/lib.rs
# With quality gate (fail if score < 85%)
pmat mutate --target src/ --threshold 85
# Failures only (CI/CD optimization)
pmat mutate --target src/ --failures-only
# JSON output for integration
pmat mutate --target src/ --output-format json > results.json

Mutation Score = (Killed Mutants / Total Valid Mutants) × 100%
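For example, 45 killed mutants out of 50 valid mutants yields a 90% mutation score. A minimal CI gate can build on the threshold flag shown above (a sketch; it assumes pmat exits non-zero when the score falls below the threshold):

```bash
# Gate the pipeline on mutation score: fail the job below 85% and
# print only surviving mutants to keep CI logs short
if ! pmat mutate --target src/ --threshold 85 --failures-only; then
  echo "Mutation score below 85% - strengthen the weak tests" >&2
  exit 1
fi
```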
Supported Languages: Rust, Python, TypeScript, JavaScript, Go, C++
Key Features:
- Multi-language mutation operators (arithmetic, comparison, logical, boundary)
- CI/CD integration (GitHub Actions, GitLab CI, Jenkins)
- Performance optimization (parallel execution, differential testing)
- Quality gates with configurable thresholds
Documentation:
- User Guide - Complete guide with examples
- API Reference - All CLI flags and options
- Best Practices - Team adoption strategies
- CI/CD Integration - GitHub Actions, GitLab CI, Jenkins
Example Projects:
- Rust Example - 8 functions, 8 tests, ~90% mutation score
- Python Example - 8 functions, 24 tests, comprehensive coverage
- TypeScript Example - 8 functions, 24 tests, Jest integration
Zero-regression quality enforcement across local development, git workflows, and CI/CD pipelines.
# 1. Create quality baseline
pmat tdg baseline create --output .pmat/tdg-baseline.json --path .
# 2. Install git hooks (optional)
pmat hooks install --tdg-enforcement
# 3. Check for regressions
pmat tdg check-regression \
--baseline .pmat/tdg-baseline.json \
--path . \
--max-score-drop 5.0 \
--fail-on-regression
# 4. Enforce quality standards for new code
pmat tdg check-quality \
--path . \
--min-grade B+ \
--new-files-only \
--fail-on-violation

Baseline System:
- Blake3 content-hash based deduplication
- Project-wide quality snapshots
- Delta detection (improved, regressed, unchanged files)
Quality Gates:
- Regression detection (prevents quality degradation)
- Minimum grade enforcement (ensures new code quality)
- Language-specific thresholds
Git Hooks:
- Pre-commit quality checks
- Post-commit baseline updates
- Configurable enforcement modes (strict, warning, disabled)
CI/CD Templates:
- GitHub Actions workflow
- GitLab CI pipeline
- Jenkins declarative pipeline
Create .pmat/tdg-rules.toml:
[quality_gates]
rust_min_grade = "A"
python_min_grade = "B+"
max_score_drop = 5.0
mode = "strict" # strict, warning, or disabled
[baseline]
baseline_path = ".pmat/tdg-baseline.json"
auto_update_on_main = true

GitHub Actions:
cp templates/ci/github-actions-tdg.yml .github/workflows/tdg-quality.yml

GitLab CI:
cp templates/ci/gitlab-ci-tdg.yml .gitlab-ci.yml

Jenkins:
cp templates/ci/Jenkinsfile-tdg Jenkinsfile

See Complete Guide: docs/guides/ci-cd-tdg-integration.md
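If the bundled templates don't fit an existing pipeline, the same gates can be wired in as plain shell steps, reusing only the commands shown above (a sketch; it assumes the baseline file has already been created and committed to the repository):

```bash
# CI quality gate: block TDG regressions against the committed baseline,
# then require at least a B+ grade for newly added files
pmat tdg check-regression \
  --baseline .pmat/tdg-baseline.json \
  --path . \
  --max-score-drop 5.0 \
  --fail-on-regression

pmat tdg check-quality \
  --path . \
  --min-grade B+ \
  --new-files-only \
  --fail-on-violation
```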
Track Technical Debt Grading (TDG) scores at specific git commits for "quality archaeology" workflows.
# Analyze with git context
pmat tdg server/src/lib.rs --with-git-context
# Query specific commit
pmat tdg history --commit v2.178.0
# History since reference
pmat tdg history --since HEAD~10
# Commit range
pmat tdg history --range v2.177.0..v2.178.0
# Filter by file
pmat tdg history --path server/src/lib.rs --since HEAD~5
# JSON output for scripting
pmat tdg history --commit 60125a0 --format json

Use Cases:
- Quality Archaeology: Find which commit broke quality
- Release Tracking: Compare quality between releases
- Regression Detection: Identify quality drops over time
- Developer Metrics: Track quality attribution
Example Workflows:
# Find when quality dropped below B+
pmat tdg history --since HEAD~50 --format json | \
jq '.history[] | select(.score.grade | test("C|D|F"))'
# Quality delta between releases
pmat tdg history --range v2.177.0..v2.178.0
# Per-file quality trend
pmat tdg history --path src/lib.rs --since HEAD~20

Features:
- Tag resolution support (e.g., --commit v2.178.0)
- MCP integration (with_git_context: true parameter)
- Zero performance overhead (<1% analysis time)
- Backward compatible (git context is optional)
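Quality archaeology can also be scripted end to end, for example by listing the offending commits directly (a sketch building on the jq filter above; the .commit field name in the JSON output is an assumption):

```bash
# Print the commit of every entry in the last 50 whose TDG grade
# dropped to C, D, or F (field names assumed from the filter above)
pmat tdg history --since HEAD~50 --format json | \
  jq -r '.history[] | select(.score.grade | test("C|D|F")) | .commit'
```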
PMAT provides 19 MCP tools for AI agents:
# Start MCP server
pmat mcp
# Use with Claude Code, Cline, or other MCP clients

Tools include: context generation, complexity analysis, TDG scoring, semantic search, code clustering, documentation validation, and more.
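To make the server available inside Claude Code, it can be registered as a stdio MCP server (a sketch; the claude mcp add subcommand assumes a current Claude Code CLI, and its syntax may differ between versions):

```bash
# Register pmat as an MCP server named "pmat" in Claude Code
claude mcp add pmat -- pmat mcp
```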
See MCP Tools Documentation for details.
Built by: Pragmatic AI Labs
License: MIT
Repository: github.com/paiml/paiml-mcp-agent-toolkit
Issues: GitHub Issues
Current Version: v2.167.0 (Sprint 44 - Coverage Remediation)
See ROADMAP.md for project status and future plans.
Quality Standards:
- EXTREME TDD (RED → GREEN → REFACTOR)
- 85%+ code coverage
- Five Whys root cause analysis
- Toyota Way principles (Jidoka, Genchi Genbutsu, Kaizen)
- Zero tolerance for defects