A comprehensive collection of AI engineering projects designed as an executable codebase, where both humans and AI agents can discover and run the available tools through a unified interface. This repository demonstrates practical implementations across cloud infrastructure, AI inference systems, and graph-based AI architectures.
```bash
# Get available commands and project overview
make help

# Check prerequisites (Packer, AWS CLI, OpenTofu)
make check-prerequisites

# Initialize and set up the project
make setup
```
```text
ai-engineering/
├── Makefile                    # 🎯 EXECUTABLE INTERFACE - Universal tool catalog
├── README.md                   # This file
├── .github/workflows/          # CI/CD automation for tool validation
├── agents/                     # AI agent implementations and examples
│   └── oss-agent/              # Example agent using open source models
├── infrastructure/             # Cloud infrastructure projects
│   └── ai-inference/           # Production-ready AI inference infrastructure
│       ├── packer/             # Custom Ubuntu 24.04 AMI builder with GPU support
│       └── opentofu/           # Modular Infrastructure as Code deployment
│           ├── iam/            # IAM roles and permissions
│           ├── inference/      # Main deployment using modular components
│           └── modules/        # Reusable Terraform modules
│               └── inference/  # Parameterized inference server module
└── intro-langgraph/            # (Planned) LangGraph learning project
```
This repository embodies a core principle: the codebase itself should be executable by both humans and AI agents. Every tool, command, and capability is discoverable and runnable through a unified interface.
```bash
make help  # Same interface - AI agents can parse and execute tools
```
Key Principles:
- Single Source of Truth: The Makefile serves as the authoritative catalog of all executable operations
- Self-Documenting: Every command includes clear descriptions and usage examples
- Universal Access: The same interface works for humans, CI/CD systems, and AI agents
- Discoverability: No hidden commands - everything is accessible via `make help`
This repository serves as a comprehensive portfolio demonstrating:
- Infrastructure as Code: Automated cloud infrastructure provisioning
- AI Inference Systems: Production-ready AI model deployment
- DevOps Best Practices: CI/CD, automation, and monitoring
- Human-AI Collaboration: Interfaces designed for both human and AI agent interaction
- Learning in Public: Documented journey through AI engineering
A complete infrastructure solution for deploying AI inference workloads on AWS using custom AMIs and modular OpenTofu configuration.
Key Features:
- Custom Ubuntu 24.04 AMI with Docker, NVIDIA drivers, and GPU support
- Modular architecture: separate IAM, inference, and reusable modules
- Multi-model deployment: Qwen 3 0.6B, GPT-OSS 20B, and Gemma 3 27B configurations
- vLLM server with systemd integration and container lifecycle management
- GPU-enabled instances (g5.2xlarge) with automated provisioning
- Comprehensive security: EBS encryption, restrictive security groups, IAM best practices
Quick Deploy:
```bash
# Build custom AMI
make ami-build

# Deploy IAM resources
make tofu-iam-apply

# Deploy inference infrastructure
make tofu-inference-apply
```
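Once deployed, the vLLM server exposes an OpenAI-compatible API. A minimal sketch of querying it using only the standard library; the host, port, and model identifier below are placeholders, not values taken from this repository:

```python
import json
import urllib.request

# Placeholder endpoint - substitute the EC2 instance's address after deployment.
VLLM_URL = "http://<instance-ip>:8000/v1/chat/completions"


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat completion payload for the vLLM server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }


def query_vllm(url: str, payload: dict) -> dict:
    """POST the payload to the vLLM server and return the parsed JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Example (requires a running inference server; model name is illustrative):
# reply = query_vllm(VLLM_URL, build_chat_request("Qwen/Qwen3-0.6B", "Hello!"))
```

The same payload works against any of the deployed model configurations, since vLLM serves them all behind the standard `/v1/chat/completions` route.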
A practical example of building AI agents using open source models with comprehensive documentation.
Key Features:
- OpenAI-compatible API integration for local models (vLLM, Ollama)
- Function calling with Wikipedia search tools
- Interactive REPL with Rich formatting
- Comprehensive comments explaining agent patterns and OSS model usage
Quick Start:
```bash
cd agents/oss-agent
python main.py  # Interactive agent with Wikipedia search
```
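The function-calling pattern the agent relies on can be sketched as follows. The tool schema and dispatcher below are illustrative, not the exact code in `main.py`, and the Wikipedia lookup is stubbed out:

```python
import json

# Illustrative tool schema in the OpenAI function-calling format.
WIKIPEDIA_TOOL = {
    "type": "function",
    "function": {
        "name": "wikipedia_search",
        "description": "Search Wikipedia and return a short summary.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}


def wikipedia_search(query: str) -> str:
    # A real implementation would call the Wikipedia API; stubbed for brevity.
    return f"Summary for: {query}"


TOOL_REGISTRY = {"wikipedia_search": wikipedia_search}


def dispatch_tool_call(tool_call: dict) -> str:
    """Route a model-emitted tool call to the matching Python function."""
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    return TOOL_REGISTRY[name](**args)


# Shape of a tool call as emitted behind an OpenAI-compatible API:
call = {"function": {"name": "wikipedia_search",
                     "arguments": json.dumps({"query": "LangGraph"})}}
print(dispatch_tool_call(call))  # prints "Summary for: LangGraph"
```

Because local servers such as vLLM and Ollama speak the same API, the identical loop works whether the model runs locally or in the cloud.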
- Infrastructure: OpenTofu (Terraform), Packer, AWS (EC2, EBS, VPC, IAM)
- AI/ML: vLLM, Python, GPU acceleration, OpenAI-compatible APIs
- Containerization: Docker, systemd service management
- Automation: Make, Bash scripting, GitHub Actions
- Security: EBS encryption, security groups, IAM best practices
The project's Makefile serves as the executable interface - a programmatically parseable catalog of all available tools. This design enables both humans and AI agents to discover and execute operations using the same commands.
```bash
make help  # Lists all available tools with descriptions
```
Example Output:
```text
AI Engineering - Executable Tool Catalog

AMI/Packer Commands:
  ami-build            Build the AMI (with validation)
  ami-init             Initialize Packer plugins
  ami-validate         Validate Packer configuration

Agent Commands:
  agent-oss-check      Check OSS agent environment and dependencies
  agent-oss-install    Install OSS agent dependencies
  agent-oss-run        Run the OSS agent interactively

Infrastructure/OpenTofu Commands:
  tofu-apply           Deploy all infrastructure with OpenTofu
  tofu-iam-apply       Deploy IAM infrastructure with OpenTofu
  tofu-inference-apply Deploy inference infrastructure with OpenTofu
  tofu-init            Initialize all OpenTofu modules
  tofu-plan            Show deployment plan for all modules
  tofu-validate        Validate all OpenTofu modules

Setup & Verification Commands:
  check-aws-config     Check AWS configuration and permissions
  check-prerequisites  Check if required tools are installed
  setup                Complete setup and initialization

Utility Commands:
  help                 Show this help message
  list-ami             List recent AMIs created by this project
  status               Show current project status
```
The Makefile format is specifically designed to be:
- Parseable: AI agents can extract command names and descriptions
- Executable: Commands can be run programmatically
- Self-Contained: Each command includes all necessary context
- Consistent: Uniform pattern across all operations
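For instance, an agent could build its tool catalog by parsing the `make help` output. The sketch below assumes the two-column `name  description` layout shown above; the regex is an assumption about that format, not code from this repository:

```python
import re
import subprocess

# Matches an indented "command  description" help line.
HELP_LINE = re.compile(r"^\s{2,}(?P<name>[a-z][\w-]*)\s{2,}(?P<desc>.+)$")


def parse_help(text: str) -> dict:
    """Extract {command: description} pairs from `make help` output."""
    tools = {}
    for line in text.splitlines():
        m = HELP_LINE.match(line)
        if m:
            tools[m.group("name")] = m.group("desc").strip()
    return tools


def discover_tools() -> dict:
    """Run `make help` in the repository root and parse the catalog."""
    out = subprocess.run(["make", "help"], capture_output=True, text=True)
    return parse_help(out.stdout)


sample = """
AMI/Packer Commands:
  ami-build       Build the AMI (with validation)
  ami-validate    Validate Packer configuration
"""
print(parse_help(sample))
```

Section headers start at column zero, so only the indented command lines are captured; the resulting dictionary is all an agent needs to choose and execute an operation.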
Infrastructure Commands:
- `make ami-build` - Build custom Ubuntu AI inference AMI
- `make tofu-iam-apply` - Deploy IAM resources (roles, policies)
- `make tofu-inference-apply` - Deploy inference infrastructure (EC2, vLLM)
- `make tofu-apply` - Deploy all infrastructure (IAM + inference)
- `make tofu-destroy` - Tear down all infrastructure
Agent Commands:
- `make agent-oss-run` - Run the OSS agent interactively
- `make agent-oss-install` - Install OSS agent dependencies
- `make agent-oss-check` - Check agent environment and dependencies
Utility Commands:
- `make check-prerequisites` - Verify required tools
- `make setup` - Complete project initialization
- `make status` - Show current project status
- `make clean` - Clean build artifacts
Validation Commands:
- `make ami-validate` - Validate Packer configuration
- `make tofu-validate` - Validate OpenTofu configuration
- Graph-based AI application development
- Multi-agent systems and workflows
- Integration with various LLM providers
- Container orchestration (EKS)
- Model serving platforms
- Monitoring and observability stack
Before using this project, ensure you have:
- AWS CLI configured with appropriate permissions
- Packer 1.7+ for AMI building
- OpenTofu 1.0+ for infrastructure deployment
- Make for automation commands
Quick installation on macOS:
```bash
brew install awscli packer opentofu
```
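A check like `make check-prerequisites` boils down to verifying each binary is on `PATH`. A minimal sketch (the actual target may also verify versions; the binary names below are the standard ones and are assumed here):

```python
import shutil

# Standard binary names for the required tools (assumed, not taken from the Makefile).
REQUIRED_TOOLS = ("aws", "packer", "tofu", "make")


def check_prerequisites(tools=REQUIRED_TOOLS) -> dict:
    """Map each required CLI tool to whether it is found on PATH."""
    return {tool: shutil.which(tool) is not None for tool in tools}


for tool, found in check_prerequisites().items():
    print(f"{'OK     ' if found else 'MISSING'} {tool}")
```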
Each subproject contains detailed documentation:
- `infrastructure/ai-inference/packer/README.md` - AMI building guide
- `infrastructure/ai-inference/opentofu/README.md` - Complete infrastructure deployment guide
- `agents/oss-agent/main.py` - Comprehensive agent implementation with inline documentation
This is a personal learning repository, but feedback and suggestions are welcome! Please:
- Check existing issues and documentation
- Open an issue for bugs or feature requests
- Follow the established code and documentation patterns
This repository represents practical implementations learned from:
- AWS Well-Architected Framework
- Infrastructure as Code best practices
- AI/ML deployment patterns
- DevOps automation principles
This project demonstrates production-ready infrastructure automation, AI system deployment, and continuous learning in the rapidly evolving field of AI engineering.