A weather-focused AI application built with the Mastra framework, designed for deployment on AWS Lambda using Docker containers.
This repository demonstrates how to integrate AWS Lambda Web Adapter into a Docker container for serverless deployment. The critical implementation is in the Dockerfile at line 10:

```dockerfile
COPY --from=public.ecr.aws/awsguru/aws-lambda-adapter:0.9.0 /lambda-adapter /opt/extensions/lambda-adapter
```
This single line enables any HTTP server (running on port 8080) to work seamlessly with AWS Lambda's event-driven model. The adapter handles the translation between Lambda events and standard HTTP requests, eliminating the need to modify your application code for serverless deployment.
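As a minimal illustration of that claim (this snippet is not from the repository), any plain Node HTTP server listening on port 8080 runs unchanged behind the adapter:

```typescript
import { createServer } from "node:http";

// Illustrative only: behind the Lambda Web Adapter, an ordinary HTTP server
// on port 8080 needs no Lambda-specific handler code.
createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ ok: true, path: req.url }));
}).listen(8080);
```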
## Overview

This project provides weather-related AI services through conversational agents and automated workflows. It leverages the Open-Meteo API for weather data and OpenAI for intelligent responses, packaged in a production-ready Docker container optimized for AWS Lambda.

## Features

- Weather Agent: Conversational AI that provides location-based weather information
- Weather Tool: Direct weather data retrieval with geocoding support
- Weather Workflow: Multi-step process for weather analysis and activity planning
- Lambda-Ready: Containerized deployment with AWS Lambda Web Adapter
- In-Memory Storage: LibSQL database for fast, serverless operations
## Prerequisites

- Node.js 18+
- Docker
- OpenAI API key

## Installation

```bash
npm install
```

Create a `.env` file:

```
OPENAI_API_KEY=your_openai_api_key_here
```
## Development

```bash
# Install dependencies
npm install

# Build the application
npx mastra build

# Run locally with Docker
docker build -t mastra-lambda .
docker run -p 8080:8080 -e OPENAI_API_KEY=your_key mastra-lambda
```
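With the container running, a quick smoke test can confirm the server responds. This sketch assumes Mastra's default REST routes and an agent registered under the `weatherAgent` key; verify the paths against your Mastra version:

```typescript
// Hypothetical smoke test for the locally running container.
async function main() {
  const res = await fetch("http://localhost:8080/api/agents/weatherAgent/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      messages: [{ role: "user", content: "What's the weather in Berlin?" }],
    }),
  });
  console.log(await res.json());
}

main().catch(console.error);
```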
## Code Quality

```bash
# Check formatting and linting
npx biome check

# Auto-fix issues
npx biome check --write
```
## Key Components

- Entry Point (`src/mastra/index.ts`): Main Mastra configuration
- Weather Agent (`src/mastra/agents/weather-agent.ts`): GPT-4o-mini powered conversational interface
- Weather Tool (`src/mastra/tools/weather-tool.ts`): Open-Meteo API integration (sketched below)
- Weather Workflow (`src/mastra/workflows/weather-workflow.ts`): Activity planning pipeline
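To give a feel for the tool layer, here is a hedged sketch of what the weather tool could look like; the `id`, schema fields, and query parameters are illustrative, not the repository's exact code:

```typescript
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

// Sketch of a weather tool: geocode the location via Open-Meteo, then fetch
// current conditions. Field names are assumptions, not the repo's real schema.
export const weatherTool = createTool({
  id: "get-weather",
  description: "Get current weather for a location",
  inputSchema: z.object({ location: z.string().describe("City name") }),
  outputSchema: z.object({ temperature: z.number(), windSpeed: z.number() }),
  execute: async ({ context }) => {
    const geo = await fetch(
      `https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(context.location)}&count=1`,
    ).then((r) => r.json());
    const { latitude, longitude } = geo.results[0];
    const weather = await fetch(
      `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}&current=temperature_2m,wind_speed_10m`,
    ).then((r) => r.json());
    return {
      temperature: weather.current.temperature_2m,
      windSpeed: weather.current.wind_speed_10m,
    };
  },
});
```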
## Configuration

The application runs on port 8080 and uses (see the sketch after this list):

- LibSQL in-memory storage for serverless compatibility
- Pino logger with info level
- AWS Lambda Web Adapter for HTTP request handling
- Multi-stage Docker build for optimized container size
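A sketch of how `src/mastra/index.ts` could wire these settings together, based on Mastra's starter template; exact option names may vary between Mastra versions:

```typescript
import { Mastra } from "@mastra/core/mastra";
import { LibSQLStore } from "@mastra/libsql";
import { PinoLogger } from "@mastra/loggers";
import { weatherAgent } from "./agents/weather-agent";
import { weatherWorkflow } from "./workflows/weather-workflow";

// Illustrative configuration matching the list above; not the repo's exact file.
export const mastra = new Mastra({
  agents: { weatherAgent },
  workflows: { weatherWorkflow },
  storage: new LibSQLStore({ url: ":memory:" }), // in-memory LibSQL for serverless use
  logger: new PinoLogger({ name: "Mastra", level: "info" }),
  server: { port: 8080 }, // the Lambda Web Adapter forwards requests to this port
});
```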
## API Endpoints

Once deployed, the application exposes Mastra's standard endpoints for:

- Agent conversations
- Workflow execution
- Tool invocations
## Deployment

1. Build the Docker image:

   ```bash
   docker build -t mastra-lambda .
   ```

2. Push to Amazon ECR:

   ```bash
   aws ecr get-login-password --region your-region | docker login --username AWS --password-stdin your-account.dkr.ecr.your-region.amazonaws.com
   docker tag mastra-lambda:latest your-account.dkr.ecr.your-region.amazonaws.com/mastra-lambda:latest
   docker push your-account.dkr.ecr.your-region.amazonaws.com/mastra-lambda:latest
   ```

3. Create a Lambda function from the container image (see the SDK sketch below).

4. Set environment variables in the Lambda console:
   - `OPENAI_API_KEY`: Your OpenAI API key
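Step 3 can also be scripted with the AWS SDK for JavaScript v3 instead of the console. This is a sketch under stated assumptions: the role ARN, memory, and timeout values are placeholders, not values taken from this repository:

```typescript
import { CreateFunctionCommand, LambdaClient } from "@aws-sdk/client-lambda";

// Hypothetical function creation from the pushed container image.
const lambda = new LambdaClient({ region: "your-region" });

await lambda.send(
  new CreateFunctionCommand({
    FunctionName: "mastra-lambda",
    PackageType: "Image",
    Code: { ImageUri: "your-account.dkr.ecr.your-region.amazonaws.com/mastra-lambda:latest" },
    Role: "arn:aws:iam::your-account:role/your-lambda-execution-role", // placeholder
    MemorySize: 1024, // placeholder sizing
    Timeout: 30, // placeholder timeout in seconds
    Environment: { Variables: { OPENAI_API_KEY: "your_openai_api_key_here" } },
  }),
);
```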
## Code Style

The project uses:

- Biome for formatting (2 spaces, double quotes) and linting
- TypeScript for type safety
- Zod for schema validation
- CommonJS module format
## External APIs

- Open-Meteo: Geocoding and weather data (no API key required)
- OpenAI: GPT-4o-mini for AI capabilities (API key required)
## Security

- Non-root user execution in the Docker container
- Environment variable based configuration
- No hardcoded secrets or API keys