diff --git a/terraform/module-generation/skills/terraform-module-acceptance-criteria-builder/SKILL.md b/terraform/module-generation/skills/terraform-module-acceptance-criteria-builder/SKILL.md new file mode 100644 index 0000000..ec072b9 --- /dev/null +++ b/terraform/module-generation/skills/terraform-module-acceptance-criteria-builder/SKILL.md @@ -0,0 +1,219 @@ +--- +name: terraform-module-acceptance-criteria-builder +description: > + Use this skill to turn high-level needs about Terraform modules or + infrastructure-as-code into clear, testable requirements using the + EARS notation (Easy Approach to Requirements Syntax), including + structured user-story input from companion IaC story skills. +metadata: + domain: requirements + tags: ["EARS", "requirements", "terraform", "iac", "testing"] +--- + +# Purpose + +When this skill is active, you help the user express **behavioural requirements** +for Terraform modules or other IaC components using EARS patterns +(Ubiquitous, Event-driven, Optional, Unwanted, Complex). + +You focus on: +- What the module must do in given conditions. +- What must never happen (unwanted behaviour). +- Options/flags and their effects. +- Requirements that can be turned into automated tests or checks. + +# When to use this skill + +Use this skill when: +- The user mentions EARS, requirements, or “spec” for a Terraform module / IaC. +- The user has a user story or high-level description and wants precise, + low-ambiguity requirements. +- The user provides output from the Agent Skill `terraform-module-story-builder` + and wants EARS requirements derived from it. +- The user wants to derive test cases or policy rules from requirements. + +Do NOT use this skill for: +- Purely business-level discussions without any need for testable behaviour. +- Freeform documentation or README generation. + +# EARS patterns (quick reference) + +When writing requirements, use these standard EARS patterns. 

- **Ubiquitous** (always true):
  Pattern: `The <system name> shall <system response>.`
  Use for behaviour that must always hold.

- **Event-driven**:
  Pattern: `When <trigger>, the <system name> shall <system response>.`
  Use for behaviour that depends on a condition or event.

- **Optional feature**:
  Pattern: `Where <option> is <value>, the <system name> shall <system response>.`
  Use for flags or configuration options (e.g. `var.enable_x`).

- **Unwanted behaviour**:
  Pattern: `If <unwanted condition>, then the <system name> shall <system response>.`
  Use for validation, errors, and safety constraints.

- **Complex**:
  Pattern: a combination of the above when needed.
  Prefer splitting into multiple simpler requirements when possible.

# Instructions

When using this skill, always follow this process:

1. **Normalize the input (especially user-story skill output)**
   - If input includes a `## User story` section, treat it as the primary source.
   - If input includes `## Context (optional)`, use it for domain, beneficiary, and constraints.
   - If story wording is unclear, restate it once in concise "As a / I want / so that" form before deriving requirements.

2. **Clarify the scope**
   - Identify the module or component being specified (e.g. `network`, `secure_bucket`).
   - Ask for missing details only if needed to write testable requirements.
   - If the user mentions Terraform, use terms like `module`, `variable`, `output`, `plan`, and `apply` in the requirements.

3. **Identify key behaviours**
   - Extract 3–10 core behaviours or rules from the user story and context:
     - always-true invariants (e.g. encryption must always be enabled),
     - configuration-dependent behaviours (flags, options),
     - error/validation conditions,
     - event-triggered actions (e.g. when a change is detected).

4. **Map behaviours to EARS patterns**
   - Choose the simplest fitting pattern for each behaviour:
     - invariant → Ubiquitous,
     - behaviour under condition/event → Event-driven,
     - feature/flag → Optional,
     - validation/error → Unwanted.
   - Avoid mixing multiple patterns in a single requirement if you can split them.

5. **Write EARS requirements in IaC/Terraform terms**
   - Refer to:
     - the **module** (`The network module shall ...`),
     - **variables** (`var.*`),
     - **outputs** (`output.*`),
     - Terraform actions (`plan`, `apply`) where relevant.
   - Use clear, specific language:
     - prefer “exactly one VPC” over “a VPC”,
     - prefer “deny apply with an error explaining …” over “fail gracefully”.

6. **Assign requirement IDs**
   - Use a simple scheme like `REQ-1`, `REQ-2`, … or `<module>-REQ-1`.
   - Keep IDs stable so they can be referenced from tests and documentation.

7. **Output format**

Always respond using this structure:

```md
## User story

As a <persona>,
I want <capability>,
so that <benefit>.

## Scope

- Component: `<module name>`
- Context: `<short context summary>`

## EARS requirements

- **[REQ-1] (Ubiquitous)**
  The `<module>` shall ...

- **[REQ-2] (Event-driven)**
  When `<trigger>`, the `<module>` shall ...

- **[REQ-3] (Optional)**
  Where `<variable>` is `<value>`, the `<module>` shall ...

- **[REQ-4] (Unwanted)**
  If `<unwanted condition>`, then the `<module>` shall ...

## Variable Specification

| Variable | Type | Required | Default | Description |
|----------|------|----------|---------|-------------|
| var.environment | string | Yes | - | Deployment environment |
| var.enable_logging | bool | No | false | Enable logging |

## Output Specification

| Output | Description | Value | Sensitive |
|--------|-------------|-------|-----------|
| bucket_id | The S3 bucket ID | aws_s3_bucket.main.id | No |
| bucket_arn | The S3 bucket ARN | aws_s3_bucket.main.arn | No |
```

The Variable Specification table is derived from variables referenced in the EARS requirements. Use [terraform-variables.md](references/terraform-variables.md) for best practices on choosing types, validation rules, and defaults.

The Output Specification table captures what values the module exposes. Use [terraform-outputs.md](references/terraform-outputs.md) for best practices on output design.

8.
**Make them testable** + +For each requirement, mentally check: +- Could a Terraform unit/integration test or policy check verify this? +- Is there a clear condition and an expected outcome? +- If not, refine the requirement to make it objectively verifiable. + +If the user asks, you may add a short, optional section: + +```md +## Suggested test ideas + +- REQ-1: Verify that ... +- REQ-2: Verify that ... +``` + +If the user asks for traceability, you may instead add: + +```md +## Suggested test mapping + +- REQ-1 -> Unit/integration test +- REQ-2 -> Policy-as-code or contract test +``` + +9. **Stay concise** + +- Prefer 4–8 high-quality requirements over a long, unfocused list. +- Do not repeat the same information in multiple requirements unless + it clarifies different conditions. + +10. **Variable extraction** + +Extract variables from EARS requirements and populate the Variable Specification table: + +| Pattern in Requirement | Variable Inference | +|------------------------|-------------------| +| `var.enable_*`, `var.is_*`, `var.has_*` | `bool`, optional, default `false` | +| `var.environment`, `var.region`, `var.name`, `var.*_id` | `string`, required unless default is safe | +| `var.count`, `var.size`, `var.replicas`, `var.*_count` | `number`, optional with sensible default | +| "must be one of X, Y, Z" | Add validation: `contains(["X", "Y", "Z"], var.X)` | +| "at least N" / "no more than N" | Add range validation | + +Only include variables that are referenced in the requirements. The Variable Specification is derived from the EARS requirements, not a separate pattern. + +11. **Output extraction** + +Extract outputs from EARS requirements and populate the Output Specification table: + +| Source | Inference | +|--------|-----------| +| "The module shall expose..." 
| Explicit output, add to table | +| Resource attributes in requirements | Derived output | +| Sensitive values (passwords, keys, secrets) | Mark sensitive = true | +| Complex structures | Prefer simple, predictable shapes | + +Outputs should include: +- Values consumers actually need (other modules, CI/CD, debugging) +- Specific attributes, not full resource objects +- Clear descriptions explaining what the value is + +The Output Specification is derived from the module's purpose and requirements, not a separate pattern. diff --git a/terraform/module-generation/skills/terraform-module-acceptance-criteria-builder/references/terraform-outputs.md b/terraform/module-generation/skills/terraform-module-acceptance-criteria-builder/references/terraform-outputs.md new file mode 100644 index 0000000..a721e48 --- /dev/null +++ b/terraform/module-generation/skills/terraform-module-acceptance-criteria-builder/references/terraform-outputs.md @@ -0,0 +1,223 @@ +## 1. Output Only What Consumers Actually Need + +Outputs are the "public interface" of your root module or reusable modules. Treat them like an API: + +- Only expose values that: + - are needed by other modules/stacks or external systems (CI/CD, scripts), + - are useful for humans (e.g., URLs, IDs for debugging). +- Avoid exposing: + - large or noisy structures that aren't really consumed, + - full resource objects (use specific attributes instead). + +Example: instead of outputting the entire VPC resource: + +```hcl +# Avoid +output "vpc" { + value = aws_vpc.main +} +``` + +Prefer specific, stable attributes: + +```hcl +output "vpc_id" { + value = aws_vpc.main.id + description = "ID of the main VPC" +} +``` + +--- + +## 2. 
Use `description` on Every Output + +Descriptions help both humans and tooling understand your module's interface: + +```hcl +output "alb_dns_name" { + description = "DNS name of the public Application Load Balancer" + value = aws_lb.public.dns_name +} +``` + +This is especially important for shared modules that others will consume. + +--- + +## 3. Mark Sensitive Outputs as `sensitive = true` + +Never expose secrets in plain text outputs. If an output could contain: + +- passwords, +- API keys, +- private keys, +- tokens, +- connection strings, + +mark it as sensitive: + +```hcl +output "db_password" { + description = "Database password" + value = random_password.db.result + sensitive = true +} +``` + +This: + +- hides values from `terraform apply`/`terraform output` CLI by default, +- helps prevent secrets from being logged in CI/CD pipelines. + +Note: if a consumer explicitly runs `terraform output -json` and processes it somewhere else, they can still access the values; the sensitivity is mainly about display/logging. + +--- + +## 4. Keep Outputs Stable (Treat Them as an API) + +Changing outputs is a breaking change for: + +- other Terraform configurations using `terraform_remote_state`, +- scripts or CI stages that parse `terraform output` or `terraform output -json`, +- teams that rely on those values. + +Good practices: + +- Choose clear, stable names from the start (`vpc_id`, `public_subnet_ids`, `alb_dns_name`). +- Avoid renaming/removing outputs lightly; if you must: + - deprecate old outputs gradually, + - keep old outputs as aliases for some time where feasible. + +--- + +## 5. Prefer Simple, Predictable Data Structures + +For outputs consumed by other modules or tools, aim for simple shapes: + +- Strings for single values (`vpc_id`, `alb_dns_name`). +- Lists/sets of strings for collections (`public_subnet_ids`). +- Maps/objects where structure is intentional and documented. 
+ +Example: + +```hcl +output "subnet_ids" { + description = "IDs of the private subnets" + value = aws_subnet.private[*].id +} + +output "endpoints" { + description = "Endpoints of key services" + value = { + api_url = aws_apigatewayv2_api.main.api_endpoint + web_url = aws_cloudfront_distribution.web.domain_name + } +} +``` + +Avoid overly nested, "raw" outputs that leak provider internals unless your consumer really needs them. + +--- + +## 6. Use Outputs to Bridge Between Stacks Safely + +When you have separate Terraform stacks (e.g., network, data, application), outputs are often consumed through `terraform_remote_state` or by CI tooling: + +```hcl +data "terraform_remote_state" "network" { + backend = "s3" + + config = { + bucket = "my-terraform-states" + key = "network/terraform.tfstate" + region = "us-east-1" + } +} + +resource "aws_instance" "app" { + subnet_id = data.terraform_remote_state.network.outputs.private_subnet_id +} +``` + +To make this robust: + +- Keep "shared" outputs minimal and well‑named (`vpc_id`, `private_subnet_ids`, `security_group_ids`). +- Avoid exposing implementation details you may want to change (e.g., specific resource names). + +--- + +## 7. Use `terraform output -json` for Automation + +When scripts or CI/CD pipelines consume outputs, they should use the JSON form: + +```bash +terraform output -json > outputs.json +``` + +Best practices: + +- Design outputs with machine‑readable shapes (maps, lists, basic types). +- Avoid unnecessary formatting (e.g., embedding JSON as strings). +- Keep names and structures stable over time. + +--- + +## 8. Don't Output Huge or Unnecessary Blobs + +Avoid: + +- giant user data scripts, +- large policies or templates, +- big binary blobs. + +Reasons: + +- `terraform output` becomes unreadable, +- state files become heavier, +- consumers rarely need all that data. + +Instead, if you must share large artifacts, store them in a bucket/repo and output only: +- a URL, +- a key/path, +- or an ID. 
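
For example, rather than outputting a rendered user-data script directly, upload it and expose only its location. A minimal sketch (resource, bucket, and output names are illustrative, not prescribed):

```hcl
# Store the large artifact out of band...
resource "aws_s3_object" "user_data" {
  bucket  = aws_s3_bucket.artifacts.id
  key     = "scripts/user-data.sh"
  content = local.user_data
}

# ...and output only a small, stable reference to it.
output "user_data_s3_key" {
  description = "S3 key of the rendered user data script"
  value       = aws_s3_object.user_data.key
}
```

Consumers that need the full content can fetch it from the bucket, while `terraform output` and the state file stay small and readable.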
+ +--- + +## 9. Align Outputs with Environments and Teams + +Think about who uses which outputs: + +- Platform/network team: + - cares about `vpc_id`, `subnet_ids`, `route_table_ids`, `shared_services_endpoint`. +- App team: + - cares about `app_url`, `api_url`, `db_endpoint`, `queue_url`. + +Consider organizing outputs and descriptions so each audience can quickly find what they need. In larger modules, grouping through naming conventions helps (`network_*`, `app_*`). + +--- + +## 10. Use Outputs as a Debugging Aid + +In addition to "public API" outputs, you can temporarily output values to debug: + +```hcl +output "debug_asg_capacity" { + value = aws_autoscaling_group.app.desired_capacity +} +``` + +Just remember to: + +- remove or comment these once debugging is done, +- avoid exposing anything sensitive, or mark it `sensitive = true`. + +--- + +## 11. Keep Outputs Close to the Resources They Expose (in Modules) + +In bigger modules: + +- It's often clearer to declare outputs in the same `.tf` file or near the resources they expose, or +- Keep all outputs in a dedicated `outputs.tf` file for consistency, but use clear, sectioned comments. + +Pick one convention per repo and stick to it for readability. \ No newline at end of file diff --git a/terraform/module-generation/skills/terraform-module-acceptance-criteria-builder/references/terraform-variables.md b/terraform/module-generation/skills/terraform-module-acceptance-criteria-builder/references/terraform-variables.md new file mode 100644 index 0000000..2adf44c --- /dev/null +++ b/terraform/module-generation/skills/terraform-module-acceptance-criteria-builder/references/terraform-variables.md @@ -0,0 +1,287 @@ +## 1. Prefer Input Variables Over Hard‑Coding + +- Use variables for anything that can change across: + - environments (dev/stage/prod), + - regions/accounts, + - sizes (instance types, node counts), + - feature toggles. 
+- Keep resource blocks mostly static; inject variability through variables and locals. + +Example: + +```hcl +variable "environment" { + type = string + description = "Deployment environment (dev, stage, prod)" +} + +resource "aws_s3_bucket" "logs" { + bucket = "logs-${var.environment}" +} +``` + +--- + +## 2. Always Specify Type and Description + +- Avoid untyped variables; they make refactoring and validation harder. +- Use `description` to document intent, not just what the value is. + +```hcl +variable "instance_count" { + type = number + description = "Number of application instances per AZ" + default = 2 +} +``` + +Benefits: + +- Clearer error messages. +- Better editor/IDE assistance. +- Safer future changes. + +--- + +## 3. Use Sensible Defaults, but Don’t Overdo Them + +- Provide defaults for common, low‑risk values (e.g., small instance size in dev). +- Omit defaults for: + - credentials, + - IDs of external resources, + - environment‑critical choices (e.g., `environment`, `region` if you must choose carefully). + +If a value is required, leave out `default` so Terraform forces the user to supply it. + +--- + +## 4. Validate Variables Explicitly + +Use `validation` blocks to catch errors early: + +```hcl +variable "environment" { + type = string + description = "Deployment environment: dev, stage, or prod" + + validation { + condition = contains(["dev", "stage", "prod"], var.environment) + error_message = "environment must be one of: dev, stage, prod." + } +} +``` + +Common validations: + +- Allowed enums (`environment`, `tier`). +- Ranges (min/max for counts, sizes). +- Basic string patterns (e.g., prefix/suffix rules). + +--- + +## 5. Structure Complex Data with Object/Map Types + +Use structured types instead of many loosely related variables. 
+ +Bad: + +```hcl +variable "app_cpu" {} +variable "app_memory" {} +variable "app_replicas" {} +``` + +Better: + +```hcl +variable "app_config" { + type = object({ + cpu = number + memory = number + replicas = number + }) +} +``` + +For environment‑specific overrides: + +```hcl +variable "env_config" { + type = map(object({ + instance_type = string + min_size = number + max_size = number + })) +} + +# Example value in a *.tfvars file +env_config = { + dev = { + instance_type = "t3.small" + min_size = 1 + max_size = 2 + } + prod = { + instance_type = "m6i.large" + min_size = 3 + max_size = 10 + } +} +``` + +--- + +## 6. Separate Locals from Variables + +- **Variables**: external inputs (what callers/users can set). +- **Locals**: internal derived values or convenience expressions. + +Example: + +```hcl +variable "project" { + type = string + description = "Project name used for tagging and naming" +} + +variable "environment" { + type = string + description = "Environment name" +} + +locals { + common_tags = { + Project = var.project + Environment = var.environment + ManagedBy = "terraform" + } +} +``` + +Use `locals` to: + +- centralize naming standards, +- centralize tags/labels, +- keep resource blocks clean. + +--- + +## 7. Manage Sensitive and Secret Values Properly + +- Mark secrets as `sensitive = true`: + +```hcl +variable "db_password" { + type = string + description = "Database password" + sensitive = true +} +``` + +- Never commit secrets to: + - `.tf` files, + - `*.tfvars` in version control, + - shell history. + +Better approaches: +- Use remote backends and secret managers (Vault, AWS SSM Parameter Store, AWS Secrets Manager, etc.) and feed values via: + - environment variables with `-var` / `TF_VAR_...`, + - pipelines that inject `*.auto.tfvars` at runtime (not stored in git). + +Always add `*.tfvars`, `*.auto.tfvars`, and any secret files to `.gitignore` if they may contain secrets. + +--- + +## 8. 
Use Variable Files for Environments + +Standard pattern: + +- `variables.tf` – definitions (type, description, validation). +- `dev.tfvars`, `stage.tfvars`, `prod.tfvars` – concrete values. +- Optionally, `terraform.tfvars` or `*.auto.tfvars` for defaults in a specific workspace. + +Run: + +```bash +terraform apply -var-file=dev.tfvars +terraform apply -var-file=prod.tfvars +``` + +Benefits: + +- Clear separation between code and configuration. +- Easy automation per environment. + +--- + +## 9. Keep Naming Consistent and Clear + +- Use lowercase with underscores: `db_username`, `instance_type`. +- Make names reflect purpose, not implementation: + - `app_subnet_ids` is better than `subnet_ids` in a large module. + - `asg_min_size`/`asg_max_size` better than `min`/`max`. + +Consistent naming makes modules easier to reuse across teams. + +--- + +## 10. Design Module Variables for Reuse and Stability + +When writing reusable modules: + +- Start with a small, opinionated set of variables; add more only when needed. +- Prefer **higher‑level** variables over exposing every underlying option. +- Provide conservative defaults that are safe. +- Use `nullable = false` when you truly require values. +- Treat variable changes as part of the module's "public API version": + - Avoid renaming variables without compatibility plans. + - Removing variables is a breaking change for consumers. + +--- + +## 11. Avoid Over‑Parameterization + +Too many variables can be as bad as too few: + +- Don't expose every possible Terraform argument as a variable. +- Keep inputs focused on what really changes between deployments. +- Use `locals` and opinionated defaults for the rest. + +This improves: + +- readability, +- maintainability, +- onboarding for new users of the module. + +--- + +## 12. Use Environment Variables Carefully + +Terraform supports `TF_VAR_name`: + +```bash +export TF_VAR_environment=dev +export TF_VAR_db_password=... 
+terraform apply +``` + +Good for secrets injected from CI/CD or local secret stores. + +However: + +- Harder to reproduce manually unless documented. +- For non‑secret configuration, prefer checked‑in `*.tfvars` files. + +--- + +## 13. Document Variables + +- Add clear `description` for every variable. +- Consider using `README.md` with a variables table for reusable modules: + - name, + - type, + - required/optional, + - default, + - description. + +This helps teams understand how to use your modules without reading all the code. \ No newline at end of file diff --git a/terraform/module-generation/skills/terraform-module-code-generator/SKILL.md b/terraform/module-generation/skills/terraform-module-code-generator/SKILL.md new file mode 100644 index 0000000..f7a945d --- /dev/null +++ b/terraform/module-generation/skills/terraform-module-code-generator/SKILL.md @@ -0,0 +1,96 @@ +--- +name: terraform-module-code-generator +description: > + Generate Terraform module implementation files from a module specification + document with user story and EARS requirements. +metadata: + domain: terraform + tags: ["terraform", "iac", "module", "code-generation", "ears"] + owner: "platform-engineering" + maturity: "experimental" +--- + +# Terraform Module Code Generator + +Generate a Terraform module from a specification document. This skill focuses +on implementation only and does not orchestrate other skills. + +## Input + +Accept input in two ways: +1. Direct content: specification text provided in the prompt. +2. File path: path to a Markdown specification file. + +If a file path is provided, read the file. Otherwise, use the provided content. 
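
For illustration, a minimal specification input (module name and requirement wording are hypothetical) might look like:

```md
## User story

As a platform engineer,
I want a `secure_bucket` module,
so that teams get encrypted storage by default.

## EARS requirements

- **[REQ-1] (Ubiquitous)**
  The `secure_bucket` module shall enable server-side encryption on the bucket.
```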
+ +## Implementation workflow + +### Step 1: Parse the specification + +Extract: +- Module name and purpose +- User story (persona, capability, value) +- EARS requirements and requirement IDs +- Variable specification table +- Output specification table + +### Step 2: Create module directory + +Create module files in the repository root directory. + +Rule: +- Always generate Terraform module implementation files in the root directory. +- Do not create or use `modules/` for generated output unless the + calling context explicitly overrides this behavior. + +### Step 3: Generate module files + +Create standard module files: +- main.tf +- variables.tf +- outputs.tf +- versions.tf + +### Step 4: Map EARS requirements to code + +For each requirement, implement behavior and keep traceability comments. + +Example: + +```hcl +# REQ-1 (Ubiquitous): Encryption must always be enabled +resource "aws_s3_bucket" "this" { + # ... +} +``` + +Pattern mapping guidance: +- Ubiquitous: always-on core resource configuration. +- Event-driven: conditional behavior tied to explicit triggers or conditions. +- Optional: feature-flag behavior controlled by variables. +- Unwanted: input validation and safe failure behavior. + +### Step 5: Generate versions.tf + +Include Terraform and provider constraints that match module requirements. + +### Step 6: Ensure generated code quality baseline + +Apply Terraform style conventions in generated files: +- descriptive names +- predictable locals usage +- complete variable descriptions and types +- clear output descriptions + +### Step 7: Preserve requirement traceability + +All implemented behaviors tied to EARS requirements must be linked to REQ-* IDs +in comments near relevant blocks. 
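
As a sketch of how an Optional-pattern requirement maps to flag-controlled code with a traceability comment (variable, resource, and requirement IDs are illustrative; `var.bucket_name` is an assumed input):

```hcl
# REQ-3 (Optional): Where var.enable_logging is true, the module
# shall create an access-logging bucket.
variable "enable_logging" {
  type        = bool
  description = "Enable access logging for the bucket"
  default     = false
}

resource "aws_s3_bucket" "logs" {
  count  = var.enable_logging ? 1 : 0
  bucket = "${var.bucket_name}-logs"
}
```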
+ +## Output expectations + +Generated module should: +- satisfy all listed requirements +- have a complete public interface (variables and outputs) +- be ready for subsequent style/test/documentation workflow steps + handled outside this skill diff --git a/terraform/module-generation/skills/terraform-module-documentation-writer/SKILL.md b/terraform/module-generation/skills/terraform-module-documentation-writer/SKILL.md new file mode 100644 index 0000000..d643289 --- /dev/null +++ b/terraform/module-generation/skills/terraform-module-documentation-writer/SKILL.md @@ -0,0 +1,485 @@ +--- +name: terraform-module-documentation-writer +description: > + Generate and maintain high-quality README.md documentation for Terraform + modules, following HashiCorp's standard module structure and module + documentation best practices, without relying on external CLIs such as + terraform-docs. +version: 0.1.0 +metadata: + domain: terraform + tags: ["terraform", "modules", "documentation", "readme", "iac"] + owner: "platform-engineering" + maturity: "experimental" + license: "MIT" +--- + +# Terraform Module Documentation Writer + +When this skill is active, you generate or update `README.md` files for +Terraform modules so they meet HashiCorp's standard module structure and +documentation guidelines, as well as the team's module documentation +conventions. You work **only** from the available Terraform source files +and text context; you MUST NOT rely on external tools or CLIs like +`terraform-docs`. + +This skill is designed to be portable across Linux, macOS, and Windows +by avoiding OS-specific dependencies and focusing on text analysis and +generation. + +## When to use this skill + +Use this skill when: + +- The user asks you to create or improve documentation for a Terraform + module (root module or nested module). +- A module directory contains Terraform files (`main.tf`, + `variables.tf`, `outputs.tf`, etc.) but the `README.md` is missing, + incomplete, or inconsistent. 
+- The user wants to standardize module docs across a library of + modules. +- The user wants to review or refactor an existing `README.md` to match + recommended structure and content. + +Do NOT use this skill for: + +- Documenting non-Terraform projects (apps, Helm charts, etc.). +- Generating provider or resource reference docs (refer the user to + provider docs instead). +- Running `terraform` commands, linting, or CI configuration. + +## Inputs and assumptions + +Assume you have access to: + +- The path to a Terraform module directory (root or nested module), + which may contain: + - `main.tf` (required by convention), + - `variables.tf`, + - `outputs.tf`, + - optional additional `.tf` files, + - an existing `README.md` (optional). +- Organizational documentation conventions (like those in + `module_documentation.md`) when provided. + +From these files you must infer: + +- The module's purpose and scope. +- The public interface of the module (variables and outputs). +- Reasonable usage examples based on variables and typical patterns. + +You MUST treat `variables.tf` and `outputs.tf` as the **single source +of truth** for the module interface. + +## High-level behavior + +At a high level, this skill: + +1. Locates and inspects Terraform module files in the given directory. +2. Extracts the module interface: + - All input variables with name, type, default (if any), and + description. + - All outputs with name and description. +3. Derives a concise, accurate description of: + - What the module does. + - When it should and should NOT be used. + - Key assumptions and constraints. +4. Generates or updates `README.md` for the module with standard + sections: + - Overview / Purpose + - Requirements + - Resources + - Inputs + - Outputs + - Usage Example(s) + - Assumptions / Constraints + - Versioning (if applicable) + - Ownership / Contact + 5. 
Generates or updates at least one runnable Terraform example + configuration under `examples/` (default: `examples/basic/`) that + uses the module. + 6. Keeps documentation and examples in sync with the actual Terraform code + by using the code as the authoritative source. + 7. Avoids external command execution (`terraform-docs`, `terraform`, + etc.) and works purely via text analysis. + +## Detailed workflow + +Follow this workflow step by step. If information is missing, make +minimal, clearly-labelled assumptions and prefer asking the user to +confirm when interactive. + +### Step 1: Identify module type and files + +Given a target directory (for example `modules/aws-sandbox-environment-vpc`): + +1. List `.tf` files in the directory. +2. Detect: + - presence of `main.tf` (entrypoint), + - `variables.tf`, + - `outputs.tf`, + - optional `versions.tf`, + - any existing `README.md`. + +Classify the module as: + +- **Root module** if it is the primary module of a repo, typically in +the repository root, or the user explicitly says so. +- **Nested module** if under a `modules/` subdirectory, following + HashiCorp’s standard module structure. + +This classification shapes how you phrase Overview and examples (nested +modules are often more focused and reused by other modules). + +### Step 2: Parse variables and outputs (text-based) + +Without calling external tools, parse `variables.tf` and `outputs.tf` as +HCL-like text: + +- For each `variable ""` block, extract: + - `name` (string after `variable`), + - `type` (if present), + - `default` (if present; determine if variable is required or + optional), + - `description` (if present, including heredocs). +- For each `output ""` block, extract: + - `name`, + - `description` (if present). + +Heuristics and rules: + +- If `default` is absent, mark the variable as **Required**. +- If `description` is absent, infer a short, neutral description based + on the name and usage (e.g., `vpc_id` → "ID of the VPC to use."). 
+- Retain original types and defaults exactly as written whenever + possible. + +Build internal tables like: + +- Inputs: name, type, default, required, description. +- Outputs: name, description. + +### Step 3: Derive module overview and scope + +Inspect `main.tf` and other `.tf` files to infer: + +- Primary cloud resources or services (e.g., VPC, ALB, S3 bucket). +- High-level purpose (e.g., "Create a sandbox VPC environment for + development workloads"). +- Any obvious limits (e.g., region constraints, assumptions about + pre-existing VPC, KMS keys, IAM roles). + +From this, generate a short **Overview** section that: + +- States what the module does. +- States when to use it (and when not to, if applicable). +- Avoids implementation details except when critical to understanding + behavior. + +### Step 4: Build the README.md skeleton + +Always generate `README.md` with this structure, unless the user or +organization specifies a slightly different template: + +````markdown +# + +## Overview + + + +## Requirements + +- Terraform >= +- Providers: + - >= + - ... + +## Resources + +List the Terraform resources the module may create. + +| Resource Name | Resource Type | Notes | +|---------------|---------------|-------| +| ... | ... | ... | + +## Inputs + +| Name | Type | Default | Required | Description | +|------|------|---------|----------|-------------| +| ... | ... | ... | ... | ... | + +## Outputs + +| Name | Description | +|------|-------------| +| ... | ... | + +## Usage + +```hcl +module "" { + source = "" + + # Required inputs + # ... + + # Optional inputs + # ... +} +``` + +## Assumptions and Constraints + +- ... + +## Versioning + +- This module follows semantic versioning (MAJOR.MINOR.PATCH). +- Recommended: pin the module version in `source` references. + +## Ownership + +- Owner: +- Source: +```` + +You MUST: + +- Populate a `Resources` section by inspecting resource blocks across all + `.tf` files (for example `resource "aws_vpc" "this" { ... }`). 
- Include resources that are conditionally created (for example via `count`
  or `for_each`) and mark them clearly in the `Notes` column as "Conditional".
- Keep `Resources` descriptive and implementation-aware, but avoid promising
  that all listed resources are always created in every configuration.
- Fill the Inputs and Outputs tables from the parsed variables and
  outputs.
- Ensure that the Usage example is **copy-paste runnable** with minimal
  edits (for example, only requiring the user to provide obvious values
  like `vpc_id`).
- Clearly distinguish required vs. optional inputs based on the presence
  of `default`.

If organizational guidelines require extra sections (e.g., "Assumptions"
or "Author / Ownership"), include them and populate them from any available
context.

### Step 5: Handling an existing README.md

If a `README.md` already exists:

1. Parse the existing sections.
2. Compare the Inputs/Outputs tables against the extracted variables and
   outputs.
3. Update the README so that:
   - Inputs/Outputs tables match the current Terraform interface.
   - Outdated or missing variables/outputs are corrected.
   - Existing good text (overview, diagrams, rich explanations) is
     preserved when still accurate.
4. If there are conflicts between the README and the `.tf` files, treat
   the `.tf` files as authoritative and adjust the README, optionally
   adding a brief note or asking the user to confirm.

Aim to **improve** the existing README rather than rewriting it from
scratch, unless it is badly structured or obsolete.

### Step 6: Required examples and advanced usage

Examples are mandatory, not optional.

You MUST:

- Ensure an `examples/` directory exists within the documented module's scope.
- Ensure at least one complete example configuration exists that uses the
  module.
- Use `examples/basic/` as the default example path unless the caller or
  repository conventions explicitly require a different path.
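Generating the default example skeleton from parsed inputs can be sketched as follows. The function name, placeholder value, and input shape are illustrative assumptions (matching the internal Inputs table described in Step 2), not a prescribed interface.

```python
def render_example(module_name, source, inputs):
    """Render a minimal examples/basic/main.tf module block.

    `inputs` is a list of dicts with `name`, `required`, and `default`
    keys (as produced by a variables.tf parser). Required inputs get
    placeholder values the user must replace; optional inputs are listed
    as comments with their defaults.
    """
    lines = [f'module "{module_name}" {{', f'  source = "{source}"', ""]
    lines.append("  # Required inputs")
    for var in inputs:
        if var["required"]:
            lines.append(f'  {var["name"]} = "REPLACE_ME"')
    lines.append("")
    lines.append("  # Optional inputs (defaults shown)")
    for var in inputs:
        if not var["required"]:
            lines.append(f'  # {var["name"]} = {var["default"]}')
    lines.append("}")
    return "\n".join(lines)

inputs = [
    {"name": "vpc_id", "required": True, "default": None},
    {"name": "enable_flow_logs", "required": False, "default": "false"},
]
print(render_example("example", "../../", inputs))
```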
- Ensure the example is minimally runnable and includes all required module
  inputs with safe placeholder/demo values.
- Ensure the README `Usage` section and the filesystem example stay aligned.

Minimum required example artifacts (for `examples/basic/`):

- `examples/basic/main.tf` with a `module` block and the provider/
  terraform configuration needed for a valid example.
- Add supplemental files only when needed for validity (for example
  `examples/basic/variables.tf` or `examples/basic/outputs.tf`).

If multiple patterns are common (e.g., internal vs. public ALB,
single-region vs. multi-region), additional examples may be added, but at
least one example is always required.

README guidance for examples:

- Add a short note that the example can be validated with Terraform
  commands from the repository root.
- If the example is located in `examples/basic/`, include this exact
  command sequence:

```bash
terraform -chdir=examples/basic init && terraform -chdir=examples/basic plan
```

- If a different example directory name is used (for example
  `examples/basics/`), adjust the `-chdir` path accordingly.

Completion gate for this step:

- Do not finalize output unless at least one example file set is produced
  under `examples/` and referenced in the README.

### Step 7: Assumptions, limits, and diagrams

From the code and any provided documentation:

- Extract and list key assumptions:
  - Required external resources (VPCs, KMS keys, IAM roles).
  - Expected tags, naming conventions, or policies.
  - Tested regions/providers.
- Note important limits (e.g., "not tested in GovCloud", "assumes a
  private DNS zone exists").

For complex modules (e.g., VPC, shared networking, platform services):

- Suggest or include a simple diagram in Mermaid.js syntax in the README
  to show:
  - main resources,
  - relationships,
  - external dependencies.
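Such a diagram can be emitted mechanically from an inferred list of parent/child relationships. This is a rough sketch with an invented helper name; inferring the edge list itself from Terraform code is the hard part and is not shown.

```python
def to_mermaid(edges):
    """Emit a Mermaid flowchart from (parent, child) relationship pairs."""
    lines = ["flowchart TD"]
    for parent, child in edges:
        lines.append(f"    {parent} --> {child}")
    return "\n".join(lines)

edges = [("caller", "vpc"), ("vpc", "subnets"), ("subnets", "alb")]
print(to_mermaid(edges))
```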
- Prefer a simple, high-level diagram (such as `flowchart` or `graph TD`)
  that can be rendered by Markdown viewers supporting Mermaid.

Example (to include in the README under a "Diagram" or "Architecture" section):

```mermaid
flowchart TD
    caller[Caller module or root config]
    vpc[VPC module]
    subnets[Public and private subnets]
    alb[Application Load Balancer]
    asg[Auto Scaling Group]

    caller --> vpc
    vpc --> subnets
    subnets --> alb
    alb --> asg
```

If you cannot reliably infer the full topology, generate the simplest
useful diagram and clearly note that it is a high-level approximation.

If Mermaid is not supported in the target environment, fall back to a
textual description of the architecture (for example, a bullet list of
components and their relationships).

### Step 8: Versioning and ownership

If the repo or context mentions versions (tags like `v1.2.3`):

- Add a **Versioning** section describing:
  - that the module follows semantic versioning (MAJOR.MINOR.PATCH),
  - what constitutes a breaking change (e.g., changes to inputs/outputs
    or behavior).
- Recommend pinning module versions in usage examples.

Always add an **Ownership** section:

- If owner metadata is provided, use it.
- Otherwise, insert a placeholder like "Owner: `<team or person>`" and let
  the user fill it in.

### Step 9: Robustness and safety practices

To maximize reliability (inspired by lessons learned from agent/skill
research):

- Be explicit about uncertainties:
  - If you infer behavior from incomplete code, state that the
    description is inferred and may need confirmation.
- Prefer **deterministic structures** (fixed section ordering, fixed
  table columns) over free-form prose.
- Avoid hidden side effects:
  - Only read and write files in the specified module directory when
    instructed by the calling agent.
  - Never execute shell commands or rely on system-specific behavior.
- Make incremental changes:
  - When updating an existing README, adjust only what's necessary to
    align with the current module interface and guidelines.

## Output format

When the agent asks you to generate or update documentation, you should
respond with:

- The full new content of `README.md` in a Markdown code block,
  clearly marked as the desired file contents.
- The full content of at least one example Terraform file under
  `examples/` (default: `examples/basic/main.tf`) in Markdown code blocks,
  each clearly marked with its target file path.
- When requested, a diff-style summary of README and example file
  changes, plus the final full file contents.

Example output structure:

````md
### Generated README.md

```markdown
# <Module name>
...
```

### Generated examples/basic/main.tf

```hcl
terraform {
  required_version = ">= 1.5.0"
}

module "example" {
  source = "../../"

  # Required inputs
  # ...

  # Optional inputs
  # ...
}
```
````

The calling agent is responsible for actually writing files to disk.

## Examples

### Example: Create a README for a new module

User context:

- Directory: `modules/aws-sandbox-environment-vpc/`
- Files: `main.tf`, `variables.tf`, `outputs.tf`, `versions.tf`
- No `README.md` yet.

You should:

1. Parse `variables.tf` and `outputs.tf` to build the Inputs/Outputs tables.
2. Inspect `main.tf` to detect that it creates a sandbox VPC environment
   for development.
3. Propose a `README.md` with the standard sections and a `Usage`
   example using `modules/aws-sandbox-environment-vpc` as the source.

### Example: Update README after an interface change

User context:

- Existing `README.md` with Inputs/Outputs tables.
- `variables.tf` updated with a new variable `enable_flow_logs`.

You should:

1. Detect that `enable_flow_logs` exists in `variables.tf` but not in the
   README.
2. Add a row to the Inputs table, marking it as required/optional based
   on `default`.
3. 
Update any relevant Assumptions or Usage examples to mention flow
   logs if significant.

diff --git a/terraform/module-generation/skills/terraform-module-story-builder/SKILL.md b/terraform/module-generation/skills/terraform-module-story-builder/SKILL.md
new file mode 100644
index 0000000..f9bd069
--- /dev/null
+++ b/terraform/module-generation/skills/terraform-module-story-builder/SKILL.md
@@ -0,0 +1,134 @@
---
name: terraform-module-story-builder
description: >
  Help write and refine Agile user stories for infrastructure-as-code
  and Terraform modules. Focus on clear personas, goals, and value,
  even for highly technical work such as platform, security, and
  reusable modules.
metadata:
  domain: requirements
  tags: ["user-story", "iac", "terraform", "platform", "technical-story"]
---

# Purpose

When this skill is active, you help the user write or improve user stories
for infrastructure-as-code and platform work (e.g. Terraform modules,
cloud networking, security hardening, observability).

You keep the classic "As a / I want / so that" structure but adapt:

- personas to infra / dev / security roles,
- goals to infra capabilities and developer experience,
- value to reliability, security, speed, or maintainability.

# When to use this skill

Use this skill when:

- The user wants to describe work on Terraform modules, cloud infra,
  CI/CD, security, or platform capabilities as user stories.
- The user has a rough technical requirement and wants it framed
  as a user story that is understandable, prioritizable, and testable.

Do NOT use this skill for:

- Non-infra product features targeting end users of an app (use a generic
  product user-story skill instead).
- Low-level task lists without any user or value context (e.g. "rename a variable").

# Core principles

Always enforce these principles when writing stories:

1. 
**Persona is a real beneficiary**
   - Use roles like: "application developer", "platform engineer",
     "security engineer", "SRE", "data engineer", "team X developer".
   - Avoid vague "As a system" or "As Terraform". Systems are not users.

2. **Goal is a capability, not an implementation**
   - "I want to provision a secure VPC via a reusable module", not
     "I want to write a Terraform file".
   - "I want to rotate secrets automatically", not
     "I want to configure a cronjob".

3. **Value is explicit and business- or risk-oriented**
   - "so that I reduce the risk of misconfigured networks",
   - "so that teams can deploy faster with fewer manual steps",
   - "so that we meet compliance requirements".

4. **Stories should be INVEST where possible**
   - Independent, Negotiable, Valuable, Estimable, Small, Testable.

# Instructions

When using this skill, follow this process:

1. **Clarify context**
   - Ask 1–2 short questions if needed:
     - Who benefits most from this change? (role/team)
     - What capability do they need in terms of infra or modules?
     - What risk or pain are we reducing, or what outcome are we improving?
   - If context is already clear, skip the questions and proceed.

2. **Draft the core user story**

Use this template:

> As a <persona>,
> I want <capability>,
> so that <value>.

Rules:

- Persona = a human role (e.g. "application developer in team X",
  "platform engineer", "security officer").
- Capability = something they can now do using infra or a module
  (provision, observe, secure, recover, audit, etc.).
- Value = framed in terms of speed, safety, stability, compliance,
  developer experience, or cost.

3. **Adapt for Terraform / IaC modules**

When the user mentions Terraform or modules, make the story more specific:

- Mention the module or area at a high level:
  - "via a standardized Terraform module",
  - "using our shared VPC module",
  - "through an automated pipeline".
- Keep it at the level of "what I can do", not "how the module is implemented".

Examples:

- "As an application developer, I want to create application networks via a standard Terraform VPC module so that I don't have to understand low-level cloud networking details."
- "As a security engineer, I want all S3 buckets provisioned via our storage module to be encrypted and private by default so that we reduce the risk of data exposure."

4. **Output format**

Always respond in this structure:

```md
## User story

As a <persona>,
I want <capability>,
so that <value>.

## Context (optional)

- Domain: <e.g. networking, storage, CI/CD>
- Primary beneficiary: <role or team>
- Notes: <constraints, assumptions, related modules>
```

5. **Keep it small and focused**

If the described work is too large or mixes multiple capabilities:

- Propose a split into 2–3 smaller stories, each with its own persona,
  goal, and value.
- Make sure each story can be scoped to something completable within one iteration.

6. **Avoid common anti-patterns**

Watch out for:

- Stories without a real "so that" (no clear value).
- "As a developer, I want to refactor X…" without explaining why
  (maintainability, performance, risk reduction, etc.).
- Stories that are really tasks (e.g. "As Terraform, I want a module…"
  or "As a system, I want to create a VPC").
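The anti-pattern checks above lend themselves to a quick lint pass. The sketch below is heuristic and illustrative (the function name and rules are assumptions, not part of the skill's interface): it only catches the two most mechanical smells, a system persona and a missing "so that".

```python
import re

def lint_story(story):
    """Flag common user-story anti-patterns (heuristic, not exhaustive)."""
    problems = []
    # "As a system" / "As Terraform": systems are not users.
    if re.search(r"^As (a |an )?(system|Terraform)\b", story, re.IGNORECASE):
        problems.append("persona is a system, not a human beneficiary")
    # No "so that" clause means no explicit value.
    if "so that" not in story.lower():
        problems.append("missing 'so that' (no explicit value)")
    return problems

story = "As a system, I want to create a VPC."
print(lint_story(story))
```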