diff --git a/README.md b/README.md
index e0d03f4..bddacd1 100644
--- a/README.md
+++ b/README.md
@@ -8,6 +8,7 @@ The toolkit bundles the following capabilities as a single **mc-agent-toolkit**
| Feature | Description | Details |
|---|---|---|
+| **Monitoring Advisor** | Analyzes data coverage across warehouses and use cases, identifies monitoring gaps, and creates monitors to protect critical data. | [README](skills/monitoring-advisor/README.md) |
| **Monitor Creation** | Guides AI agents through creating monitors correctly — validates tables, fields, and parameters before generating monitors-as-code YAML. | [README](skills/monitor-creation/README.md) |
| **Prevent** | Surfaces lineage, alerts, and blast radius before code changes. Generates monitors-as-code and targeted validation queries to prevent data incidents. | [README](skills/prevent/README.md) |
| **Generate Validation Notebook** | Generates SQL validation notebooks for dbt model changes, with targeted queries comparing baseline and development data. | [README](skills/generate-validation-notebook/README.md) |
diff --git a/plugins/claude-code/.claude-plugin/plugin.json b/plugins/claude-code/.claude-plugin/plugin.json
index 77ff67a..96184ae 100644
--- a/plugins/claude-code/.claude-plugin/plugin.json
+++ b/plugins/claude-code/.claude-plugin/plugin.json
@@ -15,6 +15,7 @@
"prevent",
"push-ingestion",
"validation-notebook",
+ "monitoring-advisor",
"dbt",
"schema"
],
diff --git a/plugins/claude-code/skills/monitoring-advisor b/plugins/claude-code/skills/monitoring-advisor
new file mode 120000
index 0000000..9052cbe
--- /dev/null
+++ b/plugins/claude-code/skills/monitoring-advisor
@@ -0,0 +1 @@
+../../../skills/monitoring-advisor
\ No newline at end of file
diff --git a/skills/README.md b/skills/README.md
index 1076290..589152e 100644
--- a/skills/README.md
+++ b/skills/README.md
@@ -6,6 +6,7 @@ Skills are platform-agnostic instruction sets that tell an AI coding agent what
| Skill | Description |
|---|---|
+| **[Monitoring Advisor](monitoring-advisor/)** | Analyzes data coverage across warehouses and use cases, identifies monitoring gaps, and creates monitors to protect critical data. |
| **[Monitor Creation](monitor-creation/)** | Guides AI agents through creating monitors correctly — validates tables, fields, and parameters before generating monitors-as-code YAML. |
| **[Prevent](prevent/)** | Surfaces Monte Carlo context (lineage, alerts, blast radius) before code changes, generates monitors-as-code, and produces targeted validation queries. |
| **[Generate Validation Notebook](generate-validation-notebook/)** | Generates SQL validation notebooks for dbt model changes, with targeted queries comparing baseline and development data. |
diff --git a/skills/monitor-creation/references/metric-monitor.md b/skills/monitor-creation/references/metric-monitor.md
index c63317b..fc6c1eb 100644
--- a/skills/monitor-creation/references/metric-monitor.md
+++ b/skills/monitor-creation/references/metric-monitor.md
@@ -21,13 +21,13 @@ Use a metric monitor when the user wants to:
| `name` | string | Unique identifier for the monitor. Use a descriptive slug (e.g., `orders_null_check`). |
| `description` | string | Human-readable description of what the monitor checks. |
| `table` | string | Table MCON (preferred) or `database:schema.table` format. If not MCON, also pass `warehouse`. |
-| `aggregate_time_field` | string | **MUST be a real timestamp/datetime column from the table.** NEVER guess this value. |
| `alert_conditions` | array | List of alert condition objects (see Alert Conditions below). |
## Optional Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
+| `aggregate_time_field` | string | none | Timestamp/datetime column for time-windowed aggregation. **When provided, MUST be a real column from the table — NEVER guess this value.** When omitted, the monitor queries all rows on each run (whole-table scan). Omit for tables without a suitable timestamp column. |
| `warehouse` | string | auto-resolved | Warehouse name or UUID. Required if `table` is not an MCON. |
| `segment_fields` | array of string | none | Fields to group/segment metrics by (e.g., `["country", "status"]`). |
| `aggregate_by` | string | `"day"` | Time interval: `"hour"`, `"day"`, `"week"`, `"month"`. |
@@ -54,7 +54,16 @@ For example, to run a daily-aggregated monitor every other day, pass `aggregate_
## Choosing the Timestamp Field
-The `aggregate_time_field` is the most critical parameter. It MUST be a real column from the table that contains timestamp or datetime values. This is the number one source of monitor creation failures.
+The `aggregate_time_field` controls whether the monitor uses time-windowed aggregation or whole-table scans. When provided, it MUST be a real column from the table — this is the number one source of monitor creation failures.
+
+### When to omit it
+
+Omit `aggregate_time_field` when:
+- The table has **no timestamp or datetime columns** at all.
+- The table uses a **truncate-and-reload** pattern (fully replaced on each pipeline run) — time-windowed aggregation is meaningless since all rows share the same load time.
+- The user wants to monitor the **entire table state** on each run (e.g., `RELATIVE_ROW_COUNT` segmented by a dimension).
+
+When omitted, the monitor queries all rows on each run. This works well for small-to-medium tables but can be expensive for very large tables.
### How to pick it
@@ -63,9 +72,9 @@ The `aggregate_time_field` is the most critical parameter. It MUST be a real col
3. If the user specified one, verify it exists in the column list.
4. If exactly one obvious candidate exists, suggest it.
5. If multiple candidates exist, present them and ask the user.
-6. If NO obvious timestamp columns exist, suggest a custom SQL monitor instead (which does not need a timestamp field).
+6. If NO obvious timestamp columns exist, omit the field — the monitor will do a whole-table scan. For very large tables, consider whether a custom SQL monitor would be more efficient.
-**NEVER** proceed without confirming the timestamp field exists in the table schema.
+**NEVER** guess a timestamp field name — either confirm it exists in the schema or omit it.
### Common timestamp field mistakes
diff --git a/skills/monitoring-advisor/README.md b/skills/monitoring-advisor/README.md
new file mode 100644
index 0000000..b6346f8
--- /dev/null
+++ b/skills/monitoring-advisor/README.md
@@ -0,0 +1,83 @@
+# Monte Carlo Monitoring Advisor Skill
+
+Analyzes data coverage across warehouses and use cases, identifies monitoring gaps, and creates monitors to protect critical data. It walks users through warehouse discovery, use-case exploration, coverage gap analysis, and monitor creation — all through natural conversation.
+
+## Editor & Stack Compatibility
+
+The skill works with any AI editor that supports MCP and the Agent Skills format — including Claude Code, Cursor, and VS Code.
+
+All warehouses supported by Monte Carlo work with the monitoring advisor. The skill validates table and column references against your actual warehouse schema via the Monte Carlo API.
+
+## Prerequisites
+
+- Claude Code, Cursor, VS Code, or any editor with MCP support
+- Monte Carlo account with Editor role or above
+- [MC CLI](https://docs.getmontecarlo.com/docs/using-the-cli) installed for monitor deployment (`pip install montecarlodata`)
+- **Recommended:** Install alongside the **monitor-creation** skill — the monitoring advisor cross-references its parameter docs
+
+## Setup
+
+### Via the mc-agent-toolkit plugin (recommended)
+
+Install the plugin for your editor — it bundles the skill, hooks, MCP server, and permissions automatically. See the [main README](../../README.md#installing-the-plugin-recommended) for editor-specific instructions.
+
+### Standalone
+
+1. Configure the Monte Carlo MCP server:
+   ```bash
+ claude mcp add --transport http monte-carlo-mcp https://integrations.getmontecarlo.com/mcp
+ ```
+
+2. Install the skill:
+ ```bash
+ npx skills add monte-carlo-data/mc-agent-toolkit --skill monitoring-advisor
+ ```
+
+3. Authenticate: run `/mcp` in your editor, select `monte-carlo-mcp`, and complete the OAuth flow.
+
+4. Verify: ask your editor "Test my Monte Carlo connection" — it should call `testConnection` and confirm.
+
+
+### Legacy: header-based auth (for MCP clients without HTTP transport)
+
+If your MCP client doesn't support HTTP transport, use `.mcp.json.example` with `npx mcp-remote` and header-based authentication. See the [MCP server docs](https://docs.getmontecarlo.com/docs/mcp-server) for details.
+
+## How to use it
+
+Ask your AI editor about your monitoring coverage — describe what you want to understand or protect. The skill guides the agent through warehouse discovery, use-case analysis, coverage gap identification, and monitor creation. No special commands needed.
+
+### Example prompts
+
+- "What are my coverage gaps?"
+- "Show me my use cases and what's monitored"
+- "Which tables should I monitor first?"
+- "Analyze monitoring coverage for my warehouse"
+- "Find unmonitored tables with recent anomalies"
+- "Help me set up monitoring for my critical use cases"
+
+### What it does
+
+1. **Discovers** your warehouses and use cases
+2. **Analyzes** coverage — which tables are monitored, which aren't, and which have active anomalies
+3. **Prioritizes** gaps by criticality, importance score, and anomaly activity
+4. **Suggests** monitors tailored to your use cases and data patterns
+5. **Generates** monitors-as-code YAML ready for deployment
+
+### Deploying generated monitors
+
+When the advisor generates a monitor, it returns MaC YAML. Deploy with:
+
+```bash
+montecarlo monitors apply --dry-run # preview
+montecarlo monitors apply --auto-yes # apply
+```
+
+Your project needs a `montecarlo.yml` config in the working directory:
+
+```yaml
+version: 1
+namespace: <your-namespace>
+default_resource: <warehouse-name>
+```
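+
+A filled-in example (the namespace and resource values below are illustrative, not defaults):
+
+```yaml
+version: 1
+namespace: monitoring-advisor-demo
+default_resource: snowflake-prod
+```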
diff --git a/skills/monitoring-advisor/SKILL.md b/skills/monitoring-advisor/SKILL.md
new file mode 100644
index 0000000..5d8dc16
--- /dev/null
+++ b/skills/monitoring-advisor/SKILL.md
@@ -0,0 +1,240 @@
+---
+name: monte-carlo-monitoring-advisor
+description: |
+ Analyze data coverage and create monitoring for critical use cases.
+ Activates when the user asks about monitoring coverage, data coverage gaps,
+ use case analysis, or wants to understand what's monitored vs. not.
+version: 1.0.0
+---
+
+# Monte Carlo Monitoring Advisor Skill
+
+This skill helps you analyze a user's data estate, discover critical use cases, identify coverage gaps, and suggest or create monitors to protect what matters most. It is the interactive counterpart to Monte Carlo's coverage analysis — walking the user through warehouse discovery, use-case exploration, and monitor creation.
+
+When the user is ready to create monitors, **hand off to the monitor-creation skill** (`../monitor-creation/SKILL.md`). It contains the full validation, parameter guidance, and creation workflow. This skill focuses on the coverage analysis that leads up to monitor creation.
+
+## When to activate this skill
+
+Activate when the user:
+
+- Asks about monitoring coverage, data coverage, or coverage gaps
+- Wants to understand what's monitored vs. not in their warehouse
+- Asks about use cases, use-case criticality, or use-case analysis
+- Wants to explore their data estate and find what needs monitoring
+- Says things like "what should I monitor?", "where are my coverage gaps?", "show me my use cases"
+- Asks about unmonitored tables with anomalies or importance-based prioritization
+
+## When NOT to activate this skill
+
+Do not activate when the user is:
+
+- Asking to create a specific monitor for a known table (use the monitor-creation skill)
+- Triaging or responding to active alerts (use the prevent skill's Workflow 3)
+- Running impact assessments before code changes (use the prevent skill's Workflow 4)
+- Editing or deleting existing monitors
+
+---
+
+## Prerequisites
+
+- **Required:** Monte Carlo MCP server (`monte-carlo-mcp`) must be configured and authenticated
+- **Optional:** A database MCP server (Snowflake, BigQuery, Redshift, Databricks) for SQL profiling of table usage patterns
+
+---
+
+## Available MCP tools
+
+All tools are available via the `monte-carlo-mcp` MCP server.
+
+| Tool | Purpose |
+| --- | --- |
+| `getWarehouses` | List accessible warehouses (needed first — `getUseCases` requires `warehouse_id`) |
+| `getUseCases` | List use cases with criticality, descriptions, table counts, precomputed tag names |
+| `getUseCaseTableSummary` | Criticality distribution (HIGH/MEDIUM/LOW table counts) for a use case |
+| `getUseCaseTables` | Paginated tables with criticality, golden-table status, MCONs |
+| `getMonitors` | Check monitoring status on specific tables via `mcons` filter |
+| `getAssetLineage` | Upstream/downstream dependencies for tables (takes MCONs + direction) |
+| `getAudiences` | List notification audiences |
+| `getUnmonitoredTablesWithAnomalies` | Tables with muted OOTB anomalies but no monitors (takes ISO 8601 time range) |
+| `search` | Find tables by name; supports `is_monitored` filter |
+| `getTable` | Table details, fields, stats, domain membership |
+| `getQueriesForTable` | Query logs for a table (source/destination) |
+| `getFieldMetricDefinitions` | Available metrics per field type for a warehouse |
+| `getDomains` | List Monte Carlo domains |
+| `getValidationPredicates` | Available validation rule types |
+| `createTableMonitorMac` | Generate table monitor YAML (dry-run) |
+| `createMetricMonitorMac` | Generate metric monitor YAML (dry-run) |
+| `createValidationMonitorMac` | Generate validation monitor YAML (dry-run) |
+| `createCustomSqlMonitorMac` | Generate custom SQL monitor YAML (dry-run) |
+| `createComparisonMonitorMac` | Generate comparison monitor YAML (dry-run) |
+
+---
+
+## First-turn protocol
+
+Follow this sequence at the start of every conversation. Do NOT skip steps.
+
+### Step 1: Discover warehouses
+
+Call `getWarehouses` to list all accessible warehouses.
+
+- If **one** warehouse: select it automatically, proceed to Step 2.
+- If **multiple** warehouses: present warehouse **names** (never UUIDs) and ask the user which one to explore.
+
+### Step 2: Discover use cases
+
+Call `getUseCases(warehouse_id=<warehouse_id>)` to discover use cases for the chosen warehouse.
+
+- If **use cases exist** → proceed to the **Use-case workflow** (below).
+- If **no use cases** → proceed to the **Importance-based fallback** (below).
+
+### Step 3: Check for database MCP (optional)
+
+Check if the user has a database MCP server available by looking for tools containing `snowflake`, `bigquery`, `redshift`, or `databricks` in the tool list. If found, note it for the SQL profiling step later. If not found, skip SQL profiling gracefully.
+
+---
+
+## Use-case workflow
+
+This is the primary flow when use cases are defined.
+
+### Present use cases
+
+- Sort by criticality: **HIGH** before **MEDIUM** before **LOW**.
+- For each use case, show the **description** and explain the **reasoning for its criticality level** so the user understands why it matters.
+- Call `getUseCaseTables` with `golden_tables_only=true` and mention specific golden-table names as concrete examples. Golden tables are the last layer in the warehouse — they feed ML models, dashboards, and reports. Explain this when relevant.
+- Use `getAssetLineage` to explain how tables in a use case are connected and why certain tables are important (e.g. a golden table with many upstream dependencies).
+
+### Analyze coverage
+
+1. Call `getUseCaseTableSummary` to show how many tables exist at each criticality level (HIGH / MEDIUM / LOW) for the use case.
+2. Call `getUseCaseTables` to obtain table MCONs, then call `getMonitors(mcons=[...])` to report how many are already monitored vs. not.
+3. Ask the user which criticality scope they prefer:
+ - **HIGH only** — monitor only the most critical tables
+ - **MEDIUM + HIGH** — broader coverage
+ - **ALL** — full coverage including LOW-criticality tables
+4. You may suggest covering **multiple** use cases in one session.
+
+### Identify coverage gaps with anomaly data
+
+Use `getUnmonitoredTablesWithAnomalies` to discover tables that are **not monitored** but already have muted out-of-the-box anomalies. This reveals real coverage gaps — places where Monte Carlo detected data issues but no monitor was configured to alert anyone.
+
+- Call it with a recent time window (e.g. last 7–30 days) using ISO 8601 timestamps.
+- Results are ranked by **importance score** — the most critical gaps appear first.
+- Each result includes a sample of anomaly events showing what types of issues were detected (freshness, volume, schema changes).
+- Use this to **prioritize** which unmonitored tables to cover first — a table with recent anomalies is a stronger candidate than one with no activity.
+- Cross-reference with use-case data: if an unmonitored table with anomalies belongs to a critical use case, escalate its priority.
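+
+As a sketch, a two-week window might be passed like this; the argument names `start_time` and `end_time` are assumptions, so confirm them against the tool's input schema:
+
+```json
+{
+  "start_time": "2025-01-01T00:00:00Z",
+  "end_time": "2025-01-15T00:00:00Z"
+}
+```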
+
+---
+
+## Importance-based fallback
+
+When no use cases are defined, fall back to importance-based table discovery.
+
+1. **Find unmonitored tables:** Use `search(query="", is_monitored=false)` to find unmonitored tables sorted by importance.
+2. **Find tables with anomalies:** Use `getUnmonitoredTablesWithAnomalies` with a recent time window (last 14–30 days) to find tables with recent anomalies but no monitors.
+3. **Inspect top candidates:** Use `getTable` to check table details, fields, and stats for the most important unmonitored tables.
+4. **Understand criticality via lineage:** Use `getAssetLineage` to understand which tables are most connected — tables with many downstream dependencies are higher priority.
+5. **Prioritize:** Rank candidates by importance score and anomaly activity. Present the top candidates to the user with reasoning.
+
+---
+
+## SQL profiling (optional)
+
+If a database MCP server was detected in Step 3 of the first-turn protocol:
+
+1. Call `getQueriesForTable` to see recent query patterns on candidate tables.
+2. Use the database MCP tools (e.g. `snowflake_query`, `bigquery_query`) to profile table usage — identify which tables are queried most frequently, which columns are used in JOINs and WHERE clauses.
+3. Use this information to refine monitor suggestions — heavily-queried tables with no monitors are high-priority gaps.
+
+If no database MCP is available, skip this step entirely. Do not ask the user to configure one.
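+
+For Snowflake specifically, a usage-profiling query might look like the following sketch. It reads the `ACCOUNT_USAGE.ACCESS_HISTORY` view (which requires appropriate privileges and has ingestion latency); adapt the approach to the user's warehouse:
+
+```sql
+-- Sketch: rank tables by distinct query volume over the last 14 days.
+SELECT
+  obj.value:"objectName"::string AS table_name,
+  COUNT(DISTINCT ah.query_id)   AS query_count
+FROM snowflake.account_usage.access_history AS ah,
+     LATERAL FLATTEN(input => ah.direct_objects_accessed) AS obj
+WHERE ah.query_start_time >= DATEADD('day', -14, CURRENT_TIMESTAMP())
+GROUP BY table_name
+ORDER BY query_count DESC
+LIMIT 20;
+```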
+
+---
+
+## Monitor creation
+
+When the user is ready to create monitors, **read and follow the monitor-creation skill** (`../monitor-creation/SKILL.md`). It handles monitor type selection, table/column validation, domain assignment, scheduling, confirmation, and YAML generation.
+
+This section covers only the guidance specific to coverage-driven monitor creation.
+
+### Pre-creation context
+
+Before handing off to the monitor-creation workflow:
+
+1. Call `getAudiences` to list available notification audiences. Ask the user which audience they want notifications sent to.
+2. Ask whether the monitor should be created as a **DRAFT** or active.
+3. When passing `audiences` or `failure_audiences`, use the audience **name/label** (not UUID).
+
+### Use-case tag monitors
+
+The most common output of coverage analysis is a **table monitor scoped by use-case tags** via `createTableMonitorMac`. The `asset_selection` parameter uses this structure:
+
+```json
+{
+  "databases": ["<database>"],
+  "schemas": ["<schema>"],
+ "filters": [
+ {
+ "type": "TABLE_TAG",
+      "tableTags": ["<tag_name>:<criticality>"],
+ "tableTagsOperator": "HAS_ANY"
+ }
+ ]
+}
+```
+
+Rules:
+- Filter `type` is **always** `TABLE_TAG` for use-case monitors.
+- `tableTagsOperator` should be `HAS_ANY`.
+- Each entry in `tableTags` is `"<tag_name>:<criticality>"`, where the tag key is the precomputed tag name from `getUseCases` output and the value is the criticality level in lowercase (`high`, `medium`, `low`).
+- To monitor only HIGH-criticality tables: `["tag_name:high"]`
+- To monitor MEDIUM + HIGH: `["tag_name:high", "tag_name:medium"]`
+- To monitor ALL: `["tag_name:high", "tag_name:medium", "tag_name:low"]`
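+
+For example, a hypothetical use case whose precomputed tag name is `revenue_reporting`, scoped to MEDIUM + HIGH, would use an `asset_selection` like this (the database and schema names are placeholders):
+
+```json
+{
+  "databases": ["analytics"],
+  "schemas": ["reporting"],
+  "filters": [
+    {
+      "type": "TABLE_TAG",
+      "tableTags": ["revenue_reporting:high", "revenue_reporting:medium"],
+      "tableTagsOperator": "HAS_ANY"
+    }
+  ]
+}
+```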
+
+### Monitor description guidelines
+
+Write a clear, meaningful `description` that explains what the monitor covers and why. The backend auto-generates the monitor `name` — you cannot control it, but the description is what users see.
+
+- **Bad:** `"Data Quality Monitoring - HIGH criticality table monitor"`
+- **Good:** `"Monitor HIGH criticality tables in the Revenue Reporting use case to catch issues before they affect dashboards and financial reports."`
+
+The description should mention the criticality scope, the use case name, and a brief reason why this monitoring matters.
+
+---
+
+## Transient and truncate-and-reload tables
+
+Some tables show 0 rows when queried directly but have recent write activity in Monte Carlo metadata. These are **transient tables** — fully replaced on each pipeline run (truncate-and-reload pattern). Recognize this pattern early to avoid wasting time querying empty tables.
+
+Signs of a transient table:
+- `getTable` shows recent `last_write` timestamp and high read/write activity
+- Direct SQL query returns 0 rows or all-NULL timestamp columns
+- Monte Carlo detected freshness anomalies (the table stayed empty longer than expected between loads)
+
+---
+
+## Graceful degradation
+
+Handle missing or unavailable tools gracefully:
+
+| Scenario | Behavior |
+| --- | --- |
+| No use cases defined | Fall back to importance-based discovery |
+| No database MCP available | Skip SQL profiling, rely on MC tools only |
+| `getUnmonitoredTablesWithAnomalies` returns empty | Note that no recent anomalies were found; proceed with use-case or importance-based prioritization |
+| `getUseCaseTables` returns no tables | Note the use case has no tables; suggest exploring other use cases |
+| `getAudiences` returns empty | Inform user no audiences are configured; monitors can still be created without notification routing |
+| User has no warehouses | Inform user that no warehouses are accessible; they may need to check their Monte Carlo permissions |
+
+Never error out or stop the conversation because one tool returned empty results. Explain what happened and offer the next best path.
+
+---
+
+## Rules
+
+- **Never expose UUIDs, MCONs, or internal identifiers** to the user — always use human-readable names for warehouses, audiences, use cases, and tables. Keep internal identifiers for tool calls only.
+- When the user asks about relationships between tables, use `getAssetLineage` to fetch upstream/downstream connections and explain the data flow.
+- Be concise but thorough. Use bullet points and tables for clarity.
+- Always use **ISO 8601** format for datetime values in tool calls.
+- Never reformat YAML values returned by creation tools.
+- When passing `audiences` or `failure_audiences` to monitor creation tools, use the audience **name/label** (not UUID). The API accepts audience names.