This repository was archived by the owner on Mar 25, 2026. It is now read-only.

Commit a554ecb

fix: update prompts-data with scenario config import fix
@langwatch/scenario/config → @langwatch/scenario/integrations/vitest/config
1 parent 487d69f commit a554ecb
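
The only functional change in this commit is the import path for `defineConfig`, which moved from the package root export to the Vitest integration subpath. As a minimal sketch, a `scenario.config.mjs` after the fix would look like the following (the `defaultModel` field is an assumption inferred from the Python `scenario.configure(default_model="openai/gpt-5-mini")` example elsewhere in the prompts; verify the exact option names against the `@langwatch/scenario` docs):

```typescript
// scenario.config.mjs
// Old (pre-fix) path, removed by this commit:
//   import { defineConfig } from "@langwatch/scenario/config";
import { defineConfig } from "@langwatch/scenario/integrations/vitest/config";
import { openai } from "@ai-sdk/openai";

export default defineConfig({
  // Hypothetical option: mirrors the Python default_model example;
  // check the scenario SDK documentation for the real field name.
  defaultModel: openai("gpt-5-mini"),
});
```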

File tree

1 file changed: +39 -148 lines changed

snippets/prompts-data.jsx

Lines changed: 39 additions & 148 deletions
@@ -1,7 +1,8 @@
 // Auto-generated from skills/_compiled/*.docs.txt
+// Regenerate with: bash skills/_compiled/generate.sh then run the generation script
 
 export const PROMPTS = {
-tracing: `Instrument my code with LangWatch
+tracing: `Add LangWatch Tracing to Your Code
 
 You are using LangWatch for your AI agent project. Follow these instructions.
@@ -56,12 +57,6 @@ Or add to \`~/.claude.json\` or \`.mcp.json\` in the project:
 ## For other editors
 Add to your editor's MCP settings file using the JSON config above.
 
-## For ChatGPT, Claude Chat, or other web assistants
-Use the hosted remote MCP server:
-- URL: \`https://mcp.langwatch.ai/sse\`
-- Authentication: Bearer Token with your LangWatch API key
-- Get a key at https://app.langwatch.ai/authorize
-
 **Tip:** If \`LANGWATCH_API_KEY\` is already in the project's \`.env\` file, use that same key for the MCP configuration.
 
 If MCP installation fails, see # Fetching LangWatch Docs Without MCP
@@ -138,10 +133,9 @@ Run the application and check that traces appear in your LangWatch dashboard at
 - Do NOT invent instrumentation patterns — always read the docs for the specific framework
 - Do NOT skip the \`langwatch.setup()\` call in Python
 - Do NOT forget to add LANGWATCH_API_KEY to .env
-- Do NOT use \`platform_\` MCP tools — this skill is about adding code, not creating platform resources
-`,
+- Do NOT use \`platform_\` MCP tools — this skill is about adding code, not creating platform resources`,
 
-evaluations: `Set up evaluations for my agent
+evaluations: `Set Up Evaluations for Your Agent
 
 You are using LangWatch for your AI agent project. Follow these instructions.
@@ -316,12 +310,6 @@ Or add to \`~/.claude.json\` or \`.mcp.json\` in the project:
 ## For other editors
 Add to your editor's MCP settings file using the JSON config above.
 
-## For ChatGPT, Claude Chat, or other web assistants
-Use the hosted remote MCP server:
-- URL: \`https://mcp.langwatch.ai/sse\`
-- Authentication: Bearer Token with your LangWatch API key
-- Get a key at https://app.langwatch.ai/authorize
-
 **Tip:** If \`LANGWATCH_API_KEY\` is already in the project's \`.env\` file, use that same key for the MCP configuration.
 
 If MCP installation fails, see # Fetching LangWatch Docs Without MCP
@@ -554,10 +542,9 @@ Go to https://app.langwatch.ai and:
 - Monitors **measure** (async), guardrails **act** (sync, via code with \`as_guardrail=True\`) — both are online evaluation
 - Always set up \`LANGWATCH_API_KEY\` in \`.env\`
 - Always call \`discover_schema\` before creating evaluators via MCP to understand available types
-- Do NOT create prompts with \`langwatch prompt create\` CLI when using the platform approach — that's for code-based projects
-`,
+- Do NOT create prompts with \`langwatch prompt create\` CLI when using the platform approach — that's for code-based projects`,
 
-scenarios: `Add scenario tests for my agent
+scenarios: `Test Your Agent with Scenarios
 
 You are using LangWatch for your AI agent project. Follow these instructions.
@@ -742,12 +729,6 @@ Or add to \`~/.claude.json\` or \`.mcp.json\` in the project:
 ## For other editors
 Add to your editor's MCP settings file using the JSON config above.
 
-## For ChatGPT, Claude Chat, or other web assistants
-Use the hosted remote MCP server:
-- URL: \`https://mcp.langwatch.ai/sse\`
-- Authentication: Bearer Token with your LangWatch API key
-- Get a key at https://app.langwatch.ai/authorize
-
 **Tip:** If \`LANGWATCH_API_KEY\` is already in the project's \`.env\` file, use that same key for the MCP configuration.
 
 If MCP installation fails, see # Fetching LangWatch Docs Without MCP
@@ -791,7 +772,7 @@ scenario.configure(default_model="openai/gpt-5-mini")
 For TypeScript, create a \`scenario.config.mjs\` file:
 \`\`\`typescript
 // scenario.config.mjs
-import { defineConfig } from "@langwatch/scenario/config";
+import { defineConfig } from "@langwatch/scenario/integrations/vitest/config";
 import { openai } from "@ai-sdk/openai";
 
 export default defineConfig({
@@ -1028,12 +1009,6 @@ Or add to \`~/.claude.json\` or \`.mcp.json\` in the project:
 ## For other editors
 Add to your editor's MCP settings file using the JSON config above.
 
-## For ChatGPT, Claude Chat, or other web assistants
-Use the hosted remote MCP server:
-- URL: \`https://mcp.langwatch.ai/sse\`
-- Authentication: Bearer Token with your LangWatch API key
-- Get a key at https://app.langwatch.ai/authorize
-
 **Tip:** If \`LANGWATCH_API_KEY\` is already in the project's \`.env\` file, use that same key for the MCP configuration.
 
 ### Step 2: Understand the Scenario Schema
@@ -1097,10 +1072,9 @@ For TypeScript: \`npx vitest run\`
 - Do NOT use \`fetch_scenario_docs\` for SDK documentation — that's for code-based testing
 - Write criteria as natural language descriptions, not regex patterns
 - Create focused scenarios — each should test one specific behavior
-- Always call \`discover_schema\` first to understand the scenario format
-`,
+- Always call \`discover_schema\` first to understand the scenario format`,
 
-prompts: `Version my prompts with LangWatch
+prompts: `Version Your Prompts with LangWatch Prompts CLI
 
 You are using LangWatch for your AI agent project. Follow these instructions.
@@ -1201,12 +1175,6 @@ Or add to \`~/.claude.json\` or \`.mcp.json\` in the project:
 ## For other editors
 Add to your editor's MCP settings file using the JSON config above.
 
-## For ChatGPT, Claude Chat, or other web assistants
-Use the hosted remote MCP server:
-- URL: \`https://mcp.langwatch.ai/sse\`
-- Authentication: Bearer Token with your LangWatch API key
-- Get a key at https://app.langwatch.ai/authorize
-
 **Tip:** If \`LANGWATCH_API_KEY\` is already in the project's \`.env\` file, use that same key for the MCP configuration.
 
 If MCP installation fails, see # Fetching LangWatch Docs Without MCP
@@ -1302,10 +1270,9 @@ Check that your prompts appear on https://app.langwatch.ai in the Prompts sectio
 - Do NOT hardcode prompts in application code — always use \`langwatch.prompts.get()\` to fetch managed prompts
 - Do NOT duplicate prompt text as a fallback (no try/catch around \`prompts.get\` with a hardcoded string) — this silently defeats versioning
 - Do NOT manually edit \`prompts.json\` — use the CLI commands (\`langwatch prompt init\`, \`langwatch prompt create\`, \`langwatch prompt sync\`)
-- Do NOT skip \`langwatch prompt sync\` — prompts must be synced to the platform after creation
-`,
+- Do NOT skip \`langwatch prompt sync\` — prompts must be synced to the platform after creation`,
 
-analytics: `How is my agent performing?
+analytics: `Analyze Agent Performance with LangWatch
 
 You are using LangWatch for your AI agent project. Follow these instructions.
@@ -1346,12 +1313,6 @@ Or add to \`~/.claude.json\` or \`.mcp.json\` in the project:
 ## For other editors
 Add to your editor's MCP settings file using the JSON config above.
 
-## For ChatGPT, Claude Chat, or other web assistants
-Use the hosted remote MCP server:
-- URL: \`https://mcp.langwatch.ai/sse\`
-- Authentication: Bearer Token with your LangWatch API key
-- Get a key at https://app.langwatch.ai/authorize
-
 **Tip:** If \`LANGWATCH_API_KEY\` is already in the project's \`.env\` file, use that same key for the MCP configuration.
 
 ## Step 2: Discover Available Metrics
@@ -1408,10 +1369,9 @@ Summarize the data clearly for the user:
 - Do NOT try to write code -- this skill uses MCP tools only, no SDK installation or code changes
 - Do NOT hardcode metric names -- discover them dynamically so they stay correct as the platform evolves
 - Do NOT use \`platform_\` MCP tools for creating resources -- this skill is read-only analytics
-- Do NOT present raw JSON to the user -- summarize the data in a clear, human-readable format
-`,
+- Do NOT present raw JSON to the user -- summarize the data in a clear, human-readable format`,
 
-level_up: `Take my agent to the next level
+level_up: `Add LangWatch Tracing to Your Code
 
 You are using LangWatch for your AI agent project. Follow these instructions.
@@ -1466,12 +1426,6 @@ Or add to \`~/.claude.json\` or \`.mcp.json\` in the project:
 ## For other editors
 Add to your editor's MCP settings file using the JSON config above.
 
-## For ChatGPT, Claude Chat, or other web assistants
-Use the hosted remote MCP server:
-- URL: \`https://mcp.langwatch.ai/sse\`
-- Authentication: Bearer Token with your LangWatch API key
-- Get a key at https://app.langwatch.ai/authorize
-
 **Tip:** If \`LANGWATCH_API_KEY\` is already in the project's \`.env\` file, use that same key for the MCP configuration.
 
 If MCP installation fails, see # Fetching LangWatch Docs Without MCP
@@ -2244,7 +2198,7 @@ scenario.configure(default_model="openai/gpt-5-mini")
 For TypeScript, create a \`scenario.config.mjs\` file:
 \`\`\`typescript
 // scenario.config.mjs
-import { defineConfig } from "@langwatch/scenario/config";
+import { defineConfig } from "@langwatch/scenario/integrations/vitest/config";
 import { openai } from "@ai-sdk/openai";
 
 export default defineConfig({
@@ -2479,78 +2433,11 @@ The MCP must be configured with your LangWatch API key.
 - Do NOT use \`fetch_scenario_docs\` for SDK documentation — that's for code-based testing
 - Write criteria as natural language descriptions, not regex patterns
 - Create focused scenarios — each should test one specific behavior
-- Always call \`discover_schema\` first to understand the scenario format
-`,
-
-platform_analytics: `You are helping me analyze my AI agent's performance using LangWatch.
+- Always call \`discover_schema\` first to understand the scenario format`,
 
-IMPORTANT: You will need my LangWatch API key. Ask me for it and direct me to https://app.langwatch.ai/authorize if I don't have one.
+recipe_debug_instrumentation: `Debug Your LangWatch Instrumentation
 
-## Setup
-
-Install the LangWatch MCP server:
-claude mcp add langwatch -- npx -y @langwatch/mcp-server --apiKey <API_KEY>
-
-## What to do
-
-1. Call discover_schema with category "all" to learn available metrics
-2. Call get_analytics to query:
-- Total LLM cost (last 7 days)
-- P95 latency trends
-- Token usage over time
-- Error rates
-3. Use search_traces to find traces with errors or high latency
-4. Present the findings clearly with key numbers and anomalies`,
-
-platform_scenarios: `You are helping me create scenario tests for my AI agent on the LangWatch platform.
-
-IMPORTANT: You will need my LangWatch API key. Ask me for it and direct me to https://app.langwatch.ai/authorize if I don't have one.
-
-## Setup
-
-Install the LangWatch MCP server:
-claude mcp add langwatch -- npx -y @langwatch/mcp-server --apiKey <API_KEY>
-
-## What to do
-
-1. Call discover_schema with category "scenarios" to understand the format
-2. Create scenarios using platform_create_scenario for:
-- Happy path: normal, expected interactions
-- Edge cases: unusual inputs, unclear requests
-- Error handling: when things go wrong
-
-For each scenario, define:
-- name: A descriptive name for the test case
-- situation: The context and user behavior to simulate
-- criteria: What the agent should do (list of success criteria)
-- labels: Tags for organization (optional)
-
-3. Use platform_list_scenarios to review all scenarios
-4. Use platform_update_scenario to refine them
-
-Write criteria as natural language descriptions, not regex patterns.
-Each scenario should test one specific behavior.`,
-
-platform_evaluators: `You are helping me set up evaluators for my AI agent on the LangWatch platform.
-
-IMPORTANT: You will need my LangWatch API key. Ask me for it and direct me to https://app.langwatch.ai/authorize if I don't have one.
-
-## Setup
-
-Install the LangWatch MCP server:
-claude mcp add langwatch -- npx -y @langwatch/mcp-server --apiKey <API_KEY>
-
-## What to do
-
-1. Call discover_schema with category "evaluators" to see available types
-2. Use platform_list_evaluators to see existing evaluators
-3. Create evaluators using platform_create_evaluator:
-- LLM-as-judge evaluators for quality assessment
-- Specific evaluator types matching your use case
-4. Use platform_get_evaluator and platform_update_evaluator to review and refine
-5. Then go to https://app.langwatch.ai to set up monitors using these evaluators`,
-
-recipe_debug_instrumentation: `You are using LangWatch for your AI agent project. Follow these instructions.
+You are using LangWatch for your AI agent project. Follow these instructions.
 
 IMPORTANT: You will need a LangWatch API key. Check if LANGWATCH_API_KEY is already in the project's .env file. If not, ask the user for it — they can get one at https://app.langwatch.ai/authorize. If they have a LANGWATCH_ENDPOINT in .env, they are on a self-hosted instance — use that endpoint instead of app.langwatch.ai.
 First, try to install the LangWatch MCP server for access to documentation and platform tools. If installation fails, you can fetch docs directly via the URLs provided below.
@@ -2608,10 +2495,11 @@ After fixes, compare before/after:
 | Spans not connected to traces | Missing \`@langwatch.trace()\` on entry function | Add trace decorator to the main function |
 | No labels on traces | Labels not set in trace metadata | Add \`metadata={"labels": ["feature"]}\` to trace update |
 | Missing user_id | User ID not passed to trace | Add \`user_id\` to trace metadata |
-| Traces from different calls merged | Missing \`langwatch.setup()\` or trace context not propagated | Ensure \`langwatch.setup()\` called at startup |
-`,
+| Traces from different calls merged | Missing \`langwatch.setup()\` or trace context not propagated | Ensure \`langwatch.setup()\` called at startup |`,
 
-recipe_improve_setup: `You are using LangWatch for your AI agent project. Follow these instructions.
+recipe_improve_setup: `Improve Your LangWatch Setup
+
+You are using LangWatch for your AI agent project. Follow these instructions.
 
 IMPORTANT: You will need a LangWatch API key. Check if LANGWATCH_API_KEY is already in the project's .env file. If not, ask the user for it — they can get one at https://app.langwatch.ai/authorize. If they have a LANGWATCH_ENDPOINT in .env, they are on a self-hosted instance — use that endpoint instead of app.langwatch.ai.
 First, try to install the LangWatch MCP server for access to documentation and platform tools. If installation fails, you can fetch docs directly via the URLs provided below.
@@ -2680,10 +2568,11 @@ After each improvement:
 - Do NOT skip the audit — you can't suggest improvements without understanding the current state
 - Do NOT give generic advice — every suggestion must be specific to this codebase
 - Do NOT overwhelm with 10 suggestions — pick the top 2-3
-- Do NOT skip running/verifying improvements
-`,
+- Do NOT skip running/verifying improvements`,
+
+recipe_evaluate_multimodal: `Evaluate Your Multimodal Agent
 
-recipe_evaluate_multimodal: `You are using LangWatch for your AI agent project. Follow these instructions.
+You are using LangWatch for your AI agent project. Follow these instructions.
 
 IMPORTANT: You will need a LangWatch API key. Check if LANGWATCH_API_KEY is already in the project's .env file. If not, ask the user for it — they can get one at https://app.langwatch.ai/authorize. If they have a LANGWATCH_ENDPOINT in .env, they are on a self-hosted instance — use that endpoint instead of app.langwatch.ai.
 First, try to install the LangWatch MCP server for access to documentation and platform tools. If installation fails, you can fetch docs directly via the URLs provided below.
@@ -2773,10 +2662,11 @@ Run the evaluation, review results, fix issues, re-run until quality is acceptab
 - Do NOT evaluate multimodal agents with text-only metrics — use image-aware judges
 - Do NOT skip testing with real file formats — synthetic descriptions aren't enough
 - Do NOT forget to handle file loading errors in evaluations
-- Do NOT use generic test images — use domain-specific ones matching the agent's purpose
-`,
+- Do NOT use generic test images — use domain-specific ones matching the agent's purpose`,
 
-recipe_generate_rag_dataset: `You are using LangWatch for your AI agent project. Follow these instructions.
+recipe_generate_rag_dataset: `Generate a RAG Evaluation Dataset
+
+You are using LangWatch for your AI agent project. Follow these instructions.
 
 IMPORTANT: You will need a LangWatch API key. Check if LANGWATCH_API_KEY is already in the project's .env file. If not, ask the user for it — they can get one at https://app.langwatch.ai/authorize. If they have a LANGWATCH_ENDPOINT in .env, they are on a self-hosted instance — use that endpoint instead of app.langwatch.ai.
 First, try to install the LangWatch MCP server for access to documentation and platform tools. If installation fails, you can fetch docs directly via the URLs provided below.
@@ -2876,10 +2766,11 @@ Before using the dataset:
 - Do NOT skip negative cases — testing "I don't know" is crucial for RAG
 - Do NOT use the same question pattern for every entry — diversify types
 - Do NOT forget to include the relevant context per row
-- Do NOT generate expected outputs that aren't actually in the knowledge base
-`,
+- Do NOT generate expected outputs that aren't actually in the knowledge base`,
 
-recipe_test_compliance: `You are using LangWatch for your AI agent project. Follow these instructions.
+recipe_test_compliance: `Test Your Agent's Compliance Boundaries
+
+You are using LangWatch for your AI agent project. Follow these instructions.
 
 IMPORTANT: You will need a LangWatch API key. Check if LANGWATCH_API_KEY is already in the project's .env file. If not, ask the user for it — they can get one at https://app.langwatch.ai/authorize. If they have a LANGWATCH_ENDPOINT in .env, they are on a self-hosted instance — use that endpoint instead of app.langwatch.ai.
 First, try to install the LangWatch MCP server for access to documentation and platform tools. If installation fails, you can fetch docs directly via the URLs provided below.
@@ -3012,10 +2903,11 @@ Create reusable criteria for your domain:
 - Do NOT only test with polite, straightforward questions — adversarial probing is essential
 - Do NOT skip multi-turn escalation scenarios — single-turn tests miss persistence attacks
 - Do NOT use weak criteria like "agent is helpful" — be specific about what it must NOT do
-- Do NOT forget to test the "empathetic but firm" response — the agent should show care while maintaining boundaries
-`,
+- Do NOT forget to test the "empathetic but firm" response — the agent should show care while maintaining boundaries`,
 
-recipe_test_cli_usability: `You are using LangWatch for your AI agent project. Follow these instructions.
+recipe_test_cli_usability: `Test Your CLI's Agent Usability
+
+You are using LangWatch for your AI agent project. Follow these instructions.
 
 IMPORTANT: You will need a LangWatch API key. Check if LANGWATCH_API_KEY is already in the project's .env file. If not, ask the user for it — they can get one at https://app.langwatch.ai/authorize. If they have a LANGWATCH_ENDPOINT in .env, they are on a self-hosted instance — use that endpoint instead of app.langwatch.ai.
 First, try to install the LangWatch MCP server for access to documentation and platform tools. If installation fails, you can fetch docs directly via the URLs provided below.
@@ -3110,7 +3002,6 @@ Write scenarios where the agent makes a mistake and must recover:
 - Do NOT output errors without actionable guidance (the agent needs to know how to fix it)
 - DO make \`--help\` comprehensive on every subcommand
 - DO use non-zero exit codes for failures (agents check exit codes)
-- DO output structured information (the agent can parse it)
-`,
+- DO output structured information (the agent can parse it)`,
 
-};
+};

0 commit comments
