🤖 Copilot PR Prompt Pattern Analysis - 2025-11-20
This report analyzes 1,000 Copilot-generated PRs from the last 30 days to identify which prompt patterns lead to successful merges versus closed PRs.
Summary
Analysis Period: Last 30 days
Total PRs: 1,000 | Merged: 766 (76.6%) | Closed: 233 (23.3%) | Open: 1
Key Finding: Overall, 76.6% of Copilot PRs are successfully merged, indicating strong effectiveness of the Copilot coding agent when properly prompted.
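For readers who want to reproduce summary numbers like these, here is a minimal sketch using the GitHub REST API. It assumes Copilot PRs can be identified by author login (the `copilot-agent` login is a placeholder, not from the report) and omits the 30-day window filter for brevity.

```python
# Sketch: reproduce the summary numbers with the GitHub REST API.
# Assumptions: Copilot PRs are identified by author login ("copilot-agent"
# is a placeholder), and a real run would also filter created_at to the
# 30-day window. Requires `requests` and a GITHUB_TOKEN env var.
import os
import requests

OWNER, REPO = "githubnext", "gh-aw"
AUTHOR = "copilot-agent"  # placeholder login for Copilot-authored PRs

def fetch_prs():
    """Yield every PR in the repo, paginating through the list endpoint."""
    headers = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}
    page = 1
    while True:
        resp = requests.get(
            f"https://api.github.com/repos/{OWNER}/{REPO}/pulls",
            params={"state": "all", "per_page": 100, "page": page},
            headers=headers,
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            return
        yield from batch
        page += 1

prs = [p for p in fetch_prs() if p["user"]["login"] == AUTHOR]
merged = sum(1 for p in prs if p["merged_at"])
closed = sum(1 for p in prs if p["state"] == "closed" and not p["merged_at"])
opened = sum(1 for p in prs if p["state"] == "open")
print(f"Total: {len(prs)} | Merged: {merged} ({merged / len(prs):.1%}) | "
      f"Closed: {closed} | Open: {opened}")
```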
Full Analysis Details
Prompt Categories and Success Rates
| Category | Total | Merged | Success Rate |
| --- | --- | --- | --- |
| Documentation | 455 | 360 | 79.1% ⭐ |
| Testing | 395 | 308 | 78.0% |
| Update/Modify | 536 | 420 | 78.4% |
| Bug Fix | 530 | 413 | 77.9% |
| Refactoring | 216 | 166 | 76.9% |
| Feature Addition | 551 | 422 | 76.6% |
| Other | 83 | 62 | 74.7% |

Note: categories are not mutually exclusive, so the Total column sums to more than the 1,000 PRs analyzed.
Insight: Documentation and testing prompts have the highest success rates, suggesting these are well-defined tasks that Copilot handles effectively.
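The report does not say how prompts were assigned to categories. One plausible approach is keyword matching, sketched below; the keyword lists are assumptions, and a prompt matching several lists is what lets the Total column exceed 1,000.

```python
# Sketch: keyword-based category assignment. The report does not describe
# its method; these keyword lists are assumptions. A prompt can match
# several categories, which is why the Total column exceeds 1,000.
CATEGORY_KEYWORDS = {
    "Documentation": ["doc", "docs", "documentation", "readme"],
    "Testing": ["test", "tests", "coverage", "spec"],
    "Update/Modify": ["update", "modify", "change", "rename"],
    "Bug Fix": ["fix", "bug", "broken", "error"],
    "Refactoring": ["refactor", "cleanup", "simplify"],
    "Feature Addition": ["add", "implement", "create", "support"],
}

def categorize(prompt: str) -> set[str]:
    """Return every category whose keywords appear in the prompt."""
    words = set(prompt.lower().split())
    matched = {
        category
        for category, keywords in CATEGORY_KEYWORDS.items()
        if words & set(keywords)
    }
    return matched or {"Other"}

print(categorize("Fix broken links in documentation"))
# -> {'Bug Fix', 'Documentation'} (set order may vary)
```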
Prompt Characteristics Analysis
✅ Successful Prompt Patterns
Common characteristics in merged PRs:
- Average prompt length: 174.6 words (median: 59 words)
- URL/Link inclusion: 48.8% include GitHub issue/run links for context
- File references: 49.2% reference specific files or extensions

Most effective keywords in merged PRs:
- `issue`, `workflow`, `github` - context references
- `update`, `add`, `section` - clear actions
- `command`, `files`, `description` - specific targets

Example successful prompts:
- Short & Specific with Context (PR #4374: "Fix broken documentation links in troubleshooting and how-it-works pages") → MERGED
- Clear Imperative (PR #4365: "Fix test isolation in collect_ndjson_output.test.cjs") → MERGED
- Detailed with Issue Context (multiple examples) → MERGED
❌ Unsuccessful Prompt Patterns
Example unsuccessful prompts:
- Vague/Incomplete (PR #4346: "[WIP] Remove common keywords and phrases from analysis") → CLOSED
- Potentially Too Complex (PR #4370: "[WIP] Skip conclusion job if agent job is cancelled") → CLOSED
Key Insights
📊 Pattern 1: Context is King
Prompts that include GitHub issue or workflow-run links show a ~6% higher success rate.
📏 Pattern 2: File References Matter
49.2% of merged PRs reference specific files, versus 39.9% of closed PRs (~9% higher success rate).
🎯 Pattern 3: Prompt Length is Neutral
Merged prompts average 174.6 words (median 59); length alone does not predict whether a PR is merged.
🔧 Pattern 4: Documentation & Testing Excel
Documentation (79.1%) and Testing (78.0%) have the highest category success rates, reflecting well-defined task boundaries.
Recommendations
Based on the analysis of 1,000 Copilot PRs:
✅ DO: Include Context Links
Recommendation: Always include GitHub issue or workflow run URLs when relevant
Why: 48.8% of merged PRs include GitHub issue/run links (~6% higher success rate)
Example: "Fix docs broken links https://github.com/org/repo/actions/runs/12345"
✅ DO: Reference Specific Files
Recommendation: Mention specific files, paths, or file extensions when possible
Why: 49.2% of merged PRs reference files vs 39.9% of closed
Example: "Update the authentication logic in auth.js to handle token refresh"
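The URL and file-reference rates above imply some detection heuristic the report does not spell out. A minimal sketch, assuming simple regexes (the extension list is a guess):

```python
# Sketch: how the URL and file-reference rates might be measured.
# These regexes are assumptions; the report's actual heuristics are unknown.
import re

URL_RE = re.compile(r"https://github\.com/\S+")
FILE_RE = re.compile(r"\b[\w./-]+\.(?:js|ts|py|go|md|cjs|json|ya?ml)\b")

def prompt_features(prompt: str) -> dict[str, bool]:
    """Flag whether a prompt contains a GitHub link or a file reference."""
    return {
        "has_url": bool(URL_RE.search(prompt)),
        "has_file_ref": bool(FILE_RE.search(prompt)),
    }

print(prompt_features("Update the authentication logic in auth.js"))
# -> {'has_url': False, 'has_file_ref': True}
```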
✅ DO: Use Clear Imperative Verbs
Recommendation: Start with action verbs: fix, add, update, remove, implement
Why: 33.6% of merged PRs vs 30.8% of closed use imperative starts
Example: "Fix javascript tests" instead of "The tests need fixing"
✅ DO: Be Specific About Scope
Recommendation: Define clear boundaries for the task
Why: Focused tasks (docs, tests) have higher success rates (78-79%)
Example: "Add error handling to the API client", not "Improve error handling"
❌ AVOID: Vague Instructions
Recommendation: Don't use ambiguous verbs like "improve", "enhance", "optimize" without specifics
Why: These require judgment calls and may not align with expectations
Example: BAD: "Improve the code" | GOOD: "Refactor the parser to use switch statements"
❌ AVOID: Multiple Complex Dependencies
Recommendation: Break down tasks with multiple conditions into simpler prompts
Why: Complex workflow logic often requires human judgment
Example: Instead of conditional workflow changes, specify exact desired behavior
💡 IDEAL PROMPT TEMPLATE
[Action Verb] [Specific Target] [Context Link]
Examples:
✅ Fix broken links in documentation https://github.com/org/repo/issues/123
✅ Add error handling to src/api/client.js for network timeouts
✅ Update test coverage for authentication module to include edge cases
✅ Remove deprecated API calls from services/legacy.ts
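This template lends itself to an automated pre-flight check. Below is a hedged sketch of a prompt linter derived from the recommendations in this report; the verb list and regexes are assumptions, not part of the original analysis.

```python
# Sketch of a prompt linter based on this report's recommendations.
# The verb list and the URL/file regexes are assumptions.
import re

IMPERATIVE_VERBS = {"fix", "add", "update", "remove", "implement", "refactor"}
VAGUE_VERBS = {"improve", "enhance", "optimize"}
URL_RE = re.compile(r"https://github\.com/\S+")
FILE_RE = re.compile(r"\b[\w./-]+\.(?:js|ts|py|go|md|cjs|json|ya?ml)\b")

def lint_prompt(prompt: str) -> list[str]:
    """Return warnings for prompts that stray from the template."""
    warnings = []
    words = prompt.split()
    first = words[0].lower() if words else ""
    if first in VAGUE_VERBS:
        warnings.append("replace the vague verb with a concrete action")
    elif first not in IMPERATIVE_VERBS:
        warnings.append("start with an action verb (fix/add/update/...)")
    if not (URL_RE.search(prompt) or FILE_RE.search(prompt)):
        warnings.append("add a context link or a specific file reference")
    return warnings

print(lint_prompt("Improve the code"))
# -> ['replace the vague verb with a concrete action',
#     'add a context link or a specific file reference']
print(lint_prompt("Fix broken links in docs https://github.com/org/repo/issues/123"))
# -> []
```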
Statistical Highlights
- Prompts with GitHub links: ~6% higher success rate
- Prompts with file references: ~9% higher success rate
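To judge whether gaps of this size are statistically meaningful, one can run a standard two-proportion z-test on counts reconstructed from the report's percentages. This test is not part of the original analysis; the sketch assumes 766 merged and 233 closed PRs as the group sizes.

```python
# Sketch: two-proportion z-test for "49.2% of merged vs 39.9% of closed
# PRs reference files". Counts are reconstructed from the report's
# percentages and totals; the test itself is not from the original report.
from math import erf, sqrt

def two_proportion_z(k1: int, n1: int, k2: int, n2: int) -> tuple[float, float]:
    """Return (z, two-sided p-value) for H0: the two proportions are equal."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# File references: ~49.2% of 766 merged vs ~39.9% of 233 closed PRs.
z, p = two_proportion_z(round(0.492 * 766), 766, round(0.399 * 233), 233)
print(f"z = {z:.2f}, p = {p:.3f}")  # roughly z ≈ 2.5, p ≈ 0.01
```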
Conclusion
The analysis reveals that the Copilot coding agent is highly effective, with a 76.6% merge rate. Success is strongly correlated with:
- Specific context (GitHub links, file references)
- Clear actions (imperative verbs, defined scope)
- Well-defined tasks (documentation and testing excel)
Developers can improve success rates by providing specific context, referencing exact files, and using clear imperative instructions rather than vague improvement requests.
Analysis Period: 2025-11-20 (Last 30 days)
Data Source: 1,000 Copilot-generated PRs from githubnext/gh-aw