This analysis examined all 89 .lock.yml files in the .github/workflows/ directory to identify structural patterns, usage trends, and characteristics of agentic workflows in this repository. The analysis reveals a mature ecosystem of AI-powered workflows with standardized patterns, diverse triggers, and consistent safety mechanisms.
Key Findings:
89 total workflows averaging 231 KB per file, representing ~21 MB of workflow definitions
Copilot is the dominant engine (55%), followed by Claude (40%) and Codex (5%)
96% adoption of security firewall patterns across workflows
49% include cache-memory for persistent state across runs
Highly consistent structure, with an average of 7 jobs and 60 steps per workflow
Observation: The overwhelming majority (87.6%) of workflows exceed 100 KB, indicating rich, complex agentic systems with extensive configuration and safety mechanisms.
Trigger Analysis
Trigger Type Distribution
| Trigger Type | Count | Percentage | Usage Pattern |
| --- | --- | --- | --- |
| `issues` | 327 occurrences | 93% of workflows | Most common; workflows respond to issue events |
| `pull_request` | 119 occurrences | 87% of workflows | High PR integration for code review and analysis |
| `workflow_dispatch` | 70 workflows | 79% | Manual triggering widely supported |
| `schedule` | 44 workflows | 49% | Daily/weekly automated tasks |
| `push` | 2 workflows | 2% | Rarely used; most workflows avoid automatic commits |
Key Insight: The dominance of issues and pull_request triggers (93% and 87% respectively) shows that agentic workflows are primarily event-driven and human-initiated, rather than fully autonomous.
Common Trigger Combinations
issues + workflow_dispatch (67 workflows, 75%) - Most popular pattern allowing both automatic and manual execution
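As a rough sketch, this combination compiles down to an `on:` block along the following lines; the specific issue event types are an assumption, not taken from any particular lock file:

```yaml
# Illustrative trigger block for the issues + workflow_dispatch pattern.
# The issue event types listed here are assumptions.
on:
  issues:
    types: [opened, reopened, labeled]
  workflow_dispatch:
```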
Safe Outputs Analysis
Safe outputs ensure that AI-generated content is reviewed before being published. This is a critical security feature.
Safe Output Types Distribution
| Safe Output Type | Workflows | Percentage | Primary Use Case |
| --- | --- | --- | --- |
| `create-discussion` | 29 | 33% | Publishing reports, audits, and analysis to discussion forums |
| `add-comment` | 20 | 22% | Responding to issues/PRs with context-specific information |
| `create-issue` | 16 | 18% | Creating tracking issues for detected problems |
Total workflows with safe outputs: 65 (73% of all workflows)
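For orientation, these output types correspond to a workflow-level safe-outputs configuration. The sketch below mirrors the type names from the table; the nested option names are assumptions, not a verified gh-aw schema:

```yaml
# Hypothetical safe-outputs block; keys mirror the output types listed
# above, and the nested options (category, max) are assumptions.
safe-outputs:
  create-discussion:
    category: audits
  add-comment:
    max: 1
  create-issue:
    max: 2
```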
Discussion Categories Used
When creating discussions, workflows most commonly target:
| Category | Count | Purpose |
| --- | --- | --- |
| audits | 12 | Security audits, code quality reports |
| General | 4 | General announcements and updates |
| Audits | 3 | (Case variant of audits) |
| dev | 2 | Development-related discussions |
| artifacts | 2 | Build artifacts and releases |
| security | 1 | Security-specific reports |
| research | 1 | Research findings |
Note: There's some inconsistency with category naming (e.g., "audits" vs "Audits" vs "audit") that could be standardized.
Workflows with Multiple Safe Outputs
Some workflows use multiple safe output mechanisms for comprehensive reporting. This pattern is less common but indicates sophisticated workflows that need to communicate through multiple channels.
Engine Distribution
The repository uses three primary AI engines to power workflows:
| Engine | Workflows | Percentage | Concurrency Group Pattern |
| --- | --- | --- | --- |
| Copilot | 67 | 55% | `gh-aw-copilot-${{ github.workflow }}` |
| Claude | 48 | 40% | `gh-aw-claude-${{ github.workflow }}` |
| Codex | 6 | 5% | `gh-aw-codex-${{ github.workflow }}` |
Note: Some workflows reference multiple engines, so the workflow counts sum to more than the 89 total
Engine Selection Patterns:
Copilot: Most popular, likely default choice for general-purpose tasks
Claude: Strong showing (40%), often chosen for complex analysis and reasoning tasks
Codex: Minimal usage (5%), possibly deprecated or specialized use cases
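Engine choice presumably surfaces in the workflow source before compilation. A minimal sketch, assuming the frontmatter exposes an `engine` field (an assumption here), which then maps onto the engine-specific concurrency groups shown above:

```yaml
# Assumed frontmatter field for engine selection; verify against the
# gh-aw documentation before relying on it.
engine: copilot   # alternatives: claude, codex
```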
Timeout Patterns
Workflow jobs use timeouts to prevent runaway executions:
| Timeout (minutes) | Frequency | Percentage | Use Case |
| --- | --- | --- | --- |
| 10 | 206 | 49% | Standard jobs (default) |
| 20 | 99 | 24% | Complex agent tasks |
| 5 | 83 | 20% | Quick tasks (activation, detection) |
| 15 | 16 | 4% | Extended analysis |
| 30+ | 13 | 3% | Very complex workflows |
Average Timeout: ~12 minutes across all jobs
Insight: The clustering around 5-10-20 minute timeouts shows deliberate tiering of job complexity, with most jobs completing quickly (10 min) but allowances for complex AI tasks (20+ min).
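In workflow YAML these tiers are plain `timeout-minutes` settings on each job; a minimal sketch with placeholder job names and steps:

```yaml
# Illustrative timeout tiers; job names and step bodies are placeholders.
jobs:
  detection:
    runs-on: ubuntu-latest
    timeout-minutes: 5      # quick tier (activation/detection-style jobs)
    steps:
      - run: echo "scan outputs"
  agent:
    runs-on: ubuntu-latest
    timeout-minutes: 20     # complex tier (main agent execution)
    steps:
      - run: echo "run the agent"
```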
Concurrency Patterns
Concurrency groups prevent multiple workflow runs from interfering with each other:
| Concurrency Pattern | Count | Purpose |
| --- | --- | --- |
| `gh-aw-${{ github.workflow }}` | 84 | Workflow-level locking (general) |
| `gh-aw-copilot-${{ github.workflow }}` | 67 | Copilot-specific locking |
| `gh-aw-claude-${{ github.workflow }}` | 48 | Claude-specific locking |
| `gh-aw-codex-${{ github.workflow }}` | 6 | Codex-specific locking |
Pattern: Nearly all workflows (96%) use workflow-specific concurrency groups, often combined with engine-specific groups. This prevents:
Multiple runs of same workflow executing simultaneously
Race conditions in state management
Resource contention for LLM API calls
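A simplified sketch of the dominant pattern above; the `cancel-in-progress` setting is an assumption, as the lock files may configure it differently:

```yaml
# Workflow-level concurrency group following the gh-aw naming pattern above.
concurrency:
  group: gh-aw-copilot-${{ github.workflow }}
  cancel-in-progress: false   # assumption; not verified against the lock files
```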
Tool & GitHub Actions Patterns
Most Used GitHub Actions
| Action | Usage Count | Purpose |
| --- | --- | --- |
| `actions/github-script@v8` | 1,278 | JavaScript automation and API calls |
| `actions/upload-artifact@v5` | 743 | Persist agent outputs and results |
| `actions/download-artifact@v6` | 531 | Retrieve outputs from previous jobs |
| `actions/setup-node@v6` | 165 | Node.js environment for agents |
| `actions/checkout@v5` | 133 | Repository checkout |
| `actions/cache@v4` | 39 | Cache dependencies and memory |
| `actions/setup-go@v5` | 18 | Go environment setup |
| `actions/setup-python@v5` | 13 | Python environment setup |
| `astral-sh/setup-uv` | 11 | Modern Python package installer |
Key Observations:
Heavy GitHub Script Usage (1,278 occurrences): Most workflow logic is JavaScript-based using github-script
Artifact-Centric Architecture: Upload (743) and download (531) actions show workflows heavily use artifacts for inter-job communication
Node.js Dominance: With 165 setup-node uses, Node.js is the primary runtime environment
Multi-Language Support: Go (18), Python (13), showing polyglot workflow capabilities
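A minimal sketch of that github-script-plus-artifact hand-off; the payload, file name, and artifact name are invented for illustration:

```yaml
# Sketch of the github-script + artifact pattern; payload and names are
# illustrative assumptions, not taken from the lock files.
steps:
  - name: Collect agent output
    uses: actions/github-script@v8
    with:
      script: |
        const fs = require('fs');
        // Stand-in for the agent's structured output.
        fs.writeFileSync('agent-output.json', JSON.stringify({ ok: true }));
  - name: Upload agent output
    uses: actions/upload-artifact@v5
    with:
      name: agent-output
      path: agent-output.json
```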
MCP Server Usage
Total MCP Server Mentions: 961 across all files
Average per Workflow: ~11 MCP server references
While specific MCP servers aren't individually tracked in this analysis, the high frequency of mentions (961) indicates extensive use of Model Context Protocol for structured AI-agent interactions.
Feature Adoption Analysis
Security & Safety Features
| Feature | Workflows | Adoption Rate | Purpose |
| --- | --- | --- | --- |
| Firewall Detection | 85 | 96% | Security scanning and XPIA protection |
| Agent Job | 85 | 96% | Main AI agent execution |
| Activation Job | 85 | 96% | Workflow validation and initialization |
| Conclusion Job | 77 | 87% | Cleanup and result summary |
| Detection Job | 76 | 85% | Output validation and safety checks |
| Cache Memory | 44 | 49% | Persistent state across runs |
Insights:
Near-Universal Firewall (96%): Strong security posture with XPIA protection
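The cache-memory feature in the table above is built on `actions/cache`. A rough sketch, assuming the `/tmp/gh-aw/cache-memory` path referenced later in the Methodology section and an invented key scheme:

```yaml
# Sketch of persistent cache memory; the key scheme is an assumption,
# the path follows the /tmp/gh-aw/cache-memory convention noted in this report.
- name: Restore cache memory
  uses: actions/cache@v4
  with:
    path: /tmp/gh-aw/cache-memory
    key: gh-aw-memory-${{ github.workflow }}-${{ github.run_id }}
    restore-keys: |
      gh-aw-memory-${{ github.workflow }}-
```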
File Size Distribution
Size Statistics:
- Smallest: test-claude-oauth-workflow.lock.yml (80 KB)
- Largest: poem-bot.lock.yml (416 KB)
Schedule Patterns
Most Common Cron Schedules:
- `0 9 * * *`
- `0 13 * * 1-5`
- `0 0,6,12,18 * * *`
- `0 9 * * 1-5`
- `0 10 * * *`

Insight: Scheduled workflows favor business hours (9 AM - 1 PM UTC) and weekday schedules, suggesting they're designed to support human workflows rather than 24/7 automation.
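Expressed as a trigger, the most common schedule (daily at 09:00 UTC) looks like:

```yaml
on:
  schedule:
    - cron: "0 9 * * *"   # daily at 09:00 UTC
```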
Structural Characteristics
Job Complexity
Common Job Types:
- activation (85 workflows, 96%) - Entry point and validation
- agent (85 workflows, 96%) - Main AI agent execution
- detection (76 workflows, 85%) - Firewall and safety checks
- conclusion (77 workflows, 87%) - Cleanup and summary
- Safe-output jobs such as create_discussion, create_issue, and add_comment

Insight: The near-universal adoption of the activation → agent → detection → conclusion pattern shows strong standardization across the workflow ecosystem.
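A skeletal view of that chain, using `needs:` to enforce ordering; step bodies are placeholders, and only the dependency wiring reflects the observed pattern:

```yaml
# Dependency skeleton of the activation → agent → detection → conclusion chain.
jobs:
  activation:
    runs-on: ubuntu-latest
    steps:
      - run: echo "validate trigger and context"
  agent:
    needs: activation
    runs-on: ubuntu-latest
    steps:
      - run: echo "run the AI agent"
  detection:
    needs: agent
    runs-on: ubuntu-latest
    steps:
      - run: echo "scan and validate agent output"
  conclusion:
    needs: [agent, detection]
    if: always()
    runs-on: ubuntu-latest
    steps:
      - run: echo "summarize and clean up"
```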
Step Complexity
Typical Workflow Structure: on average, a workflow defines about 7 jobs and roughly 60 steps (see Key Findings above).
Average Lock File Anatomy
Based on statistical analysis, a typical .lock.yml file weighs in around 231 KB and follows the activation → agent → detection → conclusion job chain described above.
Permission Patterns
Permission Distribution Insights
The contents permission shows a 14:1 read/write ratio, indicating workflows primarily read code but rarely modify it.
Permission Combinations
Most workflows follow one of these patterns:
- Read-only analysis: `contents: read`, `issues: read`, `pull-requests: read`
- Issue/PR writers: `issues: write`, `pull-requests: write`
- Discussion writers: `discussions: write`
- Full Access (5 workflows): includes `contents: write` for automated fixes
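In workflow YAML, the read-only pattern above is simply a job- or workflow-level `permissions` block:

```yaml
# Minimal read-only permission set matching the most common pattern above.
permissions:
  contents: read
  issues: read
  pull-requests: read
```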
Workflow Naming Patterns
The repository follows consistent naming conventions:
- test-* workflows: test-claude-oauth-workflow, test-secret-masking
- daily-* workflows: daily-news, daily-code-metrics, daily-team-status
- smoke-* workflows: smoke-claude, smoke-copilot, smoke-detector
- copilot-* workflows: copilot-pr-nlp-analysis, copilot-session-insights
- Descriptive task names: grumpy-reviewer, semantic-function-refactor, etc.
Interesting Findings
1. High Standardization Despite Diversity
With 89 workflows serving different purposes, there's remarkable consistency in job structure, triggers, and safety mechanisms. This suggests strong governance and templates in workflow creation.
2. Security-First Architecture
The security posture is exceptionally strong for AI-powered automation: 96% of workflows include firewall detection, and 85% run a dedicated detection job that validates outputs.
3. Copilot Dominates, But Claude is Strong
The split between Copilot (55%) and Claude (40%) suggests Copilot serves as the general-purpose default, while Claude is often chosen for complex analysis and reasoning tasks.
4. Workflows are Interactive, Not Autonomous
The dominance of issue and pull request triggers, and the near-absence of push triggers, shows agentic workflows are designed as AI assistants for humans, not autonomous agents.
5. Cache Memory is Growing (49% Adoption)
Nearly half of workflows use persistent cache memory, indicating growing reliance on state that carries across runs.
6. Artifact-Heavy Architecture
Workflows extensively use GitHub Actions artifacts to persist agent outputs and to pass results between jobs.
7. Discussion Categories Need Standardization
The analysis found inconsistent naming (e.g., "audits" vs "Audits" vs "audit").
A style guide for discussion categories would improve organization.
8. Minimal Use of Push Triggers (2%)
Only 2 workflows use push triggers, showing extreme caution about automatic code modifications. This is a strong safety signal.
Recommendations
Based on this analysis, here are recommendations for the workflow ecosystem:
1. Standardize Discussion Category Names
2. Increase Cache Memory Adoption
3. Document Engine Selection Guidelines
4. Sunset Codex (5% usage)
5. Create Workflow Complexity Tiers
Define standard tiers based on the observed clustering of timeouts and job counts (for example, quick 5-minute jobs, standard 10-minute jobs, and complex 20+ minute agent tasks).
6. Optimize Timeout Values
Current distribution: 49% use 10 min, 24% use 20 min
7. Expand Smoke Testing
Methodology
Data Collection
All 89 .github/workflows/*.lock.yml files were examined.
Analysis Approach
- File sizes measured via `ls -l` and byte counting
- Trigger types extracted from `on:` section keywords
- Step counts derived from `- name:` patterns
Cache Memory
Analysis scripts are stored in /tmp/gh-aw/cache-memory/scripts/ for reuse:
- comprehensive_analysis.sh - Main analysis script
- detailed_stats.sh - Extended statistics
- history/2025-11-21-analysis.json - Saved analysis results
Generated by Lockfile Statistics Analysis Agent on 2025-11-21