From 5f1871d81bc1b3efd5991a40b43609463b0af914 Mon Sep 17 00:00:00 2001 From: Stephen Augustus Date: Tue, 28 Apr 2026 04:46:33 -0400 Subject: [PATCH 01/18] Add AI usage policy for OpenSSF Technical Initiatives Introduce a comprehensive policy governing how AI development tools are used in contributions to OpenSSF Technical Initiatives. The policy establishes a permissive-by-default stance with clear accountability: - Five guiding principles: transparency, responsibility, understanding, respect for maintainer time, and authentic engagement - Disclosure requirements for AI-assisted contributions in PRs - Commit trailer conventions (Generated-by and Co-authored-by) described neutrally, with individual TIs free to choose - Contributor responsibility framework adapted from ASF guidance - Default prohibition on AI-autonomous contributions (no human in the loop), with exceptions for existing bots and TI-defined workflows - Maintainer review checklist and authority to close low-quality AI-generated submissions - AGENTS.md adoption guidance aligned with the AAIF specification - Security considerations section addressing AI-generated code risks, with references to the BEST WG and AI/ML Security WG - TI-level exception framework via TAC governance process - Annual review cadence with trigger-based interim reviews References: Kubernetes contributor guide, ASF Generative Tooling Guidance, RedMonk 2026 AI policy landscape analysis, Scientific Python community guidelines, AGENTS.md specification, and OpenSSF BEST WG Security-Focused Guide for AI Code Assistant Instructions. 
Co-Authored-By: Claude Signed-off-by: Stephen Augustus --- policies/ai.md | 279 +++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 279 insertions(+) create mode 100644 policies/ai.md diff --git a/policies/ai.md b/policies/ai.md new file mode 100644 index 00000000..82b6bb98 --- /dev/null +++ b/policies/ai.md @@ -0,0 +1,279 @@ +# AI Usage in OpenSSF Technical Initiatives + +This policy defines how the OpenSSF approaches the use of AI development tools in contributions to its Technical Initiatives (TIs). It covers AI-assisted and AI-autonomous contributions, disclosure expectations, maintainer review practices, and repository-level agent configuration. + +AI is reshaping open source dynamics. Organizations and foundations that prohibit AI usage risk falling behind; those that accept it uncritically risk quality, security, and legal exposure. This policy takes a pragmatic middle path — permissive by default, with clear accountability expectations — while acknowledging that the landscape is evolving rapidly and that questions around contributor consent, competitive dynamics, and community norms remain open. + +This is a living document. It will be reviewed periodically and updated as the AI landscape evolves (see [Review Cadence](#review-cadence)). + +## Scope + +This policy applies to: + +- Repositories under [github.com/ossf](https://github.com/ossf) and other OpenSSF-controlled organizations +- All interaction types: pull requests, issues, review comments, discussions, and automated activity +- All contributors, regardless of organizational affiliation + +This policy does **not** replace: + +- Individual TI contribution guidelines (`CONTRIBUTING.md`) +- The OpenSSF [Developer Certificate of Origin (DCO)](../dco.md) requirement +- Employer-specific policies that contributors may be subject to + +Individual TIs may adopt **stricter** rules than this policy (see [TI-Level Exceptions](#ti-level-exceptions)) but may not be more permissive. 
+ +## Glossary + +- **AI development tools**: Copilots, agents, editors, and other tools that generate or transform code, documentation, or other project artifacts. +- **AI-assisted contribution**: Any contribution (PR, issue, comment, review) where a human used AI development tools to help produce the work, but the human directed the process and reviewed the output. +- **AI-autonomous contribution**: Any contribution produced and submitted by an AI agent acting independently, without meaningful human direction or review of the specific output. +- **Repo agent config**: Files intended to guide coding agents (e.g., `AGENTS.md`, tool config files, agent instructions). + +## Guiding Principles + +These five principles govern how the OpenSSF approaches AI usage in its Technical Initiatives. They apply to all participants — maintainers, contributors, and reviewers. + +1. **Be transparent.** Disclose AI usage. Don't obscure how contributions were produced. Transparency builds trust with maintainers and the broader community. + +2. **Take responsibility.** The human contributor owns the contribution. AI development tools don't reduce accountability for correctness, security, or licensing. "The AI wrote it" is not a defense. + +3. **Demonstrate understanding.** Contributors must be able to explain what they submit. If you cannot walk through the change and justify its design, it is not ready for review. + +4. **Respect maintainer time.** AI has lowered the cost of producing contributions but not the cost of reviewing them. Contributors must ensure their submissions are worth the review effort and are appropriately scoped. + +5. **Engage authentically.** Interact as if you were interacting with humans, not machines. Review comments, issue discussions, and community interactions should reflect genuine human engagement. + +## Disclosure of AI Usage + +Contributors must disclose AI development tool usage when submitting pull requests to OpenSSF repositories. 
Disclosure is required whenever AI development tools were used at any point in producing the contribution — there is no minimum threshold. + +### Rationale + +Transparency is foundational to open source collaboration. Disclosure enables maintainers to calibrate review effort, supports provenance tracking, and builds trust with the community. Required disclosure aligns the OpenSSF with emerging ecosystem norms — the majority of organizations surveyed in [RedMonk's 2026 analysis](https://redmonk.com/kholterhoff/2026/02/26/generative-ai-policy-landscape-in-open-source/) either require or recommend AI disclosure. + +### Pull Request Disclosure + +Pull request templates should include at minimum: + +1. An attestation that the contributor has read and understood the project's contribution guidelines (`CONTRIBUTING.md`). +2. A declaration of whether AI development tools were used in producing the contribution, and if so, a brief description of how (e.g., "code generation," "test scaffolding," "documentation drafting"). + +The detailed expectations around AI usage should live in `CONTRIBUTING.md`. The PR template captures the attestation; `CONTRIBUTING.md` holds the substance. + +### Commit Trailers + +For machine-parseable provenance tracking, two conventions exist in the ecosystem: + +- `Generated-by: <tool name>` — Recommended by the [Apache Software Foundation](https://www.apache.org/legal/generative-tooling.html). This is a provenance marker indicating which tool was used. It does not imply authorship. +- `Co-authored-by: <tool name> <email>` — Widely used in practice and rendered by GitHub's UI. However, this implies co-authorship, which raises unresolved questions about whether AI development tools can hold authorship or copyright. + +There is no industry standard for AI commit attribution.
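Both conventions are plain git trailers and coexist with the DCO `Signed-off-by:`. As a hedged sketch (the tool name and email below are placeholders, not endorsements of any product), a commit carrying both styles alongside a sign-off might be produced like this:

```shell
# Hedged sketch: AI provenance trailers alongside the DCO sign-off.
# "ExampleCodeAssistant" and its email are placeholder values.
set -e
repo="$(mktemp -d)" && cd "$repo"
git init -q

# One commit carrying both provenance conventions; --signoff appends
# the DCO Signed-off-by trailer using the configured identity.
git -c user.name="Alice Example" -c user.email="alice@example.com" \
    commit --allow-empty -q --signoff \
    -m "Harden config parser input validation" \
    -m "Generated-by: ExampleCodeAssistant v1.2
Co-authored-by: ExampleCodeAssistant <assistant@example.com>"

# Show the full commit message, including all three trailers.
git log -1 --format=%B
```

Inspecting the resulting message shows the provenance trailers and the `Signed-off-by:` line together, illustrating that the conventions are additive rather than competing.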
Current behavior varies across tools: Claude Code and Aider both default to `Co-authored-by:` trailers; GitHub Copilot's coding agent co-authors commits with the developer who assigned the task; other tools (Cursor, Windsurf, OpenAI Codex) either add no attribution by default or do not document their behavior. No major AI coding agent currently defaults to `Generated-by:`. + +AI provenance trailers are recommended but not required at the policy level. Individual TIs may require them through their own contribution conventions. AI provenance trailers are additive — they do not replace or conflict with DCO `Signed-off-by:` trailers. Projects using [DCO Bot](https://probot.github.io/apps/dco/) or similar enforcement tools for `Signed-off-by:` are fully compatible with AI provenance trailers. + +## Contributor Responsibility for AI-Assisted Work + +Contributors are fully responsible for AI-assisted contributions. They must understand, test, and be able to explain every change they submit. AI assistance does not reduce contributor responsibility. TIs should document these expectations in their `CONTRIBUTING.md`. + +### Rationale + +AI has lowered the cost of producing contributions but not the cost of reviewing them. Maintainers report that low-quality AI-generated submissions create significant triage burden. Holding contributors to the same standards regardless of tooling ensures review effort is respected and code quality is maintained. 
+ +### Requirements + +All contributors must: + +- **Understand and be able to explain** every meaningful change +- Run tests and verification appropriate for the change +- Keep PRs appropriately scoped; avoid large automated PRs unless coordinated with maintainers +- Write human-readable commit messages explaining "what" and "why" — avoid AI-generated commit messages +- **Do not use AI to respond to review comments** — reviewers expect to engage with the human author ([Kubernetes norms](https://www.k8s.dev/docs/guide/pull-requests/)) + +### Legal Obligations + +Adapted from the [ASF Generative Tooling Guidance](https://www.apache.org/legal/generative-tooling.html), contributors using AI development tools must ensure: + +1. **Tool terms compatibility**: The AI development tool's terms of service do not conflict with the project's open source license or the [Open Source Definition](https://opensource.org/osd). Most major tools grant users rights to generated output, but this varies by plan and terms of service. Contributors should verify. + +2. **No third-party copyrighted material**: AI-generated output does not contain copyrighted third-party code, or if it does, that code is compatible with the project's license. Contributors should review AI output for copied or closely adapted snippets and use code scanning tools where available. + +3. **Reasonable assurance**: Contributors have taken reasonable steps to verify the above — whether by reviewing tool terms, scanning output, or manually reviewing generated code. These conditions require diligence and attestation, not proof — the same model as DCO. + +### Recommendations + +Examples of reasonable AI development tool uses include, but are not limited to: explaining existing code, generating boilerplate, improving grammar for non-native English speakers, brainstorming design alternatives, and scaffolding tests. 
+ +When AI is used to transform or adapt existing code (rather than generating from scratch), contributors should note the source in the PR description alongside the AI disclosure. + +## AI-Autonomous Contributions + +All contributions to OpenSSF repositories must have a responsible human who directed the work and can explain and defend it. Fully autonomous AI contributions — where an agent acts independently without meaningful human oversight of the specific output — are not accepted by default. + +### Rationale + +Autonomous AI agents can produce high volumes of contributions without the human accountability that open source collaboration depends on. This creates disproportionate review burden, degrades signal-to-noise in issue trackers, and undermines the authentic human engagement that sustains open source communities. No step in the contribution workflow should happen without a human in the loop. + +### Details + +The following autonomous agent behaviors are not acceptable: + +- **Automated PR submission**: Agents opening pull requests without a human reviewing and approving the specific changes before submission +- **Automated issue filing**: Agents filing bug reports or feature requests without a human verifying the issue reflects a genuine, real-world use case +- **Issue claiming**: Agents claiming issues (especially "good first issue" labels) without a human intending to personally follow through on the work +- **Automated review comments**: Agents posting unsolicited code review feedback on others' pull requests +- **Automated discussion responses**: Agents responding in issue or discussion threads without human oversight of the specific response + +A human using an AI development tool to *draft* any of the above — and then reviewing, editing, and submitting the output themselves — is an **AI-assisted** contribution, not an AI-autonomous one, and is acceptable under this policy. 
+ +### Exceptions + +**Existing GitHub Apps and bots** (e.g., Dependabot, Scorecard, CI bots) authorized through existing OpenSSF governance processes are not subject to this policy. They predate it and are governed by their own approval processes. + +Individual TIs may define acceptable autonomous agent interactions in their `AGENTS.md` file, provided those interactions are scoped, documented, and have a responsible human owner. For example, a project might allow an agent to run CI checks and post structured results, as long as a maintainer owns the agent configuration. + +## Maintainer Review of AI-Assisted Contributions + +Maintainers should review AI-assisted contributions on technical merit and project standards. They may close contributions that are not reviewable or that impose disproportionate review cost. + +### Rationale + +Maintainer time is the scarcest resource in open source. AI development tools have increased the volume of contributions without increasing review capacity. Maintainers need clear authority and consistent practices for handling AI-assisted and AI-autonomous submissions efficiently. + +### Reviewing Pull Requests + +Suggested maintainer checklist: + +- Does the PR have a clear "what/why"? +- Is it appropriately scoped? +- Do tests/verification cover the change? +- Can the author explain the design and tradeoffs? +- Has the contributor disclosed AI usage as required? +- Any license/provenance red flags? (Review AI output for copied snippets, unusual style, or comments referencing other projects) + +When a PR does not meet project standards, maintainers should first engage with the contributor — ask clarifying questions, request changes, and give the contributor a chance to demonstrate understanding. If the contributor is unwilling or unable to adhere to the project's contribution standards, maintainers may close the PR with a respectful rationale (per [Kubernetes norms](https://www.k8s.dev/docs/guide/pull-requests/)). 
The close message should reference this policy and offer the contributor the opportunity to reopen the PR if they are willing to meet the project's expectations. + +For obvious spam or fully autonomous submissions with no human behind them, immediate closure is appropriate as a defensive measure. + +### Handling AI-Generated Issues and Comments + +Maintainers may: + +- Close AI-generated issues that lack genuine use cases or real-world context +- Remove or flag AI-generated review comments that are unsolicited or misleading +- Reclaim issues that were claimed by automated agents without follow-through + +When closing or removing AI-generated content, use a respectful, consistent rationale that references this policy. + +### Proactive Recommendations + +- Apply the project's existing commit conventions to AI-assisted contributions the same way as any other contribution. Projects without commit conventions may consider adopting basic standards (e.g., conventional commits, "what/why" format, squash-on-merge policy). +- **Reduce the attack surface for low-quality AI contributions proactively:** + - Keep `CONTRIBUTING.md`, `README.md`, and issue templates current — outdated or vague guidance increases the likelihood that AI agents produce off-target submissions. + - Minimize unnecessary dependency sprawl — each dependency is an additional surface for automated "bump" or "fix" PRs. + - Use clear, specific "good first issue" labels and remove stale ones — AI agents often target these labels for automated issue claiming. + +## AGENTS.md in OpenSSF Repositories + +OpenSSF repositories should include an `AGENTS.md` file to set expectations for AI agent behavior. + +### Rationale + +`AGENTS.md` is a tool-agnostic standard for providing instructions to AI coding agents interacting with a repository, stewarded by the [Agentic AI Foundation (AAIF)](https://aaif.io/). 
Adopting `AGENTS.md` serves as a mitigation tool — it sets explicit expectations for how AI agents should behave in a repo, reducing low-quality automated contributions. + +### Details + +OpenSSF repos should follow the [AGENTS.md specification](https://agents.md/) for file naming and placement: + +- Place an `AGENTS.md` file at the repository root, consistent with other community health files (`README.md`, `CONTRIBUTING.md`, `SECURITY.md`). +- For monorepos, nested `AGENTS.md` files may be placed in subpackages — the closest `AGENTS.md` to the edited file takes precedence. + +`AGENTS.md` files in OpenSSF repositories must follow these guidelines: + +- **Tool-agnostic content only.** Do not reference specific vendors or products. The file should work with any AI coding agent. +- **No confidential information.** Do not include non-public architecture details, infrastructure references, or information that is not already public in the repo. +- **Project-specific instructions.** Focus on contribution standards, testing requirements, coding conventions, and review expectations specific to the project. +- **Keep it maintained.** An outdated `AGENTS.md` is worse than none — it will produce contributions that don't match current project expectations. Treat it as a living document alongside `CONTRIBUTING.md`. +- **Maintainer-controlled.** `AGENTS.md` and other agent configuration files are maintained by project maintainers, not external contributors. Pull requests from external contributors that add or modify agent configuration files should not be accepted — this prevents tool-specific config sprawl and ensures agent instructions remain consistent with the project's contribution standards. + +For security-specific agent instructions, TIs should reference the [Security-Focused Guide for AI Code Assistant Instructions](https://best.openssf.org/Security-Focused-Guide-for-AI-Code-Assistant-Instructions) published by the OpenSSF Best Practices Working Group. 
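To make the guidelines above concrete, a minimal tool-agnostic starting point might look like the following sketch. The section names and wording are illustrative assumptions, not mandated by the AGENTS.md specification or by this policy:

```shell
# Hedged sketch: bootstrap a minimal, tool-agnostic AGENTS.md at the
# repository root. Headings and wording are illustrative only.
set -e
cd "$(mktemp -d)"   # scratch directory standing in for a repo root
cat > AGENTS.md <<'EOF'
# Agent Instructions

## Contribution standards
- Read and follow CONTRIBUTING.md; keep changes narrowly scoped.
- Every pull request must have a responsible human who reviewed the
  output and can explain it; do not submit work autonomously.

## Testing
- Run the project's test suite before proposing changes and describe
  the verification performed in the PR description.

## Agent configuration
- Do not add or modify agent configuration files; they are
  maintainer-controlled.
EOF
```

A file like this restates the project's existing expectations rather than inventing new ones, which keeps it consistent with `CONTRIBUTING.md` and cheap to maintain.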
+ +## Security Considerations + +As a security-focused foundation, the OpenSSF must be particularly attentive to the security implications of AI-generated contributions. This section highlights risks and review practices specific to AI-assisted work in security-critical open source projects. + +### AI-Generated Code Security Risks + +AI development tools can introduce security vulnerabilities through several mechanisms: + +- **Insecure code patterns**: AI models may generate code with injection flaws, insecure defaults, improper input validation, or other [OWASP Top 10](https://owasp.org/www-project-top-ten/) vulnerabilities. +- **Outdated security practices**: AI-generated code may use deprecated cryptographic algorithms, insecure random number generation, or other patterns that were acceptable when the training data was produced but are no longer considered safe. +- **Dependency risks**: AI tools may suggest unmaintained, compromised, or non-existent packages (dependency confusion / hallucinated packages), introducing supply chain risk. +- **Sensitive data exposure**: Contributors must not include secrets, credentials, or sensitive data in AI tool prompts, as this data may be logged, stored, or used for model training depending on the tool's terms of service. + +### Security Review of AI-Assisted Contributions + +Maintainers of OpenSSF TIs should apply heightened scrutiny to AI-assisted contributions that touch security-sensitive areas, including: + +- Cryptographic implementations or configurations +- Authentication and authorization logic +- Input validation and sanitization +- Supply chain tooling (build systems, dependency resolution, artifact signing) +- Security policy files (`SECURITY.md`, vulnerability disclosure processes) + +For security-critical changes, maintainers should verify that AI-generated tests are not merely testing the AI-generated implementation against itself. 
Domain expert review is strongly recommended for any AI-assisted changes to security-critical logic. + +### Relevant OpenSSF Resources + +- [AI/ML Security Working Group](https://github.com/ossf/ai-ml-security) — Addresses security challenges unique to AI/ML in open source +- [Security-Focused Guide for AI Code Assistant Instructions](https://best.openssf.org/Security-Focused-Guide-for-AI-Code-Assistant-Instructions) — Best practices for configuring AI coding agents with security in mind +- [SAFE Framework](https://github.com/SAFE-MCP/safe-mcp) — Security framework for AI agent interactions (OpenSSF Sandbox project) + +## TI-Level Exceptions + +This policy establishes a permissive default for OpenSSF Technical Initiatives. Individual TIs may adopt stricter rules — up to and including a prohibition on AI-assisted contributions — if justified by the TI's risk profile. + +### Criteria + +Criteria that may justify stricter rules: + +- **Security-critical code**: Cryptography, authentication, access control, or code that directly handles secrets +- **Infrastructure code**: Code that runs in production environments where bugs have outsized impact +- **Regulatory or compliance requirements**: Projects subject to specific regulatory obligations +- **Maintainer capacity constraints**: Projects where the maintainer team has explicitly documented that AI-assisted contribution volume exceeds review capacity + +### Process + +1. TI leads propose the exception with a documented rationale, submitted as a PR to this repository. +2. The PR is reviewed per the [TAC Decision Process](../process/TAC-Decision-Process.md) as a Content-type decision (72h review, 3 approvers). +3. Approved exceptions must be documented in the TI's `CONTRIBUTING.md` so contributors can see the rules before submitting. +4. Exceptions include a review date (maximum 12 months from approval) at which point the exception must be renewed or it lapses. 
+ +## Review Cadence + +This policy will be reviewed on at least an annual basis. Reviews may also be triggered by significant changes in the ecosystem. + +### Review Triggers + +- **Legal developments**: Court rulings or regulatory changes affecting AI-generated code, copyright, or licensing +- **Foundation guidance changes**: Updates to AI contribution guidance by major foundations (ASF, Linux Foundation, CNCF, TODO Group) +- **New AI capabilities**: Emergence of materially new AI development tool capabilities (e.g., fully autonomous agents, new interaction modes) that affect the assumptions in this policy +- **Community incidents**: Significant incidents involving AI-generated contributions in any open source ecosystem that reveal gaps in current guidance +- **TI feedback**: Patterns reported by OpenSSF TI maintainers that suggest this policy needs adjustment + +Review outcomes are documented as updates to this policy document. Changes that modify requirements follow the [TAC Decision Process](../process/TAC-Decision-Process.md) (Content type for requirement changes, Editorial for clarifications). 
+ +## References + +This policy draws from the following sources: + +- [Kubernetes contributor guide: pull requests](https://www.k8s.dev/docs/guide/pull-requests/) — Author responsibility, explainability, discouraging AI-generated commit messages, allowing closure of unreviewable PRs +- [Apache Software Foundation: Generative Tooling Guidance](https://www.apache.org/legal/generative-tooling.html) — Contributor obligations for AI-generated code (tool terms, third-party materials, reasonable assurance), `Generated-by:` trailer convention +- [Kate Holterhoff / RedMonk: The Generative AI Policy Landscape in Open Source](https://redmonk.com/kholterhoff/2026/02/26/generative-ai-policy-landscape-in-open-source/) — Survey of 73 open source organizations' AI guidance; framework for stance, concern dimensions, and disclosure conventions +- [Probabl / scikit-learn maintainers: Maintaining Open Source in the Age of Gen AI](https://blog.probabl.ai/maintaining-open-source-age-of-gen-ai) — Maintainer burden, categories of problematic AI contributions, `AGENTS.md` as mitigation +- [Scientific Python community: Community Considerations Around AI Contributions](https://blog.scientific-python.org/scientific-python/community-considerations-around-ai/) — Guidelines on transparency, responsibility, understanding, and authentic engagement +- [AGENTS.md specification](https://agents.md/) — Standard format and placement for AI coding agent instructions +- [Agentic AI Foundation (AAIF)](https://aaif.io/) — Stewards the AGENTS.md standard under the Linux Foundation +- [OpenSSF: Security-Focused Guide for AI Code Assistant Instructions](https://best.openssf.org/Security-Focused-Guide-for-AI-Code-Assistant-Instructions) — Security best practices for AI coding agent configurations +- [OpenSSF AI/ML Security Working Group](https://github.com/ossf/ai-ml-security) — Working group addressing AI/ML security in open source +- [OpenSSF Developer Certificate of Origin](../dco.md) — DCO requirement for 
OpenSSF contributions +- [OpenSSF TAC Decision Process](../process/TAC-Decision-Process.md) — Governance process for policy changes From 48cfb1fd2c984523b4285f79fe6f782d4cb7b0cd Mon Sep 17 00:00:00 2001 From: Ulises Gascon Date: Tue, 28 Apr 2026 20:12:36 +0200 Subject: [PATCH 02/18] policy(ai): treat AI tool use as workflow, scope disclosure to accountability gaps The current rule requires disclosure for any AI use at any point, with no minimum threshold. This treats AI tool choice as a reviewer-facing signal, but tool choice is heterogeneous and does not change what reviewers are evaluating. Under a strict read it also forces disclosure for comprehension, translation, and grammar polish, which the policy's own Recommendations section already lists as reasonable uses. Reframe the disclosure section so AI tool use is treated as part of a contributor's workflow (comparable to editor, linter, or language server choice) and not disclosed by default. The contributor remains fully responsible under the existing Contributor Responsibility section, with DCO as the formal accountability attestation. Require disclosure only in the two cases where the standard accountability assumption breaks down: 1. AI-autonomous contributions (already defined in the policy). 2. AI-produced content the contributor has not meaningfully reviewed and cannot fully explain. Update the Rationale to carry the shift, replace the PR template tool-use declaration with a review attestation, and detach the "transform or adapt existing code" guidance from "the AI disclosure" so it stands on its own as a source-attribution concern. 
Signed-off-by: Stephen Augustus --- policies/ai.md | 19 +++++++++++++++---- 1 file changed, 15 insertions(+), 4 deletions(-) diff --git a/policies/ai.md b/policies/ai.md index 82b6bb98..255daee4 100644 --- a/policies/ai.md +++ b/policies/ai.md @@ -45,18 +45,29 @@ These five principles govern how the OpenSSF approaches AI usage in its Technica ## Disclosure of AI Usage -Contributors must disclose AI development tool usage when submitting pull requests to OpenSSF repositories. Disclosure is required whenever AI development tools were used at any point in producing the contribution — there is no minimum threshold. +The OpenSSF treats AI development tools as part of a contributor's workflow, comparable to editors, linters, or language servers. Tool use itself is not something reviewers need flagged. Contributors are fully responsible for every contribution they submit (see [Contributor Responsibility for AI-Assisted Work](#contributor-responsibility-for-ai-assisted-work)), and the [Developer Certificate of Origin (DCO)](../dco.md) provides the formal accountability attestation. Within that framework, disclosure of AI tool use is **not** required by default. + +Disclosure **is** required in the two cases where the normal accountability assumption breaks down: + +1. **AI-autonomous contributions** (see [AI-Autonomous Contributions](#ai-autonomous-contributions)): the contribution lacks a human who directed the work and reviewed the specific output. +2. **Unreviewed AI-produced content**: the contributor is submitting AI-produced content they have not meaningfully reviewed and cannot fully explain. For example, a large generated change the contributor is asking maintainers to evaluate without having walked through it themselves. + +In both cases, disclosure exists so reviewers know the standard accountability assumption does not hold for the contribution, and can adjust their review accordingly. 
Outside these two cases, AI tool use is treated as workflow and does not require disclosure. ### Rationale -Transparency is foundational to open source collaboration. Disclosure enables maintainers to calibrate review effort, supports provenance tracking, and builds trust with the community. Required disclosure aligns the OpenSSF with emerging ecosystem norms — the majority of organizations surveyed in [RedMonk's 2026 analysis](https://redmonk.com/kholterhoff/2026/02/26/generative-ai-policy-landscape-in-open-source/) either require or recommend AI disclosure. +Disclosure adds value when it tells reviewers something they need to act on. Whether a contributor used an AI tool while working on a contribution does not, on its own, change what reviewers are evaluating. The change still has to be correct, in scope, well-explained, and aligned with project standards. AI tool use is heterogeneous across contributors, and across the lifecycle of a single contribution (research, drafting, refactoring, testing, documentation). Tracking it adds noise without improving review, and works against the policy's own *Respect maintainer time* principle. + +What does change reviewers' work is whether a human is meaningfully behind the contribution. When a contribution is AI-autonomous, or when the contributor is forwarding AI-produced content they have not reviewed, the accountability frame the project relies on (DCO, contributor responsibility, ability to engage in review) is weaker. Those are the cases where reviewers benefit from explicit signal, and where this policy requires disclosure. + +This framing is consistent with the broader policy: the contributor owns the contribution, AI tool use does not reduce responsibility, and "the AI wrote it" is not a defense. 
It is also consistent with [RedMonk's 2026 analysis](https://redmonk.com/kholterhoff/2026/02/26/generative-ai-policy-landscape-in-open-source/) of disclosure norms across open source organizations, which describes disclosure as a mechanism for surfacing accountability concerns rather than as a tracking mechanism for tool use. ### Pull Request Disclosure Pull request templates should include at minimum: 1. An attestation that the contributor has read and understood the project's contribution guidelines (`CONTRIBUTING.md`). -2. A declaration of whether AI development tools were used in producing the contribution, and if so, a brief description of how (e.g., "code generation," "test scaffolding," "documentation drafting"). +2. An attestation that the contributor has reviewed the contribution and can explain its design and tradeoffs. If any AI-produced content in the contribution has not been meaningfully reviewed by the contributor, the PR description should say so and identify which parts, so reviewers can adjust their review accordingly. The detailed expectations around AI usage should live in `CONTRIBUTING.md`. The PR template captures the attestation; `CONTRIBUTING.md` holds the substance. @@ -103,7 +114,7 @@ Adapted from the [ASF Generative Tooling Guidance](https://www.apache.org/legal/ Examples of reasonable AI development tool uses include, but are not limited to: explaining existing code, generating boilerplate, improving grammar for non-native English speakers, brainstorming design alternatives, and scaffolding tests. -When AI is used to transform or adapt existing code (rather than generating from scratch), contributors should note the source in the PR description alongside the AI disclosure. +When existing code is used as the basis for a change (whether transformed by hand or via AI tooling), contributors should note the source in the PR description. 
This is a source-attribution and licensing concern and is independent of whether the contribution otherwise requires disclosure under the rules above. ## AI-Autonomous Contributions From b798ed7f4c61adfa53c742bf3d6b4982bd66b87c Mon Sep 17 00:00:00 2001 From: Ulises Gascon Date: Tue, 28 Apr 2026 20:15:03 +0200 Subject: [PATCH 03/18] policy(ai): allow AI-drafted commit messages reviewed by the contributor The current Requirements list bans AI-generated commit messages outright while permitting AI-drafted-then-human-reviewed output everywhere else. Either the human-reviews-and-owns standard applies throughout, or it does not. Apply it consistently here. Substantive requirement (commit messages must explain what and why) is preserved. Signed-off-by: Stephen Augustus --- policies/ai.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/policies/ai.md b/policies/ai.md index 255daee4..40232ac0 100644 --- a/policies/ai.md +++ b/policies/ai.md @@ -97,7 +97,7 @@ All contributors must: - **Understand and be able to explain** every meaningful change - Run tests and verification appropriate for the change - Keep PRs appropriately scoped; avoid large automated PRs unless coordinated with maintainers -- Write human-readable commit messages explaining "what" and "why" — avoid AI-generated commit messages +- Write commit messages that explain what the change does and why. AI-drafted commit messages are acceptable when the contributor has reviewed them and can stand by what they say, the same standard that applies to any AI-assisted content - **Do not use AI to respond to review comments** — reviewers expect to engage with the human author ([Kubernetes norms](https://www.k8s.dev/docs/guide/pull-requests/)) ### Legal Obligations From c8b980cfbe3f9e6d281832ca5fe3d4b764ec957a Mon Sep 17 00:00:00 2001 From: Ulises Gascon Date: Tue, 28 Apr 2026 20:17:43 +0200 Subject: [PATCH 04/18] policy(ai): pin bot exemption to a fixed effective date Carve-outs should be bounded. 
"Existing" floats forward as new bots get approved, so the exception never closes. Signed-off-by: Stephen Augustus --- policies/ai.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/policies/ai.md b/policies/ai.md index 40232ac0..c30e6cee 100644 --- a/policies/ai.md +++ b/policies/ai.md @@ -138,7 +138,7 @@ A human using an AI development tool to *draft* any of the above — and then re ### Exceptions -**Existing GitHub Apps and bots** (e.g., Dependabot, Scorecard, CI bots) authorized through existing OpenSSF governance processes are not subject to this policy. They predate it and are governed by their own approval processes. +**GitHub Apps and bots authorized through OpenSSF governance processes prior to the effective date of this policy** (e.g., Dependabot, Scorecard, CI bots) are not subject to this policy. They predate it and are governed by their own approval processes. Bots authorized on or after that date are subject to this policy and must satisfy its requirements through whatever autonomous-behavior pathway the policy provides. Individual TIs may define acceptable autonomous agent interactions in their `AGENTS.md` file, provided those interactions are scoped, documented, and have a responsible human owner. For example, a project might allow an agent to run CI checks and post structured results, as long as a maintainer owns the agent configuration. From 1dec7ef207672aed2af2a01e9ba781bbda0f38c1 Mon Sep 17 00:00:00 2001 From: Ulises Gascon Date: Tue, 28 Apr 2026 20:20:23 +0200 Subject: [PATCH 05/18] policy(ai): require maintainer approval for external AGENTS.md changes Blanket non-acceptance forbids good-faith fixes (typos, broken links, outdated commands). Maintainer review already provides the consistency guard the rule is reaching for. 
Signed-off-by: Stephen Augustus --- policies/ai.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/policies/ai.md b/policies/ai.md index c30e6cee..54846326 100644 --- a/policies/ai.md +++ b/policies/ai.md @@ -204,7 +204,7 @@ OpenSSF repos should follow the [AGENTS.md specification](https://agents.md/) fo - **No confidential information.** Do not include non-public architecture details, infrastructure references, or information that is not already public in the repo. - **Project-specific instructions.** Focus on contribution standards, testing requirements, coding conventions, and review expectations specific to the project. - **Keep it maintained.** An outdated `AGENTS.md` is worse than none — it will produce contributions that don't match current project expectations. Treat it as a living document alongside `CONTRIBUTING.md`. -- **Maintainer-controlled.** `AGENTS.md` and other agent configuration files are maintained by project maintainers, not external contributors. Pull requests from external contributors that add or modify agent configuration files should not be accepted — this prevents tool-specific config sprawl and ensures agent instructions remain consistent with the project's contribution standards. +- **Maintainer-controlled.** `AGENTS.md` and other agent configuration files are maintained by project maintainers. Pull requests from external contributors that modify agent configuration files require explicit maintainer review and approval. This prevents tool-specific config sprawl and keeps agent instructions consistent with the project's contribution standards, while still allowing good-faith fixes (typos, broken links, outdated commands) to be reviewed on their merits. For security-specific agent instructions, TIs should reference the [Security-Focused Guide for AI Code Assistant Instructions](https://best.openssf.org/Security-Focused-Guide-for-AI-Code-Assistant-Instructions) published by the OpenSSF Best Practices Working Group. 
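One way a TI might operationalize the maintainer review requirement for agent configuration files is a `CODEOWNERS` rule that routes any change to those files to the maintainer team. A hypothetical sketch (the team name and the Copilot instructions path are illustrative, not prescribed by this policy):

```text
# CODEOWNERS (hypothetical): pull requests touching agent configuration
# files request review from the maintainer team before merge.
/AGENTS.md                        @ossf/example-ti-maintainers
/.github/copilot-instructions.md  @ossf/example-ti-maintainers
```

Combined with branch protection requiring code-owner review, this gives external contributors a path to propose good-faith fixes while keeping approval in maintainers' hands.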
From 17519ac3c95efe75b195b9a3d43e305a4fae4fb8 Mon Sep 17 00:00:00 2001 From: Ulises Gascon Date: Tue, 28 Apr 2026 20:22:30 +0200 Subject: [PATCH 06/18] policy(ai): name the dimensions stricter rules can raise "Stricter" needs a named axis, otherwise rules that tighten one dimension while loosening another get labeled stricter case by case with no principled criterion. Signed-off-by: Stephen Augustus --- policies/ai.md | 13 +++++++++++++ 1 file changed, 13 insertions(+) diff --git a/policies/ai.md b/policies/ai.md index 54846326..895d701a 100644 --- a/policies/ai.md +++ b/policies/ai.md @@ -243,6 +243,19 @@ For security-critical changes, maintainers should verify that AI-generated tests This policy establishes a permissive default for OpenSSF Technical Initiatives. Individual TIs may adopt stricter rules — up to and including a prohibition on AI-assisted contributions — if justified by the TI's risk profile. +### Dimensions + +"Stricter" rules are rules that raise requirements along one or more of the following dimensions, without weakening any other dimension below this policy's defaults: + +- **Disclosure**: requiring disclosure of AI tool use beyond the cases this policy requires. +- **Autonomous contributions**: prohibiting categories of autonomous activity, or applying tighter constraints to autonomous behaviors that would otherwise be permitted. +- **Review process**: requiring additional review steps for AI-assisted contributions (e.g., two-maintainer approval, mandatory domain-expert sign-off for security-critical changes). +- **Provenance**: requiring `Generated-by:` or other provenance trailers that this policy treats as optional. +- **Scope of AI use**: prohibiting AI tool use in specific code areas (e.g., cryptography, authentication) or in the project as a whole. +- **Verification**: requiring specific verification practices for AI-assisted output (e.g., mandatory code scanning for copied snippets, mandatory end-to-end test runs). 
+ +Rules that weaken any dimension below this policy's defaults (for example, waiving DCO sign-off for AI-assisted commits, or accepting autonomous contributions this policy prohibits) are not "stricter" rules and may not be adopted at the TI level. + ### Criteria Criteria that may justify stricter rules: From 50a92fe450b9fb4a4734ef8e42b0411f2c4126ae Mon Sep 17 00:00:00 2001 From: Ulises Gascon Date: Tue, 28 Apr 2026 20:24:14 +0200 Subject: [PATCH 07/18] policy(ai): document the path TIs use to permit autonomous behaviors "Not accepted by default" implies an override path, but the policy only gestures at AGENTS.md without naming what makes a permission valid. Replace the gesture with operational requirements: scoped actions, named maintainer owner, identifiable agent, revocability. Signed-off-by: Stephen Augustus --- policies/ai.md | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/policies/ai.md b/policies/ai.md index 895d701a..d5d49ad0 100644 --- a/policies/ai.md +++ b/policies/ai.md @@ -140,7 +140,14 @@ A human using an AI development tool to *draft* any of the above — and then re **GitHub Apps and bots authorized through OpenSSF governance processes prior to the effective date of this policy** (e.g., Dependabot, Scorecard, CI bots) are not subject to this policy. They predate it and are governed by their own approval processes. Bots authorized on or after that date are subject to this policy and must satisfy its requirements through whatever autonomous-behavior pathway the policy provides. -Individual TIs may define acceptable autonomous agent interactions in their `AGENTS.md` file, provided those interactions are scoped, documented, and have a responsible human owner. For example, a project might allow an agent to run CI checks and post structured results, as long as a maintainer owns the agent configuration. +Individual TIs may permit specific autonomous agent behaviors in their repositories by documenting them in `AGENTS.md`. 
To be valid, a permitted behavior must: + +- Be scoped to specific actions, not open-ended autonomy. For example, "post a structured CI summary on PRs" is in scope; "act on PRs" is not. +- Have a named maintainer owner accountable for the agent's outputs and for revoking access if the behavior becomes harmful. +- Identify the agent's account or bot username so reviewers can recognize it in the contribution stream. +- Be revocable by any maintainer if the behavior produces low-quality, off-target, or harmful output. + +Permitted autonomous behaviors do not waive DCO, contributor responsibility, or the maintainer's authority to close or revert outputs. A TI may not use this mechanism to permit categories of autonomous activity that this policy explicitly prohibits in the [Details](#details) section above. ## Maintainer Review of AI-Assisted Contributions From 1475287837c8fa4b46e542d35cb50dc611c53b2d Mon Sep 17 00:00:00 2001 From: Ulises Gascon Date: Tue, 28 Apr 2026 20:25:33 +0200 Subject: [PATCH 08/18] policy(ai): soften the DCO comparison in Legal Obligations DCO has 20 years of precedent and tested practice. The AI assurance analog has neither. Treat the comparison as a working analogy rather than a settled equivalence so contributors do not assume protections the AI side has not accumulated. Signed-off-by: Stephen Augustus --- policies/ai.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/policies/ai.md b/policies/ai.md index d5d49ad0..2589cb16 100644 --- a/policies/ai.md +++ b/policies/ai.md @@ -108,7 +108,7 @@ Adapted from the [ASF Generative Tooling Guidance](https://www.apache.org/legal/ 2. **No third-party copyrighted material**: AI-generated output does not contain copyrighted third-party code, or if it does, that code is compatible with the project's license. Contributors should review AI output for copied or closely adapted snippets and use code scanning tools where available. -3. 
**Reasonable assurance**: Contributors have taken reasonable steps to verify the above — whether by reviewing tool terms, scanning output, or manually reviewing generated code. These conditions require diligence and attestation, not proof — the same model as DCO. +3. **Reasonable assurance**: Contributors have taken reasonable steps to verify the above, whether by reviewing tool terms, scanning output, or manually reviewing generated code. These conditions ask for diligence and attestation, not proof. The framing is analogous to DCO, but the AI legal landscape (copyright in generated output, training-data provenance, derivative-work questions) is unsettled and lacks DCO's accumulated precedent and tested practice. Contributors and TIs should treat the analogy as a working framework rather than a settled equivalence. ### Recommendations From 91e6eaeb9f0e9b02ff0698548185c55d03f39872 Mon Sep 17 00:00:00 2001 From: Stephen Augustus Date: Wed, 29 Apr 2026 03:22:07 -0400 Subject: [PATCH 09/18] policies/ai: Incorporate arewm's copyedit Co-authored-by: Andrew McNamara Signed-off-by: Stephen Augustus --- policies/ai.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/policies/ai.md b/policies/ai.md index 2589cb16..f59c5b1d 100644 --- a/policies/ai.md +++ b/policies/ai.md @@ -58,7 +58,7 @@ In both cases, disclosure exists so reviewers know the standard accountability a Disclosure adds value when it tells reviewers something they need to act on. Whether a contributor used an AI tool while working on a contribution does not, on its own, change what reviewers are evaluating. The change still has to be correct, in scope, well-explained, and aligned with project standards. AI tool use is heterogeneous across contributors, and across the lifecycle of a single contribution (research, drafting, refactoring, testing, documentation). Tracking it adds noise without improving review, and works against the policy's own *Respect maintainer time* principle. 
-What does change reviewers' work is whether a human is meaningfully behind the contribution. When a contribution is AI-autonomous, or when the contributor is forwarding AI-produced content they have not reviewed, the accountability frame the project relies on (DCO, contributor responsibility, ability to engage in review) is weaker. Those are the cases where reviewers benefit from explicit signal, and where this policy requires disclosure. +Reviewers' work will change depending on whether a human is meaningfully behind the contribution. When a contribution is AI-autonomous, or when the contributor is forwarding AI-produced content they have not reviewed, the accountability frame the project relies on (DCO, contributor responsibility, ability to engage in review) is weaker. Those are the cases where reviewers benefit from explicit signal, and where this policy requires disclosure. This framing is consistent with the broader policy: the contributor owns the contribution, AI tool use does not reduce responsibility, and "the AI wrote it" is not a defense. It is also consistent with [RedMonk's 2026 analysis](https://redmonk.com/kholterhoff/2026/02/26/generative-ai-policy-landscape-in-open-source/) of disclosure norms across open source organizations, which describes disclosure as a mechanism for surfacing accountability concerns rather than as a tracking mechanism for tool use. 
From 6e27916b0a9659d6de48b08941cfe64b53ce9017 Mon Sep 17 00:00:00 2001 From: Stephen Augustus Date: Wed, 29 Apr 2026 03:46:29 -0400 Subject: [PATCH 10/18] Apply suggestion from @justaugustus Signed-off-by: Stephen Augustus --- policies/ai.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/policies/ai.md b/policies/ai.md index f59c5b1d..4eea279a 100644 --- a/policies/ai.md +++ b/policies/ai.md @@ -143,7 +143,7 @@ A human using an AI development tool to *draft* any of the above — and then re Individual TIs may permit specific autonomous agent behaviors in their repositories by documenting them in `AGENTS.md`. To be valid, a permitted behavior must: - Be scoped to specific actions, not open-ended autonomy. For example, "post a structured CI summary on PRs" is in scope; "act on PRs" is not. -- Have a named maintainer owner accountable for the agent's outputs and for revoking access if the behavior becomes harmful. +- Have a named maintainer team (ideally, more than one person) accountable for the agent's outputs and for revoking access if the behavior becomes harmful. - Identify the agent's account or bot username so reviewers can recognize it in the contribution stream. - Be revocable by any maintainer if the behavior produces low-quality, off-target, or harmful output.
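An `AGENTS.md` entry satisfying the four requirements above might look like the following hypothetical sketch (the team, bot account, and behavior names are invented for illustration):

```markdown
## Permitted autonomous behaviors

### CI summary comments
- **Scope:** post one structured CI summary comment per pull request.
  No commits, reviews, merges, or issue triage.
- **Owners:** @ossf/example-ti-maintainers, accountable for the agent's
  outputs and for revoking its access.
- **Agent account:** `example-ci-summary-bot`
- **Revocation:** any maintainer may disable the agent's workflow if it
  produces low-quality, off-target, or harmful output.
```

Because the entry names the scope, the owning team, and the bot account, reviewers can recognize the agent in the contribution stream and know who to contact when its output misbehaves.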
From f20ee1345ac9407541918a2f7687a73ac5951bc7 Mon Sep 17 00:00:00 2001 From: Stephen Augustus Date: Wed, 29 Apr 2026 04:10:43 -0400 Subject: [PATCH 11/18] policy(ai): align Legal Obligations with LF Generative AI Policy - Strengthen third-party copyrighted material obligations to require notice, attribution, and license terms per LF rule #2 - Acknowledge practical challenge of identifying third-party materials in AI output, as raised by @lehors - Add LF Generative AI Policy to References as foundation-level guidance - Reorder references to lead with foundation policies (LF, ASF) - Remove ASF-specific intro from Legal Obligations since obligations now draw from both LF and ASF guidance Co-Authored-By: Claude Signed-off-by: Stephen Augustus --- policies/ai.md | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/policies/ai.md b/policies/ai.md index 4eea279a..3dfc3d48 100644 --- a/policies/ai.md +++ b/policies/ai.md @@ -102,11 +102,11 @@ All contributors must: ### Legal Obligations -Adapted from the [ASF Generative Tooling Guidance](https://www.apache.org/legal/generative-tooling.html), contributors using AI development tools must ensure: +Contributors using AI development tools must ensure: 1. **Tool terms compatibility**: The AI development tool's terms of service do not conflict with the project's open source license or the [Open Source Definition](https://opensource.org/osd). Most major tools grant users rights to generated output, but this varies by plan and terms of service. Contributors should verify. -2. **No third-party copyrighted material**: AI-generated output does not contain copyrighted third-party code, or if it does, that code is compatible with the project's license. Contributors should review AI output for copied or closely adapted snippets and use code scanning tools where available. +2. 
**No third-party copyrighted material**: AI-generated output does not contain copyrighted third-party code, or if it does, that code is compatible with the project's license. When pre-existing copyrighted materials are included in AI-generated output, contributors should confirm they have permission to use and modify those materials (such as an open source license or public domain declaration that complies with the project's licensing policies), provide notice and attribution of third-party rights, and include information about the applicable license terms with their contribution. Contributors should review AI output for copied or closely adapted snippets and use code scanning tools where available. Note: as the [Linux Foundation's Generative AI Policy](https://www.linuxfoundation.org/legal/generative-ai) acknowledges, reliably identifying third-party materials in AI-generated output remains a practical challenge with current tools. 3. **Reasonable assurance**: Contributors have taken reasonable steps to verify the above, whether by reviewing tool terms, scanning output, or manually reviewing generated code. These conditions ask for diligence and attestation, not proof. The framing is analogous to DCO, but the AI legal landscape (copyright in generated output, training-data provenance, derivative-work questions) is unsettled and lacks DCO's accumulated precedent and tested practice. Contributors and TIs should treat the analogy as a working framework rather than a settled equivalence. @@ -286,7 +286,7 @@ This policy will be reviewed on at least an annual basis. 
Reviews may also be tr ### Review Triggers - **Legal developments**: Court rulings or regulatory changes affecting AI-generated code, copyright, or licensing -- **Foundation guidance changes**: Updates to AI contribution guidance by major foundations (ASF, Linux Foundation, CNCF, TODO Group) +- **Foundation guidance changes**: Updates to AI contribution guidance by major foundations (Linux Foundation, CNCF, TODO Group, Apache Software Foundation) - **New AI capabilities**: Emergence of materially new AI development tool capabilities (e.g., fully autonomous agents, new interaction modes) that affect the assumptions in this policy - **Community incidents**: Significant incidents involving AI-generated contributions in any open source ecosystem that reveal gaps in current guidance - **TI feedback**: Patterns reported by OpenSSF TI maintainers that suggest this policy needs adjustment @@ -297,9 +297,10 @@ Review outcomes are documented as updates to this policy document. Changes that This policy draws from the following sources: -- [Kubernetes contributor guide: pull requests](https://www.k8s.dev/docs/guide/pull-requests/) — Author responsibility, explainability, discouraging AI-generated commit messages, allowing closure of unreviewable PRs +- [Linux Foundation: Generative AI Policy](https://www.linuxfoundation.org/legal/generative-ai) — Foundation-level guidance on AI-generated contributions, tool terms compliance, and third-party content attribution obligations - [Apache Software Foundation: Generative Tooling Guidance](https://www.apache.org/legal/generative-tooling.html) — Contributor obligations for AI-generated code (tool terms, third-party materials, reasonable assurance), `Generated-by:` trailer convention - [Kate Holterhoff / RedMonk: The Generative AI Policy Landscape in Open Source](https://redmonk.com/kholterhoff/2026/02/26/generative-ai-policy-landscape-in-open-source/) — Survey of 73 open source organizations' AI guidance; framework for stance, concern 
dimensions, and disclosure conventions +- [Kubernetes contributor guide: pull requests](https://www.k8s.dev/docs/guide/pull-requests/) — Author responsibility, explainability, discouraging AI-generated commit messages, allowing closure of unreviewable PRs - [Probabl / scikit-learn maintainers: Maintaining Open Source in the Age of Gen AI](https://blog.probabl.ai/maintaining-open-source-age-of-gen-ai) — Maintainer burden, categories of problematic AI contributions, `AGENTS.md` as mitigation - [Scientific Python community: Community Considerations Around AI Contributions](https://blog.scientific-python.org/scientific-python/community-considerations-around-ai/) — Guidelines on transparency, responsibility, understanding, and authentic engagement - [AGENTS.md specification](https://agents.md/) — Standard format and placement for AI coding agent instructions From 8954bd22cb3a93fd84c1170f7661e9c35e1922a5 Mon Sep 17 00:00:00 2001 From: Stephen Augustus Date: Wed, 29 Apr 2026 14:49:23 -0400 Subject: [PATCH 12/18] Apply suggestion from @arewm Co-authored-by: Andrew McNamara Signed-off-by: Stephen Augustus --- policies/ai.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/policies/ai.md b/policies/ai.md index 3dfc3d48..db609f8d 100644 --- a/policies/ai.md +++ b/policies/ai.md @@ -88,7 +88,7 @@ Contributors are fully responsible for AI-assisted contributions. They must unde ### Rationale -AI has lowered the cost of producing contributions but not the cost of reviewing them. Maintainers report that low-quality AI-generated submissions create significant triage burden. Holding contributors to the same standards regardless of tooling ensures review effort is respected and code quality is maintained. +AI has lowered the cost of producing contributions but not the cost of reviewing them. Maintainers report that low-quality AI-generated submissions create significant triage burden. 
Holding contributors to the same standards regardless of tooling ensures review effort is respected and code, issue, and comment quality is maintained. ### Requirements From cf58728ed820e17e319d3e59dd56599c7b712ab1 Mon Sep 17 00:00:00 2001 From: Stephen Augustus Date: Wed, 29 Apr 2026 20:25:05 -0400 Subject: [PATCH 13/18] Rename to "AI Contribution Policy for OpenSSF Technical Initiatives" Signed-off-by: Stephen Augustus --- policies/ai.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/policies/ai.md b/policies/ai.md index db609f8d..c0acd3e9 100644 --- a/policies/ai.md +++ b/policies/ai.md @@ -1,4 +1,4 @@ -# AI Usage in OpenSSF Technical Initiatives +# AI Contribution Policy for OpenSSF Technical Initiatives This policy defines how the OpenSSF approaches the use of AI development tools in contributions to its Technical Initiatives (TIs). It covers AI-assisted and AI-autonomous contributions, disclosure expectations, maintainer review practices, and repository-level agent configuration. From 4e4152813c1bdce93f593b98766279b3f70a1a24 Mon Sep 17 00:00:00 2001 From: Stephen Augustus Date: Wed, 29 Apr 2026 20:36:33 -0400 Subject: [PATCH 14/18] policy(ai): add Assisted-by trailer and fix Co-authored-by language - Add Assisted-by: as a third commit trailer convention, documenting the format emerging from Sigstore and SLSA TIs (AGENT_NAME:MODEL_VERSION [TOOL1] [TOOL2]) - Fix Co-authored-by description: replace "raises unresolved questions" with language reflecting established copyright law (AI cannot hold authorship or copyright), per feedback from funnelfiasco and hen Co-Authored-By: Claude Signed-off-by: Stephen Augustus --- policies/ai.md | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/policies/ai.md b/policies/ai.md index c0acd3e9..4717e49a 100644 --- a/policies/ai.md +++ b/policies/ai.md @@ -73,10 +73,11 @@ The detailed expectations around AI usage should live in `CONTRIBUTING.md`. 
The ### Commit Trailers -For machine-parseable provenance tracking, two conventions exist in the ecosystem: +For machine-parseable provenance tracking, three conventions exist in the ecosystem: - `Generated-by: <tool name>` — Recommended by the [Apache Software Foundation](https://www.apache.org/legal/generative-tooling.html). This is a provenance marker indicating which tool was used. It does not imply authorship. -- `Co-authored-by: <AI tool>` — Widely used in practice and rendered by GitHub's UI. However, this implies co-authorship, which raises unresolved questions about whether AI development tools can hold authorship or copyright. +- `Co-authored-by: <AI tool>` — Widely used in practice and rendered by GitHub's UI. However, this implies co-authorship, which is inconsistent with established copyright law holding that copyright requires human creative activity — AI tools cannot hold authorship or copyright. +- `Assisted-by: <AGENT_NAME>:<MODEL_VERSION> [tool1] [tool2]` — Emerging convention adopted by OpenSSF TIs including Sigstore and SLSA. Indicates AI assistance without implying authorship. Example: `Assisted-by: Claude:claude-4.5-sonnet`. There is no industry standard for AI commit attribution. Current behavior varies across tools: Claude Code and Aider both default to `Co-authored-by:` trailers; GitHub Copilot's coding agent co-authors commits with the developer who assigned the task; other tools (Cursor, Windsurf, OpenAI Codex) either add no attribution by default or do not document their behavior. No major AI coding agent currently defaults to `Generated-by:`. From b00e35233628e69cf022af49c0a75dae5776b43d Mon Sep 17 00:00:00 2001 From: Stephen Augustus Date: Wed, 29 Apr 2026 21:03:55 -0400 Subject: [PATCH 15/18] policy(ai): document commit trailer marketing risk Add note that vendor/tool names in commit trailers can be used by third parties to claim project endorsement. Reference Kubernetes' decision to ban AI commit trailers for this reason.
TIs should weigh provenance tracking benefits against this marketing risk. Co-Authored-By: Claude Signed-off-by: Stephen Augustus --- policies/ai.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/policies/ai.md b/policies/ai.md index 4717e49a..2757d684 100644 --- a/policies/ai.md +++ b/policies/ai.md @@ -79,6 +79,8 @@ For machine-parseable provenance tracking, three conventions exist in the ecosys - `Co-authored-by: <AI tool>` — Widely used in practice and rendered by GitHub's UI. However, this implies co-authorship, which is inconsistent with established copyright law holding that copyright requires human creative activity — AI tools cannot hold authorship or copyright. - `Assisted-by: <AGENT_NAME>:<MODEL_VERSION> [tool1] [tool2]` — Emerging convention adopted by OpenSSF TIs including Sigstore and SLSA. Indicates AI assistance without implying authorship. Example: `Assisted-by: Claude:claude-4.5-sonnet`. +TIs adopting commit trailers that include vendor or tool names should be aware that third parties may use the presence of these trailers in merged commits to publicly claim that a project uses or endorses their product. Some projects (notably [Kubernetes](https://www.k8s.dev/docs/guide/pull-requests/#ai-guidance)) have banned AI commit trailers for this reason. TIs should weigh provenance tracking benefits against this marketing risk when deciding whether to require trailers. + There is no industry standard for AI commit attribution. Current behavior varies across tools: Claude Code and Aider both default to `Co-authored-by:` trailers; GitHub Copilot's coding agent co-authors commits with the developer who assigned the task; other tools (Cursor, Windsurf, OpenAI Codex) either add no attribution by default or do not document their behavior. No major AI coding agent currently defaults to `Generated-by:`. AI provenance trailers are recommended but not required at the policy level. Individual TIs may require them through their own contribution conventions.
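Because all three conventions follow git's standard `Key: value` trailer shape, provenance can be extracted from commit messages mechanically. A minimal sketch in Python (the commit message, helper name, and key set are illustrative, not part of the policy):

```python
import re

# Trailer lines live in the final paragraph of a commit message and use a
# "Key: value" shape; the keys below are the conventions discussed above.
PROVENANCE_KEYS = {"Generated-by", "Co-authored-by", "Assisted-by", "Signed-off-by"}
TRAILER = re.compile(r"^([A-Za-z-]+):\s*(.+)$")

def provenance_trailers(message: str) -> list[tuple[str, str]]:
    """Return (key, value) pairs for known provenance trailers."""
    last_block = message.rstrip("\n").split("\n\n")[-1]
    pairs = []
    for line in last_block.splitlines():
        m = TRAILER.match(line.strip())
        if m and m.group(1) in PROVENANCE_KEYS:
            pairs.append((m.group(1), m.group(2)))
    return pairs

msg = (
    "Fix nil-pointer check in parser\n\n"
    "Longer body explaining what and why.\n\n"
    "Assisted-by: Claude:claude-4.5-sonnet\n"
    "Signed-off-by: Jane Dev <jane@example.com>\n"
)
print(provenance_trailers(msg))
```

The sketch also illustrates the additivity point: an `Assisted-by:` trailer and a DCO `Signed-off-by:` trailer coexist in the same trailer block without conflict.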
AI provenance trailers are additive — they do not replace or conflict with DCO `Signed-off-by:` trailers. Projects using [DCO Bot](https://probot.github.io/apps/dco/) or similar enforcement tools for `Signed-off-by:` are fully compatible with AI provenance trailers. From 77365b1703afb2f01a5f8c1b446af4de65c2c8d4 Mon Sep 17 00:00:00 2001 From: Stephen Augustus Date: Wed, 29 Apr 2026 21:06:50 -0400 Subject: [PATCH 16/18] policy(ai): clarify conformant TI guidance needs no TAC approval Add Conformant Project-Specific Guidance subsection to TI-Level Exceptions. TIs are encouraged to document how the foundation policy applies to their project (examples, conventions, operational guidance) in their own CONTRIBUTING.md without TAC approval. Only stricter exceptions require the approval process. Addresses feedback from Hayden-IO (Sigstore/SLSA) who asked why TIs maintaining conformant AI docs would need TAC overhead. Co-Authored-By: Claude Signed-off-by: Stephen Augustus --- policies/ai.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/policies/ai.md b/policies/ai.md index 2757d684..e715c2db 100644 --- a/policies/ai.md +++ b/policies/ai.md @@ -253,6 +253,10 @@ For security-critical changes, maintainers should verify that AI-generated tests This policy establishes a permissive default for OpenSSF Technical Initiatives. Individual TIs may adopt stricter rules — up to and including a prohibition on AI-assisted contributions — if justified by the TI's risk profile. +### Conformant Project-Specific Guidance + +TIs are encouraged to document how this policy applies to their project — including project-specific examples, contribution conventions, and operational guidance — in their own `CONTRIBUTING.md` or equivalent. Conformant project-specific guidance that does not raise requirements beyond this policy's defaults does not require TAC approval. Only exceptions that impose stricter rules (see below) require the approval process. 
+ ### Dimensions "Stricter" rules are rules that raise requirements along one or more of the following dimensions, without weakening any other dimension below this policy's defaults: From 750dcdfffe46bd656d5c5e1eb5924b02f4f2cdfa Mon Sep 17 00:00:00 2001 From: Stephen Augustus Date: Tue, 12 May 2026 01:35:39 -0400 Subject: [PATCH 17/18] Update policies/ai.md Co-authored-by: Zach Steindler Signed-off-by: Stephen Augustus --- policies/ai.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/policies/ai.md b/policies/ai.md index e715c2db..58955982 100644 --- a/policies/ai.md +++ b/policies/ai.md @@ -101,7 +101,7 @@ All contributors must: - Run tests and verification appropriate for the change - Keep PRs appropriately scoped; avoid large automated PRs unless coordinated with maintainers - Write commit messages that explain what the change does and why. AI-drafted commit messages are acceptable when the contributor has reviewed them and can stand by what they say, the same standard that applies to any AI-assisted content -- **Do not use AI to respond to review comments** — reviewers expect to engage with the human author ([Kubernetes norms](https://www.k8s.dev/docs/guide/pull-requests/)) +- **Not use AI to respond to review comments** — reviewers expect to engage with the human author ([Kubernetes norms](https://www.k8s.dev/docs/guide/pull-requests/)) ### Legal Obligations From 858cb2d5256fcea1b49badcb45a276cfb68cc451 Mon Sep 17 00:00:00 2001 From: Stephen Augustus Date: Tue, 12 May 2026 15:34:21 -0400 Subject: [PATCH 18/18] Update policies/ai.md Co-authored-by: David A. Wheeler Signed-off-by: Stephen Augustus --- policies/ai.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/policies/ai.md b/policies/ai.md index 58955982..1351d230 100644 --- a/policies/ai.md +++ b/policies/ai.md @@ -35,7 +35,7 @@ These five principles govern how the OpenSSF approaches AI usage in its Technica 1. **Be transparent.** Disclose AI usage. 
Don't obscure how contributions were produced. Transparency builds trust with maintainers and the broader community. -2. **Take responsibility.** The human contributor owns the contribution. AI development tools don't reduce accountability for correctness, security, or licensing. "The AI wrote it" is not a defense. +2. **Take responsibility.** The human contributor is responsible for the contribution. AI development tools don't reduce accountability for correctness, security, or licensing. "The AI wrote it" is not a defense. 3. **Demonstrate understanding.** Contributors must be able to explain what they submit. If you cannot walk through the change and justify its design, it is not ready for review.