49 changes: 40 additions & 9 deletions policies/ai.md
These five principles govern how the OpenSSF approaches AI usage in its Technical Initiatives.

## Disclosure of AI Usage

The OpenSSF treats AI development tools as part of a contributor's workflow, comparable to editors, linters, or language servers. Tool use itself is not something reviewers need flagged. Contributors are fully responsible for every contribution they submit (see [Contributor Responsibility for AI-Assisted Work](#contributor-responsibility-for-ai-assisted-work)), and the [Developer Certificate of Origin (DCO)](../dco.md) provides the formal accountability attestation. Within that framework, disclosure of AI tool use is **not** required by default.

Disclosure **is** required in the two cases where the normal accountability assumption breaks down:

1. **AI-autonomous contributions** (see [AI-Autonomous Contributions](#ai-autonomous-contributions)): the contribution lacks a human who directed the work and reviewed the specific output.
2. **Unreviewed AI-produced content**: the contributor is submitting AI-produced content they have not meaningfully reviewed and cannot fully explain. For example, a large generated change the contributor is asking maintainers to evaluate without having walked through it themselves.

In both cases, disclosure exists so reviewers know the standard accountability assumption does not hold for the contribution, and can adjust their review accordingly. Outside these two cases, AI tool use is treated as workflow and does not require disclosure.

### Rationale

Disclosure adds value when it tells reviewers something they need to act on. Whether a contributor used an AI tool while working on a contribution does not, on its own, change what reviewers are evaluating. The change still has to be correct, in scope, well-explained, and aligned with project standards. AI tool use is heterogeneous across contributors, and across the lifecycle of a single contribution (research, drafting, refactoring, testing, documentation). Tracking it adds noise without improving review, and works against the policy's own *Respect maintainer time* principle.

What does change reviewers' work is whether a human is meaningfully behind the contribution. When a contribution is AI-autonomous, or when the contributor is forwarding AI-produced content they have not reviewed, the accountability frame the project relies on (DCO, contributor responsibility, ability to engage in review) is weaker. Those are the cases where reviewers benefit from explicit signal, and where this policy requires disclosure.

This framing is consistent with the broader policy: the contributor owns the contribution, AI tool use does not reduce responsibility, and "the AI wrote it" is not a defense. It is also consistent with [RedMonk's 2026 analysis](https://redmonk.com/kholterhoff/2026/02/26/generative-ai-policy-landscape-in-open-source/) of disclosure norms across open source organizations, which describes disclosure as a mechanism for surfacing accountability concerns rather than as a tracking mechanism for tool use.

### Pull Request Disclosure

Pull request templates should include at minimum:

1. An attestation that the contributor has read and understood the project's contribution guidelines (`CONTRIBUTING.md`).
2. An attestation that the contributor has reviewed the contribution and can explain its design and tradeoffs. If any AI-produced content in the contribution has not been meaningfully reviewed by the contributor, the PR description should say so and identify which parts, so reviewers can adjust their review accordingly.

The detailed expectations around AI usage should live in `CONTRIBUTING.md`. The PR template captures the attestation; `CONTRIBUTING.md` holds the substance.
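As a sketch, a PR template implementing these two items might look like the following (the section heading and checkbox wording are illustrative, not prescribed by this policy):

```markdown
## Contributor attestations

- [ ] I have read and understood this project's contribution guidelines (`CONTRIBUTING.md`).
- [ ] I have reviewed this contribution and can explain its design and tradeoffs.

<!-- If any AI-produced content in this PR has not been meaningfully
     reviewed by you, say so here and identify which parts so reviewers
     can adjust their review accordingly. -->
```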

All contributors must:
- **Understand and be able to explain** every meaningful change
- Run tests and verification appropriate for the change
- Keep PRs appropriately scoped; avoid large automated PRs unless coordinated with maintainers
- Write commit messages that explain what the change does and why. AI-drafted commit messages are acceptable when the contributor has reviewed them and can stand by what they say; this is the same standard that applies to any AI-assisted content
- **Do not use AI to respond to review comments** — reviewers expect to engage with the human author ([Kubernetes norms](https://www.k8s.dev/docs/guide/pull-requests/))
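A commit message meeting the expectations above might look like this sketch (the change and author details are hypothetical; the `Signed-off-by` trailer is the standard DCO attestation):

```text
parser: reject oversized length fields before allocation

The previous check ran after the buffer was allocated, so a crafted
length field could trigger a large allocation. Validate the field
against the declared message size before allocating.

Signed-off-by: Jane Developer <jane@example.org>
```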

### Legal Obligations
Adapted from the [ASF Generative Tooling Guidance](https://www.apache.org/legal/):

2. **No third-party copyrighted material**: AI-generated output does not contain copyrighted third-party code, or if it does, that code is compatible with the project's license. Contributors should review AI output for copied or closely adapted snippets and use code scanning tools where available.

3. **Reasonable assurance**: Contributors have taken reasonable steps to verify the above, whether by reviewing tool terms, scanning output, or manually reviewing generated code. These conditions ask for diligence and attestation, not proof. The framing is analogous to DCO, but the AI legal landscape (copyright in generated output, training-data provenance, derivative-work questions) is unsettled and lacks DCO's accumulated precedent and tested practice. Contributors and TIs should treat the analogy as a working framework rather than a settled equivalence.

### Recommendations

Examples of reasonable AI development tool uses include, but are not limited to: explaining existing code, generating boilerplate, improving grammar for non-native English speakers, brainstorming design alternatives, and scaffolding tests.

When existing code is used as the basis for a change (whether transformed by hand or via AI tooling), contributors should note the source in the PR description. This is a source-attribution and licensing concern and is independent of whether the contribution otherwise requires disclosure under the rules above.

## AI-Autonomous Contributions

A human using an AI development tool to *draft* any of the above — and then reviewing, editing, and submitting the result — is not an AI-autonomous contribution.

### Exceptions

**GitHub Apps and bots authorized through OpenSSF governance processes prior to the effective date of this policy** (e.g., Dependabot, Scorecard, CI bots) are not subject to this policy. They predate it and are governed by their own approval processes. Bots authorized on or after that date are subject to this policy and must satisfy its requirements, for example through the TI-level permitted-behavior mechanism described below.

Individual TIs may permit specific autonomous agent behaviors in their repositories by documenting them in `AGENTS.md`. To be valid, a permitted behavior must:

- Be scoped to specific actions, not open-ended autonomy. For example, "post a structured CI summary on PRs" is in scope; "act on PRs" is not.
- Have a named maintainer owner accountable for the agent's outputs and for revoking access if the behavior becomes harmful.
- Identify the agent's account or bot username so reviewers can recognize it in the contribution stream.
- Be revocable by any maintainer if the behavior produces low-quality, off-target, or harmful output.

Permitted autonomous behaviors do not waive DCO, contributor responsibility, or the maintainer's authority to close or revert outputs. A TI may not use this mechanism to permit categories of autonomous activity that this policy explicitly prohibits in the [Details](#details) section above.
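A permitted-behavior entry in `AGENTS.md` that satisfies the requirements above might look like this sketch (the behavior, bot account, and owner names are hypothetical):

```markdown
## Permitted autonomous behaviors

### CI summary comments

- Behavior: post one structured CI summary comment per pull request run
- Agent account: `example-ci-summary-bot`
- Owner: @example-maintainer (accountable for outputs and revocation)
- Revocation: any maintainer may disable the bot's repository access
```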

## Maintainer Review of AI-Assisted Contributions

OpenSSF repos should follow the [AGENTS.md specification](https://agents.md/) for agent instruction files:
- **No confidential information.** Do not include non-public architecture details, infrastructure references, or information that is not already public in the repo.
- **Project-specific instructions.** Focus on contribution standards, testing requirements, coding conventions, and review expectations specific to the project.
- **Keep it maintained.** An outdated `AGENTS.md` is worse than none — it will produce contributions that don't match current project expectations. Treat it as a living document alongside `CONTRIBUTING.md`.
- **Maintainer-controlled.** `AGENTS.md` and other agent configuration files are maintained by project maintainers. Pull requests from external contributors that modify agent configuration files require explicit maintainer review and approval. This prevents tool-specific config sprawl and keeps agent instructions consistent with the project's contribution standards, while still allowing good-faith fixes (typos, broken links, outdated commands) to be reviewed on their merits.
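One way to enforce maintainer control is a `CODEOWNERS` entry that routes any change to agent configuration files to the maintainer team for required review. A minimal sketch, assuming a hypothetical team name:

```text
# Agent configuration files require maintainer approval
/AGENTS.md                        @example-org/maintainers
/.github/copilot-instructions.md  @example-org/maintainers
```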

For security-specific agent instructions, TIs should reference the [Security-Focused Guide for AI Code Assistant Instructions](https://best.openssf.org/Security-Focused-Guide-for-AI-Code-Assistant-Instructions) published by the OpenSSF Best Practices Working Group.

For security-critical changes, maintainers should verify that AI-generated tests meaningfully exercise the security-relevant behavior.

This policy establishes a permissive default for OpenSSF Technical Initiatives. Individual TIs may adopt stricter rules — up to and including a prohibition on AI-assisted contributions — if justified by the TI's risk profile.

### Dimensions

"Stricter" rules are rules that raise requirements along one or more of the following dimensions, without weakening any other dimension below this policy's defaults:

- **Disclosure**: requiring disclosure of AI tool use beyond the cases this policy requires.
- **Autonomous contributions**: prohibiting categories of autonomous activity, or applying tighter constraints to autonomous behaviors that would otherwise be permitted.
- **Review process**: requiring additional review steps for AI-assisted contributions (e.g., two-maintainer approval, mandatory domain-expert sign-off for security-critical changes).
- **Provenance**: requiring `Generated-by:` or other provenance trailers that this policy treats as optional.
- **Scope of AI use**: prohibiting AI tool use in specific code areas (e.g., cryptography, authentication) or in the project as a whole.
- **Verification**: requiring specific verification practices for AI-assisted output (e.g., mandatory code scanning for copied snippets, mandatory end-to-end test runs).

Rules that weaken any dimension below this policy's defaults (for example, waiving DCO sign-off for AI-assisted commits, or accepting autonomous contributions this policy prohibits) are not "stricter" rules and may not be adopted at the TI level.
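As an illustration, a TI-level stricter rule that raises the disclosure, review-process, and scope dimensions at once might read (hypothetical wording and path):

```markdown
<!-- Example TI policy excerpt (hypothetical) -->
- All AI-assisted changes to code under `crypto/` must be disclosed in
  the PR description and receive sign-off from a cryptography maintainer.
```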

### Criteria

Criteria that may justify stricter rules: