Feature Area
Core functionality
Is your feature request related to an existing bug? Please link it here.
NA. This is a new feature request for compliance infrastructure in multi-agent crews. Related cross-framework discussion: langchain-ai/langchain#35691 (12+ teams converging on a shared compliance interface).
Describe the solution you'd like
When a crew of agents collaborates on a task, there's no way to prove which agent did what, under what rules, and whether each agent stayed within its scope. In regulated industries this blocks deployment entirely because auditors can't reconstruct the decision chain across agents.
The ask: a compliance layer where each agent in a crew gets its own behavioral covenant (what it's allowed to do), and every action produces a cryptographic receipt proving the covenant was enforced before execution. When the Researcher agent passes findings to the Writer agent, the handoff itself gets a signed receipt linking the two agents' authorization chains.
An open source protocol for this already exists: Nobulex (MIT licensed, TypeScript). The approach:
- each agent in a crew gets a covenant defining its scope: `permit web:search where topic in ["market research"]` for the Researcher, `permit file:write where path matches "drafts/**"` for the Writer
- every tool call goes through enforcement middleware. if the covenant says no, the tool never executes
- each action produces a bilateral receipt (Ed25519 signed pre-execution authorization + post-execution result, hash-chained)
- inter-agent handoffs reference upstream receipts via digest linking, so an auditor can walk the full crew execution path
- the CLI generates compliance reports for EU AI Act, SOC2, ISO 42001, and Colorado AI Act from the receipt data
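To make the receipt and digest-linking bullets concrete, here is a minimal Python sketch of hash-chained receipts with a digest-linked handoff. All names are hypothetical (this is not the Nobulex API), and the Ed25519 signatures are elided; only the pre/post-execution record and the hash chain an auditor would walk are shown:

```python
import hashlib
import json


def digest(obj) -> str:
    """Canonical SHA-256 digest of a JSON-serializable payload."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()


def make_receipt(agent: str, action: dict, result: dict, prev_digest):
    """Hash-chained receipt: the authorization is recorded before execution,
    the result is appended after, and each receipt links to the previous one
    by digest. (A real implementation would also Ed25519-sign both halves.)"""
    receipt = {
        "agent": agent,
        "authorization": action,  # what the covenant permitted, pre-execution
        "result": result,         # what actually happened, post-execution
        "prev": prev_digest,      # digest link to the upstream receipt
    }
    receipt["digest"] = digest({k: v for k, v in receipt.items() if k != "digest"})
    return receipt


# Researcher acts, then hands off to the Writer; the Writer's receipt
# references the Researcher's digest, so an auditor can walk the full chain.
r1 = make_receipt("researcher",
                  {"tool": "web:search", "topic": "market research"},
                  {"status": "ok"}, prev_digest=None)
r2 = make_receipt("writer",
                  {"tool": "file:write", "path": "drafts/summary.md"},
                  {"status": "ok"}, prev_digest=r1["digest"])
assert r2["prev"] == r1["digest"]  # the handoff is digest-linked
```

Tampering with any field of `r1` changes its digest and breaks `r2`'s link, which is the property the audit trail relies on.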
CrewAI's task delegation and role-based architecture maps naturally to per-agent covenants. Each role already has a defined scope — covenants make that scope cryptographically enforced rather than prompt-enforced.
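As a rough illustration of the difference between prompt-enforced and middleware-enforced scope, here is a minimal Python sketch of a per-role covenant gate around a tool call. Everything here is hypothetical (not CrewAI's or Nobulex's API), and `fnmatch`'s non-path-aware `*` stands in for the `**` glob in the covenant examples above:

```python
from fnmatch import fnmatch

# Hypothetical per-role covenant rules: each rule permits one tool,
# optionally constrained by a predicate on one call parameter.
COVENANTS = {
    "researcher": [{"tool": "web:search", "field": "topic",
                    "allowed": ["market research"]}],
    "writer":     [{"tool": "file:write", "field": "path",
                    "pattern": "drafts/*"}],  # fnmatch '*' also crosses '/'
}


def permitted(role: str, tool: str, **params) -> bool:
    """True only if some covenant rule for this role permits the call."""
    for rule in COVENANTS.get(role, []):
        if rule["tool"] != tool:
            continue
        value = params.get(rule["field"], "")
        if "allowed" in rule and value in rule["allowed"]:
            return True
        if "pattern" in rule and fnmatch(value, rule["pattern"]):
            return True
    return False


def guarded_call(role: str, tool: str, fn, **params):
    """Enforcement middleware: if the covenant says no, fn never executes."""
    if not permitted(role, tool, **params):
        raise PermissionError(f"{role} may not call {tool} with {params}")
    return fn(**params)
```

The point of the sketch is the ordering: the covenant check happens before the tool function runs, so a denial means the side effect never occurs, rather than being discovered in a log afterwards.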
This is being discussed across frameworks. There's an active LangChain RFC with 12+ independent teams, an IETF Internet-Draft for the protocol, and NIST's AI Agent Standards Initiative is building an Interoperability Profile for Q4 2026 that covers exactly this kind of cross-agent compliance evidence. Happy to help scope a CrewAI integration.
Describe alternatives you've considered
No response
Additional context
Protocol: https://github.com/arian-gogani/nobulex
LangChain RFC: langchain-ai/langchain#35691
IETF Draft: draft-gogani-nobulex-proof-of-behavior-00
Website: https://nobulex.com
EU AI Act enforcement deadline: August 2, 2026
Willingness to Contribute
Yes, I'd be happy to submit a pull request