Consider adding support for Project CodeGuard rules for Codex.
Developers and researchers using the openai-guardrails-python library to build applications with code-generating models like Codex currently have no built-in way to validate the security of the generated code.
We could integrate Project CodeGuard's rules and methodology into the guardrails. This could take the form of a first-party integration, a well-documented process for users to import and use the rules, or both; a sketch of what the import side could look like follows below.
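A minimal sketch of the rule-import side, assuming CodeGuard rules can be exported as simple pattern entries. The `load_codeguard_rules` helper and the JSON file layout are hypothetical illustrations, not part of either project; a real integration would parse whatever format Project CodeGuard actually ships.

```python
import json
import re
from dataclasses import dataclass
from pathlib import Path


@dataclass(frozen=True)
class CodeGuardRule:
    """One imported rule: an id, a description, a severity, and a pattern."""
    rule_id: str
    description: str
    severity: str          # e.g. "low" | "medium" | "high"
    pattern: re.Pattern


def load_codeguard_rules(path: Path) -> list[CodeGuardRule]:
    """Load rules from a JSON file of {rule_id, description, severity, regex} entries.

    The file layout here is an assumed export format for illustration only.
    """
    entries = json.loads(path.read_text())
    return [
        CodeGuardRule(
            rule_id=e["rule_id"],
            description=e["description"],
            severity=e["severity"],
            pattern=re.compile(e["regex"]),
        )
        for e in entries
    ]
```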
Allow the guardrail to run the CodeGuard analysis on code snippets generated by the model. The guardrail could then output a detailed report of any security issues found, trip a "blocker" to halt execution, or block the response entirely so that a new vulnerability is never introduced.
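And a sketch of the check itself, reusing `CodeGuardRule` from the snippet above. The `codeguard_check` function and its result type are assumptions for illustration: custom checks in openai-guardrails-python generally report a tripwire-style flag plus diagnostic info, but the exact signature shown here is not the library's documented API.

```python
from dataclasses import dataclass, field


@dataclass
class CodeGuardResult:
    """Outcome of scanning one generated code snippet."""
    tripwire_triggered: bool                 # True -> halt execution ("blocker")
    findings: list[dict] = field(default_factory=list)


def codeguard_check(code: str, rules: list[CodeGuardRule],
                    block_on: str = "high") -> CodeGuardResult:
    """Run every imported rule against the generated code.

    Collects a detailed report of all matches, and trips the blocker only
    when a finding at or above `block_on` severity is present.
    """
    severities = ["low", "medium", "high"]
    findings = [
        {
            "rule_id": r.rule_id,
            "description": r.description,
            "severity": r.severity,
            "match": m.group(0),
        }
        for r in rules
        for m in r.pattern.finditer(code)
    ]
    should_block = any(
        severities.index(f["severity"]) >= severities.index(block_on)
        for f in findings
    )
    return CodeGuardResult(tripwire_triggered=should_block, findings=findings)
```

With a severity threshold like `block_on`, users could choose between report-only mode (log the findings, let the response through) and strict mode (halt on any high-severity match), which covers all three behaviors described above.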