GouvernAI-claude-code-plugin
Runtime guardrails for Claude Code. Auto-approve what's safe, gate what's risky, block what's dangerous. Dual enforcement, full audit trail. MIT.
- Auto-approve safe actions and block dangerous ones with audit trails
- Classify every assistant action by risk level automatically
- Generate compliance reports showing all gated and blocked actions
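The tiered model above (auto-approve safe, gate risky, block dangerous, log everything) can be sketched as a simple pattern-based classifier with an append-only audit log. This is a minimal illustration only; the patterns, function names, and data shapes below are assumptions, not the plugin's actual rules or API.

```python
# Hypothetical sketch of a three-tier guardrail with an audit trail.
# All pattern lists and names are illustrative, not GouvernAI's real config.
from dataclasses import dataclass
from datetime import datetime, timezone

BLOCK_PATTERNS = ("rm -rf", "curl | sh", "DROP TABLE")  # treated as dangerous
GATE_PATTERNS = ("git push", "pip install", "chmod")    # treated as risky

@dataclass
class Decision:
    action: str
    verdict: str      # "allow" | "gate" | "block"
    timestamp: str    # UTC, ISO 8601

def classify(action: str) -> str:
    """Map a proposed action to a risk verdict by substring match."""
    if any(p in action for p in BLOCK_PATTERNS):
        return "block"
    if any(p in action for p in GATE_PATTERNS):
        return "gate"
    return "allow"

def decide(action: str, log: list) -> Decision:
    """Classify the action and record the decision in an append-only log."""
    d = Decision(action, classify(action),
                 datetime.now(timezone.utc).isoformat())
    log.append(d)
    return d

audit_log: list = []
print(decide("ls -la", audit_log).verdict)             # allow
print(decide("git push origin main", audit_log).verdict)  # gate
print(decide("rm -rf /", audit_log).verdict)           # block
```

A real enforcement layer would hook the agent's tool-use pipeline rather than match substrings, but the same allow/gate/block verdict plus timestamped log is what makes decisions auditable after the fact.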
Enterprise teams deploying Claude Code who need an automated risk classification layer with auditable decision logs.
https://github.com/Myr-Aya/GouvernAI-claude-code-plugin
By Myr-Aya
How to Get It
claude plugins install Myr-Aya/GouvernAI-claude-code-plugin
Tip: Paste this into a Claude Code conversation, and verify that the command matches your Claude Code version.
Trust Signals: Automated Scan
Community Pulse: Emerging
Discussed on Reddit
- I built a governance layer for Claude Code: risk tiers, approvals, and hard-bloc — Reddit · 1 pt
- Meta just had another "Sev 1" incident with a rogue AI agent — Reddit
2 mentions across 1 source
Reviewer notes
Automated Scan review. These are observations, not a security certification.
Things to check
- Single maintainer. Consider the continuity risk if this person stops maintaining the project.
How to evaluate tools before deploying →
Data shown here comes from public APIs and automated scanning. Reviewer notes reflect one person's experience. This is not a security certification or legal recommendation. Always evaluate tools according to your own organization's policies.