We audited how 30 popular AI agent projects handle authorization. The results are alarming: nearly all of them rely on unscoped API keys with no per-agent identity, no user consent, and no revocation mechanism.
We analyzed 30 of the most popular open-source AI agent projects on GitHub, representing over 500,000 combined stars. For each project, we evaluated six authorization capabilities: scoped permissions, per-agent identity, user consent, revocation, audit logging, and delegation control.
We reviewed each project's documentation, source code, configuration files, and example applications. Projects were evaluated on their built-in authorization mechanisms, not on what a developer could build around them.
28 out of 30 projects (93%) rely exclusively on environment-variable API keys for authorization. The standard pattern is:
> Store your OpenAI API key in a `.env` file. The agent will use it for all operations.
This key grants the agent the same permissions as the key owner. There is no mechanism to restrict which operations the agent can perform, on whose behalf it acts, or when the access expires.
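Concretely, the default looks something like this (a minimal sketch; the key value and agent names are placeholders, not any project's actual API):

```python
import os

# The de facto authorization model in most agent frameworks:
# one unscoped key, read from the environment at startup.
os.environ.setdefault("OPENAI_API_KEY", "sk-placeholder")

api_key = os.environ["OPENAI_API_KEY"]

# Every agent instance shares this key. It carries the key owner's
# full permissions, has no expiry of its own, and cannot be scoped
# to a specific resource, user, or agent.
def make_agent(name: str) -> dict:
    return {"name": name, "credential": api_key}

agents = [make_agent(n) for n in ("planner", "coder", "reviewer")]

# All three agents are indistinguishable at the API level:
print(len({a["credential"] for a in agents}))  # 1 -- one shared credential
```

Because the credential is identical across agents, nothing downstream of the API boundary can tell which agent acted, which is exactly the attribution gap discussed below.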
Not a single project we reviewed assigns a unique, cryptographically verifiable identity to each agent instance. When multiple agents share the same API key (which is the default configuration), it is impossible to determine which agent performed which action.
29 out of 30 projects (97%) have no mechanism for the end-user to approve what the agent is doing on their behalf. The developer decides what the agent can access at build time, not the user at runtime. The one partial exception is LangGraph's human-in-the-loop feature, which pauses execution for approval but does not issue scoped authorization tokens.
Every project we reviewed treats revocation as a binary: rotate the API key, or don't. There is no way to revoke access for a single agent while leaving others operational. In multi-agent systems (which are the explicit use case for frameworks like CrewAI and AutoGen), this means one misbehaving agent forces a full credential rotation.
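For contrast, granular revocation only requires that each agent hold its own grant rather than the shared key. A toy sketch, with illustrative names that do not correspond to any audited project's API:

```python
import secrets

class GrantStore:
    """Tracks one grant per agent so a single agent can be cut off."""

    def __init__(self) -> None:
        self._grants: dict[str, str] = {}  # agent_id -> grant token

    def issue(self, agent_id: str) -> str:
        token = secrets.token_urlsafe(16)
        self._grants[agent_id] = token
        return token

    def revoke(self, agent_id: str) -> None:
        # Revokes exactly one agent; every other grant stays valid.
        self._grants.pop(agent_id, None)

    def is_valid(self, agent_id: str, token: str) -> bool:
        return self._grants.get(agent_id) == token

store = GrantStore()
t1 = store.issue("research-agent")
t2 = store.issue("billing-agent")

store.revoke("research-agent")  # the misbehaving agent is cut off...
print(store.is_valid("research-agent", t1))  # False
print(store.is_valid("billing-agent", t2))   # True -- no rotation cascade
```

The point is not the data structure but the granularity: revocation operates on an agent-level grant, not on the one credential everything depends on.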
Only 4 projects (13%) include any form of action logging. Where it exists, it is opt-in, application-level, and not tied to an authorization grant. No project produces an audit trail that links a specific action to a specific agent, a specific user authorization, and a specific set of scopes.
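What the missing audit trail would need to capture, in its simplest form, is one record per action that names the agent, the user authorization, and the scopes in force. The field names below are illustrative, not a standard:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    # Links one action to one agent, one user grant, and one scope set.
    agent_id: str
    grant_id: str
    scopes: list[str]
    action: str
    timestamp: str

record = AuditRecord(
    agent_id="agent-7f3a",
    grant_id="grant-0021",
    scopes=["calendar:read"],
    action="calendar.events.list",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(record), indent=2))
```

With shared, unscoped keys, none of the first three fields can be populated truthfully, which is why application-level logging alone does not close the gap.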
In multi-agent frameworks (CrewAI, AutoGen, MetaGPT, OpenClaw), when one agent calls another, the child agent either inherits the parent's full credentials or receives its own independent key. No project implements scope narrowing, depth limits, or cascade revocation for delegated access.
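Scope narrowing and depth limiting are simple to state: a delegated grant may carry at most the intersection of the parent's scopes, and delegation stops at a fixed depth. A sketch under those assumptions (not any framework's actual API):

```python
MAX_DELEGATION_DEPTH = 3

def delegate(parent_scopes: set[str], requested: set[str], depth: int) -> set[str]:
    """Child scopes are the intersection of the parent's scopes and the
    request; delegation beyond the depth limit is refused outright."""
    if depth >= MAX_DELEGATION_DEPTH:
        raise PermissionError("delegation depth limit reached")
    return parent_scopes & requested  # a child can never gain scopes

parent = {"files:read", "files:write", "email:send"}
child = delegate(parent, {"files:read", "admin:all"}, depth=1)
print(sorted(child))  # ['files:read'] -- 'admin:all' is silently dropped
```

Under this rule, privilege can only shrink along a delegation chain; the inheritance and independent-key patterns observed in the audit allow it to stay flat or grow.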
| Project | Stars | Scoped Perms | Agent ID | User Consent | Revocation | Audit | Delegation |
|---|---|---|---|---|---|---|---|
| OpenClaw | 210k | No | No | No | No | Partial | No |
| Dify | 130k | Partial | No | No | No | Partial | No |
| RAGFlow | 70k | No | No | No | No | No | No |
| AutoGen | 55k | No | No | No | No | No | No |
| CrewAI | 46k | No | No | No | No | No | No |
| AutoGPT | 42k | No | No | Partial | No | Partial | No |
| MetaGPT | 38k | No | No | No | No | No | No |
| LangGraph | 25k | No | No | Partial | No | Partial | No |
| BabyAGI | 20k | No | No | No | No | No | No |
| SuperAGI | 16k | Partial | No | No | No | Partial | No |
| AgentGPT | 14k | No | No | No | No | No | No |
| OpenDevin | 12k | No | No | No | No | No | No |
| Camel | 10k | No | No | No | No | No | No |
| TaskWeaver | 9k | Partial | No | No | No | No | No |
| OpenAI Swarm | 8k | No | No | No | No | No | No |
Star counts as of March 2026. The table shows 15 of the 30 audited projects. "Partial" indicates the feature exists in a limited form (e.g., opt-in logging, basic role separation) but does not meet the bar for production authorization.
This is not a theoretical risk. Major incidents have already occurred:
Censys identified 21,639 OpenClaw instances publicly exposed on the internet. Misconfigured instances were leaking API keys, OAuth tokens, and plaintext credentials. With more than 210,000 GitHub stars and rapid adoption, the blast radius was enormous.
Source: Reco.ai
BlueRock Security analyzed over 7,000 MCP servers and found that 36.7% were potentially vulnerable to SSRF. Trend Micro found 492 MCP servers with zero client authentication and zero traffic encryption on the public internet.
The Moltbook database breach exposed 1.5 million API authentication tokens and 35,000 email addresses, demonstrating the cascading impact of centralized credential storage.
Source: Wiz.io
Thousands of Google Cloud API keys deployed as billing tokens were discovered to have unrestricted access, effectively becoming live Gemini credentials on the public internet.
Source: The Hacker News
In December 2025, OWASP released the Top 10 for Agentic Applications, peer-reviewed by 100+ security researchers. Several of the top risks directly map to the authorization gap we've documented:
| OWASP Risk | Description | Authorization Gap |
|---|---|---|
| ASI01 | Agent Goal Hijacking | Unscoped API keys mean a hijacked agent has full access to all resources |
| ASI03 | Identity & Privilege Abuse | No per-agent identity makes it impossible to attribute actions or enforce least-privilege |
| ASI05 | Privilege & Access Escalation | Without scope enforcement, agents can access any resource the API key allows |
| ASI09 | Human-Agent Trust Exploitation | No consent flow means users never approve what agents do on their behalf |
| ASI10 | Rogue Agents | Without revocation, a compromised agent retains access indefinitely |
> 48% of cybersecurity professionals identify agentic AI as the number-one attack vector heading into 2026 — outranking deepfakes, ransomware, and supply chain compromise. Yet only 34% of enterprises have AI-specific security controls in place.
>
> — Dark Reading, 2026
Based on our audit, we recommend that every AI agent deployment implement these six authorization primitives:
| Primitive | What it means | Why it matters |
|---|---|---|
| Scoped tokens | Agents receive permissions for specific resources and actions only | Limits blast radius when an agent is compromised |
| Per-agent identity | Each agent has a unique, cryptographically verifiable identity (e.g., DID) | Enables attribution, audit, and targeted revocation |
| User consent | End-users explicitly approve what agents can do on their behalf | Required by GDPR, EU AI Act, and emerging US AI regulations |
| Granular revocation | Revoke one agent's access without affecting others | Prevents credential rotation cascades in multi-agent systems |
| Audit trail | Every agent action linked to a specific authorization grant | Required for compliance, forensics, and incident response |
| Delegation control | Scope narrowing and depth limits when agents call other agents | Prevents privilege escalation in multi-agent pipelines |
Grantex is an open protocol (Apache 2.0) that implements all six primitives as a standard. It provides signed JWT grant tokens with scoped claims, per-agent DIDs, user consent flows with optional FIDO2/WebAuthn passkeys, granular revocation with cascade, immutable audit trails, and delegation chains with depth limits and scope narrowing.
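To make the token shape concrete, here is a hand-rolled HS256 JWT with scoped claims. The claim names (`grant_id`, `scopes`, `delegation_depth`) are illustrative, not the Grantex specification, and a real deployment would use a vetted JWT library with asymmetric signatures rather than a shared secret:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # JWT uses unpadded base64url segments.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_grant(claims: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

# Illustrative scoped-grant claims: a short-lived token bound to one
# agent identity, one user authorization, and a narrow scope set.
claims = {
    "sub": "did:example:agent-7f3a",   # per-agent identity
    "grant_id": "grant-0021",          # the user authorization it stems from
    "scopes": ["calendar:read"],       # least-privilege scope set
    "delegation_depth": 0,             # how far this grant may be re-delegated
    "exp": int(time.time()) + 300,     # expires in five minutes
}
token = sign_grant(claims, secret=b"demo-only-secret")
print(token.count("."))  # 2 -- header.payload.signature
```

Each of the six primitives shows up as a verifiable claim: the identity in `sub`, the consent linkage in `grant_id`, the scope set, the delegation depth, and the expiry that makes revocation tractable.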
The protocol has an IETF Internet-Draft submitted to the OAuth Working Group, a NIST NCCoE public comment filed, and SOC 2 Type I certification completed. SDKs are available for TypeScript, Python, and Go, with integrations for LangChain, CrewAI, OpenAI Agents SDK, Vercel AI, Google ADK, AutoGen, MCP, Express.js, FastAPI, and Terraform.