If Every AI Agent Needs a Certificate, Who Issues Them?
RSAC 2026 made AI agent identity the defining theme. Nobody talked about the CA layer.
April 1, 2026 · [cyphrs] Team · 11 min read
RSAC wrapped, and there was a gap in the conversation
If you followed the RSAC 2026 coverage, the theme was hard to miss. IBM, Auth0, and Yubico announced a joint framework requiring cryptographic verification for high-risk AI actions. RSA extended their identity platform to handle machine and human identities under the same umbrella. Swissbit previewed post-quantum authentication for FIDO2 hardware keys. Every vendor in the convention center, it seemed, had arrived at a version of the same conclusion: AI agents need cryptographic identity, and the industry needs a framework for managing it.
The IBM/Auth0/Yubico framework is interesting. The core idea is that autonomous agents performing high-risk actions (moving money, modifying infrastructure, calling external services) need a cryptographically verifiable identity that can be checked against a policy engine before the action is permitted. That's sensible. It mirrors how we already think about privileged access for humans. The "Human-in-the-Loop authorization" framing is mostly marketing, but the underlying architecture is sound.
But here's what nobody talked about at RSAC, and what I kept waiting for in the coverage: if you're issuing cryptographic credentials to AI agents, something has to sign those credentials. In an X.509 architecture, that something is a certificate authority. And operating a CA, it turns out, is where most of the conversation quietly stopped.
The scale of the problem is already uncomfortable
Dark Reading published a piece this week titled "AI Agent Overload: How to Solve the Workload Identity Crisis." The headline statistic: non-human identities now outnumber human identities in some organizations at ratios of 40:1 to 100:1, and the ratio is growing 40% year-over-year. AI agents are the fastest-growing category within NHI, ahead of service accounts, IoT devices, and CI/CD pipelines.
That 40:1 ratio is worth sitting with for a moment. A company with 500 engineers might have 20,000 non-human identities in its infrastructure right now. By next year it could be 28,000. Traditional IAM tooling (Okta, Active Directory, CyberArk PAM) was designed around the assumption that identities are things humans create and manage deliberately, one at a time. It was never designed for populations that grow this fast or that behave this autonomously.
The credentials those identities carry reflect that design gap. API keys, static tokens, service account passwords. Things that are easy to issue, easy to share accidentally, and very difficult to track once distributed. When an AI agent chains across three services in an automated workflow, you want each hop to be cryptographically authenticated. What you usually get instead is a long-lived API key somewhere in an environment variable.
mTLS is the obvious answer for agent-to-service authentication
For service-to-service communication, mTLS is already the answer the industry has converged on. Kubernetes uses it. Envoy uses it. SPIFFE/SPIRE uses it. The pattern is well-understood: each workload gets an X.509 certificate encoding its identity (a SPIFFE SVID if you're in that ecosystem), and every service call involves mutual verification. You know not just that the connection is encrypted, but that the client presenting that certificate has been explicitly authorized to have it.
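In code, the difference between ordinary TLS and mTLS comes down to the server demanding a client certificate and validating it against a trust anchor you control. Here's a minimal sketch using Python's stdlib `ssl` module; the file paths are placeholders, not real artifacts from any system described above.

```python
# Sketch: the server side of an mTLS configuration in Python's stdlib
# `ssl` module. The one line that turns one-way TLS into mutual TLS is
# verify_mode = CERT_REQUIRED. File paths below are hypothetical.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

# The server's own identity (paths are placeholders):
# ctx.load_cert_chain("server.pem", "server.key")

# Trust anchor for *client* certificates: your private agent CA,
# not the public system store:
# ctx.load_verify_locations("agent-ca.pem")

# Require every connecting client to present a certificate signed by
# that CA; connections without one fail at the handshake.
ctx.verify_mode = ssl.CERT_REQUIRED

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

The client side is symmetric: it loads its own certificate with `load_cert_chain` and verifies the server against the same private root.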
Short-lived certificates work particularly well for ephemeral agents. Instead of rotating a long-lived API key on some awkward schedule, you issue a certificate that's valid for four hours and let the agent request a fresh one when it starts. The revocation problem, which has historically been a headache in PKI, mostly goes away when the cert expires before you'd have time to act on a revocation anyway.
The architecture fits so naturally that several of the RSAC vendors were implicitly describing it without using the mTLS or PKI framing. The IBM/Auth0 framework talks about "cryptographic verification" for agent actions. The RSA machine identity platform is, at its core, certificate management with a machine-identity wrapper around it. They're all pointing at the same thing. The gap is in the CA layer that makes any of it work.
But mTLS requires a CA, and that's where it gets complicated
You can't just decide to use mTLS. You need something that will sign the certificates you issue to your agents. That something is a certificate authority, and operating one has historically been an enterprise infrastructure problem that most organizations either outsourced (public CAs, for server auth) or skipped entirely (self-signed certs with all the attendant risks).
The enterprise solutions exist. Venafi, which came up in an r/devops thread last week with someone describing it as "mid six figures a year." Keyfactor, which Axelspire's 2026 CLM comparison describes as "60 to 70% cheaper than Venafi at similar scale" but still aimed squarely at organizations with dedicated PKI teams. ADCS, which is Windows-dependent and architecturally tied to Active Directory in ways that make extraction painful. HashiCorp Vault with the PKI secrets engine, which is powerful but puts the operational complexity squarely on your engineering team.
All of these assume you have people whose job includes thinking about certificate hierarchies, CRL distribution points, OCSP responders, and HSM configuration. That is a reasonable assumption for a Fortune 500. It is not a reasonable assumption for the mid-market company that is nonetheless deploying agentic AI systems right now.
And it's not like the open-source options are much easier. Smallstep's step-ca is genuinely good software. But "good software" and "easy to operate correctly" are different things, especially when your security team has three people and none of them specialize in PKI. The gap between "we could run this" and "we are running this correctly with proper root key protection, appropriate intermediate CA policies, and sensible ACME configuration" is where most organizations quietly give up and go back to API keys.
What an agentic CA actually needs to do
The requirements for AI agent identity infrastructure are somewhat different from classic enterprise PKI, and that distinction matters for anyone thinking about what to build or buy. Traditional enterprise PKI is optimized for a world where certificates are relatively few and relatively long-lived: user certificates, server certificates, VPN endpoints. You might issue a few thousand a year and think hard about each one.
An agentic infrastructure needs something closer to the opposite. High-volume, low-latency issuance. Certificates measured in hours, not months. Per-agent identity rather than per-team or per-service identity. Programmatic access to issuance via API, because no human is going to manually approve certificates for agents that might spin up and down hundreds of times a day. And a meaningful audit trail, because when something goes wrong in an agentic workflow, you need to be able to trace exactly which identity was present at each hop.
What a CA for agentic workloads actually needs
Certificates valid for hours, not months. Agents that only exist for a single task don't need a 90-day cert. Short lifetimes reduce the blast radius of a compromised credential, and they remove the revocation problem almost entirely.
No human in the loop. An agent needs to request a certificate at startup and get one back in milliseconds. That means an ACME endpoint or a REST API with machine-appropriate authentication, not a web form.
Per-instance identity. Each agent instance gets a unique cert encoding its specific identity: which workflow it belongs to, what permissions it carries, when it was issued. Not a shared service account cert that any agent in the fleet can present.
Every issuance event logged and queryable. When your security team needs to reconstruct an incident, they should be able to see exactly which certificate was active for which agent at which time. This is table stakes for compliance with NIS2, DORA, and PCI DSS 4.0.
Operable without a PKI team. Most organizations deploying agentic AI at scale are not large financial institutions with dedicated PKI engineers. The CA layer has to be operable by a security-aware DevOps team, not a specialist group that the mid-market company simply does not have.
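From the agent's side, machine-speed issuance typically looks like: generate a keypair at startup, build a CSR, and POST it to the CA's API. The sketch below shows the CSR half using the `cryptography` library, with the agent's identity encoded SPIFFE-style in a URI SAN. The trust domain, workflow path, and issuance endpoint are all made-up examples, not a real API.

```python
# Sketch: what a programmatic issuance request might look like from the
# agent side. The SPIFFE-style ID and the endpoint named in the comment
# are hypothetical examples.
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

# Keypair generated fresh at agent startup; the private key never
# leaves the agent.
agent_key = ec.generate_private_key(ec.SECP256R1())
spiffe_id = "spiffe://example.internal/workflow/invoice-review/agent/1234"

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "agent-1234")]))
    .add_extension(
        # Identity lives in a URI SAN: trust domain + workflow + instance.
        x509.SubjectAlternativeName([x509.UniformResourceIdentifier(spiffe_id)]),
        critical=False,
    )
    .sign(agent_key, hashes.SHA256())
)

# This PEM body is what would be POSTed to the CA's issuance API
# (endpoint hypothetical): POST https://ca.example.internal/v1/issue
pem = csr.public_bytes(serialization.Encoding.PEM).decode()
print(pem.splitlines()[0])  # -----BEGIN CERTIFICATE REQUEST-----
```

The CA's job on the other end is to authenticate the request (bootstrap token, workload attestation, or cloud instance identity), apply policy to the requested identity, sign, log the issuance event, and return the certificate in milliseconds.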
None of that is exotic. Kubernetes clusters have been handling something close to this with cert-manager and internal CAs for years. SPIRE runs SVID issuance at the workload level with short TTLs and automated rotation. The tooling exists. What doesn't exist is a CA layer that packages this for the organization that is not already running Kubernetes at scale and does not want to operate a SPIRE deployment from scratch.
The window to build this into the foundation is narrowing
The organizations deploying agentic AI right now are making credential choices that will be difficult to change later. API keys and long-lived service account tokens are the path of least resistance today. They work, at small scale, until they don't. The problem with not building cryptographic identity infrastructure from the start is that retrofitting it into an existing agentic system is a much harder problem than building it in initially.
Consider what the retrofit looks like. You have 50 agents in production, all using API keys. You want to move them to mTLS. You need to stand up a CA (and figure out all the operational questions that go with that). You need to modify every agent to request a certificate at startup. You need to modify every service those agents call to perform client certificate verification. You need to coordinate a migration window. Then you need to rotate out the old API keys without breaking anything. That's a significant engineering project, and it scales with the number of agents and services involved.
The cost-benefit analysis shifts when the agent population is still small. Getting the CA layer right at 10 agents is much cheaper than retrofitting it at 500. And the identity architecture you establish now (the hierarchies, the naming conventions, the policy structure) will either serve you well as you scale or constrain you in ways you won't fully understand until they start hurting.
The market still has a gap
The CLM vendors (Venafi, Keyfactor, the rest) manage certificates. They are useful tools. But they manage certificates issued by other CAs. They are not the trust anchor. They do not decide which identities are valid. They consume the output of a CA and help you track and renew it. If you don't have a CA, CLM tooling doesn't help you.
Public CAs (Let's Encrypt, DigiCert, Sectigo) issue certificates for server authentication. They are actively exiting the client authentication space, driven by Chrome Root Program requirements. As of July 2026, Let's Encrypt will not issue a certificate with the TLS Client Authentication EKU at all. The other public CAs will follow the same direction. We covered the mechanics of that shift in a previous piece. The short version is: public CAs are now a browser-compatibility tool, not an identity infrastructure layer.
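The EKU in question is an X.509 extension that declares what a certificate may be used for; mTLS client certs need the TLS client-auth purpose. Here's a sketch of how that extension looks in practice, using a throwaway self-signed certificate built with the `cryptography` library (the name and lifetime are illustrative).

```python
# Sketch: the Extended Key Usage extension that public CAs are dropping.
# A cert used as an mTLS client credential carries CLIENT_AUTH; the
# self-signed cert here is a throwaway built only to inspect that field.
from datetime import datetime, timedelta, timezone
from cryptography import x509
from cryptography.x509.oid import NameOID, ExtendedKeyUsageOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "demo-agent")])
now = datetime.now(timezone.utc)
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + timedelta(hours=4))
    .add_extension(
        x509.ExtendedKeyUsage([ExtendedKeyUsageOID.CLIENT_AUTH]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)

eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
print(ExtendedKeyUsageOID.CLIENT_AUTH in eku)  # True
```

A private CA can keep issuing certs with this EKU indefinitely; it's only the publicly trusted roots, governed by browser root programs, that are walking away from it.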
The gap is accessible private CA infrastructure. Something that can serve as the trust anchor for agent-to-service mTLS, issue short-lived certificates via API, maintain a meaningful audit trail, and be operated by an engineering team without a dedicated PKI specialist. That product category exists in rough form across several open-source projects but not as something you can stand up and have confidence in within a day.
That's the gap RSAC 2026 implicitly pointed at, without naming it. Every vendor announced a piece of the agent identity puzzle. The IBM/Auth0/Yubico framework handles authorization decisions. RSA handles identity lifecycle. But authorization decisions require a cryptographic identity to evaluate, and cryptographic identity requires a CA to issue it. The CA layer is the foundation that everything else sits on.
A thought worth leaving you with
We're early in the agentic AI era. The organizations building serious agentic systems today are, by definition, ahead of the market. Most of them are making credential choices under time pressure, using whatever works now, with a vague intention to revisit security later when there's time.
There are two ways to read that pattern. One is that it's fine: the risk is small, the agents are internal, the stakes are manageable. The other is that "we'll fix the identity model later" is a statement that has preceded a lot of security incidents. The history of service account passwords, static tokens, and shared API keys in traditional infrastructure suggests that "later" often arrives under worse circumstances than "now."
My guess is that by 2028, mTLS-based workload identity for AI agents will be considered standard practice, the way TLS for web traffic is now considered non-negotiable rather than optional. The organizations that built their CA layer in 2025 or 2026 will have a two-year head start on everyone who decided to wait.
The infrastructure is not glamorous. But trust infrastructure never is. It's the thing you notice only when it isn't there.
Further reading and sources referenced in this article:
AI agent identity and next-gen enterprise authentication at RSAC 2026 — Biometric Update
AI Agent Overload: How to Solve the Workload Identity Crisis — Dark Reading
Machine Identity: mTLS + SPIFFE Zero Trust Guide — Petronella Cybersecurity
Tool recommendation for large org to manage certificate lifecycle — r/devops
Why You Can't Buy a Client Certificate from a Public CA Anymore — [cyphrs] Insights
[cyphrs] is building it
Accessible trust infrastructure for teams deploying mTLS and agent identity without a PKI team. Scan your current certificate posture for free and get your [cyphrs] Score.
Scan your infrastructure