DISA finalizes standards for Non-Person Entity identity management. AI agents and automated systems will soon have distinct credentials—and accountability.
Here's a question that's been bouncing around DoD security circles: When an AI agent accesses a classified system, whose credential does it use?
Until now, the answer has been a workaround. Service accounts. Shared credentials. Human-sponsored access with someone's name attached to actions the human didn't take. It's a mess—and it's about to change.
DISA has finalized standards for Non-Person Entity (NPE) identity management. The implications are significant: AI agents, robotic process automation bots, and automated systems will soon have their own distinct identities in federal systems. Not borrowing a human's access. Not hiding behind a service account. Their own credentials, with their own accountability.
The concept isn't new, but the formalization is. A Non-Person Entity is any automated process, script, or system that requires authenticated access to resources: AI agents, robotic process automation bots, scheduled scripts, and other automated systems all fall under the definition.
Previously, these were handled through service accounts—generic credentials shared across systems, often with no clear ownership or audit trail. The problem? When something goes wrong, you can't trace accountability. When access needs to be revoked, you often break multiple systems.
The timing isn't coincidental. We're entering the era of agentic AI—systems that don't just respond to queries but take autonomous action. These agents need to authenticate to systems, retrieve data, and execute tasks on their own initiative, and that means they need identities that are genuinely their own.
Under the old model, an AI agent processing Navy financial data might use a service account that six other systems also use. If that agent is compromised, if it starts behaving unexpectedly, if it needs to be shut down—good luck isolating the impact.
NPE identity standards create a clean model: every agent has its own identity, its own access policies, its own audit trail, its own revocation pathway.
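To make that concrete, here's a minimal sketch (in Python) of what a per-agent identity record could look like. The field names and statuses are my own illustration, not DISA's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from uuid import uuid4


class NPEStatus(Enum):
    PROVISIONED = "provisioned"
    ACTIVE = "active"
    DECOMMISSIONED = "decommissioned"


@dataclass
class NPEIdentity:
    """One identity per agent: its own policies, audit trail, and revocation path."""
    npe_id: str = field(default_factory=lambda: str(uuid4()))
    display_name: str = ""
    sponsoring_org: str = ""                                   # who answers for this agent
    access_policies: list[str] = field(default_factory=list)   # policy IDs, not shared roles
    status: NPEStatus = NPEStatus.PROVISIONED
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    revoked_at: datetime | None = None

    def revoke(self) -> None:
        """Revocation touches exactly one entity; nothing else breaks."""
        self.status = NPEStatus.DECOMMISSIONED
        self.revoked_at = datetime.now(timezone.utc)


# Example: provisioning and later revoking a single agent identity
agent = NPEIdentity(
    display_name="navy-finance-reconciler",
    sponsoring_org="Example Program Office",
    access_policies=["financial-data-read", "report-write"],
)
agent.status = NPEStatus.ACTIVE
agent.revoke()
print(agent.npe_id, agent.status.value)
```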
DISA's NPE standards extend the existing Identity, Credential, and Access Management (ICAM) framework. Key elements include identity lifecycle management, attribute-based access control, continuous verification, and entity-level audit logging.
NPEs have birth, life, and death—just like human identities. They're provisioned with specific capabilities, monitored throughout their operational life, and decommissioned with full audit trails when no longer needed.
NPE access is governed by attributes, not just roles. An AI agent might carry attributes like its sponsoring organization, the classification level of data it's cleared to handle, the mission functions it's authorized to perform, and the time window in which it's expected to operate.
These attributes flow into access decisions dynamically, enabling fine-grained control that static roles can't provide.
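Here's a rough sketch of what an attribute-based check might look like in code. The attribute names (mission_area, clearance_level, valid_until) are illustrative placeholders, not terms from the standard:

```python
# Minimal attribute-based access check for one NPE.
from datetime import datetime, timezone

AGENT_ATTRIBUTES = {
    "npe_id": "npe-7f3a",
    "mission_area": "navy-financial-reporting",
    "clearance_level": 2,          # numeric for simple comparison
    "valid_until": datetime(2026, 1, 1, tzinfo=timezone.utc),
}

def authorize(agent: dict, resource_mission: str, resource_level: int) -> bool:
    """Grant access only if every attribute condition holds right now."""
    now = datetime.now(timezone.utc)
    return (
        agent["valid_until"] > now                      # credential still in its window
        and agent["mission_area"] == resource_mission   # scoped to this mission
        and agent["clearance_level"] >= resource_level  # cleared for this data
    )

print(authorize(AGENT_ATTRIBUTES, "navy-financial-reporting", 2))  # True (until 2026)
print(authorize(AGENT_ATTRIBUTES, "personnel-records", 1))         # False: wrong mission
```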
NPE credentials aren't "set and forget." The standards require continuous verification—checking that the entity is still authorized, still behaving within expected parameters, still needed.
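In practice that means re-checking the entity on a schedule, not just at credential issuance. A sketch of one verification pass, with stub functions standing in for the real identity store, policy engine, and monitoring integrations:

```python
# Placeholder checks -- each would call a real identity store, policy engine,
# and monitoring pipeline in an actual deployment.
def still_authorized(npe_id: str) -> bool:
    return True   # e.g. identity not revoked, sponsor still valid

def within_behavioral_bounds(npe_id: str) -> bool:
    return True   # e.g. recent activity matches the agent's expected profile

def still_needed(npe_id: str) -> bool:
    return True   # e.g. a live workflow still depends on this agent

def suspend(npe_id: str) -> None:
    print(f"suspending {npe_id}: failed continuous verification")

def verify_once(npe_id: str) -> bool:
    """One verification pass; fail closed if any condition no longer holds."""
    ok = still_authorized(npe_id) and within_behavioral_bounds(npe_id) and still_needed(npe_id)
    if not ok:
        suspend(npe_id)
    return ok

print(verify_once("npe-7f3a"))  # run on a schedule in a real deployment
```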
Every action an NPE takes is logged against its identity. Not a shared service account—the specific entity. This enables forensics, audit, and compliance verification at a granularity we've never had before.
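The difference shows up directly in the log schema: the actor field names the specific entity rather than a shared account. A sketch with illustrative field names:

```python
import json
from datetime import datetime, timezone

def audit_event(npe_id: str, action: str, resource: str, outcome: str) -> str:
    """Emit one audit record attributed to a specific NPE, not a shared account."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor_type": "non-person-entity",
        "actor_id": npe_id,            # the specific agent, traceable end to end
        "action": action,
        "resource": resource,
        "outcome": outcome,
    }
    return json.dumps(record)

print(audit_event("npe-7f3a", "read", "navy-financial-ledger/2025-Q3", "allowed"))
```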
If you're building systems that include AI agents or automation for federal clients, here's what's changing: how you architect for identity, how you manage credentials at scale, how you log agent actions, and how you baseline agent behavior.
Retrofitting NPE identity into existing architectures is painful. New systems should be designed with distinct agent identities as a core architectural pattern.
A complex system might have dozens or hundreds of NPEs. Managing their credentials—issuance, rotation, revocation—requires automation. Manual processes won't scale.
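Here's one way that automation might look as a scheduled rotation job. The credential inventory and the issue/revoke helpers are placeholders for whatever credential service or secrets manager you actually run:

```python
from datetime import datetime, timedelta, timezone

ROTATION_WINDOW = timedelta(days=30)

# In-memory stand-in for a credential inventory; a real system would query
# the credential service or secrets manager.
CREDENTIALS = [
    {"npe_id": "npe-7f3a", "issued_at": datetime(2025, 1, 10, tzinfo=timezone.utc)},
    {"npe_id": "npe-9c21", "issued_at": datetime.now(timezone.utc)},
]

def issue_new_credential(npe_id: str) -> None:
    print(f"issued fresh credential for {npe_id}")   # placeholder for real issuance

def revoke_old_credential(npe_id: str) -> None:
    print(f"revoked stale credential for {npe_id}")  # placeholder for real revocation

def rotate_stale_credentials() -> None:
    """Scheduled job: re-issue anything older than the rotation window."""
    now = datetime.now(timezone.utc)
    for cred in CREDENTIALS:
        if now - cred["issued_at"] > ROTATION_WINDOW:
            issue_new_credential(cred["npe_id"])
            revoke_old_credential(cred["npe_id"])
            cred["issued_at"] = now

rotate_stale_credentials()
```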
Auditors will want to know what each agent did, when, and why. Your logging infrastructure needs to capture actions at the NPE level, not just the service account level.
With continuous verification comes the need to define "normal" behavior for each NPE. Anomaly detection becomes part of the identity management story—if an agent starts behaving outside its baseline, that's a security event.
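As a deliberately simple illustration, a baseline check might compare an agent's recent request rate against its historical norm and flag large deviations. Real deployments would use richer features and a proper anomaly detection pipeline:

```python
from statistics import mean, stdev

def is_anomalous(recent_per_hour: float, history: list[float], threshold_sigmas: float = 3.0) -> bool:
    """Flag activity that falls far outside the agent's historical baseline."""
    baseline = mean(history)
    spread = stdev(history) or 1.0   # avoid divide-by-zero on a flat history
    return abs(recent_per_hour - baseline) > threshold_sigmas * spread

history = [110, 95, 102, 98, 105, 100]   # hypothetical hourly request counts
print(is_anomalous(101, history))        # False: within normal range
print(is_anomalous(2400, history))       # True: treat as a security event
```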
NPE identity is a natural extension of Zero Trust principles. In a Zero Trust architecture, every access request is verified regardless of source. But verification requires identity—and "service account" isn't a meaningful identity.
With NPE standards, the Zero Trust model extends cleanly to automated systems: every agent authenticates with its own credential, every request is evaluated against that agent's attributes and policies, and every action is logged to that specific identity.
This creates a consistent security posture across human and non-human actors. No more carve-outs for "it's just automation."
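In code terms, the policy decision point stops special-casing automation: human and non-person principals travel the same verification path. A sketch with hypothetical types and policies:

```python
from dataclasses import dataclass

@dataclass
class Principal:
    """Human user or NPE -- both carry a real identity, never a shared account."""
    principal_id: str
    kind: str                 # "person" or "non-person-entity"
    attributes: dict

# Hypothetical policy: (resource, action) -> entitlements required
RESOURCE_POLICY = {("navy-financial-ledger", "read"): {"financial-data-read"}}

def verify_request(principal: Principal, resource: str, action: str) -> bool:
    """Every request is verified the same way, regardless of who (or what) made it."""
    if not principal.principal_id:
        return False          # anonymous or shared credentials never pass
    required = RESOURCE_POLICY.get((resource, action), set())
    return required.issubset(principal.attributes.get("entitlements", set()))

analyst = Principal("jane.doe", "person", {"entitlements": {"financial-data-read"}})
agent = Principal("npe-7f3a", "non-person-entity", {"entitlements": {"financial-data-read"}})
print(verify_request(analyst, "navy-financial-ledger", "read"))  # True
print(verify_request(agent, "navy-financial-ledger", "read"))    # True: same path, same scrutiny
```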
The standards are finalized, but implementation will take time. Here's the trajectory I'm watching:
Near-term (2025): Pilot programs in high-security environments. Agencies with advanced automation needs will be early adopters.
Mid-term (2026): Integration into major platform certifications. FedRAMP and CMMC will likely incorporate NPE identity requirements for systems that include automated components.
Long-term (2027+): Default requirement. Any system with AI agents or significant automation will need NPE identity management to achieve authorization.
We're at an inflection point in how we think about machine identity. For decades, automation was a sidecar to human activity—scripts and bots that extended human capabilities but operated under human credentials.
Agentic AI changes that relationship. These systems don't just extend human capabilities—they act autonomously, make decisions, take actions with real-world consequences. They need their own accountability framework.
NPE identity standards are the foundation for that framework. It's not just about security compliance—it's about building trustworthy autonomous systems that can operate in high-stakes environments.
When AI agents get their own credentials, everything becomes more traceable—and more manageable. The days of "well, it's using a service account" as an excuse for fuzzy accountability are ending.
For those of us building agentic systems in defense environments, this is welcome structure. Clear identity means clear accountability means auditable systems means trustworthy AI.
The CAC for AI agents isn't a metaphor. It's the future architecture.