Every enterprise buying AI tools asks about security. SOC 2 certification. Encryption at rest and in transit. Access controls. The checklist is familiar.
But the checklist misses something fundamental: AI changes the security model itself.
Permissioning Across Vendor Lines
When you use multiple AI vendors, each has its own permissioning model: user roles, access scopes, and audit logging are all implemented differently.
Ensuring that access controls are respected across vendor boundaries becomes a full-time job. Does the agent in Vendor B have access to data that only certain roles should see in Vendor A? How would you even know?
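The honest answer is that you would have to reconcile each vendor's role and scope configuration yourself. Here is a minimal sketch of that kind of check, using invented role mappings and a hypothetical export format; no real vendor exposes exactly this, which is part of the problem:

```python
# Hypothetical sketch: reconciling access scopes across two vendors.
# Assumes each vendor can export its role -> data-scope mapping; real
# products expose this very differently (if at all).

VENDOR_A_ROLE_SCOPES = {
    "partner": {"client_files", "billing_records"},
    "associate": {"client_files"},
    "staff": set(),
}

# Vendor B's agent runs under a service account, not a human role.
VENDOR_B_AGENT_SCOPES = {"client_files", "billing_records", "email_drafts"}

def restricted_scopes(role: str) -> set[str]:
    """Data that Vendor A withholds from the given role."""
    allowed = VENDOR_A_ROLE_SCOPES.get(role, set())
    everything = set().union(*VENDOR_A_ROLE_SCOPES.values())
    return everything - allowed

def cross_vendor_violations(role: str) -> set[str]:
    """Scopes the Vendor B agent can reach that this role shouldn't see in Vendor A."""
    return VENDOR_B_AGENT_SCOPES & restricted_scopes(role)

if __name__ == "__main__":
    # A staff user invoking the Vendor B agent effectively gains access to
    # client_files and billing_records -- a gap neither vendor flags on its own.
    print(cross_vendor_violations("staff"))
```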
An AmLaw 100 firm we work with spent weeks mapping this during a security audit. They discovered gaps they hadn't anticipated—data flowing between systems in ways that technically violated their own policies. Not maliciously, just architecturally.
The Subprocessor Blast Radius
Here's the question most security reviews don't ask deeply enough: Who are your vendor's subprocessors?
Your AI vendor may be SOC 2 certified. But what about their model provider? Their cloud infrastructure partner? Their logging and monitoring services? Their offshore support teams?
Each subprocessor is part of the trust chain. And each subprocessor has its own subprocessors. The blast radius extends far beyond the contract you signed.
When a top-tier professional services firm we work with mapped their AI vendor trust chains, they found data potentially flowing through 14 different subprocessors across three jurisdictions—none of which they had directly evaluated.
This isn't an edge case. It's the norm when you outsource AI capabilities.
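Mapping the chain is conceptually simple: treat each vendor's subprocessor disclosures as a graph and walk it. A rough sketch, with an invented vendor graph standing in for the real disclosures:

```python
# Hypothetical sketch: enumerating a vendor's full subprocessor chain.
# The entities and jurisdictions below are invented for illustration.

SUBPROCESSORS = {
    "ai_vendor":      [("model_provider", "US"), ("cloud_host", "US"), ("support_bpo", "PH")],
    "model_provider": [("gpu_cloud", "US"), ("eval_logging", "EU")],
    "cloud_host":     [("cdn", "EU")],
    "support_bpo":    [],
    "gpu_cloud":      [],
    "eval_logging":   [],
    "cdn":            [],
}

def blast_radius(root: str) -> set[tuple[str, str]]:
    """Walk the subprocessor graph and return every (entity, jurisdiction) reachable from root."""
    reached: set[tuple[str, str]] = set()
    visited = {root}
    stack = [root]
    while stack:
        current = stack.pop()
        for name, jurisdiction in SUBPROCESSORS.get(current, []):
            reached.add((name, jurisdiction))
            if name not in visited:
                visited.add(name)
                stack.append(name)
    return reached

if __name__ == "__main__":
    chain = blast_radius("ai_vendor")
    jurisdictions = {j for _, j in chain}
    # The contract you signed covers only the direct vendor;
    # everything downstream is inherited trust.
    print(f"{len(chain)} subprocessors across {len(jurisdictions)} jurisdictions")
```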
Onshoring collapses the blast radius. When AI runs in your environment, on your infrastructure, with models you've selected and audited, the trust boundary is your own. You're not inheriting the security posture of a chain of subprocessors you've never met.
Security for the Age of Agents
Here's what's genuinely new: agents aren't just tools. They're actors in your security model.
When an AI agent can read documents, query databases, draft communications, and execute workflows, it needs governance equivalent to—or stricter than—a human employee.
But most AI tools treat agents as black boxes. You don't know what data they accessed, what reasoning they applied, what actions they took. If an agent violates policy, how would you detect it? How would you audit it? How would you roll it back?
A Big Four firm we work with required this capability before deployment: the ability to verify what an agent did, why it did it, and undo it if necessary. They call it "keeping an eye on the toddler." Without that level of governance, they wouldn't put AI anywhere near client work.
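One way to picture that requirement is as an append-only ledger of agent actions, where every entry records what was touched, the rationale the agent logged, and a compensating step to reverse it. The shape below is a sketch with invented names, not a real product API:

```python
# Hypothetical sketch: verify what an agent did, why it did it, and undo it.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

@dataclass
class AgentAction:
    agent_id: str
    action: str                   # what the agent did
    data_touched: list[str]       # which records or documents it read or wrote
    rationale: str                # the reasoning the agent logged for this step
    undo: Callable[[], None]      # compensating action to roll the step back
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class ActionLedger:
    """Append-only record of agent activity, auditable and reversible per agent."""

    def __init__(self) -> None:
        self._entries: list[AgentAction] = []

    def record(self, action: AgentAction) -> None:
        self._entries.append(action)

    def audit(self, agent_id: str) -> list[AgentAction]:
        return [a for a in self._entries if a.agent_id == agent_id]

    def rollback(self, agent_id: str) -> None:
        # Undo the agent's actions in reverse order, like compensating transactions.
        for action in reversed(self.audit(agent_id)):
            action.undo()
```

For actions that can't be cleanly reversed, such as a communication already sent, the practical equivalent of the undo step is requiring approval before execution rather than rollback after it.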
The New Requirement
Data security has always mattered. For regulated industries—finance, healthcare, legal, defense—it's non-negotiable.
But the bar has moved. It's no longer enough to secure data from external threats. Enterprises must now govern how their own AI agents interact with that data—with the same rigor they apply to human employees.
This isn't achievable through procurement. It's achievable through ownership.