Gal Helemski

April 7, 2025

 

SANS has just released its Critical AI Security Guidelines, highlighting the growing urgency to align AI development with comprehensive, enterprise-grade security practices. As AI continues to permeate business systems, the focus must shift from experimentation to secure and scalable deployments.

 

In this blog, we delve into the critical category of Access Controls, as outlined in the SANS report, and explore how policy-based access—implemented outside the model itself—is essential for real-world AI security. We’ll also discuss how PlainID helps enterprises build security-first AI infrastructures with centralized, fine-grained access control that is decoupled from the model architecture.

AI Security Essentials

 

The SANS Critical AI Security Guidelines identify six key categories of concern for organizations deploying AI technologies:

 

  1. Access Controls 
  2. Data Protection 
  3. Deployment Strategies 
  4. Interface Security 
  5. Monitoring 
  6. Governance, Risk and Compliance (GRC) 

 

While each category plays a vital role, access controls are foundational—they dictate who or what can interact with sensitive data, services, and ultimately, AI capabilities.

Why AI Access Control Needs a Rethink

 

AI systems are no longer static models living in isolated environments. They are becoming dynamic, context-aware, and increasingly interconnected—powered by protocols like MCP (Model Context Protocol), which give agents the ability to access vast ecosystems of tools and data.

 

This interconnectedness, while powerful, also introduces new attack surfaces. Without proper access governance, AI agents can:

 

  • Access sensitive data they shouldn’t 
  • Trigger unauthorized actions across internal systems 
  • Inadvertently violate compliance frameworks 

 

SANS emphasizes that access control must extend beyond the model, protecting interfaces, context windows, tool integrations, and APIs that the model interacts with.
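
To make this tangible, here is a minimal sketch of one such enforcement point outside the model: filtering retrieved documents through an authorization check before they ever reach the model's context window. The data model, role names, and decision logic are illustrative assumptions for this example, not a specific product API.

```python
# Illustrative sketch: authorize retrieved documents before they are added to
# the model's context window. is_permitted() is a stand-in for an external
# policy decision; the roles and attributes here are assumptions.

from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    domain: str        # e.g. "hr", "finance"
    sensitivity: str   # e.g. "public", "internal", "restricted"
    text: str

def is_permitted(user_role: str, doc: Document) -> bool:
    """Placeholder policy: only the matching domain manager may see restricted data."""
    if doc.sensitivity == "restricted":
        return user_role == f"{doc.domain}_manager"
    return True

def build_context(user_role: str, retrieved: list[Document]) -> str:
    """Keep only the documents the calling user is authorized to expose to the model."""
    allowed = [d for d in retrieved if is_permitted(user_role, d)]
    return "\n\n".join(d.text for d in allowed)
```

The same pattern applies to any content source (plugins, retrieval pipelines, or tool outputs), so the model only ever sees data the calling user is entitled to.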

Implementing Policy-Based Access Outside the Model

 

Relying on static or coarse-grained permissions isn’t enough. As the AI landscape becomes modular and API-driven, externalizing access control through a centralized policy engine is the only scalable approach.

 

This is where PlainID comes in.

 

PlainID enables organizations to manage authorization as a service, enforcing access decisions in real time across every touchpoint the AI system interacts with—not just inside the model, but across the full lifecycle.

 

With PlainID, organizations can:

 

  • Enforce Fine-Grained Access: Define dynamic policies based on user roles, resource attributes, context, and risk level. 
  • Control AI-to-Tool Interactions: Ensure AI agents only invoke permitted APIs or tools, based on business policy—not static code. 
  • Protect Sensitive Context: Govern what data the model can access via its context window or plugins. 
  • Audit and Monitor Access: Maintain a full record of AI access behavior for compliance, risk analysis, and policy optimization. 

 

This approach aligns directly with the SANS guidance to establish layered and adaptive access controls that work regardless of where the model resides—cloud, on-prem, or embedded in an application.
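
To illustrate what real-time, externalized enforcement can look like in practice, here is a minimal sketch that gates every agent tool invocation behind a call to an external policy decision point (PDP). The endpoint URL, request payload, and response shape are hypothetical placeholders for this sketch, not PlainID's actual API.

```python
# Illustrative sketch: gate every agent tool invocation behind an external
# policy decision point (PDP). The endpoint and payload shape are assumptions.

import requests

PDP_URL = "https://authz.example.com/decision"  # hypothetical PDP endpoint

# Registry mapping tool names to callables, e.g. {"hr_api": call_hr_api}
TOOL_REGISTRY: dict = {}

def is_authorized(user_id: str, tool: str, action: str, context: dict) -> bool:
    """Ask the external PDP for a real-time allow/deny decision."""
    response = requests.post(
        PDP_URL,
        json={"subject": user_id, "resource": tool, "action": action, "context": context},
        timeout=2,
    )
    response.raise_for_status()
    return response.json().get("decision") == "allow"

def invoke_tool(user_id: str, tool: str, action: str, params: dict, context: dict):
    """Enforcement point: the agent never reaches a tool the policy does not permit."""
    if not is_authorized(user_id, tool, action, context):
        raise PermissionError(f"Policy denied '{action}' on '{tool}' for user '{user_id}'")
    return TOOL_REGISTRY[tool](action, **params)
```

Because the decision lives in the policy engine rather than in the agent's code, security teams can tighten or relax these rules centrally, without redeploying the AI application.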

Practical Example: Controlling AI Plugin Usage

 

Consider an enterprise AI assistant with access to HR and Finance tools via APIs. With MCP or similar capabilities, it can seamlessly pull or act on information across these domains.

 

Without external access policies, there’s nothing stopping the AI from querying payroll data during an HR task—or worse, acting on behalf of a user without the right privileges.

 

With PlainID in place:

 

  • Policies prevent cross-domain access unless explicitly allowed 
  • Context-aware rules restrict what data is pulled based on the user, task, and risk posture 
  • Real-time enforcement ensures the AI agent operates within its intended guardrails 

 

This ensures business continuity and security while retaining the flexibility and power of modern AI integrations.
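
As a simplified illustration of the rules described above, the sketch below evaluates a small set of declarative, default-deny policies before the assistant touches HR or Finance data. The policy attributes, role names, and risk levels are invented for this example; in practice, such policies would be authored and evaluated centrally in the policy engine.

```python
# Illustrative sketch: deny cross-domain access unless a policy explicitly
# allows it, and factor in task context and risk posture. All attribute names
# and thresholds are assumptions made for this example.

POLICIES = [
    # HR specialists may read HR data during HR tasks at low or medium risk.
    {"role": "hr_specialist", "domain": "hr", "action": "read",
     "task_domain": "hr", "max_risk": "medium"},
    # Finance managers may read payroll data, but only during finance tasks.
    {"role": "finance_manager", "domain": "finance", "action": "read",
     "task_domain": "finance", "max_risk": "medium"},
]

RISK_ORDER = {"low": 0, "medium": 1, "high": 2}

def decide(role: str, domain: str, action: str, task_domain: str, risk: str) -> str:
    """Default-deny evaluation: allow only if some policy explicitly matches."""
    for p in POLICIES:
        if (p["role"] == role and p["domain"] == domain and p["action"] == action
                and p["task_domain"] == task_domain
                and RISK_ORDER[risk] <= RISK_ORDER[p["max_risk"]]):
            return "allow"
    return "deny"

# The assistant handling an HR task cannot read payroll (Finance) data:
assert decide("hr_specialist", "finance", "read", "hr", "low") == "deny"
# The same user, same task, same risk, HR data: allowed.
assert decide("hr_specialist", "hr", "read", "hr", "low") == "allow"
```

Note the default-deny stance: cross-domain access is blocked unless a policy explicitly allows it, which mirrors the guardrail behavior described above.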

The Path Forward: Secure AI Starts with Smart Access

 

The SANS Critical AI Security Guidelines make it clear—access control isn’t optional. It’s a core pillar of any responsible AI strategy. But securing AI requires going beyond legacy methods and static permissions.

 

By decoupling access control from models and applications, and using a centralized policy engine like PlainID, organizations can:

 

  • Reduce the risk of data exposure 
  • Prevent unauthorized actions 
  • Maintain regulatory compliance 
  • Build AI systems that are both powerful and trustworthy

Ready to Future-Proof Your AI Strategy?

 

Don’t wait until AI security becomes a liability. Start with access control—done right.

 

Explore how PlainID can help your organization secure AI at scale with externalized, policy-based access control.
