SANS@Night - From Servant to Surrogate: The Misenar 4A Model for Agentic Security



Organizations keep deploying AI "agents" without understanding what autonomy level they're getting or what governance it warrants. Chinese state-sponsored hackers used Claude Code to automate a cyberattack campaign across 30 organizations. Replit's AI coding agent deleted a production database, then tried to cover up its mistake. These aren't anomalies. They're predictable governance failures.


The Misenar 4A Model maps AI autonomy across four levels: Assistant, Adjuvant, Augmentor, and Agent. Each has specific capabilities, boundaries, and control expectations. The framework identifies a "DANGER CLOSE" threshold, the point where AI shifts from advisor to executor, and establishes readiness criteria for crossing it.
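
To make the level ordering concrete, here is a minimal sketch in Python of how a team might encode the four levels and gate deployments at the advisor-to-executor boundary. Only the level names and the idea of a "DANGER CLOSE" threshold come from the abstract; the enum ordering, the threshold placement, and the gate function are illustrative assumptions, not part of the Misenar 4A Model itself.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    # The four 4A levels, in the order the abstract lists them. Treating that
    # order as least-to-most autonomous is an assumption.
    ASSISTANT = 1
    ADJUVANT = 2
    AUGMENTOR = 3
    AGENT = 4

# Hypothetical placement of the "DANGER CLOSE" boundary, the point where the
# AI shifts from advisor to executor. Where it actually falls is the talk's
# subject, so this constant is illustrative only.
DANGER_CLOSE = AutonomyLevel.AUGMENTOR

def crosses_danger_close(level: AutonomyLevel) -> bool:
    # Flag deployments at or past the advisor-to-executor threshold so they
    # can be held to the framework's readiness criteria before going live.
    return level >= DANGER_CLOSE

# Example: an "Adjuvant" deployment stays on the advisor side of the line.
assert not crosses_danger_close(AutonomyLevel.ADJUVANT)
assert crosses_danger_close(AutonomyLevel.AGENT)
```

The point of the sketch is that the boundary becomes something a deployment review can test for, rather than a label taken from a vendor's marketing.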


The model includes vendor evaluation tools that cut through marketing, controls that scale with capabilities, and phased implementation strategies. Built from analyzing failures and deployments across industries, it shows that autonomy without appropriate governance creates predictable risks.


The 4A Model helps tackle the core question of agent security: How autonomous should your AI really be?

Relevant Government Agencies

Other Federal Agencies, Federal Government, State & Local Government




Event Type
Webcast


When
Mon, Dec 15, 2025, 7:15pm - 8:15pm ET


Cost
Complimentary ($0.00)




Organizer
SANS Institute
