Abstract

Agentic AI systems increasingly move beyond text generation to goal-directed operation: they plan, use tools, manage state, and coordinate across multiple agents. As autonomy increases, security risk shifts from incorrect outputs of a single agent to unsafe actions executed through connected multi-agent systems and workflows. Despite rapid growth in agentic AI security research, existing studies catalogue threats, defenses, and architectures, but no widely adopted, capability-based framework specifies which security characteristics are required at each stage of agentic autonomy. This study aims to develop a level-aware Agentic AI Security Taxonomy that maps Agentic Level to Attack Surface, Threat Pattern, and Guardrails (security characteristics). Using an evidence-based approach grounded in the agentic AI security literature, the study will (1) operationalize agentic levels as a capability axis, (2) evaluate attack surfaces, and (3) synthesize guardrails for each level of autonomy.

Author: Samuel Akinnusi
