Programmable 2026 Presentation
Four Pillars of Agentic AI Security
As Large Language Models evolve into autonomous agents capable of executing complex workflows, the attack surface expands dramatically. It is no longer enough to guard against prompt injection; we must now secure the actions the AI takes. This talk introduces a comprehensive framework for securing Agentic AI, moving beyond basic guardrails to architectural resilience. We will dissect the four critical pillars of this new security paradigm: implementing robust User Authentication for non-human entities, managing permissions when Acting on Behalf of Users, integrating Human-in-the-Loop approval flows for high-stakes actions, and enforcing Finely Scoped Retrieval-Augmented Generation (RAG) to prevent data leakage. Attendees will leave with a blueprint for building agents that are not only intelligent but inherently trustworthy.
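To give a flavour of the pillars in practice, here is a minimal sketch of two of them: a human-in-the-loop gate for high-stakes actions, and user-scoped retrieval so the agent only sees documents the requesting user may read. All names (`AgentAction`, `execute`, `ScopedRetriever`, `approve_via_console`) are hypothetical illustrations, not the framework presented in the talk.

```python
# Hypothetical sketch of two pillars: human-in-the-loop approval and scoped RAG.
# These classes and functions are illustrative only, not part of any specific framework.
from dataclasses import dataclass, field


@dataclass
class AgentAction:
    """An action the agent wants to perform on behalf of a user."""
    name: str
    params: dict
    high_stakes: bool = False  # e.g. sending email, moving money, deleting data


def approve_via_console(action: AgentAction) -> bool:
    """Stand-in human-in-the-loop gate; a real system would route to a reviewer."""
    answer = input(f"Approve '{action.name}' with {action.params}? [y/N] ")
    return answer.strip().lower() == "y"


def execute(action: AgentAction, user_id: str, approver=approve_via_console) -> str:
    """Run an action for a user, requiring explicit approval for high-stakes ones."""
    if action.high_stakes and not approver(action):
        return f"action '{action.name}' blocked: no human approval"
    # The agent acts on behalf of user_id here, never with its own super-user identity.
    return f"action '{action.name}' executed for {user_id}"


@dataclass
class ScopedRetriever:
    """Finely scoped RAG: only return documents the requesting user is allowed to read."""
    documents: list = field(default_factory=list)  # items: (doc_id, allowed_users, text)

    def retrieve(self, query: str, user_id: str) -> list[str]:
        return [
            text for doc_id, allowed_users, text in self.documents
            if user_id in allowed_users and query.lower() in text.lower()
        ]


if __name__ == "__main__":
    retriever = ScopedRetriever(documents=[
        ("doc-1", {"alice"}, "Q3 revenue projections"),
        ("doc-2", {"alice", "bob"}, "Public roadmap for Q3"),
    ])
    print(retriever.retrieve("q3", user_id="bob"))  # bob never sees doc-1
    # Auto-denying approver so the demo runs non-interactively.
    print(execute(AgentAction("send_email", {"to": "cfo@example.com"}, high_stakes=True),
                  user_id="alice", approver=lambda action: False))
```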