Top 5 Takeaways from CSA’s Agentic AI Security Summit

By Robert Buccigrossi, TCG CTO

The recent Cloud Security Alliance (CSA) Agentic AI Security Summit on August 19th, 2025, offered a clear-eyed look at the security challenges we face with autonomous AI systems. Here are my top 5 takeaways.

1. Threat Actors are Already Weaponizing AI

Christopher Porter from Google provided an overview of how advanced persistent threats (APTs) are actively using AI today. The focus was on how current generative AI is used as a “force multiplier” to make existing attack techniques more efficient and effective.

  • The Challenge: Adversaries are leveraging LLMs to generate flawless, context-aware phishing emails that bypass traditional human skepticism. They’re using AI to automate operational tasks, allowing them to scale their efforts. Critically, generative AI is being used to create convincing deepfake audio to impersonate executives and authorize fraudulent transactions.
  • The Proof: This is happening now. In August 2025, the Russian APT group APT28 was identified using malware named “LameHug” that integrates with the Hugging Face API. The Black Basta ransomware group has been seen using ChatGPT to help write and debug their malicious code. Furthermore, a recent report documented a staggering 1,600% surge in deepfake-enabled vishing (voice phishing) in the first quarter of 2025, with one European company losing $25 million to an attack using a deepfake audio clone of their CFO.
  • Learn More:

2. IAM Wasn’t Built for AI Agents

The concept of identity for AI agents is evolving into a significant architectural challenge. Asaf Ahmad from Schneider Electric highlighted that traditional IAM systems, designed for human-centric workflows, are not equipped for the scale and dynamism of agentic AI. The core challenge is that these agents are not static tools; they are autonomous entities that can learn and modify their own permissions, creating a complex new landscape for identity management.

  • The Challenge: The rapid, often decentralized deployment of AI agents is leading to an explosion of non-human identities (NHIs). This creates a new form of “Shadow AI,” where it becomes difficult to track ownership, manage the identity lifecycle, and ensure accountability for agent actions.
  • The Proof: The scale of this is remarkable. Recent reports show that for every human employee, there are now, on average, over 90 non-human identities. A 2025 CyberArk report found that while 96% of enterprises recognize AI agents as a significant risk, less than half have mature governance controls in place. The GitGuardian OWASP Top 10 for Non-Human Identity Risks for 2025 details critical vulnerabilities like “Improper Offboarding” and “Secret Leakage,” which are amplified by the autonomy of AI agents.
  • Learn More:

3. Agentic AI Changes the Threat Landscape

Presenters from Wiz and Netskope detailed how agentic AI is fundamentally changing the nature of cloud security. Unlike traditional applications, agents are designed to “act, not just advise”. They autonomously execute tasks and move data across systems, often creating novel attack paths by crossing boundaries between cloud environments and applications in ways that require a new security posture.

  • The Challenge: This introduces new and dynamic risks. An agent built to automate a CI/CD pipeline, for example, might possess broad and persistent permissions. A compromise of that single agent could enable a sophisticated supply chain attack. As Raaz Herzberg from Wiz noted, this leads to “unpredictable entities with extreme access and permissions,” requiring us to rethink our security models.
  • The Proof: These scenarios are becoming reality. In early 2025, a supply chain attack on a popular GitHub Action exposed the secrets of over 23,000 repositories, a process that could be easily automated by a compromised agent. A recent Netskope report also highlighted the growth of on-premise agentic frameworks like LangChain, which create a less visible form of shadow AI. For those of us who follow standards, the latest NIST AI Risk Management Framework (AI RMF) provides essential guidance for navigating these new classes of risk.
  • Learn More:

4. Red Teaming Agentic AI Requires Planning

A joint session by the CSA and OWASP drove home the point that testing agentic AI requires a new, more sophisticated playbook. Because these systems have memory, leverage external tools, and execute multi-step plans, the attack surface is far more complex than that of a simple input/output model.

  • The Challenge: Red teams must now contend with novel attack vectors. These include Memory Poisoning, where an agent’s internal or external memory is manipulated to alter its behavior, and Control Hijacking, where identity vulnerabilities are exploited to gain unauthorized access. Simply treating the agent as a black box is insufficient; we must test its architecture and entire operational lifecycle.
  • The Proof: To address this, the CSA and OWASP have co-published an “Agentic AI Red Teaming Guide” that outlines 12 high-risk threat categories specific to these systems. The open-source community is also building new tools for this purpose, including Microsoft’s PyRIT and the Garak framework. A recent paper from Georgetown University’s Center for Security and Emerging Technology highlights the “measurement challenge” of testing these non-deterministic systems, confirming that this is an active and important area of research.
  • Learn More:

5. Zero Trust is Finally a Practical Necessity for AI

“Zero Trust” has been a popular concept for years, but with agentic AI, it becomes a concrete and necessary design principle. The core tenet of “never trust, always verify” is a perfect match for systems designed to operate with autonomy. Speakers from both the CSA and Netskope emphasized that applying least privilege and continuous, adaptive trust is one of the most effective strategies for securing these systems.

  • The Challenge: An AI agent is goal-oriented. Without proper guardrails, it will attempt to access any data or tool it can to complete its objective. A Zero Trust architecture provides the necessary constraints, ensuring an agent has only “just enough” access, for “just enough” time, to perform a specific task. This moves us toward a model of “Just-in-Time” (JIT) access for agents, a crucial concept mentioned by Asaf Ahmad.
  • The Proof: We are moving beyond theory to implementation. Accenture has published a detailed “AI Agent Zero Trust Model” that offers a clear, actionable roadmap for engineers. This approach is also being formalized by standards bodies. NIST Special Publication 800–207 (“Zero Trust Architecture”) serves as the foundational text, while the newer SP 1800–35 series provides practical implementation guides that are directly relevant to securing the infrastructure powering AI agents.
  • Learn More:

All-in-all, the conference drove home the need for awareness and vigilance in the security required for AI systems. In essence, expertise is required, even for “vibe coded” environments.