Agentic AI
A major focus was Agentic AI. For some time, we have been using AI as an assistant for analysis: helping with research, polishing documents, expanding context, and writing code snippets. That is how tools like ChatGPT and Copilot became popular. Just as we started to understand the risk landscape around those use cases, Agentic AI emerged with a whole new set of risks. These systems not only analyze data but also take actions, often without human approval or supervision. Many of the risks associated with non-human identities apply here, but there is a growing discussion about the correct way to treat Agentic AI: is it an automated system, a member of the team, or something in between?
In many respects, Agentic AI could be treated as an automated “bot”: we must secure the training data, monitor the output for anomalies, and control access to other data and systems. On the other hand, there is a level of non-determinism that requires a more novel approach: ask the same question twice and you may not receive the same answer, so purely deterministic, rule-based controls are not enough on their own.
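To make that concrete, here is a minimal sketch of what a control for a non-deterministic agent can look like: rather than asserting an exact expected output (which will not reproduce), the agent's proposed actions are checked against a deterministic allow-list before execution. The `Action` shape and `POLICY` format are my own illustrative assumptions, not any particular vendor's API.

```python
# Minimal sketch: gating a non-deterministic agent with a deterministic
# policy check. `Action` and POLICY are illustrative, not a vendor API.

from dataclasses import dataclass

@dataclass
class Action:
    tool: str    # e.g., "read_file", "send_email", "delete_record"
    target: str  # the resource the action touches

# Deterministic allow-list: which tools may touch which targets.
POLICY = {
    "read_file": ["/data/reports/"],   # path prefix
    "send_email": ["@example.com"],    # recipient domain suffix
}

def is_allowed(action: Action) -> bool:
    """The agent's *output* may differ run to run, but the *actions*
    it proposes can still be validated deterministically."""
    patterns = POLICY.get(action.tool, [])
    return any(action.target.startswith(p) or action.target.endswith(p)
               for p in patterns)

def run_agent_step(proposed: Action) -> None:
    if is_allowed(proposed):
        print(f"executing {proposed.tool} on {proposed.target}")
        # ... dispatch to the real tool here ...
    else:
        # Block and surface for human review instead of executing.
        print(f"BLOCKED: {proposed.tool} on {proposed.target}")

# Two runs of the same prompt may propose different actions;
# both pass through the same gate.
run_agent_step(Action("read_file", "/data/reports/q1.csv"))  # allowed
run_agent_step(Action("delete_record", "customers/42"))      # blocked
```

The point is not this particular policy format; it is that the deterministic controls move from the agent's output to the actions it is allowed to take.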

The proliferation of AI agents has led to a surge in machine identities, surpassing human identities within organizations. This shift challenges existing identity and access management frameworks. David Bradbury from Okta talked about identity management challenges in this new paradigm.
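One way to think about adapting IAM to this surge is to treat each agent as a first-class machine identity with short-lived, narrowly scoped credentials rather than shared, long-lived API keys. Below is a minimal sketch of the idea; the names (`AgentToken`, `mint_token`) and token format are illustrative assumptions, not Okta's API or any specific product.

```python
# Sketch: minting a short-lived, scoped credential for an AI agent.
# AgentToken/mint_token are illustrative, not a real identity SDK.

import secrets
import time
from dataclasses import dataclass, field

@dataclass
class AgentToken:
    subject: str           # the agent's own identity, not a human's
    scopes: tuple          # least privilege: only what this task needs
    expires_at: float      # short TTL limits blast radius if leaked
    value: str = field(default_factory=lambda: secrets.token_urlsafe(32))

def mint_token(agent_id: str, scopes: tuple, ttl_seconds: int = 300) -> AgentToken:
    return AgentToken(subject=agent_id,
                      scopes=scopes,
                      expires_at=time.time() + ttl_seconds)

def authorize(token: AgentToken, required_scope: str) -> bool:
    """Every call re-checks expiry and scope: no standing access."""
    return time.time() < token.expires_at and required_scope in token.scopes

tok = mint_token("agent:invoice-reconciler", scopes=("erp:read",))
print(authorize(tok, "erp:read"))   # True while the token is fresh
print(authorize(tok, "erp:write"))  # False: that scope was never granted
```

The short TTL and explicit scopes mean a leaked agent credential has a small blast radius, which matters when agents outnumber the humans who would notice a compromise.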
Ease of deployment is definitely appealing to the business, but it can also be a concern. We had to deal with “shadow IT” when the cloud was gaining acceptance; now we have to deal with “rogue AI.” There were some interesting approaches to this old/new problem; it is an area to keep an eye on. Jason Clinton from Anthropic highlighted the rapid deployment of AI agents, noting that security teams are often not involved in these initiatives. He warned of a future where AI agents manage other AI agents, which will require a shift in management training for human employees.

A productivity increase is one of the major reasons cited for AI adoption. However, we must be cognizant of the maturity levels of different parts of the company. If Engineering is using a copilot to increase its productivity, can Security keep up with the demand, or will additional risk be introduced? There were quite a few interesting AI-based solutions to help Security organizations scale up their operations. While their maturity varied, this is another area to keep an eye on.
The Dark Side of AI
Unfortunately, AI increases not only our own productivity but also the adversaries’ capabilities. Phishing is a perfect example: the cost of targeted, highly sophisticated attacks using deepfakes is dropping rapidly. Attacks that fake someone’s voice and video, combined with very specific context, are becoming more common, and they are now used not only by nation-states (how robust is your background check?) but also by criminals. Compounding the problem, companies are exposing Agentic AI to the internet insecurely, allowing adversaries to abuse system resources to produce these attacks.
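As a minimal illustration of that exposure problem, the sketch below shows the two controls most often missing from internet-facing agent endpoints: an authentication check and a per-client rate limit. The key store and handler here are hypothetical stand-ins, not a specific framework.

```python
# Sketch: authentication plus per-client rate limiting in front of an
# agent endpoint. The key store and handler are hypothetical stand-ins.

import time
from collections import defaultdict, deque

API_KEYS = {"k-demo-not-a-real-key"}  # in practice: a secrets manager
MAX_CALLS = 10                        # per client, per window
WINDOW_SECONDS = 60

_recent: dict = defaultdict(deque)    # client_id -> recent call timestamps

def guard(client_id: str, api_key: str) -> bool:
    """Reject unauthenticated callers and throttle the rest, so an
    exposed agent cannot be farmed for attack content at scale."""
    if api_key not in API_KEYS:
        return False                  # unauthenticated
    now = time.time()
    window = _recent[client_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()              # drop calls outside the window
    if len(window) >= MAX_CALLS:
        return False                  # rate limit exceeded
    window.append(now)
    return True

def handle_agent_request(client_id: str, api_key: str, prompt: str) -> str:
    if not guard(client_id, api_key):
        return "403: denied"
    return f"agent response to: {prompt}"  # stand-in for the real agent
```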
All of this once again underscores the importance of Zero Trust. The new twist is, once again, productivity: attackers are getting the same boost we are, so the number and sophistication of these attacks will only grow, and we will have to use AI to combat them. Thankfully, some interesting solutions are emerging; this is another area to monitor.

Risk and Regulatory Landscape
Governance around AI is still very much a work in progress. A few larger organizations have developed more comprehensive frameworks; it remains to be seen how these will hold up in such a rapidly evolving field.
Governments and regulatory bodies are already looking at these risks. The balance is tricky: the rules must prevent harm without stifling innovation, and they must apply to a very broad set of use cases.
The EU AI Act looks like a good foundation for thinking about AI-related risks; when and how it will be enforced remains to be seen. Meanwhile, ISO/IEC 42001 follows the well-established “management system” route with its AI Management System (AIMS). If you are starting your AI journey, it is definitely something to consider, especially if you already have Security (ISO 27001) and Privacy (ISO 27701) management systems.

Considering how dynamic the AI landscape is, collaboration between regulators/standardization bodies and businesses/practitioners is of paramount importance. For example, the DoD has just announced the SWFT (Software Fast Track) initiative and issued an RFI. If you use AI solutions in your security organization, please consider responding to it.
Around the Conference
While the main events were at the beautiful Moscone Center, all of downtown San Francisco was abuzz. Many interesting events took place before the conference started, around the Moscone Center, and in the evenings. Trustero sponsored the "AI, Security + SciFi Bites, Insights & Cocktails" event, where we discussed not only AI technology in Security but also the impact AI will have on our organizations, how we interact with other organizations, how we structure our operations, and more. It was interesting to hear how non-technical, non-security organizations think and strategize about AI's impact. Surprisingly (or not), there were quite a few parallels between the security and movie industries.


In Summary
While the theme of the conference was “Many Voices, One Community,” it might as well have been “AI, AI Everywhere.” Many discussions, many presentations, many vendors talking about AI. There was some chaff, of course, but the good news is that there were quite a few legitimate AI-based solutions. Additionally, security organizations (and some companies) are starting to understand the AI risk landscape and act accordingly. This will remain a very dynamic (and volatile) field for some time.