Agentic AI and the Future of Threat Detection with Elastic Security - Six Five On The Road

Mike Nichols, General Manager, Security at Elastic, joins host Mitch Ashley to discuss how agentic AI and the Elasticsearch platform are driving faster, more intelligent threat detection and response for today’s security teams.

How is agentic AI fundamentally altering security operations to address sophisticated cyber threats?

From AWS re:Invent 2025, host Mitch Ashley is joined by Elastic's Mike Nichols, General Manager, Security, for a conversation on how Elastic Security is harnessing agentic AI and advanced analytics to transform modern cyber defense. They explore how security analysts are leveraging agent-driven automation, AI-native analytics, and cutting-edge threat detection, powered by the Elasticsearch platform, to respond proactively to AI-driven cyberattacks and streamline security workflows.

Key Takeaways Include:

🔹The Role of Agentic AI: How agents are reshaping modern security operations by automating detection and response workflows.

🔹AI-native Security: Insights into Elastic Security's AI-native capabilities and how they empower analysts to identify and mitigate advanced threats.

🔹Key Considerations: Architectural and safety considerations when integrating agentic AI into enterprise environments.

🔹Adding AI to Existing SecOps: Steps and mindset required for organizations to introduce AI into existing security tools and workflows.

🔹Elastic’s Competitive Edge: Differentiators between Elastic Security’s approach to agentic AI and those of traditional or AI-only competitors.

Learn more at Elastic.

Watch the full video at sixfivemedia.com, and be sure to subscribe to our YouTube channel, so you never miss an episode.

Disclaimer: Six Five On The Road is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Transcript

Mitch Ashley:

Welcome to Six Five On The Road. We're here at AWS re:Invent in Las Vegas, Nevada. It's quite a show, a pretty intense time. It's nice to take a break and just have a good conversation with Mike Nichols, General Manager, Security at Elastic. Great to be talking with you, Mike.

Mike Nichols:

Yes, sir. Good to talk to you as well and actually sit on a nice couch and not be in the 60,000 people swarm downstairs.

Mitch Ashley:

Yeah, the stacked chairs. This is a much better place. We'll stay here as long as we can. So I'd love to get your feedback on the impact of AI. Of course, that's the big topic. Security, though, is always at the top of the list as well. How is AI impacting how we're thinking about security?

Mike Nichols:

Yeah, how much time do we have? As much as you want to take. There's a few different pieces I'd like to talk about. In fact, we were just in New York a couple weeks ago at a public sector cybersecurity summit talking about the adversarial impact, because I think one of the challenges we hear a lot about, and it's getting better now, is that there has been an aversion, a risk challenge, to adopting AI on the defensive side. But the problem is, I think every month over the past three or four months has been a record-breaking Patch Tuesday. We've seen a zero-day exploit increase that we've never seen before. The adversaries have already grabbed on to AI, and they're using it to more quickly develop and instrument techniques. In fact, in my background, in the early days, I was in U.S. Army intelligence. I did some work in different government SOCs and ultimately ended up in the vendor space. In those worlds, we used to see that adversaries would not burn these very expensive and hard-to-develop advanced adversarial techniques, right? So a lot of times you didn't have to worry too much unless you were a specific target; you had more of the commodity challenges, or, after an attack came out, you would see it permeate. Unfortunately, now we're seeing very quick zero-day-style burns of what would previously have been very expensive techniques to develop, because of AI. So the first thing we see is that that's rapidly increasing, and that defenders have to embrace it. It's not about what I hear on the show floor downstairs: oh, we're going to replace the SOC with the autonomous SOC. I think that's ridiculous. What's happening now is that defenders who were already underwater before are even more underwater now, because the adversaries have just rapidly accelerated. So what AI does on the defender side is...
I wouldn't say level the playing field, but it's just causing a bit of ability for us to keep up with what's happening. So on the defensive side, AI is allowing us to add that sort of machine speed to the human instrumentation, the human thinking that's there.

Mitch Ashley:

The attackers certainly have a shorter interval, a shorter loop, to do what they're doing. Speaking of shorter loops, you've all been very aggressive in AI, thinking about how you incorporate it and be part of this move toward agentic AI, agents, and that capability. Talk about the Elastic strategy and what you've done.

Mike Nichols:

Yeah, I think our strategy is very much, again, looking at the world and seeing that on the consumer side, AI is being adopted into everything. The next generation of defenders are so used to things like touchscreens and the way web applications and SaaS work today; similarly, they're used to AI embedded in everything now. So the first piece is just being pragmatic: the defenders of tomorrow are going to expect this to be part of their applications. But secondly, as I said earlier, through my 25-plus years in this field, we've had the same problem. It's always been a skills gap problem, not because we don't have a lot of smart people, but because education alone is not enough to push the workforce we need into cybersecurity. And so we've been advocates of this. We ourselves offer free training. We do a lot to embrace this. I've done work, for example, as a professor in Georgetown University's continuing education to help bring this to market, but that's a small impact, right? The reality is we have to bring cybersecurity practices to the masses. We have to make it part of the core doctrine. And I think what AI finally allows us to do is solve that. We've had machine learning, and we've had playbooks and things we thought might help, but those still didn't quite close the gap. What we saw with AI when we first started to implement it was that it really brought the bespoke expertise of your environment to other people so it could actually elevate them. You didn't have to have a tier three next to you every hour. You could have the tier three's knowledge shared with the rest of the team and start to bring them, again, not quite level, but at least up to where the adversaries are. So, yeah, we've been focused heavily on AI as a development platform.
So we are the most deployed vector database in the world. We are creating technologies to build AI applications. In fact, we recently released something called Agent Builder, which allows you to develop your own agentic workflows very simply. On the security side, my job is then to take that and make out-of-the-box implementations. For example, one of the concepts we talk about is that you shouldn't worry about reducing false positives anymore. The challenge with that approach is that as you bring false positives down, you open up the detection gap, and the adversaries will just hide where you no longer look. So think of what we used to call alerts as just significant events or interesting events: you take your billions of events down to maybe a few hundred thousand, and then apply another layer of technology to surface what matters. For us, LLMs power an agentic workflow called Attack Discovery that will work through those and actually surface relevant information out of that noise. You can still go look at the rest if you want to, but what that allows you to do is have a broader aperture, cast a wider net, as they say, but still not give your team 100,000 problems; they get six. They go look at what really matters. That same philosophy applies to many other steps of the process we can talk through, but that's really the idea: how do we bring machine speed to what a human already does, when doing it at that scale would otherwise take a team of hundreds of people or never get solved.
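The "wider aperture" funnel Nichols describes could be sketched roughly like this. This is an illustrative toy, not Elastic's implementation: the field names, the severity filter, and the grouping heuristic standing in for the LLM-driven Attack Discovery step are all assumptions.

```python
from collections import defaultdict

def significant_events(events, min_severity=3):
    """First layer: a deliberately loose filter, so adversaries can't
    hide in a detection gap created by aggressive false-positive tuning."""
    return [e for e in events if e["severity"] >= min_severity]

def attack_discovery(alerts, max_findings=6):
    """Second layer: a stand-in for the LLM step. Correlate alerts by
    entity and return only the top few groupings for human review;
    the real workflow would reason over the alerts with a model."""
    by_host = defaultdict(list)
    for a in alerts:
        by_host[a["host"]].append(a)
    ranked = sorted(by_host.items(),
                    key=lambda kv: sum(a["severity"] for a in kv[1]),
                    reverse=True)
    return [{"host": h, "alerts": len(g)} for h, g in ranked[:max_findings]]

events = [
    {"host": "web-01", "severity": 5}, {"host": "web-01", "severity": 4},
    {"host": "db-02", "severity": 3}, {"host": "dev-09", "severity": 1},
]
alerts = significant_events(events)      # broad net: three events survive
findings = attack_discovery(alerts)      # analysts see two findings, not three alerts
print(findings)
```

The point of the shape, as described above, is that the first layer stays broad and the second layer, not the analyst, absorbs the volume.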

Mitch Ashley:

That's one of the things that's unique about Elastic's strategy. I got some exposure to the RAG work that you did early on, and also to thinking of it as: let's get at the development aspect of this as well as the data and the winnowing down. Speaking of that, there's a lot of emphasis around security guardrails. I think for the first time, at least in my career, we always talked about shifting left, and it's actually happening with AI, putting guardrails into agents and into the development cycle. Talk about how that's changing how we think about security earlier on in creating software.

Mike Nichols:

Yeah, I think it goes back to what I was saying earlier about the risk, right? A challenge C-level executives have with learning AI is that they think you're going to go ask a model and anything might happen afterwards, right? You don't know where it's going to go. And I think what guardrails really do is allow us to ground that, to say, hey, these are the areas of applicability. A good example is in the Agent Builder we have today, where you can create these, and of course in the ones we produce out of the box. Agents have a concept of tools. Tools are just: what am I allowed to do? What the AI is then able to do is reason out the right steps, but within the boundaries of the tools you've offered it. So you're not saying to a model, go off and figure it out from the randomness that's out there. You're saying, okay, you have access to the data within this specific area of my environment. You have access to enrichments from things like VirusTotal or a WHOIS database, which are great for reducing noise. So you're getting the additional layers there, but you're not allowing it to go anywhere, right? Then you take those boundaries and apply additional layers, for example anonymization or redaction of potential PII, to reduce those risks. And bringing in the bespoke context, you mentioned RAG, retrieval-augmented generation, bringing the context of your environment into it so you have a more specific answer that's applicable to your environment, I think is extremely important. All those things allow us to hopefully show executives that you can bring this in with reduced risk to actually elevate and accelerate the teams.
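The tool-boundary and redaction pattern described above can be sketched in a few lines. This is a generic illustration of the concept, not Elastic's Agent Builder API: the tool names, the allowlist mechanism, and the email-only redaction rule are all hypothetical.

```python
import re

# Tools the agent has been explicitly granted; anything else is off-limits.
ALLOWED_TOOLS = {
    "search_security_logs": lambda q: f"logs matching {q!r}",
    "enrich_whois": lambda domain: f"whois record for {domain}",
}

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text):
    """Strip obvious PII (just emails here) before data leaves the boundary."""
    return EMAIL.sub("[REDACTED]", text)

def call_tool(name, arg):
    if name not in ALLOWED_TOOLS:      # hard boundary: no improvised actions
        raise PermissionError(f"tool {name!r} not granted to this agent")
    return redact(ALLOWED_TOOLS[name](arg))

print(call_tool("enrich_whois", "example.com"))
print(call_tool("search_security_logs", "login from alice@example.com"))
```

The model can reason freely about which granted tool to use next, but it cannot step outside the registry, which is the grounding property Nichols is pointing at.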

Mitch Ashley:

You know, with AI, the first thought is, oh, this changes everything. We have to do everything differently. We know that isn't quite how things evolve. So talk about the mindset that's required, too, in thinking about bringing AI into the security process, into the tools that we use. It isn't throwing out what we do today. It's actually building on to what we already do.

Mike Nichols:

What I like about the AI adventure, we'll call it, today is something my team and I always bring up when we talk to an executive. Maybe it's not great for a product vendor, but usually I'll say to them: you shouldn't be looking at a product first if your process is not fully developed, right? So the first thing we always try to do is help them build or enhance the process of their operations. What AI really does, as you said, is fit into that process and make it more seamless, fill gaps, accelerate, things like that. When it started, it started as this sidecar chatbot, which is interesting, but it took analysts out of their daily activities and their runbooks and made them pivot somewhere else. We want to reduce the pivot. What we're seeing now with agentic workflows, things like Attack Discovery, and I can tell you about some cool stuff we're doing later, like reverse engineering of malware and all kinds of other areas, is this: where your analyst works today, in their current workflows, how can you enhance that seamlessly behind the scenes with AI so they don't even have to worry about pivoting? They shouldn't even think about it. In fact, as I mentioned earlier, the workforce of tomorrow isn't going to think about AI; they're just going to assume everything has some kind of AI behind it. So yeah, it's doing your exact job, but with a helper behind the scenes of each process and step, without you having to pivot somewhere else and ask a chatbot.

Mitch Ashley:

It's interesting to describe it that way. Sometimes people talk about AI as a teammate. So working alongside co-worker, other times it's more behind the scenes. Do you see it more doing work for you, more behind the scenes for a security agent or an engineer or a SOC?

Mike Nichols:

I think that's the primary, or maybe the first, access point you'll have. The teammate aspect, and this is not to belittle the chat experience, is very useful as a "what do I do next?" When I grew up in this space, there was no remote work, and the SOCs had no windows. So you were sitting next to your buddy saying, wait a minute, I just need to brainstorm something. Have you seen this before? Or you had that serendipitous moment where you're talking about something and someone says, I saw that particular indicator a week ago; did you look at this previous case, for example? That's become very difficult to replicate in the remote world. We tried things like wikis and knowledge bases, but those require the analyst to stop and enter the information, which, when they're buried in alerts, they usually don't do. So I think that's where the teammate aspect does help, because behind the scenes, AI, if instrumented properly, is learning and has access to all this information. So when you do get stuck, or you do want to ask, have we seen this before, that virtual teammate can look through much more information at a much faster rate and actually surface it to you: yes, we saw this six months ago, or this was seen in another vertical somewhere else, if you have outside intelligence coming in, things like that. So I think it does have its role, but if that's the only way you implement it, I think that really diminishes how much AI can accelerate.

Mitch Ashley:

I'm really curious about your view on how technology services are changing externally, but also internally, because we're introducing agent control planes for managing agents and directing orchestration, things like that. How do your customers think about solving those challenges? How do they approach you and say, how can Elastic do something for us so that I don't have to figure out what we're going to do and how we're going to do it?

Mike Nichols:

Yeah, there's a few layers to it. I think the first is always: what model should I choose? It's the discussion of, should I use a hyperscaler cloud model, where I do have a specific document that explains what's happening but I'm maybe a bit blind to what's really going on, versus should I use a model locally that I run on my own hardware and can put a bunch of GPUs behind? So one of the first things we did was ensure that we could support both. We support really any model that follows the OpenAI-style APIs, whether it's a hyperscaler model or a local model. So the first thing you do is choose your own adventure. If you are a very secured environment, think of a federal-type environment where you need to be fully disconnected, you can bring in a bunch of GPUs and your own model and still operationalize it that way. If you do have cloud access, you can do it that way. So that's the first choice: pick what model suits you best, pick your risk profile. We hear lots of conversations now about sovereignty as well, so you can choose to avoid models based on your view of the sovereignty behind your environment. That's step one. Step two, as I mentioned, is, as you said earlier, guardrails or controls, redaction, anonymization, which ensure the data being communicated to these things isn't sending out potential PII. But the final one that we are big believers in is transparency. Take Agent Builder as a good example, and you see this now when you interact with something like Gemini or Bedrock: it shows you its thinking steps. We do that in every single aspect of what we do. Every step of reasoning is documented and audited. You can see what's happening. You can understand why. There's no opaqueness.
We don't hide what the different systems are doing. And then it gives you instrumentation through many different mechanisms to actually capture and save that data for later. One of the core questions I get now is: we were already drowning in data, and now we have to capture all this LLM information too. So it is creating that challenge, but it's a challenge we are continuing to solve as a core data company by instrumenting different technologies. We released one earlier in the year that reduced storage requirements by 50% with better compression algorithms. So we'll keep tackling that problem of data storage, saving things into object stores and other ways, while also ensuring we have the instrumentation and controls around the models as well.
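The "choose your own adventure" model selection Nichols describes mostly comes down to pointing one OpenAI-compatible client at different endpoints. A minimal sketch, where the profile names, URLs, and model IDs are illustrative assumptions rather than Elastic configuration:

```python
def model_endpoint(profile):
    """Map a risk/sovereignty profile to a chat-completions endpoint.

    Hypothetical mapping: a hyperscaler-hosted model, a sovereign model
    run on infrastructure you control, and a fully air-gapped local model.
    """
    endpoints = {
        "hyperscaler": ("https://bedrock.example/v1", "hosted-model"),
        "sovereign":   ("https://llm.internal.example/v1", "llama-3-70b"),
        "airgapped":   ("http://localhost:8000/v1", "mistral-7b"),
    }
    if profile not in endpoints:
        raise ValueError(f"unknown profile {profile!r}")
    base_url, model = endpoints[profile]
    # An OpenAI-style client would then be constructed along the lines of:
    #   client = OpenAI(base_url=base_url, api_key=key)
    #   client.chat.completions.create(model=model, messages=[...])
    return {"base_url": base_url, "model": model}

print(model_endpoint("airgapped"))
```

Because the wire protocol is the same in each case, swapping profiles changes only the endpoint and model name, not the application code above it.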

Mitch Ashley:

You mentioned transparency; there's a lot of talk about transparency being important to AI. And it seems like the experience of using AI transparently is what builds trust in AI. Talk about that process of adoption, what you see yourself and customers go through.

Mike Nichols:

Yeah. Well, to what I said earlier: you said behind the scenes, and I agree with you. That is true, but it's not behind the scenes without telling you what's happening. So the first thing is, we're not just reaching out to AI without you understanding it; even where AI is embedded behind the scenes, it is still made very clear that it's happening. And you can, of course, choose visibility into when AI is operating and when it is not. With our agentic workflows, we have both deterministic and agentic steps. There are some things, the golden path, as they say, that you'll always want to accomplish. Not everything needs an LLM; it's like bringing a hammer to a thumbtack. So you also have that decision process: hey, here's my golden path, here's what I always want to do, and then sometimes, if you're not sure yet, reason it out through a model. I think that's very important, and explaining that to the user. We also are big believers in citations, of course, linking deeply into everything that we access. So when you get an answer, it links to where it came from, whether a third-party resource you might have asked or, certainly, your data itself, so you can understand it. So again, all those things are very important in the journey. As for using it: we have a hosted service where we provide out-of-the-box capabilities to just turn on a model, and it just works. We see that as a great path to start adoption. Then, as I mentioned, from there you can very easily pivot to any model of your choice. So usually during an evaluation phase, they'll just use what we're hosting and say, okay, this works. I like what's there. But now I'm going to start digging into the risk profile.
And whether or not what we host works for them, they can then choose to say, actually, I'm going to use my tenant's version of Bedrock. Maybe I'm in a GovCloud and have a different version of it. Or maybe I'll use a model like Mistral or Llama 3 in my own environment. It gives them that kind of capacity. But the first piece is: show them the value before you get into the risk. Show them the value. They're like, wow, this really does work. I like this. And then from there, go to the risk profile of, okay, now what do you want to use it for?
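The deterministic-versus-agentic split described above can be sketched as a simple router: known cases follow a fixed runbook step, and only unrecognized cases fall through to model-driven reasoning. The runbook entries and the fallback message here are hypothetical examples, not Elastic workflow definitions.

```python
# "Golden path" cases with a known, deterministic response. Not every
# decision needs an LLM -- a hammer for a thumbtack, as the saying goes.
RUNBOOK = {
    "phishing_email": "quarantine message, reset credentials",
    "known_malware":  "isolate host, pull forensic triage package",
}

def route(alert_type):
    """Return (mode, action): deterministic runbook step if one exists,
    otherwise hand the case to an agentic workflow to reason out."""
    if alert_type in RUNBOOK:
        return ("deterministic", RUNBOOK[alert_type])
    return ("agentic", f"ask model to investigate {alert_type!r}")

print(route("known_malware"))
print(route("novel_lateral_movement"))
```

Keeping the golden path deterministic also keeps those steps cheap, auditable, and repeatable, with the model reserved for genuinely ambiguous cases.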

Mitch Ashley:

If you had to boil it down to one or two essential things, what helps Elastic break through the fog of war? And I'm asking you at re:Invent, the ultimate proving ground.

Mike Nichols:

Yeah. I think ultimately, for us, Elastic is an open source company. Even in security, we were one of the first ever to take our leading endpoint protection and make it fully included, just saying, look, edge protection should be commoditized; it's the data that matters. All of our detection rules and all of our models are freely in the open on GitHub. You can interact directly with our developers. We have community Slacks where I sit, my team sits, and our developers sit. So the first piece is that we remove all the barriers. I think enterprise sales has gotten a bit elitist. You have this account barrier, the team has to let you in, or if you don't have the budget, they won't even talk to you: you must be this tall to ride the ride. So we also believe very heavily in having a free version available for users. Not freeware or freemium, but a genuinely powerful technology suite, so that we can get protection, in an altruistic way, to everyone across the world. Because if areas that can't afford it are more secure, then there are fewer targets of opportunity that can be used against us. So I think that whole area, the ethos of Elastic, is critical. The second layer, what differentiates us from a typical security company, is that we did the hard problem first: data. You see lots of security companies now chasing into the data world, either via an acquisition of a logs company or trying to build it themselves. The challenge is that's a really hard problem, right? Elastic is 16 years young and has spent that time solving this very difficult data challenge. We get to ride on that benefit as a security company now: look, we have this solution, we have an amazing world-class team building it, and I get to build a security layer on top.
So when data increases exponentially, including LLM logs and of course our SaaS applications, and who knows what's next, the next big burst of information, you have a partner you can trust that can scale to it. We don't do predatory pricing like others. We don't make you choose to drop data because you can't afford to keep it. You get to see all the information, determine whether there's an insight in it or not, and then move on.

Mitch Ashley:

Great. Mike, appreciate it very much.

Mike Nichols:

Hey, it was a great conversation. Thank you so much as well.

Mitch Ashley:

Yeah, I loved it. Thanks for joining us here for Six Five On The Road at AWS re:Invent in Las Vegas. Mitch Ashley, on behalf of Futurum Group and Six Five. Take care. Talk to you soon.
