A Closer Look at AWS AgentCore - Six Five On the Road
David Richardson, VP at AWS, joins Jason Andersen to discuss how AgentCore is enabling enterprises to deploy secure, scalable AI agents, what's unique about AWS's approach, and what’s next for developers in the rapidly evolving AI landscape.
How is purpose-built infrastructure like AgentCore changing how enterprises deploy scalable, secure AI agents?
From AWS re:Invent 2025, host Jason Andersen is joined by Amazon Web Services' David Richardson, VP, AgentCore, for a conversation on how AWS is transforming the enterprise AI landscape with purpose-built infrastructure for AI agents through their AgentCore platform. They unpack AgentCore’s core components, its role within the AWS AI stack, and how it addresses the challenges of security, scalability, and integration for organizations moving beyond basic chatbots to sophisticated AI deployments.
Key Takeaways Include:
🔹AWS Motivation and Market Gaps: AWS built AgentCore in response to enterprise needs unmet by existing solutions, focusing on security, scalability, and deployment at scale for AI agents.
🔹Core Value of AgentCore: The platform offers secure runtime environments, built-in memory management, and broad integration, making it a foundational layer in the AWS AI ecosystem.
🔹Key Differentiators: AgentCore stands out for its comprehensive approach to security, seamless AWS integration, and ability to support sophisticated, production-ready AI agents.
🔹Market Feedback and Future Roadmap: Early user feedback highlights strong adoption, and AWS is actively exploring new features and developer resources based on customer needs.
🔹Getting Started: Developers can begin leveraging AgentCore via AWS, supported by documentation, SDKs, and integration guides.
Disclaimer: Six Five On the Road is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Jason Andersen:
Hi there, it's Jason Andersen. I'm VP and Principal Analyst for Application Development at Moor Insights and Strategy. And today is another episode of Six Five On the Road here at AWS re:Invent 2025. Today's topic is going to be a really great one. It's a relatively new product for AWS called AgentCore, and today we're going to do a bit of a deep dive with one of the AgentCore leaders, David Richardson. So thank you for joining me.
David Richardson:
Thanks for having me.
Jason Andersen:
If you wouldn't mind, give us a little bit on your background. Tell us a little bit about yourself before we dig in.
David Richardson:
Yeah, I'm David Richardson. I'm the Vice President for AgentCore at AWS, and I've been a longtime AWS team member. I joined back in 2006, a little bit before we launched, so I've been able to be on a long part of the journey. I launched CloudFront and Route 53, ran EBS for a while, and Serverless for about five years.
Jason Andersen:
Oh, great. I talked to the EBS team here last year doing one of the videos. So, AgentCore is very interesting in a lot of different ways, but the thing that stuck out to me was its origin story. What was the observation AWS was making when it said, you know, there's a market need here, maybe a gap, if you will? So what got that conversation going? And how did we get to where we are now?
David Richardson:
Yeah, well, the big thing to keep in mind is just how fast all this happened. Right before we started, I went and looked it up: MCP, the Model Context Protocol, was announced November 25th, 2024. So we're like a year and a week out. So really the origin story for Amazon Bedrock AgentCore started as we were building some of our own agents, many of which were launching at re:Invent this week. And we were talking with customers, and as sort of happens at AWS, we noticed some patterns, some things where, yeah, everybody has to do this. It's not really uniquely giving value to the agent; it's more a kind of cost of doing business, if you will. So one of our team members, Preeti, wrote a proposal back in April to bring this together. The team got started in April, we went to a preview in July, and we just went GA in October. So we're doing our best to move at the speed of AI with AgentCore.
Jason Andersen:
Yeah, yeah. When we get into AgentCore itself, what really is it? And what's the key value proposition? Those are the types of things I'd like to know a little bit more about.
David Richardson:
Yeah, I think of AgentCore as really having two halves. There's the lower half, the infrastructure half, and then there's the upper half: what is an agent? What is an AI-based application? And our hope is that the infrastructure part mostly disappears, in the sense that it just provides the kind of capabilities that any agent needs, with kind of one exception. So this is where, like many AWS products, we have a set of composable primitives you can use and mix and match in a lot of different ways, either all of them or just a few: Runtime, which is where the agent itself runs; Gateway, which is how you define the tools you would like an agent to use; and Identity, which is how you give an identity to the agent. They will all automatically scale, they'll do versioning, all the kinds of best practices of a modern production application. At first you might think, well, I can just run an agent on my laptop. But when you want to transition into production, you need all these sort of long-term maintenance capabilities, so we have that built in. And then we have other capabilities that run at the agent level itself, like memory, a browser, a code interpreter, things like that.
Jason Andersen:
And I think some of those higher-level features are also really geared towards helping save some money, right? Because it's reducing the context going in, or it's kind of offloading some of the tasks. Is that kind of a...
David Richardson:
Very much so.
You know, I think across the stack, whether on the infrastructure side (compute is still something you have to pay for, even if not as much as the tokens), we want all of that to be cost-effective. But then when you're building an agent, a thing people are learning is that the context window, the amount of tokens you're sending to the model, matters a lot. And those tokens can come from different things. They can come from the prompt you wrote that's driving the agent. They can come from the things you're trying to retain across a user's session; we call that memory. They can actually come from the tools you define and how you define them. Like two or three months ago, the common practice would have been to shove the entire documentation stack into the model. And now people are saying, whoa, maybe just a one-liner, and let the model go figure out if it needs things. So yeah, helping a customer make cost-effective use of their tokens is definitely part of the product.
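David's point about one-line tool definitions versus dumping full documentation into the prompt can be made concrete with a rough sketch. This is an illustration only: the helper names and the four-characters-per-token heuristic are my assumptions, not part of the AgentCore SDK.

```python
# Hypothetical sketch: trimming a tool's definition to one line to shrink
# the context window. Names and the token heuristic are illustrative only.

FULL_DOCS = (
    "issue_refund: Issues a refund to a customer account.\n"
    + "Detailed parameter semantics, edge cases, and examples...\n" * 50
)

def one_liner(docs: str) -> str:
    """Keep only the first line of a tool's documentation for the prompt."""
    return docs.strip().splitlines()[0]

def rough_tokens(text: str) -> int:
    """Crude estimate: roughly 1 token per 4 characters of English text."""
    return max(1, len(text) // 4)

full_cost = rough_tokens(FULL_DOCS)
trimmed_cost = rough_tokens(one_liner(FULL_DOCS))
print(f"full docs ~{full_cost} tokens, one-liner ~{trimmed_cost} tokens")
```

The agent can then fetch the full documentation on demand, which is the "let the model go figure out if it needs things" pattern described above.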
Jason Andersen:
It's also interesting that, as you said, it's Amazon Bedrock AgentCore. We've had Amazon Bedrock now for a couple of years on the model side of things. So how does AgentCore tie in with the rest of the Bedrock family, and how does that all interplay?
David Richardson:
That's right. Yeah, inside of AWS, we've been working on AI for many, many years through a variety of product surfaces. We have SageMaker, which helps people build models. With Bedrock, we host a wide variety of both commercial and open-source models, as well as our own models. And now with AgentCore, we're helping people build agents. So I think of these as layered products. Since I lead AgentCore, we'll focus on that as the center of the world; it's really there to help you take advantage of these AI capabilities in an agent that runs for a customer or runs to solve a business problem. As I mentioned, we've built it with a lot of flexibility. You can pick and choose which AgentCore components you use, but you can also pick and choose which model. So it's very easy to have a model that's hosted in Bedrock be the smarts of the agent, if you will. But you could also choose to use somebody else's model: you could use OpenAI, or you could use Gemini. You can use a model running anywhere; that's a key part of it. So Bedrock will be providing the model, AgentCore sort of the scaffolding for the agent itself. Then another thing we have in our toolkit is an agentic framework. Ours is called Strands, but we support customers using a lot of other frameworks, so LangChain or Google's Agent Development Kit. Again, you can pick and choose. These are all very focused on the developer or software professional persona, somebody who's either a software engineer or maybe a platform operator. We also have products in AWS focused more on the business persona, like Quick Suite, which might be aimed at a business analyst. So today they're all semi-separate products, and we're trying to do everything we can to make it easy to use them together. But generally that stack will be: you're running your model on Bedrock, you're building your agent using AgentCore, and you're maybe exposing it in a reusable way to a business professional through Quick Suite.
Jason Andersen:
You know, it's an interesting differentiation you did bring up, though, because you've got flexibility on the framework side, you mentioned flexibility also with AgentCore on the service side, and you don't have to use Bedrock as an underpinning, right? A lot of your classic layered products require the layer below to work. And then, of course, you have Bedrock, which is model-transferable. So do you think that's the biggest differentiator for AgentCore in the market, or are there other things that really set it apart? I think flexibility clearly stands out. But what else?
David Richardson:
Yeah, flexibility is definitely one of them. I think the completeness of the parts of the agent toolkit is the other. This is part of the fun of designing these products: how can we make them pluggable so you can pick and choose, but still feel like an integrated whole the more parts of it you use? So being able to have an incredibly secure, scalable compute runtime; I don't think anybody else has the level of security we do. It leverages the same micro-VM technology we built for Lambda and have open-sourced, called Firecracker. We also use that same incredible security boundary for our web browser tool and for our code interpreter. And then we've layered on top of that the ability to have session-level isolation, not just agent-level isolation. So if you and I both interacted with the same agent, our contexts would be completely segregated by the AgentCore Runtime. So I think about security, scalability, the ability to have Runtime, Gateway, Identity, and so on. I'd say that completeness. But I know that you have done some research, and your colleagues have as well. What have you found?
Jason Andersen:
I think the comprehensiveness really did stand out. I think that's a great point. So just to give the viewer a little education here: over the last few months, Moor Insights and Strategy, where I work, has been working with our sister company Signal65. We were starting to look at how AgentCore is a great augmentation to some of these other frameworks, even frameworks that aren't Strands, right? And we did some testing on it. What we found stood out was how the modularity really helped, because we built three different types of agents and benchmarked them with and without AgentCore. And we were able to mix and match, which was really the optimal solution. So instead of having to pick one monolithic thing to build off of, we were able to say, well, for this customer service agent we'll use these services, memory and runtime and things like that; then over here, maybe we use Gateway and something else for a research agent. And the results, actually, I don't know, have you heard the results? Because they're pretty impressive. With AgentCore, just a couple of tidbits: the end-to-end development process, so setting up the services and configuring everything, was over twice as fast. And the other thing that really stuck out to me, especially in the age of DevOps and now AIOps, was the overhead of developer time. If you looked at working with some of the off-the-shelf frameworks without the benefit of AgentCore, it was about 75% more work than with AgentCore, because AgentCore had everything set up and ready to go. So the developers were able to just tap into those services. And like you said, they're on demand, so they can get going much, much quicker and spend more time actually working on the innovation versus the configuration. So you're right. It was a really impressive study, and it's already out there on both the AWS site and Signal65's. It's a nice piece of work.
David Richardson:
Well, hopefully that is getting better as well, because I think you did that study before we released a bunch of work we've been doing explicitly focused on the getting-started time. We've built some new capabilities into the CLI, because we do think you should be focused on the agent itself rather than setting everything up.
Jason Andersen:
Yeah, I also think that the new preview capabilities kind of take it even in a different direction. I think you're absolutely right that the first set of services were focused on speed and time to value, but now we're starting to see some bits around control and accuracy with policy and evaluation. Can you chat about that a little bit?
David Richardson:
Yeah, I think that's one of the really exciting things. One of the challenges with building an agent is deciding how much you trust the LLM. I like Simon Willison's definition that an agent is an LLM in a loop using tools to solve a problem. And the models keep getting better, but they're not perfect. So a lot of times when people are deciding how to build an agent, or what is an appropriate agent to build, one of the quandaries is: am I trying to be super prescriptive to the model, telling it do this, then do this, then do this, sort of a workflow in a prompt, or even strung together with code and micro-bits of asking the model to do things? Or can you instead say to the model: this is the outcome I want, these are the constraints I want you to operate within, but I'm going to trust the model? People are fearful of doing that because models can hallucinate, and they're susceptible to prompt injection. But that is the approach that lets you take advantage of how models keep improving every month. That's our bet, actually. Right.
Jason Andersen:
And that's kind of a cornerstone of Strands as well.
David Richardson:
Yeah, that's right. And that's the beauty of choice: if you don't agree with that, there are lots of other great frameworks that work really well with AgentCore. But we know there's some risk with that approach, and so, of all the things we're announcing at re:Invent this week in AgentCore, policy is the one I'm most excited about, because I think they go so well together. Policy is an ability, outside of the LLM loop, in the Gateway, which is where you gain access to tools, for you to set constraints. The canonical example is a customer support agent. You may have written in your prompt: you are a customer support agent, you are allowed to issue a refund of up to $100, and anything more than that you should escalate to a human operator. But maybe the model hallucinated, maybe it fell victim to an attack, and it actually asks the refund tool for a $1,000 refund. Well, with policy, you can write a policy that would just block that call. The agent can still talk to the tool, but it can't ask the tool for things that are out of bounds. And what I like about that is it runs in a separate security boundary, if you will. It's not susceptible to prompt injection; it's not actually part of the LLM, it's part of the Gateway. So you have this nice layered abstraction: you've got IAM and VPC governing what you can talk to, then you've got a policy about what you can ask it to do, then you've got the model and your agent, and on top of that you've got Bedrock Guardrails that influence what you can say back to the end user.
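The refund example can be sketched conceptually. To be clear, this is not AWS's actual policy language or the Gateway API; the function names and policy shapes below are invented purely to illustrate why a check that runs outside the LLM loop is immune to hallucination and prompt injection:

```python
# Conceptual sketch (not the AgentCore API): a gateway-side policy check
# that runs outside the LLM loop. Even if the model hallucinates or is
# prompt-injected into requesting a $1,000 refund, the call is blocked
# before it ever reaches the tool.

POLICIES = {
    # issue_refund is allowed only for amounts up to $100
    "issue_refund": lambda args: args.get("amount", 0) <= 100,
}

def gateway_call(tool_name, args, tools):
    """Dispatch a tool call, but enforce policy first."""
    allow = POLICIES.get(tool_name, lambda a: True)
    if not allow(args):
        return {"status": "denied", "reason": f"policy blocked {tool_name}"}
    return {"status": "ok", "result": tools[tool_name](**args)}

tools = {"issue_refund": lambda amount: f"refunded ${amount}"}

print(gateway_call("issue_refund", {"amount": 50}, tools))    # within policy
print(gateway_call("issue_refund", {"amount": 1000}, tools))  # blocked
```

Because the check lives in the gateway's process, nothing the model emits into its own context can rewrite the constraint, which is the layering David describes.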
Jason Andersen:
Right, right. And that's actually a pretty classic thing we've observed: when people hear guardrails, at least in the context of AWS, what they're really talking about is policy a lot of the time. I mean, guardrails certainly serve a purpose. But I think when people are looking at it, they're going, well, where's all this stuff? And it's like, well, no, that's really in policy. And the point about where you apply it is really important, because you're applying it, again, outside the agent. And it's also at the control point, right? The gateway is the control point, and that can be shared amongst your entire estate of agents. I think that's another thing people haven't quite gotten to yet: they're building the agent, or they're doing no-code agents. There's a lot of agent work being done, but what we're still seeing people ramp up on is this idea of scale, whether it's scaling to support thousands, millions, or billions of agents, or scaling up to agents that are a significant, mission-critical aspect of the business process. So we're still not quite there yet. But when you start to look at what you have, it kind of opens the door to that, I think.
David Richardson:
Yeah. One of the fun parts about the breadth and number of customers we get to work with is that there actually are some who are quite far along, hundreds to maybe thousands of agents. Now, a lot of those will be more internal-facing and productivity-oriented rather than, say, big, giant, external customer-facing deployments. So there's a real mix. But I think speed has been one of the hallmarks. We were really excited to work together with Heroku, who offer a container-based platform as a service; they actually found that by using AgentCore Runtime, they were able to get to market with their sort of vibe-coding product much faster than they would have using containers. So that was a great partnership. Similarly, we've worked with Ericsson, who have just a massive amount of existing systems, and that goes back to tools for an agent: tools are the systems, whether they're APIs or data, that you've built for decades. And so they were able to take advantage of the incredible breadth of things they already had as they started to build agents.
Jason Andersen:
Oh, wow. So the product was in preview this summer and went GA in October.
David Richardson:
Yep.
Jason Andersen:
And we now have previews coming. So how's the feedback? What are people saying? I heard a big number on downloads. So tell me, how's it going?
David Richardson:
Good. So we have had two million AgentCore SDK downloads in preview. So clearly lots of people are using it. As we talked about before, people like the overall breadth of the offerings and how they come together. And I think the things we're launching this week at re:Invent really address the next set of feedback we heard. We talked a lot about policy; I think that's going to be the biggest one, because it's kind of a trust-but-verify feature where you can now feel safer doing things in an agent, because you know you've kept it in bounds. We will also be launching evaluations, which is sort of the next thing in being able to use AI to help you figure out if your agent is performing well. We already had observability, which can either look like traditional AWS CloudWatch and X-Ray, or it can look like OpenTelemetry traces. And you need that, but it's the way you might have managed a traditional application, looking at latency and error rates and things like that. Now with an agent, you want to be able to look at user sentiment, or whether it's making progress, or other things like that, and evaluations will help with that. And then we're enhancing the memory capability. Like we were talking about earlier, managing the context window is one of the critical things, both for cost and for effectiveness. And one technique is to use a separate model to figure out what I should retain from this session that might be useful in future sessions.
Jason Andersen:
Oh, so you don't even have to explicitly declare it anymore? It's actually making the call itself, though you still have to do some configuration.
David Richardson:
But the way it works is, we already had short-term memory, and that helped with automatically compacting a context window and other things. And we've had a semantic version of long-term memory; with episodic memory, it can store the information in a different way. And really what it's looking for is sort of a skill that was learned. The analogy I think of is, if you've used a coding agent, you try to get the agent to do as much work for you as you can. And so if you were really happy with how it implemented a particular architectural pattern, the best practice is to say to the agent: write that down, and write it down in a way that I could give it back to you and you would do it again. So it's the same concept here, but it's automatically recording that, and then it can guide the agent in knowing when to bring it back into the context window in future interactions.
Jason Andersen:
Cool. Last question: how can people get started? Two million downloads so far, but what are we going to ask people to do?
David Richardson:
Go to your favorite search engine and look for AgentCore getting started. We have a quick little doc on building your first agent. You'll download the AgentCore command-line interface, and it can actually scaffold it out. It'll start a project for you, let you first test it locally and then upload into the cloud, and you can start asking your agent questions.
Jason Andersen:
Awesome. David, thank you very much for spending some time with us and for everybody else, thank you for tuning in. I think it was a great discussion about Amazon Bedrock AgentCore and I'm just going to sign off from Six Five On the Road. It's Jason Andersen. Thanks.