Building Context-Aware AI for the Enterprise - Six Five On The Road
Savinay Berry, EVP & Chief Product/Technology Officer at OpenText, joins Patrick Moorhead and Daniel Newman to discuss how context-aware AI, open platforms, and unified data management are shaping the future of enterprise AI at OpenText World 2025.
How are enterprises advancing from managing content toward enabling true context-aware AI, and what hurdles still need to be overcome?
From OpenText World 2025, hosts Patrick Moorhead and Daniel Newman are joined by OpenText’s Savinay Berry, EVP, Chief Product Officer & Chief Technology Officer, for a conversation on Building Context-Aware AI for the Enterprise. They explore how enterprises are shifting strategies to embrace context-aware AI, emphasizing the need for secure, governed, and extensible platforms to operationalize AI across organizations.
Key Takeaways Include:
🔹Transitioning to Context-Aware AI: Enterprises are moving beyond content management, facing new complexities in contextualizing and integrating data for AI applications.
🔹Security and Governance Priorities: OpenText emphasizes robust security and data governance, shaping feature development and ensuring enterprise-readiness for AI deployments.
🔹Data Integration: Effective AI relies on blending human-created data, machine-generated data, and shared data. OpenText has identified solutions for overcoming challenges in unifying these data types.
🔹API-Driven Extensibility: The demand for open, extensible platforms is rising, prompting OpenText to strengthen its API ecosystem and foster broader solution collaboration.
🔹Upcoming AI Trends: Increased automation, ecosystem integration, and platform openness are key technology trends forecasted to reshape enterprise AI development in the coming year.
Learn more at OpenText.
Watch the full video at sixfivemedia.com, and be sure to subscribe to our YouTube channel, so you never miss an episode.
Disclaimer: Six Five On The Road is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Patrick Moorhead:
The Six Five is On The Road here at OpenText World 2025. We're in Nashville. We're talking about our favorite subject over the last two and a half years, and that is AI, Daniel.
Daniel Newman:
Yeah, it's been 40 years for me. It's been the most exciting thing since, no, I'm kidding. Yeah, it's felt like, you know, on some days it's felt like the last few years have gone incredibly fast, and then other times, Pat, it feels like, you know, we're gonna slow it down, and there's so much happening. Yeah. You know, it was a great day today. I have been so focused on the opportunity in AI that sits beyond just the LLMs. And when you come to events like this at OpenText World, you hear a company that's really not talking about that at all. Sure, there are models that are required to be the underpinnings for agents to do certain tasks, but in the end, the idea that most of the world's data has not been unearthed to AI yet, that enterprises have all this proprietary value to unlock, and that companies have a real challenge to figure that out, that's exciting, and that is really the upside in this AI narrative.
Patrick Moorhead:
Yeah, and that is the big unlock for enterprises, and I like the way that OpenText characterized it. I think they called it in-context AI or contextual AI, and I can't imagine a better person to break this down than Savinay from OpenText. Great to see you.
Savinay Berry:
Good to see you both as well.
Patrick Moorhead:
Yeah, great job on stage. Thank you. I love the product energy that comes out. I may or may not have been an ex-product guy in a previous life.
Savinay Berry:
One product guy knows the other.
Patrick Moorhead:
No, no, this is just, I love it. It gets down to nuts and bolts and makes it work. So it was great to see. I like it better when Pat says he used to have a real job.
Savinay Berry:
Ah, right, exactly. Now you can talk about the product versus build the product. Exactly.
Daniel Newman:
And he has no responsibility. It's the best job in the world. I tell you, I want to get that one. Well, we'll talk after this. I'm just kidding. Well, you know, so right now, I think a big part of this was the move from managing content to context, right? AI enables exponential scale. But to some extent, we've got to avoid AI slop, we've got to figure out how to get our data ready. What does this transition look like in practice? Because the question really is about making AI real and valuable in the enterprise.
Savinay Berry:
Yeah, exactly. And that's a great question. And I'll tell you, if you think about what's happened in AI the last two years, it's mainly been you and I going out to a ChatGPT or a Claude or something like that, asking a question, trying to get an answer back, and paying $20 a month as a premium subscription to make that happen. That's mostly been the AI usefulness that people have had. And now that needs to get translated into the enterprise, which has so many more controls in place, so much more data. In fact, there's some estimate which says the publicly available data that all the LLMs have indexed today is about 10 to 15 zettabytes. Zettabytes, so one billion terabytes or so. And inside the enterprise, which is behind the firewall, across the board, there's roughly about 150 or so zettabytes.
Daniel Newman:
So, at least 10 times that.
Savinay Berry:
Now within just our own unstructured data stores, which we believe are one of the largest unstructured data stores in the world, we have about a trillion files, if you had to just lay it out flat, about a trillion files across the board that you can use for the right context for the AI applications. And that's what we believe: the right data with the right context, with the right security, is what's going to be needed for AI in the enterprise to succeed.
Daniel Newman:
If we printed out a trillion, Pat, we could stack them up and touch the moon or something. Yeah, there you go.
Patrick Moorhead:
Something like that. Probably Saturn, but yeah, I hear what you're saying. So life is all about trade-offs, and so is business and any major transition out there. And I'm curious, how are you balancing enterprise readiness for AI with security and governance? Because on one hand, the people who do not embrace and activate agentic AI are going to be left behind. And there's a high likelihood that they will fall behind and maybe even go out of business. We saw this with e-commerce. We saw it with social, local, mobile. We even saw it in the client-server transition. How are you balancing this triad?
Savinay Berry:
So I was just talking to customers today after some of our announcements. One of them is a major CPG provider out there. We run the entire supply chain for them. And they're asking the question, okay, how do I make sure that I can get some agents on the platform which can detect any disruptions that are happening in the supply chain so that I can keep that going? We have another customer, which is an automotive manufacturer. Again, same question: how do I get agents so that they can track everything in the supply chain, do dual sourcing, be able to do anomaly detection, and then be able to keep the supply chain going? Those are a combination of agents that all have to be super secure, and that has a material impact on these companies. It's not a nice-to-have; it's a material impact on the company. So that's why you need to have a single platform, which is called agent lifecycle management, where you start off creating the agent, but then you also have the right data sources beneath it, and you also have a way to govern it, to monitor it, to have the right accuracy for it. We introduced a product called Aviator EvalOps, which is a way of measuring the accuracy of these results, and we introduced Aviator Studio, which is a platform to build these agents and govern these agents. So security and flexibility kind of have to go hand in hand, and they have to be applied to very specific use cases like I just mentioned.
Daniel Newman:
Makes sense. So another really interesting topic, just to kind of build on this a little bit, and the lifecycle thing's actually very interesting. We need to spend more time on this. I agree. Yeah, OK. We'll come back to that. Yeah. So we're living in a world, and I think I saw, I shared something the other day, Pat, and you talked about eliminating AI slop, and it was like 50% of the internet content, like articles, blogs, web, is now AI generated. Right. So very quickly, we went from, you know, if you look at the chart, it was 99 to 1. And within basically two and a half years, we went to 50-50. What that means, and you can take that in context of anywhere AI is being applied, is that synthetic data is being grown at a huge scale. Because what's happening now is we're training off of information that was built entirely off AI. So it started with something human, AI started creating content based on something human, and then, it's like telephone, remember the game?
Savinay Berry:
A reiteration.
Daniel Newman:
And so we're going to end up in this place where probably very little of what's actually out there is real, and most of it is going to be synthetic. Because even if people are still contributing, the scale of AI just keeps getting bigger. The population's not growing that much. And then, of course, you've got shared data. So you've got all these different data. How do you harmoniously make these things work together?
Savinay Berry:
Let me just kind of qualify that, because all of what you just said is totally true in the consumer world. That's where a lot of this is happening, because guess what? Most of those LLM providers, the model providers, have run out of actual data to index. So what do you do next? You create synthetic data. And they are literally doing that right now and providing that as a service back to some of the model providers. That's the consumer world. But going back to the original point, that's about 10 to 15 zettabytes. They're trying to take that 10 to 15 zettabytes and make it 200 zettabytes. I don't really look at that. I look at the 150 zettabytes that's already behind the firewall. How do we tackle that problem, where there's real data, there's actual real data today, and bring that to life? And we have been in the business of governing and monitoring data for our customers and making it compliant for the last 30 years. And guess what we have done with that so far? Nothing. That's what's changing. That's what we're doing now: we're bringing life to that data, and making it so that it can be ingested with the right knowledge graph, with the right data products.
Daniel Newman:
It's really interesting, and again, in a world where, say, for instance, autonomous driving, getting it to the ultimate level of safety is really a best opportunity to blend real-world data and synthetic, right? Or engineering, designing buildings and bridges, where it's going to continue to learn from the best designs how to design. So we create a lot of synthetic data in B2B applications. But if I'm hearing you right, what you're basically saying is, we're not there yet. You don't even need to do that. Maybe outside of marketing and some of the functions that are very externally facing, most ERP data, CRM data, all of that has barely been touched.
Savinay Berry:
Most customers have all of this data. In fact, I was having this conversation with somebody: when was cloud introduced? It's a rhetorical question. 15 years ago. It's been 15 years, and the whole premise of cloud was that we're going to have a single repository inside of one store somewhere that you can go to and make compliant. None of that happened. In fact, if anything, it's become even more cumbersome now, because you have so many different data stores on-premise, in the cloud, on your NFS, on your NAS, everywhere. So it's even more disaggregated. So you've got to find a way to first do that aggregation, then pre-process it, and then make it ready for use.
Patrick Moorhead:
By the way, the only caveat I would throw out there would be the physical world. You very much have companies in manufacturing and transportation who have to create data that they don't have for autonomy. And that's probably the one area where synthetic data could be needed.
Savinay Berry:
Test data, I could see that. Test data, for sure, because then they need to have that as a benchmark, which they have to test with. But my point was that before a conversation on synthetic data even begins, there is 90% of all the other data we can tackle first, which can create products that are actually going to be super helpful for our customers right now.
Patrick Moorhead:
Of course, as an analyst, I go to the corner case. It's getting a lot of ink, it's getting a lot of discussion on that. In a sense, you serve your developers, and developers are a key audience. Over the last 25 years, we've seen the fractalization of apps, API-driven, and now even for data storage, you have MCP, and now for agents, right? You have agents talking to agents, you have super agents. How are you modifying or are you looking at your strategy as you roll out your products here?
Savinay Berry:
Great question. I think it's one of those things where, what is the concept of a developer now? It used to be that a developer knows how to code. Even better question. Yes.
Patrick Moorhead:
Reframe it.
Savinay Berry:
Right. So, you had a code developer who used to be able to code, and then there was a developer who doesn't really need to code, just understand the API documentation and be able to ingest the right scripts, depending on whether you're using Node.js, Python, or whatever else in your application, and boom, you're done. And then there was a no-code movement, where you could add a bunch of things in a really cool graphical UI and be able to connect the dots in a workflow, and boom, there you go. That's what became a developer. So what is a developer now? I think, actually, the concept of the developer is not going to change. The tools they use are going to change. Most people will say, well, you don't need to code anymore. I completely disagree. You will absolutely need to code. You will absolutely need to even have something called a prompt markup language, which is going to come up in the future for sure. There is going to be a defined way to create a markup language for prompts, which makes sense. And you absolutely need to have the integrations. So the concept of the developer, I don't think, is going to change much. Maybe it will become a little easier. The system knowledge will become much more important. And to change all of that, will you still need to have a developer community? Absolutely. Yeah, it's interesting. Absolutely.
Patrick Moorhead:
You can't always lean on history, but I mean, when we went from machine code to COBOL, we weren't going to need any programmers. We went from COBOL to IDEs, we weren't going to need any programmers out there. And you know, same thing for the iPhone. The iPhone was going to get rid of photography, and it really democratized it. Personal story: I know a developer at NVIDIA, and he's very much coding. He gets help from Cursor, which turns him into a super developer, but the memes are typically not real.
Savinay Berry:
They're about the same. In fact, I'm going to date myself, but I used to code in VHDL, which was an assembly-level language where you could be controlling PIC controllers to monitor a real-time operating system. You look good. Very good. So I go back there, and I couldn't believe that that's the knowledge, because that requires a systems knowledge of how the hardware talks to the software, talks to the application, and that is now going to come all the way back, right? So that you can use it in today's AI world; otherwise you're not going to go far.
Daniel Newman:
We've done quite a bit of research on the developer, and it's just very bifurcated right now. The ones that are doing very well understand the power of AI and are getting exponential productivity; they're saying this is the best thing ever. There is probably some subset that feels very threatened by it, but like with every major industrial revolution, the weed-out is going to be those that can embrace it. I think the same thing will happen. So you're the, you know, you're the visionary of product and all things here at OpenText. So let's ask you that future-forward question. What technology do you think is going to have the most influence? You know, in this build-out, not specifically, but what do you think, as it relates to AI, is going to be the biggest influence on what the future of B2B looks like?
Savinay Berry:
The short answer is, whatever answer I give you is going to be wrong. But I'll tell you. It'll be good for six months, OK? Yeah.
Daniel Newman:
I love to say that.
Savinay Berry:
But I think for now, it's going to be something that most developers who are starting off new, completely new, with no knowledge about anything else, bring. I'm more interested in how they think versus what they do. Because the kind of use cases that they'll be coming up with are things that I would have never imagined myself. And that's the kind of knowledge I want as part of what I call the early career talent that's coming into the companies. That, combined with people who now start to understand the full systems thinking at a very senior level, who already know what tools to use, who already know the entire system, not just the models. On models, there are two types of people on the developer side: one who is actually creating the models, and one who is using the models. That's right. The people who are creating the models are the ones who are the AI researchers, and you see all the crazy numbers thrown out for jobs and blah, blah, blah on that. Totally different. But for the ones who are using the models, it's not just the models. It's also the RAG systems associated with that. It's the chunking, it's the embeddings. It's the full AI system. That talent is going to be critical. That becomes the new operating system in some ways. And you need to understand how that intersects with all the other existing applications. And that, combined with the early career talent, which brings the new way to use AI, that's a pretty magical combination, right?
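For readers less familiar with the pieces Berry lists here, a minimal sketch of that retrieval layer, chunking, embedding, and similarity search over enterprise documents, might look like the following. This is an illustrative toy, not an OpenText API: the hashed bag-of-words embedding is a stand-in for a real embedding model, and every function name here is hypothetical.

```python
import hashlib
import numpy as np

def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split a document into overlapping character windows."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text: str, dims: int = 256) -> np.ndarray:
    """Toy embedding: hash each token into a fixed-size vector.
    A production system would call a real embedding model here."""
    vec = np.zeros(dims)
    for token in text.lower().split():
        idx = int(hashlib.md5(token.encode()).hexdigest(), 16) % dims
        vec[idx] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def retrieve(query: str, chunks: list[str], top_k: int = 3) -> list[str]:
    """Return the chunks most similar to the query (cosine similarity)."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: float(np.dot(q, embed(c))), reverse=True)
    return ranked[:top_k]

# Usage: the retrieved chunks would be passed to an LLM as grounding context.
docs = "OpenText manages contracts, invoices, and supply chain records ..."
context = retrieve("Which suppliers had delivery disruptions?", chunk(docs))
```

The point of the sketch is only that the model sits at the end of a pipeline; the chunking, embedding, and retrieval choices around it are where much of the "full AI system" knowledge Berry describes actually lives.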
Patrick Moorhead:
Yeah, data science students are very much in demand.
None:
100%.
Patrick Moorhead:
There might be a little drag on CS these days, but that totally fits into what you're saying.
Savinay Berry:
Which, by the way, I don't understand the whole drag on CS, because I think that's a myopic approach by most companies. Because if you want to really go with people who understand how the systems work, CS is the way to go.
Daniel Newman:
The biggest threat, and I know we've got to go, but the biggest threat to all of it in the near term is always the short term breaking the long term. What do I mean by that? Well, you know, AI with doctors will be better. But if we don't have a system where a doctor can come in and train, because, you know, everyone's like, well, the best doctors won't be influenced by AI. But it's like, yeah, but they've had 20 and 30 years of being in the chair next to other great doctors. Now you're saying, oh, just use AI. Right, exactly. So when do these doctors get trained? When do lawyers get trained?
Savinay Berry:
When do engineers get trained?
Daniel Newman:
There's a real societal pressure to kind of figure out how we build talent up. Exactly. So that in 20, 30 years, we have great talent working alongside some of the most prolific and capable AI? It has to be that way. But in the near term, when you're constantly trying to churn quarterly profits out, people are sometimes... Yeah, they get confused by that.
Savinay Berry:
In fact, there's a saying out there in the SRE world from a long time ago: zero toil, not zero people. Right. Interesting. That phrase is so relevant right now. It's about going for zero toil, not zero people. And it's not about changing the people. It's about augmenting what they're doing. And I know that gets used a lot, but we believe that. And that's why we announced the AI data platform, to augment the work that people are doing and make them a lot more productive.
Daniel Newman:
Right. Let's continue to have these conversations and hopefully you can augment yourself and get yourself to three, four, five different... You're kidding, actually.
Savinay Berry:
You're kidding. Let's talk next time.
Daniel Newman:
No, I'm actually not. I did make a prediction once that there is a CEO of a certain infrastructure chip company who will probably end up having a bot of himself that will run that company for the rest of eternity. And you may be right. I'll let you think about who that might be. We won't say it on air. Thank you so much for joining us here on The Six Five. Have a great rest of your show.
Savinay Berry:
Always great being with you. Thank you.
Daniel Newman:
And thank you so much for being part of The Six Five. We're here on the road in Nashville, Tennessee at OpenText World 2025. Hit subscribe for all the coverage here at the event and, of course, all of our great content here on The Six Five. For this episode, we've got to say goodbye. See you later.