Unlocking the Power of Applied AI: The Rise of the Digital Knowledge Worker

How can enterprises embed AI to revolutionize knowledge work and drive competitive advantage? 🏃

At the Six Five Summit, hosts Patrick Moorhead and Daniel Newman are joined by OpenText's Savinay Berry, Executive Vice President and Chief Product Officer, for a conversation on how AI is revolutionizing the workforce and knowledge work within enterprises to build the digital workforce of tomorrow.

Key takeaways include:

🔹From AI Hype to Practical Enterprise AI: Explore the critical transition from theoretical AI excitement to successful, real-world practical applications that deliver tangible business value within enterprises.

🔹Embedding AI Across the Organization: Recognize the vital importance of integrating AI deeply at every organizational level, rather than merely adding it, to truly unlock its transformative power.

🔹AI Governance & Trust in Deployment: Delve into the indispensable role of robust governance and unwavering trust in AI deployment, ensuring ethical practices and maximizing the reliability of AI-driven outcomes.

🔹OpenText's AI-First Strategy & Digital Knowledge Workforce: Discover OpenText's innovative AI-first strategy and its profound impact on building the agile, intelligent digital knowledge workforce essential for future success, leveraging cloud flexibility for competitive advantage.

Learn more at OpenText.

Watch the full video at Six Five Media, and be sure to subscribe to our YouTube channel, so you never miss an episode.

Or listen to the audio here:

Patrick Moorhead: Hey everybody, welcome back to the Six Five Summit 2025. And our topic here has been pretty constant and it's about making AI real inside of enterprises. Daniel, it's been a great event so far. I mean, we're covering enterprise AI from silicon to SaaS and pretty much everything in between.

Daniel Newman: Yeah, it's been a great event, Pat. It is very exciting to kick off our fourth day. We know over the last couple of years, AI has been the central focus of the technology industry and basically now it's coming to every industry. Every business is focusing on how to unleash AI to make it part of their business, more efficiency, more growth, more productivity. It's going to be another great day.

Patrick Moorhead: Yeah, for sure. And what we've seen is a couple different approaches. I mean, we've seen AI as a bolt on to what people are doing, kind of splitting between this modernizing and AI. And then we've seen AI first where we've seen a lot of people come out and say, we are AI first, we're re-architecting as much as we possibly can to do that. And that's what we want to dive in here with Savinay from OpenText. Savinay, welcome to the Summit.

Savinay Berry: Well, thank you for having me. Very exciting to be here.

Daniel Newman: Yeah, Savinay, so good to have you opening up day four for us. It's been an amazing event, such an esteemed group of speakers, and you, I expect, will bring it for our audience today. So let's start off talking about a little bit of the theme that you heard from both Pat and me in the buildup here. We talked about how this AI conversation is evolving rapidly. We've gone from the experimentation phase to now it's really implementation. OpenText has launched a lot of products and services in this space to enable and make this real for companies, but as the person leading product for all of OpenText, where do you think the enterprise market is right now in that journey?

Savinay Berry: Yeah, well thanks for having me over here, first of all, it's exciting to be part of this conversation. And frankly it feels like one of those days 20 years ago when we were talking about how cloud was changing things and what was happening on the cloud side. It feels like a similar kind of transition, except instead of taking 20 years to get to where cloud got to, I think that entire 20 years has been compressed down to maybe two. And the pace at which that adoption is happening is also very interesting. I'll give you a quick anecdote. I was talking to a CIO recently and we were talking about the exact same transition around cloud and what it meant. And they said, you know what? Cloud was interesting because the benefits of it were more behind the scenes. It was more about the ROI that you could drive, it was more about availability, it was more about resiliency, those kinds of things. And therefore it took some time for people to really recognize that they could create these multi-tenant applications and have them available everywhere, have economies of scale, and get better ROI over time. So it took some time to get there. AI, on the other hand, is one of those things that literally has a UI; you cannot miss it. If you're talking to an agent, if you're talking to a ChatGPT interface, you get all the answers you were looking for, answers that earlier took you hours to get, in minutes. It's real. It's in front of you. So it's not back office, it's not somewhere behind the scenes, it's right in front of you.

So I think that inflection has become so much faster for that reason, because anybody can get it, anybody can understand it. And as a result, what we are seeing is that, yes, there was a lot of experimentation over the last two years or so. We're now seeing real, tangible use cases. They're still what you would consider mundane use cases, but those mundane use cases add up over time. So let me give you an example. It's a very interesting one. I was talking to my team, which does simple things: product documentation, release notes. If you're in the software business, you do release notes. And when you do release notes, you typically go through a process where a writer works with the product manager to see exactly what was developed, what was created, and then it goes through a vetting process: does it have the right tone, does it pull from the different databases, all of that comes together. It takes about five days or so, roughly, to write a good release note.

We have now, of course, introduced our own version of an agent, which is able to take that entire five-day process and squish it down to a day. One day. And I think that one day is going to go down to maybe an hour very soon. And we do 200 release notes a quarter, so about 800 to 1,000 release notes a year. You multiply that by the time it's going to save; a small, trivial example, but it tells you the power of what this could be. Same thing with RFPs. Every sales team anywhere does RFPs. If you're in the enterprise software business, you respond to RFPs. And those responses typically take sometimes weeks, sometimes months to put together. Now you have an agent which can go look at a database; there's a RAG pipeline built into our existing data stores, it has all the models built in, and it can go off, capture all the things that were answered for previous RFPs with a similar-sounding set of questions, pull all of that together, and suddenly you have an RFP response, which sometimes took weeks, now literally in a matter of a couple of days.
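The RFP flow described above, matching new questions against previously answered ones and reusing the stored answers, can be sketched minimally as follows. This is an illustration only, not OpenText's implementation: the sample question store is invented, and a production RAG pipeline would use vector embeddings rather than word-count cosine similarity.

```python
import math
import re
from collections import Counter

# Hypothetical store of previously answered RFP questions; a real
# pipeline would index thousands of these with vector embeddings.
PRIOR_RFP_ANSWERS = {
    "What encryption do you use for data at rest?":
        "All customer data is encrypted at rest with AES-256.",
    "How do you handle data residency requirements?":
        "Data is pinned to regional data centers per customer policy.",
    "What is your uptime SLA?":
        "We offer a 99.9% uptime SLA backed by service credits.",
}

def _vec(text: str) -> Counter:
    # Bag-of-words vector over lowercase alphanumeric tokens.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_answer(new_question: str) -> str:
    """Return the stored answer whose question best matches the new one."""
    qv = _vec(new_question)
    best = max(PRIOR_RFP_ANSWERS, key=lambda q: _cosine(qv, _vec(q)))
    return PRIOR_RFP_ANSWERS[best]
```

In a full pipeline, the retrieved prior answer would then be handed to an LLM to be rewritten into the new RFP's wording and format rather than pasted verbatim.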

So it changes the game, where you start looking at these small, tangible use cases and how you can create value for the work that happens on a daily basis, and I could keep going. But now it's real. It's tangible. It's no longer an experiment about creating the best model you can think of with the best accuracy rate and the best latency. I think we've done that, and good enough is now good enough. Now it's about truly creating the ROI for these sets of use cases. So hopefully that gives you a bit of flavor of how this is moving and what we are seeing.

Patrick Moorhead: It really does. And these are the types of conversations that are the most important. I mean, I love technology. I've been in the technology industry, as of this month, 35 years. I was part of the dot-com boom and the dot-com bust, cloud, local, mobile, social, pretty much everything. And this is the biggest force multiplier that I've seen in productivity. I mean, the good news is we're debating: is it a 10X multiplier? Is it a 100X multiplier? And a lot of the research that we're doing is around this concept of the digital knowledge worker, where, funny enough, if you talk to the HCM vendors, they're talking about giving agents an employee badge of some sort. But I'm curious, what does the concept mean to you and OpenText, and how has it changed the workplace? I've heard discussions that we're going from roles to workflows.

Savinay Berry: Right. Right. So interesting. Right? I mean, let's take a step back for a second and talk first principles. So what is an agent? That question has been debated quite hot and heavy over the last 12 to 24 months. And if you look at the English dictionary definition, an agent is something which has agency. That's it. That's what it boils down to. It has agency to do something on its own. So that agent can do a bunch of things autonomously, or it can do things where you have rules and guidelines and you've given it the authority to act. But ultimately it's the agency that you give it. And what is it? It's a piece of software. And the piece of software in this case happens to be something that takes data from a publicly available LLM, probably takes data from private databases within the company to make its answers a much more accurate representation, and then there are guardrails and rules, often written in a no-code type of interface, whether you're using AutoGen, whether you're using LangChain, whether you're using CrewAI or other tools. It's all built into that flow. That's the new AI stack that's emerging.

So how do you then create it and scale it? One agent, fine, you can figure out a way to force-fit that within the workflow. But we believe every single knowledge worker, you and I, will have an army of agents working for us on a regular basis when we come to work. So how do you create that synchronicity, that orchestra, if you will, of agents so they're not competing and they only have access to the data that they should? Think about the world of IAM, roles, and permissions from 20 years ago, when identity and access management became a thing and each of us had our own roles, permissions, and access control behind the scenes. Of course it was Active Directory, LDAP, or something like that. And then there were granular permissions about what data you had access to within the enterprise based on your role. That exact same thing is going to happen with agents. So it's not too far-fetched to think those agents will have their own employee ID. I know it's a provocative statement, but it's somewhat true that they will have an identity with access to certain databases, with certain profiles, with certain rules saying they can only access those data stores. And over time those might change, because agents might retire if they're not doing effective work.

So they might have to get retired, and then new agents take their place, and so on. We call it the secure runtime for the digital knowledge worker. And we think that secure runtime is probably going to be one of the hardest problems for companies like us and others in our space to solve at scale so that it's available for mission-critical work. We are serving mission-critical companies for mission-critical use cases, and that's what you need to make it real.
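The "employee ID for agents" idea Berry describes, an identity with scoped data-store permissions that can be retired, could look something like this minimal sketch. All names and fields here are invented for illustration; they are not OpenText's runtime.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """A hypothetical identity record for an agent, analogous to an
    employee ID: it scopes which data stores the agent may touch."""
    agent_id: str
    allowed_stores: frozenset = field(default_factory=frozenset)
    active: bool = True

    def can_access(self, store: str) -> bool:
        # A retired agent loses all access, like a deactivated employee.
        return self.active and store in self.allowed_stores

    def retire(self) -> None:
        self.active = False

# Example: an agent scoped to the release-notes store only.
writer = AgentIdentity("agent-rn-01", frozenset({"release-notes-db"}))
assert writer.can_access("release-notes-db")
assert not writer.can_access("hr-db")
writer.retire()
assert not writer.can_access("release-notes-db")
```

The design mirrors classic IAM: access decisions depend only on the identity record, so retiring an agent is a single state change rather than a hunt through every system it touched.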

Daniel Newman: So Savinay, just as a quick follow up, because I want to pivot a little bit to risk in a moment, but if I'm hearing you right, it sounds like you see a strong augmentation with agents versus maybe a replacement strategy because there's a lot of debate on that.

Savinay Berry: Yeah, it is. I think, look, there's going to be a shift that starts with augmentation. That's the first thing. History is a good teacher. If you go all the way back to the early 1900s, when the automobile assembly line first started and we went from an agrarian economy to a more industrial economy, the exact same questions were being asked: hey, listen, I'm working in the fields now, all the jobs are going to the assembly line, am I going to lose my job? What's going to happen? Well, some people did, of course, but over time there was re-skilling as well. So yes, there is going to be augmentation to begin with, but over time I do see areas that are going to be more autonomous, and the people working in those areas will re-skill themselves to enable those autonomous workflows. So yeah, I do see a path on this, Dan. But right now I'd say the most immediate thing is augmentation, followed by full autonomy.

Daniel Newman: I think that's a really nice way to describe it. Personally, I do believe there will be some real displacement. It's going to be upon us as humans to continue to invest in ourselves, to grow, to use the tools and become better. And by the way, for everyone out there in the audience, if you listened to one of our other recent day openers with Aaron Levie from Box, we talked a lot about this with Aaron as well. He sees this too: how economies grow, how productivity grows, how roles change. But it's a little bit of a scary time, scary just because we don't know a lot. It's not scary because it's bad; it's scary because there are so many unknowns. And another unknown is really about risk and the whole question of trust. You talked about that kind of operating system and what agents do and don't get access to. But with all this, the faster we move, the more risk we create. Private data finding its way to the wrong place is a big problem. I'd love to get your take on how companies should think about being responsible with AI at scale, and maybe any thoughts on how to build a framework for that and what you are learning in that area?

Savinay Berry: Yeah, great question. Because it goes back to the notion of compliance and security. Those things will not change at all. If anything, they'll become even more important as we go into this new world of agents, digital knowledge workers, and human knowledge workers together. And again, I'm going to go back to the cloud metaphor. When the whole cloud change happened, it gave rise to the concept of a chief risk officer, a chief compliance officer, roles that became even more important because now your data was spread out in multiple places and you sometimes didn't have any visibility into it. So how do you keep track of that? How do you make sure it's regulated? Now multiply that problem 100 times, maybe 1,000 times, because it's not only about where the data is hosted, it's about how it's being used and who is using it. Humans? Sure, they make mistakes, they create issues, but you can track them. But once you have agents, how do you do that? Is there an audit trail? Is there a timestamp for every single action taken by the agent, so you can track exactly what data it had access to and what it didn't?

So I think there is a framework, some sort of new concept of compliance and security for the agent workforce, that will need to be created. We are working on it internally ourselves, and we believe a broader standard that can be applied across the industry is probably the right way to think about it. I'm talking to a few of our partners about this exact situation as well, because without that, the concept of trust won't exist with our customers. They might hear one thing from us and another from some of their other vendors, and you don't want that. Trust and compliance need a common language. So I do believe there's an opportunity to create some sort of framework to enable this trust and security, not just for the world of cloud, but also for the world of AI.
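The audit trail Berry calls for, a timestamp on every agent action so access can be reconstructed later, reduces at its core to an append-only log keyed by agent. A minimal sketch follows; the field names are assumptions for illustration, not a proposed standard.

```python
from datetime import datetime, timezone

class AgentAuditTrail:
    """Append-only log: one timestamped record per agent action."""

    def __init__(self) -> None:
        self._records = []

    def record(self, agent_id: str, action: str, data_store: str) -> dict:
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),  # UTC timestamp
            "agent": agent_id,
            "action": action,
            "store": data_store,
        }
        self._records.append(entry)
        return entry

    def for_agent(self, agent_id: str) -> list:
        """Everything a given agent did, in order -- what an auditor asks for."""
        return [r for r in self._records if r["agent"] == agent_id]
```

A production system would write these entries to tamper-evident, durable storage rather than an in-memory list, but the contract is the same: every data access by every agent identity leaves a timestamped, queryable record.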

Patrick Moorhead: So Savinay, it seems like we can't go a single day without a new generative AI tool coming out. I can tell you it has kept my analysts very, very busy. It's good for analysts, by the way.

Savinay Berry: Sure, of course.

Patrick Moorhead: And we're using them on the consumer side. And whether it's search, whether it's deep research on ChatGPT, I use summaries every single day. But doing it in a consumer fashion or a small business fashion is very, very different from a larger enterprise applying AI across business data. A lot of theories on what the next frontier is. I've talked to a lot of different CIOs and groups and what they're doing. But what are you seeing as this next frontier?

Savinay Berry: Yeah, I think again, it goes back to some of the use cases. Ultimately, it boils down to where you see value. I'll give you one example of the kind of thing none of us would've thought about, but it's a great example of, duh, of course it makes sense: testing. Developer testing. There are so many companies out there, whether they're tech companies or financial services companies or retail or healthcare, they all have IT arms. They all have developers working to create software for their own use, regardless of who you are. When you do that, you test the software. And when you are in a highly regulated space like finance or healthcare, not only do you need to test it, you need to keep an audit trail of all of those tests. Why? Because when the auditor comes in, you want to be able to show exactly what happened.

Right now, there are literally armies of people manually keeping track of all of those tests and maintaining spreadsheets that track when each test was run, whether it passed or failed, and if it passed, at what timestamp, all that kind of stuff. That's all happening, so people are investing in that. Now here comes a very simple, elegant solution, where an agent can come in, take all of the data stores where the tests are happening, get trained on that data, and then verify exactly which tests are passing or failing based on prior databases. So again, that changes the game from months, or at least weeks, of time down to a couple of hours. That's mission-critical. That's compliance work. Otherwise they won't pass the audit, and there are penalties associated with that. So I think this is a good example of where the same underlying model that was being used for a consumer use case can now be used on the other side for a much more mission-critical enterprise use case.

Patrick Moorhead: Yeah, it's actually a very novel one. You think you've heard all of these use cases, and then you hear a new one and think, oh, I hadn't thought about that one, but it makes perfect sense. There's a lot of data that goes into it, and these new technologies work better the more data they have. Very interesting. I've seen you talk about this idea of do and done related to applying AI in the enterprise. Can you talk about what that looks like and how it fits into the context of real business workflows?

Savinay Berry: Yeah, sure thing. So what's interesting is that GenAI was nebulous when it started, and it became even more confusing when people started to use the terminology for their own purposes: how is it going to be applied here? Then they had to distinguish between an agentic workflow, which was the buzzword in the market, and actual AI. What's the difference? So when you have all these buzzwords flying around, you sometimes have to dumb it down and make it super simple to understand what you're dealing with. Our framework was fairly simple. Search and summarize are things that you do on a regular basis. You mentioned you kind of do that today, where you look at data and summarize some of the commentary. You probably consume a bunch of papers from a lot of places and summarize them together. I call that search and summarize. That's an everyday AI use case: you need it, you have it, and the models have become good enough for you to get those summaries without wasting too much time. Sounds great.

But then you get to the next step, where the agents are doing the work for you, and it's not just that they're doing the work, it's just done. You are completely hands off. At that point they're called unattended agents or autonomous agents; you can put a lot of buzzwords on that. But ultimately it's about just do and done, and you don't have to do anything at all. You are hands off the wheel completely, kind of like autopilot. So it's really a framework, Pat, that was helpful for inspiring teams. Within our own company, 24,000 people, when you're trying to inspire them to get to a certain destination, you have to dumb it down and make it easy, and then do the same thing with our partners and customers. It started to resonate quite a bit. So that's what it is. It's very simple.

Patrick Moorhead: Yeah, you're just making way too much sense here, Savinay. Don't you know we love our acronyms and we need this to be confusing for everybody. This do and done thing is just way, way too straightforward.

Daniel Newman: Pat, that's a way that you can try to drive more work for your analysts, right?

Savinay Berry: There you go. There you go.

Daniel Newman: Using-

Savinay Berry: Some more acronyms to understand that. There you go.

Daniel Newman: But let's bring this all home to the real enterprise environment. There's a reason I've seen stats claiming as much as 99% of enterprise data has not touched AI yet. Now, we can debate exactly how much it is. We've seen consumer and frontline tools be deployed very quickly: new LLMs, new scale, new rollouts, billions if not trillions of dollars of infrastructure being deployed. But the enterprise has not moved as quickly. It's not that it doesn't want to; CEOs, boards, they all want to get behind this thing, but it's taking time. And a lot of the reason it takes so much time, Savinay, is because the real environments of these companies, the real application environments, the real infrastructure environments, the edge, the cloud, the on-premises, all these things going on, are complicated. So take us home by talking a little bit about your thoughts on flexibility, openness, cloud, hybrid, on-prem, all these things being addressed. How important is it for enterprises to have the right kind of choice in how and where they run, deploy, implement, and build a sustainable AI strategy?

Savinay Berry: We can go days on that answer to be honest with you.

Patrick Moorhead: That's what I was going to say. That's a big question.

Daniel Newman: It's a big question, but I know in just a few minutes you're going to crush it with a great answer.

Savinay Berry: And the bottom line is, look, we are smack in the middle of this, right? Because we have 120,000 customers, most of the Fortune 500 among them, so we talk to them. I talk to them on a regular basis to understand how they're thinking about their data. What I do know for a fact is that we have data sets from our customer bases across all of our product lines, and if we were to extract them, do the right kind of ETL to simplify and secure them, and then use them to train and implement RAG pipelines on existing LLMs, then a hybrid model, which combines publicly available data with the data behind the firewall, will create a much more accurate representation of the answers given by the agents and make it a lot more productive for our customers.

But having said all that, it's a really difficult problem to solve at scale. Why? Because you cannot make a mistake on the privacy and security of any of those data sets. It's not a technical problem anymore. You can take large data sets, do ETL on them, create the right pipelines, the right data preservation and retention models, all that stuff. It's been solved. We know how to do that. But it's about making sure that none of that leaks over, so that the model you're creating for one customer doesn't leak over to another customer. I think that's the foundation of what needs to happen over time, and we are working on it. I know others are working on it as well. It's going to create the right accuracy, confidence, and trust from the customers. But we're just scratching the surface on making it scalable and doing it in a secure and trusted way.

Daniel Newman: Yeah, first of all, I want to thank you so much, Savinay, because you brought a lot of pragmatism to this conversation. Pat's a former product guy who used to have a real job, and now he's an analyst; he likes to say that, and I'm just starting to say the things he used to say. There is so much of this that we talk about in these grandiose, high-level ways. You had me all pumped up about those agents that are going to work 24/7. Actually, I really love it. But there's a lot of work for enterprises that have important proprietary customer data to get from where we are to where this can work for us every day. This was an absolutely wonderful way to kick off day four here at our Six Five Summit. I want to thank you so much and hope you'll be back with us for some more conversations, Savinay, because I want to track the journey. I'm sure there's going to be a lot of good stuff happening. Let's do it again soon.

Savinay Berry: Looking forward to it. Thanks for the conversation.

Daniel Newman: Thank you to everybody out there for joining us for day four. That was a great opener, a great way to start: talking about unleashing AI with very practical, real stories, from what they're doing inside OpenText as customer zero to how they're helping and partnering with their customers. Stay connected with us on social and explore more conversations at sixfivemedia.com/summit. More compelling content ahead. Stick with us.

Disclaimer: The Six Five Summit is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Speaker

Savinay Berry
Executive Vice President, Chief Product Officer
OpenText

Savinay Berry is the Executive Vice President and Chief Product Officer for OpenText. He leads the Product & Engineering organization to define the vision and strategy for OpenText products including Content, Experience, Business Network, IT Operations, Application Delivery, Analytics & Legal Tech, and more.

Savinay was most recently with Vonage, part of Ericsson. He joined Vonage as EVP, Product and Engineering, where he was responsible for global leadership of Vonage's engineering, product management, and security teams. Focused on driving the company's technology strategy, he led the innovation of the Vonage Communications Platform (VCP) and a portfolio of solutions to accelerate the world's ability to connect. Berry was part of the executive team that drove significant growth at Vonage, which led to its acquisition by Ericsson in 2022.

Leading with an entrepreneurial mindset in the enterprise world, Berry has more than 20 years of experience in technology and software, with a track record of developing highly innovative products and spearheading results-driven transformational change. Prior to joining Vonage, he was Senior Vice President, Cloud Services at OpenText, responsible for all cloud services and strategy, including infrastructure, service delivery, managed services, and developer services. Before that, he also served as OpenText Vice President, Engineering and Products, leading cloud applications, design, performance, and security.

Prior to OpenText, Berry was Vice President, Product Management at Dell EMC, where he was responsible for the content management product portfolio. He led all SaaS, mobile, web, and on-premises products, including Documentum, Captiva, and other enterprise productivity applications, and was part of the executive team responsible for the acquisition of Documentum by OpenText.

Before Dell, Berry served in product and engineering leadership roles at Intuit where he led the development of advanced products, such as wearables, mobile and intelligent systems. Earlier, he was Vice President of Product Management at Empowered Inc., acquired by Qualcomm, where he defined the vision and direction for the industry’s first mobile-only social learning platform on the iPad, through a partnership with UCLA.

Berry holds both a Bachelor’s and Master’s degree in Electrical and Computer Engineering and an M.B.A. from Kellogg School of Management at Northwestern University.
