Securing the AI-Driven Enterprise - Six Five On the Road

Muhi Majzoub, EVP of Security Products at OpenText, joins Six Five On the Road hosts to discuss how AI is making an immediate operational impact on enterprise security, redefining "secure AI," and shaping the future of cybersecurity platforms.

How is AI changing the playbook for enterprise security—from daily operations to long-term strategy?

From OpenText World 2025, hosts Patrick Moorhead and Daniel Newman are joined by OpenText's Muhi Majzoub, EVP, Security Products, to break down how AI is transforming the cybersecurity landscape. They explore how security teams are adapting to AI-driven threats, what “secure AI” really means in practice, and why deeper alignment between security and data disciplines is becoming mission-critical. They also highlight emerging innovations shaping the next generation of enterprise security platforms.

Key Takeaways Include:

🔹Immediate AI Impact: AI is rapidly shifting both cyber offense and defense strategies, bringing automation and advanced analytics to enterprise security operations.

🔹Defining “Secure AI”: In practical terms, “secure AI” varies across industries, but baseline controls such as robust data governance and algorithm transparency are becoming standard requirements.

🔹Privacy and Compliance: Effective implementation of AI in security requires rigorous controls to mitigate data privacy risks and meet evolving compliance standards.

🔹Cross-Disciplinary Collaboration: Closer alignment between security and data teams is essential, with integrated models emerging as key enablers for managing risk and scaling AI-powered operations.

🔹Future Innovations: The integration of AI into security platforms is driving new capabilities, including adaptive threat response and automated security operations, likely to reshape platform delivery models.

Learn more at OpenText.

Watch the full video at sixfivemedia.com, and be sure to subscribe to our YouTube channel, so you never miss an episode.

Disclaimer: Six Five On the Road is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Transcript

Patrick Moorhead: The Six Five is on the road here in Nashville at OpenText World 2025. Daniel, it's been a great show so far, talking about agentic AI, imagine that, and also a heavy-duty data discussion here. It's been great.

Daniel Newman: Yeah, there's a lot of that, and as we talk about AI, we talk about information management, we talk about the future of enterprises maximizing their data, and then, of course, things like sovereign cloud and agents. Pat, it also leads us to another really interesting discussion.

Patrick Moorhead: That's right. That's security. Every inflection point that I've been part of, whether it was social, local, mobile, cloud, e-commerce, gosh, now we have agentic AI, opens up a huge opportunity for business results, but also security issues. And it's my pleasure to introduce a long-time watcher of The Six Five, Muhi. Great to see you again, and thank you for the commentary about watching our show. You're welcome.

Muhi Majzoub: Thank you so much for hosting me here.

Patrick Moorhead: Absolutely.

Daniel Newman: It's great to talk a little security. You know, as AI rolls out, any technology that proliferates at this speed, in super cycles like this, is also going to change the, you know, attack surface in a meaningful way. You think those text messages we get now, the ones pretending to be our friends and the ones that look like DocuSigns, all those things we're getting right now, are bad. As agentic AI gets better, the sophistication of these black hats will get better too, which means we need to go on offense ourselves, not just play defense. Let's start there. Where are you seeing AI and its role in cyber, and where are you seeing enterprises getting the most immediate benefit in their security teams?

Muhi Majzoub: Yeah, and you're spot on. AI is a double-edged sword. It gives us an edge in the security protection space, playing defense, but we need to also use it to play offense, because it's giving an edge to the counterpart on the other side, the bad actors who are trying to penetrate and steal data or hack systems. The way we see it, one of the biggest areas where AI is helping us is with noise. The data and the event logs that we see through the different security tools and systems have grown with so much noise. I'll give you an example: in our security operations center, it is very hard for my analysts and engineers to sift through all that noise as humans. For me, that's where an AI agent can provide great benefit, sifting through three million events in a log and telling me the seven I need to put a human eye and a finger on the keyboard to look at.

Patrick Moorhead: Yeah, and I mean, AI in security is nothing new. It's just that the implementation of it kind of went from hardcore machine learning to, okay, not everything is generative AI and agentic, but those that should be and can be are being swiveled over to that. I hear a lot in our industry conversations about secure AI. I mean, who doesn't agree that we need to be more secure with AI? What does secure AI mean to you and OpenText?

Muhi Majzoub: So for me, and I believe for OpenText as well, secure AI is twofold. One is AI for security, and two is security for AI. Let me give you an example of both. AI for security is the ability for us to embed security in everything we do internally, and the ability to build data connectors where we can correlate information and bring it together to present better outcomes or better analysis from that AI agent presenting it to a human. On the flip side, the agent now can not only summarize data and give us a recommendation; we want the agent to start acting on those recommendations. So instead of me having to act on 50 items on a security operations center shift, I want to act on the one or two top priorities and have the other 48 acted on by the agent itself, where the agent can act instead of just telling me what I need to do.

Patrick Moorhead: How do you know, at what point do you say, okay, I want you to do this on your own now, right? You've watched my decisions, my human in the loop does this. How do you know when to say, okay, you're on your own now?

Muhi Majzoub: That's a great question. One of the things that we are doing, and will continue to do, in our threat detection and response is to build the rules that allow the agent to act, or to pass it to a human being, based on a score. We correlate data from multiple areas, we do behavioral analytics on that data, and then we present a score for that incident. In our rules at the back end, we can define: if the score is over 75, pass it to a human, or if the score is over 90 on this system, pass it to a human. And we can alter that through configuration, not customization, and allow customers to define their own priorities and thresholds for what they want to see or not see.
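The score-based routing Majzoub describes can be sketched roughly as follows. This is a hypothetical illustration, not OpenText's implementation: the class names, the 0-100 score scale, and the threshold values are all assumptions drawn from the examples he gives.

```python
# Hypothetical sketch of score-based escalation in a SOC workflow:
# each incident carries a risk score produced upstream by correlation
# and behavioral analytics, and a configurable threshold decides
# whether the agent remediates on its own or hands off to a human.
from dataclasses import dataclass


@dataclass
class Incident:
    source: str
    score: float  # 0-100 risk score (assumed scale)


def route(incident: Incident, human_threshold: float = 75.0) -> str:
    """Escalate at or above the threshold; otherwise let the agent act."""
    if incident.score >= human_threshold:
        return "escalate_to_human"
    return "agent_auto_remediate"


# Per-system overrides are plain configuration, not code changes,
# matching the "over 90 on this system" example from the transcript.
thresholds = {"default": 75.0, "sensitive_system": 90.0}

print(route(Incident("endpoint", 92.0)))  # escalate_to_human
print(route(Incident("email", 40.0)))     # agent_auto_remediate
print(route(Incident("erp", 80.0), thresholds["sensitive_system"]))  # agent_auto_remediate
```

The key point is the last line of the transcript answer: the threshold lives in configuration, so customers can tune escalation behavior without customizing code.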

Daniel Newman: It's a little bit like fine-tuning training on models, except now with workflows, right? Yes. You show it's a cat, it's a cat, it's a cat, and eventually it knows it's a cat. I'm guessing it's similar with knowing the right way to handle a certain security protocol; it'll get better and better. But of course, it's more complex in the sense of those parameters. An interesting question, too, is, as companies are implementing AI (I use the phrase threat surface, I know that's a popular word in cyber, I don't know if it's the best way to put it), everything we introduce creates new risk, right? So how do you balance that, so you don't add new privacy risk, new security risk, new compliance risk? Because introducing AI for gains in sales enablement, for gains in marketing, it's all great, but every one of those opens up doors. You're sending more emails, you're having more customer interactions, you're potentially opening up environments with data that's a mix of regulated, secured data, CRM data, your PII data, and data that's fine to share. Like, how do you sort of avoid that?

Muhi Majzoub: That's a great, great question. There are several things that we are doing and want to continue to do. One of the first: from day one, we made a commitment to our customers that we'll allow you to bring the model off cloud. If you tell us what model you want to use, we will go certify our product, and then you run the model internally. If you run the model externally, your only other option is to get your hyperscaler or your model provider to agree contractually that they will not use your data to train their model for anything other than the work you're hiring them to do, or that they put you in a private zone where the model is only trained on your data and serving prompts against your data, nothing else. They also need to agree contractually to zero copying of data outside of your area to other areas. We give that in writing to our customers. We believe it's the right thing. OpenText was one of the first companies, seven or eight years back, if memory serves me right, to offer GDPR compliance to German customers, even before Google, before AWS, before Oracle, before Azure. We gave them GDPR capability in our Munich and Frankfurt data centers, and we gave it to them in writing. We commit that the data is your data; you can take it out anytime you want, and your data is not touched by anyone who is not a European citizen operating in a European country. So those are some of the things that need to be done to protect our customers and ensure the model is only trained on their data. We bring the model in-house, or we use some models in the cloud, but those models are in private cloud data zones, and we get agreement from the vendors and partners we work with that they will not use our data to train the model for any other purpose.

Patrick Moorhead: Are there certain models that are best in class now for aligning the data and the security teams together?

Muhi Majzoub: The honest answer is we don't know yet, because everybody is building a model. There is not a single week, I kid you not, that I don't get a LinkedIn message saying, we developed this great model, it's great for your supply chain, we would love to present it to you. Or, it's great for your experience media product. We don't know yet. My gut feeling tells me that in the next couple of years we're going to see model consolidation, and there will be a few that rise and a few that sink and disappear. We are continuing to experiment. Today we partner with Microsoft on their models and integrate Copilot, we partner with Google on Gemini to support our customers, and we partner with AWS; we serve customers in all three of those markets. And we commit to continue exploring other partners, like Oracle and OCI, if customer demand is there and our customers ask for that capability.

Daniel Newman: So I guess as AI is embedded across all of the organization, how do you think this is going to impact how security platforms are built and delivered in the future?

Muhi Majzoub: There are two ways. There's the approach we took in our threat detection and response, where the AI engine is embedded in the product. It's not separate; it's part of the code we publish to the cloud for our customers to use. No one can manipulate it, no one can touch it, no one except our engineers. The other way is you integrate, but then you have to address another issue, which you heard today: our chief strategy officer, among others, talked a lot about data and the importance of data. Data and AI have to coexist, but today, if you look in the enterprise, data is sitting under one team and AI is sitting under another team. We have to figure out how those teams collaborate, how those teams reuse some of the work, or agree to a set of roles and responsibilities and rules of engagement. Otherwise, you create chaos, and we have to protect against that.

Daniel Newman: And I could actually argue that data sits in one team, AI sits in another, security sits in another, and oftentimes infrastructure and networks sit in yet another. So you end up with a lot of fiefdoms that are all somewhat codependent, and not necessarily even with a clear hierarchy, meaning which controls which. I think our research at the board and CEO level says, of course, that AI is the driver of gains in productivity, and the other three you might say are sub-themes: secure AI, data to power AI, infrastructure to enable AI. But in the end, that's not necessarily how enterprises operate. It seems like you have a real opportunity, and a real challenge, to bring these organizations together to make sure that they're successful with their AI, but never at the expense of keeping it secure, governed, and private.

Muhi Majzoub: Agreed. Today, AI is under Satne, security is under me, and infrastructure, databases, networks, and machines are under Shannon. But we have figured out how we collaborate, and our teams use program office functions to drive certain projects where all three organizations are engaged and contributing. And that has worked for us.

Daniel Newman: Well, Muhi, I want to thank you so much for spending a little time with us here. Great event. I hope it's a big success for you. Thank you. Enjoy the town. Maybe get a pair of cowboy boots and a cowboy hat. Check out the town. Nashville's a great place. Not Austin great, but it's pretty darn great. And we'll be looking forward to tracking the continued progress of OpenText and talking to you again soon.

Muhi Majzoub: For sure. Thank you so much. Thank you. It's great to see both of you. And yes, I hope to enjoy Nashville today.

Daniel Newman: Good. All right. And thank you, everybody, for being part of this Six Five. We are here in Nashville, Tennessee, on the road at OpenText World 2025. Subscribe and check out all of the content here at OpenText World, and of course, all of The Six Five's content. It's great stuff, we promise you. But for this episode, we've got to say goodbye. See you all later.
