How Regulated Industries Are Making AI Work - Six Five Connected

Chris Wolf, Venkat Balabhadrapatruni, George DeCandio, and industry experts join Diana Blass to reveal how private and hybrid AI strategies are redefining what's possible for regulated sectors balancing innovation, compliance, and security.

How are banks, hospitals, and government agencies redefining AI adoption to balance innovation with strict regulatory demands?

In this episode of Six Five Connected, host Diana Blass takes you on a journey to find out how regulated industries are making AI work. 

The episode centers on private and hybrid AI strategies, on-prem architectures, compliance, and new protocols, featuring Broadcom's Chris Wolf, Global Head, AI & Advanced Services, VMware Cloud Foundation; Venkat Balabhadrapatruni, Distinguished Engineer; George DeCandio, CTO, Mainframe Software; along with Per Kroll, CEO, Kroll Management LLC; Nick Patience, VP & Practice Lead, AI, The Futurum Group; and Falk Steinbrueck, Director, AI & Platform Simplification, IBM Z & LinuxONE, IBM.

Key Takeaways Include:

🔹The Shift to Private and Hybrid AI: Regulated sectors are increasingly preferring private and hybrid cloud models to enhance data security, compliance, and cost predictability.

🔹Data Residency and Model Governance: Enterprises, including banks and hospitals, prioritize keeping sensitive data on-premises, utilizing AI where the data lives for maximum control and privacy.

🔹Innovation with MCP and Specialized Hardware: Adoption of the Model Context Protocol (MCP) and next-generation silicon like the Telum processor and Spyre accelerator enables secure, efficient AI access and real-time processing directly on legacy systems.

Watch the full video at sixfivemedia.com, and be sure to subscribe to our YouTube channel so you never miss an episode.

Disclaimer: Six Five Connected is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Transcript

Diana Blass: Banks, hospitals, government agencies, they sit on some of the world's most sensitive data and yet even they're using AI.

Per Kroll: When your gen AI solution crosses your firewall, so do your secrets. So how do you protect against that?

Diana Blass: It's a question that has long haunted the world's most regulated industries. AI can transform nearly every business operation. But for many enterprises, the risks have been impossible to ignore.

Per Kroll: Privacy issues, security implications, compliance.

Diana Blass: And when you add in flawed enterprise integration, adoption and results have been slow to follow. A recent study by MIT NANDA found that despite investing $30 to $40 billion in gen AI, organizations are getting zero return. But that may be starting to change.

Chris Wolf: They took two weeks of work. They were able to compress it down into like three, four hours in an afternoon.

Diana Blass: So how are the world's most sensitive industries doing it?

Chris Wolf: There's going to be considerable demand to be able to bring AI models to where the data physically resides.

Diana Blass: I'm Diana Blass. Let's get connected. Not too long ago, the public cloud dominated compute. It lowered the barrier to entry, scaled elastically, and gave enterprises immediate access to the best infrastructure without the cost or complexity of building it themselves. But for highly regulated industries, that meant moving sensitive data into environments where leakage, bias, and compliance risks ran high.

Chris Wolf: The notion at that time was that all AI use cases would have to belong in a public cloud because you needed gobs of GPUs to do anything with AI. And we started seeing that there were use cases where you didn't need hundreds of GPUs to run an AI service. Sometimes it's tens of GPUs, sometimes it's less than 10 GPUs, sometimes it's, you know, a very small number of GPUs.

Diana Blass: That awareness marked a turning point. Enterprises realized they no longer had to rely on a public cloud only approach. Instead, they could design an architecture that truly fit their needs. Aligning AI investments with specific business problems, not just broad ambitions.

Venkat Balabhadrapatruni: So starting with the business problem that you want to tackle will now set the path to what is the data needed? Do I have right access to the data? Do I have the right security controls around that data?

Diana Blass: That shift from "where can we run AI fastest" to "where can we run it safest and most effectively" is fueling today's push toward private and hybrid AI strategies. It's a cloud reset, according to Broadcom's report. Among 1,800 senior IT leaders surveyed, the report found that 66% prefer to run container and Kubernetes-based apps on private or hybrid cloud, and 55% said they prefer private cloud for AI model training, tuning, and inference. So how exactly do you define private and hybrid AI?

Venkat Balabhadrapatruni: Private AI is in this context that we are talking about from an enterprise perspective, is AI models that are running on prem to make sure that the crown jewels that they have is not being leaked or made available in the public cloud. Hybrid is the notion of leveraging the power of what is available in the cloud as well as what is available on prem to satisfy a business need.

Diana Blass: We spoke to analyst Nick Patience from the Futurum group to learn more about the trends behind these strategies.

Nick Patience: Yeah, I think the, you know, while every, every wave we have of this kind of AI innovation, public cloud is often where, where the wave starts. And then, you know, a lot of experimentation's done in the public cloud. And then as implementations need to occur and it gets to the point of production, a lot of companies want to have more control. And so, you know, I think that that drives a lot of interest in private cloud adoption on prem. Then there's other issues which are as much to do with geopolitics as anything else. And that's sovereignty. And that while that used to be the preserve of Europeans, especially continental Europeans, I think every country frankly is now, is now thinking about that.

Diana Blass: And now the market is shifting, vendors rolling out new solutions aimed at making private and hybrid AI a reality. VMware was one of the early movers leading the charge.

Chris Wolf: There's going to be considerable demand to be able to bring AI models to where the data physically resides. And that's what got us into what we were defining as private AI, which is taking a different architectural approach where you're really centering privacy and security of your data around the artificial intelligence service, as opposed to having to export your data to some type of proprietary cloud service just to gain the benefits of AI.

Diana Blass: The need was clear. A VMware survey found that 36% of enterprises cite compliance as a roadblock, and 55% avoid certain gen AI use cases entirely due to security concerns. Interestingly, a private or hybrid approach to AI can still tap into the public cloud while easing those concerns.

Chris Wolf: There's a mining company we're working with where they need to have the intelligence really adjacent to where they're generating data. So I can make decisions at my different mining operation sites. So I could have a data center, I could have on prem, I could have a cloud. And I might use the cloud for doing things like maybe some of my office productivity tools. It's just easier just to use the embedded cloud services or maybe I want to fine tune a particular AI model or service. I'll use the burst capacity of the cloud. But then when I want to run that service, then I can go ahead and bring that in. Or I might want to work with a regional provider or a telco that can host this type of service for me.

Diana Blass: And that flexibility to pivot pays off not only in compliance and security control, but also in the bottom line.

Chris Wolf: Because today if I'm doing a token-based billing model, my costs can be really unpredictable. Like, we are consumers in my team of OpenAI, we use it for research, and we might hit our quota that I put in place in terms of a ceiling for our budget, maybe the third week of the month. If I just pool and share my infrastructure and I start running open source models, well, my cost is basically the cost of my infrastructure. I don't have a per-token cost to access the model anymore. So therefore I have not just a lower cost, but I also have a very predictable cost month after month, which is really good for the organization as well.
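Wolf's cost argument can be sketched with toy numbers: token-metered billing scales linearly with usage, while a pooled, self-hosted deployment costs roughly its fixed infrastructure bill no matter how heavy the month is. All prices and token volumes below are hypothetical, chosen only to illustrate the shape of the two curves.

```python
# Toy comparison of token-metered vs. fixed-infrastructure AI costs.
# Every figure here is hypothetical, for illustration only.

def token_billed_cost(tokens: int, price_per_1k_tokens: float) -> float:
    """Metered billing: cost grows linearly with usage."""
    return tokens / 1000 * price_per_1k_tokens

def self_hosted_cost(infra_monthly: float) -> float:
    """Self-hosted open models: cost is the (amortized) infrastructure bill."""
    return infra_monthly

light_month = token_billed_cost(50_000_000, 0.01)    # 50M tokens  -> $500
heavy_month = token_billed_cost(500_000_000, 0.01)   # 500M tokens -> $5,000
flat = self_hosted_cost(2_000.0)                      # same bill either month

print(f"metered light month:  ${light_month:,.0f}")
print(f"metered heavy month:  ${heavy_month:,.0f}")
print(f"self-hosted, either:  ${flat:,.0f}")
```

A heavy month costs 10x a light one under metered billing, while the self-hosted line stays flat, which is the predictability Wolf is describing, independent of whether the flat cost happens to be lower in a given month.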

Diana Blass: So let's recap what we learned so far. First, it's critical to start with a business problem that will then give us a better understanding of how to structure the architecture, whether it's private, hybrid, or public AI. Now it's time to dive into the infrastructure and industry advancements enabling those strategies to work. Let's start with the Model Context Protocol, or MCP. MCP creates a single standard for how AI models connect to enterprise data. What used to be messy and fragmented, with custom setups for every tool and database, is now streamlined into one common language. So in practice, an enterprise can deploy an MCP server, which then acts as a secure gateway between the model and the data.

Per Kroll: You can plug that information source, that data source into any GenAI solution. May it be a public cloud, or may it be a private cloud solution. Private AI. And you can apply it to many different use cases. So it really plays a key role in allowing companies to future proof their investment in GenAI. So you build something that can be reused in many contexts.

Diana Blass: Running all servers on MCP ensures governance rules apply consistently across all integrations, and role-based access is easier to enforce. Add to that the rise of high-performance GPU servers in private data centers, and suddenly private AI is no longer theory, it's practice. But there's one more breakthrough pushing this forward: advances in silicon chips so powerful they're bringing AI to machines many thought would never see it, like the mainframe, the computer behind the world's most critical industries that holds 70% of the world's most critical data.

Falk Steinbrueck: We now have the next generation of AI hardware right, built into the machine itself. So you can use that hardware to run machine learning workloads and algorithms right on your box where your data resides.

Diana Blass: Well, it all comes down to these chips, the Telum processor and the Spyre accelerator, chips that turn the mainframe into a real-time AI factory. What does that mean for customers? Banks can stop fraud the instant a transaction is made, and hospitals can check claims in real time without having to move data to the cloud.

Falk Steinbrueck: Think about the implications of transferring that data into a cloud environment. There are security implications. You really want to make sure that every layer, every transaction is secured when you move your data off the platform. There are privacy complications, there are compliance complications potentially.

Diana Blass: It's one of the best examples of the value a private AI strategy can bring to some of the most regulated companies in the world, while also breathing new life into legacy infrastructure that dates back to the 1950s.

George DeCandio: The big challenge with the mainframe, obviously, is that you have decades and decades of COBOL applications. AI can provide a very useful assistant to help them understand applications that were written sometimes even before they were born, and be productive in maintaining and enhancing them.

Diana Blass: AI is also unlocking the real jewels hidden inside the mainframe: stores of data that can finally be dissected, understood, and operated on in entirely new ways. But the real excitement comes when you look at what it means for customers.

Falk Steinbrueck: A ton of innovation that we have not seen on the mainframe before.

Diana Blass: You see, the next chapter of AI isn't about chasing the biggest model. It's about deploying the right model in the right place for the right problem.

Chris Wolf: Somebody thought that the work that they were going to need to refactor this application was going to take them about two months. They did it in like two days.

Diana Blass: But the bigger story?

Chris Wolf: What excites me about tomorrow as, you know, somebody leading an engineering team is that there's all of these problems that we still haven't solved.

Diana Blass: What will we wake up to next? You'll have to stay tuned to stay connected. Till next time, I'm Diana Blass.
