The New Rules of the Cloud: AI and Multicloud
How are leading companies redefining the standards for cloud infrastructure in the AI era?
At The Six Five Summit: AI Unleashed 2025, host Daniel Newman sits down with Karan Batta, SVP at Oracle Cloud Infrastructure (OCI), to explore how OCI is reshaping the cloud landscape with its innovative approach to AI, multicloud strategies, and distributed cloud frameworks. This cloud infrastructure spotlight dives deep into OCI's methods and technologies that are setting new standards for cloud computing. Learn how OCI is building the next generation of cloud infrastructure—and influencing what’s possible for enterprises around the world.
🔹Pioneering Multicloud Innovation: Explore Oracle's leadership in forging groundbreaking multicloud partnerships with major providers like Microsoft Azure, Google Cloud, and AWS, and how they're refining their application stack for superior service delivery.
🔹OCI's AI-Native Infrastructure: Delve into OCI's ambitious goals for integrating AI directly into its infrastructure, detailing unique challenges and differentiated solutions, including advanced networking and the high-performance OCI Supercluster.
🔹Transforming Business with GenAI Agents: Discover the power and potential of Generative AI Agents in revolutionizing traditional business operations, powered by Oracle's innovative GenAI agent platform.
🔹Strategic Global & Distributed Cloud Expansion: Gain insight into Oracle's distinctive approach to rapid global expansion, emphasizing an extensive regional footprint and the novel Oracle Alloy initiative for customized OCI regions and enhanced managed services.
Learn more at Oracle’s multicloud solutions and artificial intelligence services.
Watch the full video at Six Five Media, and be sure to subscribe to our YouTube channel, so you never miss an episode.
Or listen to the audio here:
Daniel Newman: Hey everyone, welcome back to The Six Five Summit Cloud Infrastructure Spotlight. Excited to have this conversation with Karan Batta, SVP at Oracle. We're going to talk a little bit about multicloud and AI. Karan, good to see you again. It's been a minute, but I always love having our conversations.
Karan Batta: Yeah, thanks for having me Daniel. Excited to talk about all things cloud.
Daniel Newman: So I've spent a lot of time over the last few years having conversations. Oracle went through this pretty significant transformation in its own strategy. The cloud business has been explosive, among the fastest growing each and every quarter. We track that very closely. But for a while it was kind of like, among the peers, everyone was sort of running their own race. It was like AWS, Google, Microsoft, Oracle: we're going to get as big as we can, we are going to try to dominate our markets, we are going to convince the world that everyone only needs our cloud to do everything. And then all of a sudden there's a bit of an aha. And Oracle had a big aha, you know, over the last couple of years, and last year in a really big way, where you went kind of all in on multicloud, made some big partnerships, you know, announced with all the major hyperscalers. And I'm sure it had a lot to do with how important your database is to so many of their enterprise clients. But you know, I just want to kind of start there, because I don't think people necessarily really stopped and kind of smelled the moment when that took place. But you were one of the ones that went from "no, we're going to be very much a walled garden" to "no, we are all in on multicloud" really, really quickly, as you expanded all your partnerships. What drove that, and how do you see that evolving, kind of just in the near term, into the application space?
Karan Batta: Yeah, yeah, definitely. Thanks. I think it's been a journey, right? I mean multicloud, I think for us, it's simplistic now when we talk about databases, but really we started this journey five years ago. We didn't even think about databases at the time. What we really wanted to accomplish was, in the old days when you used to go build a private cloud, you used to be able to go buy different parts of things from different vendors. You could buy a switch, you could buy a host, you could buy some storage, you could buy different pieces of software, and you could cobble it together and build a private cloud with many different vendors. That world didn't really exist in the cloud, right? It was sort of like everybody was playing Hungry Hungry Hippos, like you said. Everybody was like, well, if you are going to come to my cloud, you should use everything from my cloud, and I'm going to make it extremely expensive to move out of my cloud. You have to buy everything and you have to be all in on my cloud. So it became very hard for the customer to pick and choose one or two, you know, features or services from one cloud. As a decision point, we wanted to break down some of those barriers. Right. We started with, let's say, networking as an example. Customers should have the freedom to move their data around. So, you know, we signed an interconnect agreement with Microsoft. No money exchanged hands, but it gave customers the ability to treat two regions as a singular entity so that they could move data around. Now fast forward to today: we now have a host of data services available through all three clouds. So I think that journey has taken a long time. It's been breaking down small little barriers. But the goal really was to enable choice for our customers.
Of course our database is the market-leading database out there, but also to give customers the ability to pick and choose the best of breed from every cloud and just use whatever cloud they want. So that means interoperability, integration, data movement. I'm not even talking about databases yet. Right. So it's all these other things that combine to make it a multicloud experience for a customer.
Daniel Newman: Yeah, I'm still playing Hungry Hungry Hippos in my head, thinking back to my youth. No, but it's interesting, because now we're in this inflection with AI. So you spent the last, you know, almost decade building out OCI, and OCI Gen 2. I think Gen 2 was such a pinnacle moment, you know, where it really started to turn and that great growth ensued. But then AI came, and AI is like, oh, we're going to build a whole new data center. You know, you have a great partnership with Nvidia, but you listen to Jensen and it almost makes it sound like everything you've done in the past has a stop, pause, cut-off point. You've now got to build these new AI factories from the ground up. New architecture, new applications, agents. You know, talk a little bit about that, because you've gone big. You've got Stargate, of course, you were mentioned as part of that project. Talk about the approach there, because this seems like it's almost a moment to take everything you've learned, but to almost start over and try to become the, you know, the cloud company of the AI era, which almost seems like a new race.
Karan Batta: Yeah, no, absolutely, I think you're absolutely right. I think if you look at our past history, you know, it'll be 10 years for OCI next year, like almost a decade. And you know, we started with our core fundamentals of what makes OCI, you know, really great for infrastructure customers, which was extremely high performance for very affordable economics.
We wanted to take the best of what you get with a private cloud or your on-premise clusters and marry it with the benefits that you get from a cloud. That was always the kind of core tenet of how we built OCI. And it's the same thing with Exadata, right? We have IB in Exadata as an example, where you have a database cluster. We used the same technology to enable HPC several years ago, where customers could come in and launch HPC clusters. And then the AI advent sort of started, you know, when ChatGPT came out and suddenly that became a thing, and we used similar technology and expanded that, right. We were already invested down that path, which was extreme performance. So I think the scale is just different now, right. We're talking about, you know, 120, 130,000 GPUs in a single cluster. When you're building something that big, the focus goes away from just one part to the entire thing: all the way down from construction, construction of the physical buildings, how they're interconnected, all the way up to things like liquid cooling and sort of the loop that goes around in the buildings, all the way to the network, the network build-out, and then even software, software-based, you know, operation. So when you're running 100,000 GPUs in a cluster for a single training job, you have to make sure everything runs, you know, cool. Everything runs at the right time. Everything is running without any issue. Because if one GPU goes down, your entire training job can go down. Right. So there's a lot of work and effort being put into software, operations, automation, making sure things are up and running. What your RMA process is going to be, you know, if a GPU goes down, how are you going to be able to replace it quickly and restart the job.
So, yes, there's the core engineering aspect of, you know, working with your partners like Nvidia, but then there's an ecosystem of technology that we have to invest in. So there's a lot of work to do. And the scales continue to grow. You know, of course, we've been talking a lot about how our cluster sizes are growing. With Blackwell GPUs, there's liquid cooling, so there's a whole set of new problems to solve in the next several years.
Daniel Newman: There are, and it's exciting. It's creating a huge opportunity on a global scale. Things that you've been doing, like regions, like zones, like OCI, you know, seem to really be perfectly aligned to where we're going to have to head with sovereign AI, and it's just kind of the way it's been thought about; there are some parallels. And of course compute that's done on the CPU is not going to disappear overnight. In fact, there's still this very symbiotic relationship between CPUs and GPUs. But there is very clearly a new architecture. The way it's being built, the way software is being designed, the way power has to be considered, thermals: everything that a data center used to be and what it will be is going to be different. You know, one of the interesting things about Oracle, though, too, is you're not just an infrastructure company. You are not just a database, you're also an applications company. And gen AI was kind of the rage for a few years. It still is, but you're seeing this transformational moment now, another inflection, where generative and agentic are sort of morphing into, you know, we're going to have basically agents working alongside us. And then maybe if we're lucky, next year, you and I won't even have to come for this conversation. It'll just know you, it'll know me, and it'll do this. You know, come on, a couple of guys can dream. We could be out at the racetrack. Right? Whatever it is we want to do. But talk a little bit about how Oracle is thinking about generative AI agents across your stack.
Karan Batta: Yeah, I think you pointed out a really important thing, that Oracle is not just an infrastructure company or a cloud company. I mean, we're a cloud company, but we're also an applications company. That's something that our competitors don't have: we have the applications and we also have the infrastructure, side by side, our Fusion apps and our vertical applications, such as healthcare with Cerner, or retail and Micros, and finance. So the advantage that gives us is, number one, all those applications can run on the same infrastructure, side by side with the clusters that are also running the training. And then on top of that we have the world's most popular database, where all of your data sits. So, you know, imagine a world where, as an enterprise customer, you can use our platform services. We have a Gen AI service where we offer things like Llama and Mistral, the usual set of models; we recently announced Grok as well. You can essentially do RAG, point your database to it, retrain your models, and then you can also build agents using our agent platform as well. There's the external customer component, but also internally at Oracle, Daniel, what we're doing is we're injecting all of that intelligence into our applications too. We've got hundreds of agents that we've already built across all of our applications, Fusion SaaS: an agent to help you with supply chain, an agent to help you with patient data, an agent to help you with public safety. These agents are already embedded in our applications, and they allow for much, much more optimized use of those applications. So we've got a multi-pronged strategy with the apps, the infrastructure, and the platform, depending on who the user is.
Daniel Newman: Yeah, and I like how you kind of brought that together. On our side, we've been evaluating the agent space from the very beginning, and there's sort of these different approaches of where agentic will be adopted in the organization. And so, you know, if you look at first blush, a lot of the early applications came out, and it was like, we're going to build an agent inside an application, and if you're using this SaaS platform or this enterprise tool, we'll give you an agent. And then there's a bit of the other side, which is, we'll deliver it from the infrastructure level, build the agents at the platform level on the infrastructure, a little bit more agnostically to whatever application. Because if you end up having to buy agents for every software you have in your enterprise, that probably doesn't scale super well. Right. So it starts to look a little bit like how we did virtualization and stuff, where you kind of want to have more of a platform in a middleware layer. And then you have these companies that are kind of saying, well, we'll do it just that way. We're not really either infrastructure or app, but we'll be the agent layer. So it's going to be kind of an interesting thing to watch how this diffuses, because what I know for sure is that people are going to want to streamline this. They are not going to want to have an agent with every application and every cloud that they're in; they're going to want to get that down. So that's going to be really interesting. Oracle, being full stack, sits in a pretty good situation where you can come at it through apps, you can come at it through data, the database and ERP layer, and you can also come at it through infrastructure. So that certainly positions you well. It makes you a bit more of a wild card to really have an even bigger role than maybe, you know, some of the names that are sort of synonymous all the time with AI.
Let's talk a little bit about your global scale. I sort of teased out sovereign AI. You guys have a massive, sort of modular approach. I think you're at like 200 regions or something at this point, and you've done it in a pretty short period of time. You're sort of going zones, regions, all in a very short period. How have you approached this? How have you done it so quickly, and what's your strategy for addressing your regions and zones, and how are you picking?
Karan Batta: Yeah, yeah. I mean, I think regional availability is a. Probably one of the most critical parts of number one. You know, it's really a big portion of why we are where we are today. When we started in 2016, in OCI, we could have all the greatest services in the world, but if they're not available where the customers want them, we would have not had a seat at the table. We had a need to build, deploy regions as fast as possible in the locations that our customers wanted for various reasons, whether it's latency and performance, whether it's security data. Sovereignty needs lots of different reasons. And so we invested over a decade's worth of engineering time in actually scaling down our region footprint, right. So we had a need for ourselves. We wanted to deploy lots and lots of regions everywhere, right? Think of a world map and like there's dots in every single city on the planet, right? Not just big hubs or big regions and short number plates, like two regions in the US and two in Europe. No, we wanted a region, an OCI region in every part of the world, in every city of the world. Right. So for us, we had to spend a lot of time scaling down our regional footprint so we could fit our entire public cloud into a single rack, let's say as a Northstar goal. Right. We're very close. Now, 10 years in, we can do it in three racks. Right. We announced dedicated Region 25 last October, which basically means a customer can deploy three racks and get our entire public cloud footprint. Now, of course they can modularly scale that out. Of course we did it for ourselves to deploy more regions, but it just so happens that customers love that and they want to be able to do that themselves as well for themselves. So we ended up deploying Dedicated Region and then we built on top of that. Right. 
Because customers were like, well, you know, with sort of the geopolitical space that you mentioned, in Europe as an example, a lot of the countries are making these workloads, you know, government workloads or mission-critical workloads, sovereign. Right. So if you're, you know, in Germany and you're running an automotive company, you cannot run in a public cloud that is, you know, a U.S. company; you have to run in a local public cloud. Well, we have Alloy, so we could take the DRCC footprint, give a partner resell rights, and they can be a wholly owned subsidiary inside that country and provide a cloud. Right. So truly it's actually opened up a lot of different avenues for us: not just DRCC and Alloy, but our own regional footprint. Edge no longer has to be scaled down to just a few services. Why can't edge be the whole cloud, as an example? Why can't you deploy everything at the edge? Also multicloud, right. There are some regions where we may not have a parent site where our partners are. Well, that's great. Now we can just deploy a very quick region. So it actually opens up a lot of different avenues, not just sovereignty, security, and data sovereignty, but also just the reach of all of our clouds.
Daniel Newman: Well, Karan, we have just a couple minutes left. I want to stay on this distributed cloud strategy a bit more, to kind of give some nuance out there. Because one of the things that I've been tracking is that, you know, you have dedicated regions, but you've also now announced Alloy, and I think you were alluding a bit to that in your last answer, but I would love for you to just break that down. How is that different? Maybe double-click into that versus, say, the traditional managed service offering.
Karan Batta: Yeah, I mean, the idea behind Alloy, Daniel, was that, you know, cloud is becoming a really important part of critical infrastructure in general, right? Similar to telecommunications or utilities. These are very regulated markets where there's three or four providers in a single geography, right. Like, you know, in the US we have Verizon, AT&T, and T-Mobile, right, and they're regulated markets, or utilities or energy. In the future we truly believe cloud infrastructure is going to be critical infrastructure, right? Like your core banking services, your major emergency systems are all running on the cloud, right? That is critical infrastructure that needs to be regulated. And so what's going to happen is you're going to have countries across the globe that are going to want to regulate this, that are going to want to foster competition and somehow control it and have the right boundaries. And so really what we found is, you know, every country is going to end up having three or four cloud providers, right? And today, well, at least before Alloy, there wasn't a model where you could just buy a cloud and run it yourself. People tried OpenStack, people tried other different types of things, but it didn't really work out. Right. You need the commercial capability of an existing big-four cloud provider like us, but you need to be able to have your people run it, operate it, manage it. And you also need tools around the cloud to be able to manage your business. How do you price it, how do you sell it, how do you support it, how do you operate it, right? So what we do is we take our DRCC footprint, we give you resell rights, and then we add other parts of our business, which is why it makes sense for us as an application company.
We bundle Fusion with it so you can price your products, you can discount them, you can invoice your customers, and you have a support system. So you essentially get, in a box, a cloud that you can manage, run, operate, and sell to your customers. And the value that you're providing is the fact that, essentially, you know, the end customer is mandated to use a local cloud provider. Right? So that's been a big success. You know, we have, I think, over 20, don't quote me on the number, but I think we have over two dozen Alloy customers now. Some are live, some are not yet. We announced this a couple of years ago. You know, for example, Fujitsu and NRI are great examples, where they started with a DRCC for their internal workloads and then moved on to Alloy to build their financial applications and offer them to the government space in Japan. Right. So it's a great stepping stone for those customers and partners.
Daniel Newman: Well, Karan, that's really great progress, and as I've mentioned a few times, with the AI shift, with the sovereign cloud shift, the ability for customers to have all the pieces that an Oracle can offer, but at the same time really own it and manage it, without trying to build it and run it like an open source project, gives them kind of all the tools required, and I could see that being really, really successful. I do have to stop there. It's been a lot of fun chatting with you, Karan. Congratulations on all the progress. It's going to be great to watch these next few years as Cloud Gen 3, or whatever we call the next era, perhaps, continues to roll out, continues to scale, and continues to meet the customer, meet the market, meet the world where it is in such an exciting and rapid transition. Karan Batta, SVP, Oracle Cloud Infrastructure, thanks so much for joining me here at The Six Five Summit and this Cloud Infrastructure Track Spotlight keynote. We'll look forward to seeing you again soon. Sending it back to you in the studio.
Disclaimer: The Six Five Summit is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Speaker
Karan Batta is Senior Vice President of Product at Oracle Cloud Infrastructure (OCI). He is responsible for strategic product initiatives including GPU, AI, and Multicloud. Karan’s organization also supports OCI’s largest and most strategic customers, ensuring that OCI is delivering the right services and features to those customers, as well as implementing the operational workflows necessary to make Oracle’s customers successful.
Karan joined Oracle in 2017 as one of the first members of OCI, working to define the core product portfolio, including compute, storage, and networking. He has held many roles within OCI that expanded OCI’s product portfolio, including leading roles running compute, GPU, platform features, multicloud products, and OCI’s GTM strategy.
Prior to joining Oracle, Karan worked on the core engineering team at Microsoft as part of Microsoft Azure Compute, where he worked on AI infrastructure such as GPUs and FPGAs, along with managed batch services. Before Microsoft, he was an early member of a startup called GreenButton in New Zealand that provided visual effects services on multiple cloud providers. GreenButton was acquired by Microsoft. Karan is based in Seattle, WA.


