The View from Davos with MBZUAI’s Eric Xing
From Davos, Patrick Moorhead speaks with MBZUAI President Eric Xing about why world models may define the next frontier of AI research, how to recognize progress beyond narrow intelligence, and what role universities play in balancing innovation, openness, and responsibility.

As AI research pushes beyond pattern recognition, foundational questions are resurfacing about what progress should actually look like. Growth for growth’s sake alone is no longer a sufficient answer.

From Davos, Patrick Moorhead sits down with Eric Xing, President of Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), to challenge the assumption that bigger models alone will carry AI forward. The focus shifts to why real progress depends on systems that can reason, plan, and interact with the world, not just absorb more data. Moorhead also presses Xing on the stakes for academia as AI becomes entangled with geopolitics, regulation, and national strategy; Xing argues that universities now sit on the front line, defending open research, reshaping education for an AI-native era, and deciding how innovation advances with intent rather than inertia.

Key Takeaways Include:

🔷 World models signal a shift in AI research priorities:
Progress toward more general intelligence may require systems that model the world and can reason about it, rather than simply scaling language models.

🔷 Today’s AI remains fundamentally limited:
Despite rapid advances, current systems still lack the planning, reasoning, and adaptability that define more general forms of intelligence.

🔷 Open science is under pressure:
Rising geopolitical tensions around AI sovereignty and compute are reshaping how universities and research institutions operate globally.

🔷 Responsible acceleration is becoming a leadership challenge:
Balancing innovation speed with societal impact now requires deliberate choices, not default momentum.

🔷 Higher education must adapt quickly:
Universities face urgent decisions about what to stop teaching and what to prioritize in preparing students for an AI-native decade.

Learn more at MBZUAI.

Disclaimer: Six Five Media’s The View from Davos is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Transcript

Patrick Moorhead:
The Six Five is on the road here with a view from Davos. It's been a great show so far, pretty much everything you would expect from the World Economic Forum and more, right? It's really the intersection of technology, AI technology, a lot of policy discussion, and there have even been a few tariff discussions. AI over the past two and a half years has pretty much dominated all of the technology talk out there. It started off with models, proprietary models, but the industry is really moving in a bifurcated fashion: you have proprietary models, you have open models, and by the way, there are different versions of open models; everybody has their own version of what open means. One of the leaders in open AI models is MBZUAI, and to discuss world models, which is really looking toward the future, we have Eric Xing with MBZUAI. Welcome to the show.

Eric Xing:

Nice to meet you. Thank you for having me. 

Patrick Moorhead:

It's been great to get to know you over the past couple of months. It just seems like a flurry of new models that are coming out. And what I love about the research that you're doing is that you're shooting ahead of the curve. And I know there's a lot of research that does that. But it is interesting as it relates to world models, how you called this two years ago, and only now is everybody talking about it.

Eric Xing: 

Yes, our model is looking beyond what the current language model is able to do, which is delivering what I call book intelligence, meaning that your knowledge comes out of a book and is also delivered to you in the form of a book, because it is about theory, about conversation, about solving puzzles. But who walks the last mile? You have to actually deliver these functions and execute them in the real world. That's what the current language model is not able to do. And the world model gives you that kind of physical intelligence. You understand the real world; accordingly, you plan your action and you execute the action through a robotic platform, or other platforms, like in a game or other systems. I think the world model is a fundamentally different concept compared to large language models or the type of chatbot systems that we see, because the type of reasoning they do and the type of capability they deliver is very different. The world model is really about simulating all possibilities in the real world. This is like how our brain works, right? When we do complex planning, we don't actually do a calculation and solve for one outcome out of many possibilities. You do thought experiments; you try a lot of things within your brain before you act, and then you select a few of them until the best of them emerges. And that is basically what the world model is delivering. It gives you that kind of facility and capability to do thought experiments inside a robotic brain. They simulate possible outcomes. They allow your action to be conditioned on what happens, to be adjusted midway, so that it can take turns, as it were, to deliver different outcomes. And more importantly, there is a capability that is unique to our model, which is very, very long-term reasoning without resorting to video and text. Imagine trying to shoot a video that is a year long; you can't, but we can plan for a year.

Patrick Moorhead: 

Let me ask you, just for the sake of the audience here: can you explain the applications of a world model? You talked about a few of them, but we're talking robotics, we're talking video games. Essentially, anything that you can see in the physical world, you're pulling into the digital world to put it to work after it's been processed, without having to, let's say, drive 10 million miles or have a robot walk 10 million steps?

Eric Xing: 

That's an excellent example. A robot walking for 10 million steps, for what? And where would it go? You see the Boston Dynamics dogs able to walk that many steps, but without necessarily a purpose, without necessarily knowing why they do it, and also what happens if the situation changes. The world model allows a robot with that kind of intelligence to autonomously do its own thought experiment and then come up with the best movement. I'll give you one example. Last week I was spending my time in the Alps, hiking. I did everything I could with Google, with ChatGPT, with all the manuals and tips and everything. I was told this and that. I got fully prepared. Guess what? When I'm on the mountain, I'm alone. I cannot go back to those books and have them tell me what to do, because everything changed. The snow was higher than what I was told, and I wasn't equipped with snowshoes and all the equipment. I had to decide: do I walk forward, or turn back and try again? These are the kinds of things that the ChatGPT book knowledge is not giving you. You need street smartness, and also this physical intelligence, to do real-life, real physical planning.

Patrick Moorhead: 

So are we beyond book smart? Are we in this uncomfortable zone between book smart and physical-reality smart? Is that where we are on the map right now?

Eric Xing: 

I think we are stuck maybe in the middle of that, or maybe at the beginning of that, because people haven't yet fully converged on how to achieve physical smartness. For example, there are models that are generating a lot of videos already, making you feel that they are already physically smart, because they generate pretty regular videos. But the trick is that you can keep asking them to generate: generate not just 10 seconds, but 1 minute, 1 hour, 5 hours. You will see that suddenly they lose track of their train of thought, because the consistency required for very long video processing is not there. And then the fundamental question is: why do you even need to generate the video? If you do planning, if you do long-term reasoning, what you need is symbolic knowledge rather than pixel knowledge. Therefore, that flow of pixels may not even be necessary. So people are still exploring exactly what it takes to deliver physical intelligence. You mentioned, for example, World Labs. In their definition, it is about being spatially intelligent. That is absolutely one of the most important elements in physical intelligence, but that's not enough; that alone doesn't give you embodied intelligence. You need to basically touch the temperature and feel the pain or the exhaustion. There should also be additional social intelligence, because your physical world also contains people and other agents. How can you actually go beyond only understanding the physics, to also understand emotion and other agentic behaviors? What I'm trying to say is that there are lots of layers of intelligence beyond what we see now that are not being delivered or properly implemented. And of course, I haven't even mentioned the ultimate intelligence, which is having the curiosity to discover more and produce more knowledge. That has never yet been the task or the expectation we have of any of the current AI systems.

Patrick Moorhead: 

My first thought is how powerful, but also how hard. And it's getting harder with all the geopolitical tensions. I think for universities it's like, hey, how do I have my cake and eat it too? How do I design for my region, but also make it available, and, I'll call it open sourcing it, even though there's a better term for that.

Eric Xing: 

Yeah. Well, there are geopolitical challenges in carrying the research forward, but there are also ecosystem challenges in carrying out the research, because you already see this echo chamber effect: the dominant players in the current AI space are basically amplifying their own approach and their own capability without being explicit about the boundaries and the limitations. Therefore, you feel like, oh, AGI is two or three years away from us, and we'd better just jump onto this particular form, which is the LLM, and try to drive business out of it, but stop investing in additional frontiers, such as new architectures and new methodologies. That's where I find academic institutions particularly valuable and responsible. In fact, it's almost a duty to be that voice: to speak out about the new needs, the new frontiers, and the limitations, and also to be candid about the boundaries. Then, about international collaboration: again, we need to separate science from product, and maybe also solutions. Because science is a piece of knowledge that can only benefit and amplify and thrive through collaboration and through very, very open exchange and debate. It's not about who benefits from it. Benefiting from that knowledge to deliver either value or harm requires additional things that can be checkmarked and checkpointed or regulated in other ways. But people don't regulate science. In that sense, I see geopolitics as less of a real limitation. It's really people's mindset and people's perception of the value of scientific research. At our university, we're trying to be in the game of frontier foundation model and AI system research, but doing it in a way that complements what the big tech companies are trying to do in protection of their business and strategic interests.
I think for academia, we find a unique opportunity and also a responsibility to really make the research more transparent and also more subject to reproduction and the evaluation so that people can have a better trust in terms of the safety and reliability and the honesty of the research that we deliver.

Patrick Moorhead: 

Well, and your models are not just open weights, they're open weights, open data sources, pretty much open everything. In fact, at least based on my research, I think you have the most open models out there, or close to it.

Eric Xing: 

Indeed, we have the most open models out there, and not just open, but also performant. Again, openness is not a religion. If you open something that is not useful or interesting, nobody cares. Our foremost goal is to make our models performant. So, for example, our recent series of K2 models is comparable to frontier models of comparable size, maybe even bigger size, but delivered at a lower cost. And our world models, and our models for the virtual cell and digital organisms, are actually even ahead of the game, because we want to use our very limited resources, but immense brain power, not to dominate the research landscape or take all the credit. It's really about setting a direction, inviting the community to study these things together, and promoting collaborations.

Patrick Moorhead: 

Yeah, and I appreciate it. I don't want the thought to get lost: essentially, university research is the Switzerland (and here we are in Switzerland) of providing this and letting other people debate the productization of it. And generally speaking, society has done a lot better when universities are let to cook, right, and do their thing, and then people can figure out what to do with that.

Eric Xing: 

But I want to add a catch to that, which is that the places that host great universities are usually the first to benefit from them. Look at Silicon Valley, right? With Stanford, with Berkeley, it has become the world's best ecosystem for the startup economy and for innovation. Having a university is not about being philanthropists, being generous for the sake of being generous and just spreading wealth to everybody. No, that's just one of many possible outcomes, and there are others, because as a university we produce knowledge, we produce people, and we also produce startups and entrepreneurship. For example, in Abu Dhabi, the company just mentioned, like the Jam Bell, which is now prioritizing AI systems for biology, is actually a spin-off from the university. And we're going to spin off more companies in the space of world models and language models. When you are in that kind of vehicle, of course you can pursue financial interests, and you will do some customization with proprietary data. We want to separate the fundamental research, which is public and open, from the more commercially oriented productization and research, which of course needs to protect customers' interests.

Patrick Moorhead: 

I want to shift the conversation to speed, okay? Some people are saying we're moving too quickly, that it could lead to job loss, all these dangers, basic doomerism, if you don't mind me using that term. And then others are saying we're going too slow: look at all the benefits that society could have from all of these great AI capabilities. What is the responsibility, or what is the role, that universities play in this?

Eric Xing: 

Yeah, we want to be careful in terms of making judgments on our faculty's and researchers' work. You never tell a researcher to slow down, or to do one thing over another, and so forth. They are free, intellectually and academically. That's the holy grail of the university spirit: intellectual freedom, and an open space for doing their research the way they want to do it. Now, the university provides the runway, the ecosystem, the resources, and of course the incentives. In that, we do have a North Star in our values and our direction. Of course, all research needs to ensure that it is safe, that it is not leading to harmful outcomes, just like when you do virus and biological studies: you need that kind of safety environment to make sure your toxic materials are not leaking. Well, in the digital sciences, like AI and software engineering, you do want to make sure they don't produce an uncontrollable leak of electronic viruses, or intrusive programs that are going to harm people. We have existing regulations; in fact, many regulations are built exactly for that already. I don't think we need to add on top of them. So people are well regulated, managed, and educated to have good conduct. On the other hand, I worry about things going too slow, because society really has the ability to absorb technology when it actually sees technology. You don't want to prepare people for a hypothetical technology before you even build it. So I'm a believer in the organic growth of societal capability and competence. In fact, in the United States, 100 years ago, maybe more than 90% of people were farmers, and now maybe 3%. Are they less happy, or worse off? No. I think AI, like every technology we build, is going to create a capability that frees people from less interesting and more demanding jobs. That freedom gives them more choices and options.
That's how I see it. So I think society can benefit by seeing a faster movement of technology, including AI. At my university, I'm less worried about misconduct or accidents from our faculty. I'm more worried about them being held back by unnecessary regulation that suffocates their productivity and creativity.

Patrick Moorhead: 

Eric, this has been a great conversation. I want to ask one final question. You're at a university, obviously, and you're at the epicenter of what the future of AI is going to look like. What are you recommending students should learn at this point, getting back to the idea that AI can do it better? What would it be?

Eric Xing: 

Or how should they think about it? It's a very interesting question. I can tell you that everything I do now is not what I learned 20 years ago, and of course not even what I learned 10 years ago. I think the greatest capability that I can teach, or pass to the students, is the ability to learn. You learn how to learn, basically. And that means you don't worry too much about whether exactly the thing you learned right now is able to create a particular outcome. It's really about how you can learn it better and faster, and then move on to new problems. So in that sense, the university is trying its best to set up a new type of curriculum that goes beyond pure book intelligence or academic competence. You need to also have social competence. You need a way to understand other people, to understand the world in terms of its needs, to discover problems, and then to communicate your solution and your questions to your collaborators and to your investors. In case you want to build a startup, you need to raise money, and I think our students need to know how to raise money. So there is a lot of knowledge that is not taught now in a conventional academic program, at both the undergraduate and graduate levels. We are trying to bring in not just STEM, the programming skill and the math skill, but also your communication skill, your design capability, your product management capability, and your teamwork ability, that kind of curriculum, into their training. This is the best I can hope for: not that the students walk away with a full brain load of equations and knowledge, but with the confidence that they can survive in a changing world, that they can learn new things even when they weren't taught in school.

Patrick Moorhead: 

I've asked that question about five times this week of various people, and I think that was the best answer yet. Eric, I want to thank you for having this conversation with us. And by the way, congratulations: you've got a five-year anniversary coming up, with a celebration as well, and I appreciate the opportunity to attend. Thank you.

Eric Xing: 

Thank you very much. I look forward to seeing you on campus. Yeah, we are very excited about celebrating the fifth anniversary. We also have a new plan to be unveiled for the next five years.

Patrick Moorhead: 

Oh, I love it. I love it. I appreciate this. So this is Patrick Moorhead with the Six Five, with a view from Davos. That was Eric Xing, president of MBZUAI, and we heard some huge aspirations from Eric. Not only are theirs the most open models out there (and as an analyst, I've got to watch what I say, but that is an absolute), they're also on the cutting edge of the latest generation of models: world models. I will be in attendance at the five-year anniversary in Abu Dhabi. I'm looking forward to it. Stay tuned to my social media to learn more about that. Take care.
