How Agentic AI Is Transforming Mainframe Workforce Training

Daniel Newman and Greg Lotko speak with Darren Surch, CEO of Interskill, about how agentic AI is reshaping enterprise IT training. The conversation explores why mainframe organizations still need deep knowledge, structured learning, and human judgment even as AI becomes a larger part of the workflow.

AI is accelerating execution while tightening the margin for error. In mainframe environments, that shift is redefining the role of the practitioner: from operator to validator, from executor to outcome owner.

On this episode of The Main Scoop, hosts Daniel Newman and Greg Lotko sit down with Darren Surch, CEO of Interskill, to explore how agentic AI is reshaping workforce expectations and why training is becoming a strategic control layer. The conversation reframes AI not as a replacement for expertise, but as a force multiplier that increases the need for it.

As AI becomes embedded in enterprise workflows, teams are no longer just completing tasks. They are interpreting outputs, validating decisions, and managing downstream consequences in real time. Darren highlights a growing disconnect: organizations assume AI will fill skills gaps, but in reality, it amplifies the risks of shallow knowledge. Especially in mainframe environments, where reliability is non-negotiable, this creates a new operational pressure: speed without sacrificing precision.

The discussion also examines how training itself is evolving. AI-enabled content creation, adaptive learning models, and faster delivery mechanisms are expanding access. But without structure, scale introduces inconsistency. Organizations that treat training as a continuous, applied discipline, not a one-time event, are better positioned to deploy AI responsibly and confidently.

Key Takeaways:

🔷 The mainframe role is shifting toward oversight, validation, and outcome ownership as AI becomes embedded in workflows
🔷 Foundational expertise remains critical to interpret, challenge, and guide AI-generated outputs
🔷 Assuming AI will compensate for skill gaps introduces operational risk in high-precision environments
🔷 Structured, organization-wide training programs are essential for consistent and responsible AI adoption
🔷 AI is improving how training is delivered, but human judgment remains central to how it is applied

AI is not reducing the need for expertise; it is raising the bar for it.

Watch the full episode at sixfivemedia.com and subscribe for more conversations shaping enterprise technology.

Disclaimer: The Main Scoop is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Transcript

Greg Lotko:
Hey folks, welcome to the next episode of The Main Scoop. I'm Greg Lotko from Broadcom, and I'm joined here by my co-host Daniel Newman, CEO of Futurum. I love that.

Daniel Newman: 

You know, in all the episodes, I don't think I've ever gotten a formal introduction. Now we don't even need a lower third.

Greg Lotko: 

Well, I was thinking about it. You're the CEO of the future. You're not stuck in the past. I am not. All right. So visionary, we're always looking forward. Always. Talking about what's next, how it's impacting what we do every day. Yeah, well, you know, I seek to continuously get better. I'm constantly training myself to improve. Ah, and that's a nice tie-in because our topic today is all around training and education. And, you know, one of your favorite subjects, 

Daniel Newman: AI

Greg Lotko: AI, you said it. So there's the question, you know, you wonder with AI, do we need to keep training humans? Do we need the same level of education? You could ask that in its entirety, or you could ask: are there portions of things that we're not going to have to know how to do, or how to do as well, because AI is going to take it up? What do you think about that?

Daniel Newman: 

You know, there's, there's a quote, I won't get it exactly right, but basically: the successful people of the future will be determined by those that are able to sort of learn, unlearn and relearn, you know, um, and adapt. It's the adaptability thing.

Daniel Newman: 

And so with AI, there's a lot of sort of this FUD and panic that, you know, it's going to just replace everything. And then, of course, there's the other side. I call them the bubble bears, the ones that say it won't do anything. And they're 100% wrong. I promise you that. I agree with you. Then there's this kind of continuum of in-between. And then there's comments like, well, we won't need college anymore. I mean, that's a serious conversation that's going on. Like, you know, you look at AI, you see, like, new ways that people are learning, you know? And by the way, because I had children over, like, two and a half decades, right? I've got my youngest is nine, my oldest is in her mid-twenties. Like, I'm seeing the difference in how each learned. One went to school when there was, like, no AI. One's going to college with now having AI and the other's learning and doing things with AI as a third grader, you know. And it is changing a lot, but I don't know about you, but even in the workforce, Greg, everything I see is this is like a 10X productivity opportunity for us. Meaning as humans, the opportunity to up-level, up-skill, put more energy and attention into things that we're passionate about and do them at incredible scale. But I think that we're still kind of having this fight within ourselves about how much are we working with it? How much are we working against it?

Greg Lotko: 

You know, what can we do for it? But there's a lot there. You're talking about, you know, what we have to learn, how we'll learn. You said something very provocative about whether or not we'll even need college. I actually don't think we ever did need college. We don't… Social construct. We all do need to learn, because if we don't, I mean, there's consequences. And college was born out of what became the most efficient and effective way to train or educate a bunch of people. But, you know, there's experts broader and bigger than us on education. So let me introduce our guest today. We have with us Darren Surch, the CEO of Interskill. And your whole business model is about training people, educating them, skilling them up. And I imagine on the one hand, there's big opportunity in, hey, how do I teach people about AI and how to apply that to technologies? But the other where you get questions of, hey, with AI, why do I need to teach any humans anymore? Yeah.


Darren Surch:
Yeah, thanks for the invite, gents. Pleasure. Looking forward to chatting with you today. Yeah, we're mainframe specialists in skills learning, so mainframe's all we do. And we're hearing that from the industry, the sort of rumblings of, well, do we still need to train our people if they're all using agentic AI?


Greg Lotko:
Why bother? AI is going to do everything.


Darren Surch:
Yeah, and that's pretty dangerous to do that. Who knows if I had a crystal ball five years, ten years down the track, maybe AI gets it perfect every time, but it's definitely nowhere near that yet. So the same things that have been critical for the last 50, 60 years, those core mainframe skills, are still just as important for people to learn as well as the AI. Like you said, it's going to be a massive productivity tool, agentic AI, but you'll still need to be able to know your stuff, know what's going on under the hood in the mainframe to be able to look at the output from the agentic AI and say, yeah, no, that's not quite right. Let's look at that again.


Daniel Newman:
So I get a hundred percent your point. I think right now we're still in this inflection where there's, even if you like what it does, it could be code. It could be, you could be collaborating with it on strategy. Creates a lot of good stuff. Yeah. But anyone that reads it thoroughly, you know, we do research, like I've, I've created prompts and ingested a bunch of data and said, right. It's like, and sometimes it's like, it's really good, but it's like 80% good. Yep. And so what I see that as, like, OK, so it's a last mile thing, like 10%, 20%. And by the way, that tends to be the critical last mile, right? Making sure everything's factual or making sure the code has been fully QA'd and QC'd. But I guess, like, how do you see the role? Because that's kind of the interesting thing is, like, I'm hearing you saying training's more important. The roles are still needed. But like, what does the role look like when some of those first 80% things are no longer necessarily being done?


Darren Surch:
And absolutely, the role will change. I mean, look at IBM: it's laid some people off and is hiring a lot of new junior people, and it'll bring them up and train them to be the generation that works with this agentic AI. And it'll be using that as a tool, and then looking at the output and then working with that. But because mainframe is so mission critical, like you said, 80% close enough is not good enough. On the mainframe, it has to be right.


Greg Lotko:
Yeah, I mean, even when we talk about that, I mean, somebody could listen to this and it sounds like it's a point in time. The idea that AI only gets you to 80%, or call it 70, call it 90, call it 95, whatever. I think one could listen to that and think about it as the last mile, and only having to worry about looking at it at a point in time. But you think about the idea of being able to monitor or supervise, even if you get to the point where you believe AI can do 100% of an operation. And it's not even just about trust, as in, is it right or wrong? But is it everything I want it to be? So I do believe you're always going to need the human to be trained in the technology to be able to say, hey, I get that you're right in what you're observing, and I get that this action will have the right result to solve that portion of the problem. But is it going to impact anything else? Is it going to happen exactly the way I want it to happen? Right? So there's that supervision component, and you can't supervise the AI if you don't fundamentally understand what it's doing. A hundred percent.


Darren Surch:
Yeah. No, that's absolutely spot on. I mean, it's going to accelerate things massively. It's an incredible tool. It's going to change the face of the mainframe industry. But I suppose the point I'm trying to make is they're still going to need that knowledge. And there still needs to be things like mentoring and coaching going on in the organizations. You still need the senior mainframers to pass on the history of why things are done a certain way at an organization and why they were built that way in the first place and so forth, just to give the next generation of mainframers that context, again, to be able to look at the output from agentic AI and call it and say, yeah, that's not quite right, or that's going to break some rules, or that's going to be putting us out of compliance, or for whatever reason, you have to have those skills. There was an MIT study done recently, and I laughed when I read it, because when I do stuff on ChatGPT, it all comes out looking nice. But if someone asks you a question an hour later about something you wrote, you sort of sit there and go, um, and you don't know. And the MIT study said something like 80% of people just really don't know, even though they worked on the prompt to generate this stuff, they just don't know it. Your brain doesn't go through that logic.


Greg Lotko:
It's like modern navigation.

Darren Surch: Absolutely. 

Greg Lotko: You're driving the car. The nav is telling you turn right here, turn left here, go here for 10 miles and everything. So you did the act of driving. You used your eyes. You saw the roads. You saw the signs. You get there. And if somebody asked you, how do you get there?


Darren Surch:
You don't know. You don't remember which way you came.


Daniel Newman:
No. I think the worst one is when you actually know and you see your destination and the map tells you to turn, and you actually have some sort of conflict in your own brain. Because it's like, what does the GPS know that I don't? And you know, you're like, I see the entrance, but it's saying turn here. Um, but it is kind of interesting though, because, like, we know this stuff will get better. It's happening at an incredibly fast pace, because, like, a year ago today, it couldn't cope. And there was, by the way, a lot of the code community, you know, Boris, the guy that leads it for Anthropic, Karpathy, he's one of the most well-known in the world, you know, they basically said AI will never replace this. And now they're basically saying, you know, there's no longer a need for code. You still need engineering. You still need systems. Tooling and all that stuff is still a big thing, but there has to be some thought about how you change trends. That's the evolution. And that's kind of what I'm getting at. By the way, your story about IBM is interesting, because bringing in young people, there's kind of this, no one's going to hire young people. But at the same time, hiring people from the onset that are going to use this stuff seems opportunistic.


Darren Surch:
This is naturally the way they work. You don't have to retrain other people, which is a brilliant move as far as I'm concerned. Yeah, I mean, it is the future of working in IT. You know, there's no doubt about that. Agentic AI will do a lot of this stuff, but you'll, again, sorry to nag on the point, but you need to know your stuff. You can't just blindly take whatever agentic AI puts out and say, okay, that looks right because I don't have the skills, I don't understand the way the mainframe works, I don't have the basics. Even the basic mainframe skills that are being taught will have to change.


Greg Lotko:
This is a constant, I believe it's a constant reality, and my best analogies are always cars. So I'll use that analogy. I mean, the first time we were making wheels, we were using a chisel and a hammer and we were carving out a round circle of stone and figuring out how to affix it to an axle. And those were the tools you needed to accomplish it, and you needed to understand how to do that. Then wheels became wooden, then wheels became wooden with rubber around them, and then they became metal. And you could make a metal wheel with hand tools or cutting out things with a plasma cutter. Now, when you want to design a new wheel, you put it into a computer and you put it on a bench and it cuts it out and makes it. There's still a human involvement: the desire to have a wheel, the desire to have a different shape, and deciding whether or not that's pleasing. So, is AI going to do more and more of the coding, and more of the, even some of the engineering? For sure. But you're going to need people to be able to interact with that, to describe what it is that they want. And it may get easier and easier for people to describe it, but those who will use the tool most effectively understand the engineering behind it and can avoid the pitfalls of the mistakes.


Darren Surch:
That's the new skills, yeah. And I suppose the other point I'm trying to make is that, especially in the mainframe space, this needs to be looked at by the corporations that run the mainframes. You know what I mean? You don't want your people just randomly going off and learning AI and how to do things themselves and figuring it out. If the organization wants to protect these mission-critical systems, then they need to make sure that everybody has the same standards of skills. So that there's no skill gaps.


Daniel Newman:
Yeah. By the way, your analogy is good, but there's two sides to it. We create great efficiency, because how many more wheels can we produce now with how many fewer people? Sure. But we also created much more demand. It's the kind of Jevons paradox. So, like, the other side of this is there's so much more demand that's been created now that there's exponentially more economic product being created. And so where does that kind of come from? So, you know, kind of interested, like, what I'm hearing here is we can make organizations more scalable. You can make each individual person involved in mainframe more productive in terms of helping generate, keep these things up. I mean, how do you recommend, as one of the leading training companies (the leading training company), that they work alongside AI to invest in, train, and develop, to get to that next destination of growing economic product? That's what it is.


Darren Surch: True. And there's, and there's a definite speed component. You were talking about it before, how quickly things are moving. I mean, back in the day, it was classroom training, and, you know, you'd be working on a project and you didn't have the skill, and you'd have to wait two months of red tape and requisitions and so forth to eventually get to the class. And so you're either doing something unskilled, or everything ground to a halt and waited. You know, training these days is much, much faster, obviously e-learning, but there's going to be more and more chatbots themselves for instant mentoring and reference material and that sort of stuff. So training's going to evolve along to try and keep pace with generative AI, because everything's going to happen almost instantly.


Greg Lotko:
Even the idea of simulating experiences or what's going on in the environment and how do you react? What tools do you bring to bear?


Darren Surch:
It has to change. But again, to that point, the organizations that run the mainframes have to manage this stuff. They have to make sure that all of their people (a high tide raises all boats), all of their people are at that same skill level, and they know what they're doing, and they're using it in the same way. And they're going to find their sources of training from the same places. This is an organizational responsibility. Rather than just letting people randomly figure it out for themselves, you're just open to a whole lot of change.


Daniel Newman:
I think to be successful in the future, and for everyone out there listening to this, I think that the key to being able to sustain yourself through this AI inflection is intellectual curiosity. I think genuinely, like, playing with this stuff. So I do think, like, there's structured and unstructured play. The unstructured play, you know, the great basketball players go out in their front yard and they shoot and they dunk and they go to the court and they pick up games. But then of course, mom and dad take them to play on the travel team and they do all the extra training with the special coach. That's kind of your point: it's a little bit of both.

Greg Lotko:

Were you a big basketball player? 

Daniel Newman:

I'm short. No, no, but my son is now, so he's into it. So I'm thinking about it. I was a soccer player.

Greg Lotko:

Oh, I knew there had to be something.

Daniel Newman:

There was a relationship too. Yeah. So he's just started playing travel and I laugh because you know, he goes outside, he shoots hoops all the time and he's gotten pretty good. And it's like, but then he goes to practice and it's like, they're working on all these really structured drills. And I'm kind of thinking to myself, like, that's kind of the analogy of what this is, is like, those that are going to kind of play with the toys and use it all are going to be able to continue to provide a lot of value to their enterprise. But then companies like what you're doing can give the path, put the structure behind it, and provide, you know, the paths and the tools. Yeah, so you haven't really talked about it, but I'd be super interested. Like, how are you actually, because you're pivoting your own business. There's no way you're not. How are you helping change? Like, how different is the trend? What are you doing with AI right now to help train more mainframers?


Darren Surch:
Well, our courseware is modular and self-paced. So it's asynchronous training, but it's always available, which is that speed. It's always at the fingertips of the people doing the work.


Greg Lotko:
So you wake up 2 o'clock in the morning and you can't sleep.


Daniel Newman:
But is AI like, you know, in terms of correction, in terms of the course pacing, helping people learn it? Like, are you guys starting to implement it?


Darren Surch:
We use it in developing the courses. We still have… seasoned, you know, 30-, 40-, 50-year career mainframe system programmers, application developers that know the industry and really know their stuff, that write most of the courses. But obviously, we can churn out a lot more content because we use AI. We use it in the QA process to go through and look for any inconsistencies and flag them. Again, we come back to the person, the human in the loop, to go through and verify. So, it's definitely making an impact.


Greg Lotko:
I love that that's ambidextrous on both sides. So, you were talking about the idea of being able to learn structured and unstructured. He's talking about the way you build the content, that it's ambidextrous, that you're using the human element to do the input, you're using AI models to train, then you're using AI to analyze, hey, do I have any inconsistencies in what's going on here? Which is really, what we've found with technology is when you combine the best of everything, you end up with the greatest outcome.


Darren Surch:
And for the foreseeable future, that's what it needs to be. Who knows whether AI is going to become 100% accurate and where it's going to get better. It will. It is getting better.


Greg Lotko:
The thing that I always think brings the human back into the element is you know, that supervision. The idea of, you know, when everybody talks about the car analogy of, you know, you're driving down the road and you're either going to hit a bus with 50 kids on it or you can hit the telephone pole and it's going to take you out. The human element has that shade of gray. And as we've talked about, even quantum, having those different states. I don't believe that the answer is AI for everything or AI is never going to get there. AI will get all the way there and do a lot, but the actual result or the way we want things to happen is going to require growth.


Daniel Newman:
We create 20, 30 trillion of gross domestic product growth. That's where it comes from: AI increases productivity exponentially so that we continue to have more to do. We just become more productive, which, by the way, we've done in every transformation historically. True, isn't it? Yeah. It's just this has been so fast. There's a lot of fear because, like, historic, you know, technological shifts take, oh, five years or 10 years. This has happened over, like, two or three, at an incredible pace. But one thing that is really interesting, and I mean, I just want to say is, like, I don't know if you've ever heard of, like, Alpha School. It's, like, this next-generation learning here in the United States. And the idea is now they're taking students that used to, you know, go to school for eight hours and learn at the same pace as everyone else. So the asynchronous thing's really interesting, but now with AI plus asynchronous, the opportunity to kind of have AI in front of you to help one student who's incredibly quick, learning, has tons of potential, the pace at which information comes, applying that into what you're doing is going to be super useful. Because the point is, at Alpha School, it's two hours a day. And some of the students, like some third graders, will end in sixth grade by the end of the year, because that's how fast they're able to learn.


Greg Lotko:
What I think is interesting about that is what's old is new, right? We think about that as a new thing. When I was young, they had concepts like this, and they called them different things. They had things like gifted and talented programs with accelerated tracks. And what they would do is recognize that there were mainstream things where they wanted you to have the socialization and be with the class for the whole day, but where they would recognize special accelerated learners, they would pull them out of class for a certain time every day, and they would get an accelerated, different experience like that.


Daniel Newman:
So… I was never pulled in my day though, by the way.

Greg Lotko:

What I don't believe is, I don't believe it should all be about accelerating somebody through the traditional education at a higher rate to a higher speed where you end up with, you know, the young Sheldon in college or Doogie Howser, but it could be about expanding the model and giving them, you know, somebody who can drink through the fire hose faster, having them be able to absorb more material, and they may take a different path. That may not be the same path to get there quicker. It might actually be an entirely different path.


Darren Surch:
It's not all just content either. I mean, education needs to be coaching and mentoring. People need to go to conferences just to meet people, get experience, get ideas of context and how other companies are doing things and so forth. There's any number of ways that they should train, and it should always be mixed up. And getting them in the lab, getting them hands-on. Absolutely, hands-on. And that's what you were talking about before. I mean, I know there's some uncertainty there, but in controlled environments, these are the best technologists, the ones who are intellectually curious and want to play and see how far they can push something. They're the ones that come up with the prod stuff.


Daniel Newman:
Boys, you've got to go in the training environment to do that. They'll push it. Don't break it on prod, darn. Well, thank you so much for joining us. It was a lot of fun.


Greg Lotko:
It's been fabulous. Certainly been educational for me.


Darren Surch:
Yeah. Good dad joke. Nicely done. 

Daniel Newman: 

He's 100% for a good dad joke in every episode. Let's have you back sometime soon, and we look forward to keeping up with the journey at Interskill.


Darren Surch:
Cheers, gentlemen. Thank you very much. Fabulous. 

Greg Lotko:

That was fun. I think that was great. I mean, I realize that the world's going to accelerate. It's going to morph. We're going to learn different ways. There's going to be things that AI is going to do. At some point, it may be AI doing The Main Scoop with just some generated avatars of DanBot and GregBot.

Daniel Newman:
Hey, uh, by the way, you know, it was kind of fun to hear back in the 40s or 50s when you went to school, like you were telling me a little bit about that, you know, because my school wasn't that different. It was like 40 years later.


Greg Lotko:
It was not the 90s.


Daniel Newman:
We had gifted. I was never invited.


Greg Lotko:
I would be OK if I was 80 years old and was looking like this. You look great. I could do that. You're the best looking. Why don't you bring us home, Dan?


Daniel Newman:
All right, everybody. As you see, Greg and I like to have a little fun. We appreciate you all for being part of this episode of The Main Scoop. Be part of our community. Subscribe. Come here to learn. Come here to grow. Come here to be scooped away. See you all later.
