AI, Human Risk, and the Future of Cybersecurity: The Mimecast Perspective

How can organizations leverage AI in cybersecurity while effectively managing the evolving human risk? ⚠️ 

We're thrilled to present one of our Cybersecurity Spotlights featuring Marc van Zadelhoff, CEO of Mimecast. He joins host Daniel Newman for a deep dive into transforming cybersecurity measures to be more resilient, proactive, and board-ready with data-driven insights.

Key takeaways include:

🔹AI's Dual Impact on Cybersecurity: Explore how AI is transforming defense mechanisms with unprecedented potential, yet simultaneously introducing new and complex challenges for organizations.

🔹Proactively Managing Human Risk with AI: Delve into strategies for leveraging AI to actively manage human risk and significantly reduce breach incidents, emphasizing influencing user behavior over mere detection.

🔹Combating AI-Driven Threats: Understand the critical need for robust security strategies that account for human risk, especially in the face of emerging threats like precision phishing and deepfake social engineering.

🔹Governing AI for Ethical Security: Gain insights into the governance and ethical use of AI in cybersecurity, particularly in light of increasing regulatory scrutiny and the importance of empowering and educating people as part of your security strategy.

Learn more at Mimecast.

Watch the video below at Six Five Media, and be sure to subscribe to our YouTube channel so you never miss an episode.

Or listen to the audio here:

Daniel Newman: Welcome back to the Six Five Summit. In this Cybersecurity Spotlight, we're joined by Marc van Zadelhoff, CEO of Mimecast, to explore how AI is reshaping the threat landscape and what it means for the human side of security. From precision phishing to generative AI risks, we'll dig into how organizations can reduce complexity, manage human risk, and build more resilient defenses. Marc, welcome to the Six Five Summit. Thanks for joining us.

Marc van Zadelhoff: It's great to be here. Daniel, thanks for having me.

Daniel Newman: So you're pretty new in the role; you just took over about a year and a half ago. You know, quickly before we dive in, what are your early observations? I know you're a veteran of the industry, but just since joining Mimecast?

Marc van Zadelhoff: It's a fantastic company. I took over for the founder, who was here for 21 years. We have 40,000 customers around the world and are approaching a billion dollars in revenue. But as with anything you take over from someone who's been there 21 years, and Peter, the founder, is a good friend of mine, there are definitely some things that you get a chance to remodel: fix up, redo the kitchen, move a wall, redo a bathroom, like moving into your parents' house. So it's been an honor to move into that house and do a bit of remodeling, but on a very, very great foundation, and pivoting the company in a very exciting direction, I think.

Daniel Newman: Well, as a founder, I know those goals you're targeting are really significant, and I know trying to take over for a founder is maybe an impossible job in some ways. But of course it seems you're going to bring a lot of great expertise and get to that billion. That's such a big milestone, so congratulations on that. We'll be watching. So let's talk a little bit about AI and its impact and transformation in the cybersecurity space specifically. How are you seeing AI reshape it, and what do you think that means for organizations today?

Marc van Zadelhoff: What I love about cybersecurity is that in any SaaS space you have competitors and customers, and in cybersecurity you have this third variable, which is the hacker. So I would just point out, first and foremost, before all the cool cyber folks and MBAs and podcasters of the world started using AI, the hackers were using it. The hackers always use the technology first. In fact, criminals going way back have always pioneered technology. Remember, one of the businesses we're in is human risk; we're in insider risk, we're in email security. Remember the old email you got from the Nigerian prince, with grammar errors and, you know, using British English as opposed to American English, if it was even spelled correctly. Now those hackers can make perfectly crafted emails using AI that target you specifically. And of course, with a little more research, they know what country club or golf club you're a member of, or where you like to hang out, and they tailor it perfectly. So the first thing to remember is hackers got to it before we all did with our clever approaches to using AI. And then on the defense side in cybersecurity, we're seeing lots of opportunities. To keep it simple, there are two areas where I'm seeing us use it. One is in improving detection. We can detect, for example, things that are written by AI engines; we can detect people using shadow AI or using the wrong things. And the second is productivity. All these cyber products are pretty complex; any software product is complex. We give people assistance in using the product better, being more productive with it, spending less time on alerts and false positives and false negatives, all that stuff in the product, using AI.

Daniel Newman: You know, you said something profound. There was a time in the past where someone of decent intelligence could sort of look at that email and be like, eh, that just doesn't look right. And I know there was always a continuum of who could be duped and who couldn't; obviously some of that was about technical knowledge, some of it was about experience and awareness. We've seen it hit the next level. I mean, look, even me, someone who has a lot of technical depth, Marc, at times I'll forward emails or screenshot one and be like, this looks like a real DocuSign, and it looks like it's from someone we do business with, but there's just something that smells off about this to me. And I can't tell you how many times our cyber team has come back and been like, good thing you didn't click that; don't click that. They've gotten a lot better. But in the end, you know, most breaches are still human error. It's like 95%. What are you sort of teaching? Because, given the technology, I think this has a lot to do with zero trust architectures: if we trust nothing, it reduces the chance for humans to make mistakes, and then you have to validate everything. But we also know there's a push-pull between how secure that is and how efficient that is. Overall, what do you recommend now that you have AI? Everything's going to get harder. Like you said, the English is better, or whatever the language is, things look more legitimate. And once you click that wrong thing, it's hard to turn back. There's not a lot of room for error.

Marc van Zadelhoff: Yeah, agreed, Daniel. Hackers don't break in, they log in. That's kind of the mantra of our strategy here around human risk. There's been so much cyber technology layered into the infrastructure, into the technical aspects of every company's infrastructure. But at the end of all that great network security, endpoint security, application security, identity security, you still have a human, an employee, sitting there, millimeters away from the keyboard and mouse, and they're going to do something: they're going to click on the email, they're going to open the attachment, they're going to do something untoward, sometimes malicious, by the way. I mean, I keep thinking about our current government employees, without trying to get political at all, but how many disgruntled employees are there in the US government right now who may not be exactly excited about their employer and what's happening? What may they do in that situation? So I think you have to think about the disgruntled employee, the careless employee, or the one that just gets owned by the hacker because the hacker is so good. And what we're doing is also leveraging AI. I think the days of sitting down and watching an hour-long awareness training video are over. What we can do, in real time, is insert a nudge, insert a block, insert a reminder to someone saying, hey, you shouldn't really be moving that source code into your personal GitHub. I know you might think you own it because you created something so cool yesterday, but it's company property. Or you shouldn't be clicking on that email, or you shouldn't be using that kind of language in Slack, Teams, and Zoom, right? There are just ways of tuning the behavior with reminders, sometimes with a block, sometimes with a smack on the wrist, to tell those employees: you're taking risk that doesn't match what the company would like you to do, and here are some reminders or blockers or corrections that make that much more difficult. And I think once you do that a few times... we always say that 80% of the risk is from 8% of your employees; it's just that you don't know which 8% they are. Once you can measure the employee risk, start to train and nudge them, and then block it when it still happens, you're going to improve your posture a lot.

Daniel Newman: I imagine AI, by the way, can figure a lot of that out if it's tracking the behavior. I still remember Marissa Mayer coming into Yahoo and tracking everyone's VPN activity, and I know everybody was super unhappy about it. But, you know, when people are working remote, the way it stands nowadays there's so much BYOD; we're moving stuff between devices, we're running work applications on personal ones. Even to your point, having that kind of nudge: hey, you probably aren't even aware, but you really shouldn't have this app on this device because it's not being managed. There are a lot of easy opportunities. I think the way you have that 95%, and then that 80/8 rule you talked about, is kind of a snowball effect of the way technology is diffused these days. Because very rarely are employees on just one work device. We all have an iPad or our own laptop and a Mac and a company phone, and it just creates so much opportunity.

Marc van Zadelhoff: Yeah, we're throwing around percentages, but I would say a large percentage of human error is not malicious. Right? Most employees, even honestly when they copy that source code over, or a couple of addresses because they want to keep in touch with that customer even after they leave the company, they're not thinking of that in a malicious way. But it is against company policy; it is violating the safety of your intellectual property. So usually a reminder is enough. Again, some people are really out to get their employer, and for them you need to be a little harder. But I think, to your earlier point, the definition of privacy is slipping and changing in our society, and people are much more comfortable with, and understand, that they're going to be watched a bit by their employer, and, you know, by even more than that. I mean, people are getting watched all the time on Instagram and all these apps they're using, so they're kind of used to it: people taking a look at what they're doing and giving them a reminder when they're doing something weird.

Daniel Newman: Yeah, you've got to kind of assume these days that we all sign all these terms and policies and don't read them; we've pretty much granted it. You know, I was having a good laugh just recently with my wife about the ads we get fed. I didn't talk to anybody about this, but I was in this place and I tried clothes on, and now I'm getting these ads; it is so precise. And to that point, if I'm reading the room right, Marc, in your space AI is like a turbocharger, it's an accelerant. Right? Not just the language and stuff, but volume. The volume at which they can attack is also exponential.

Marc van Zadelhoff: Yeah, volume and accuracy. And you know, if you want to get into the extreme version of that, obviously it is the deepfakes and those types of attacks that are taking that to a whole new level.

Daniel Newman: So let's talk a little bit about governance. Remember when ChatGPT came out, there were some great stories about a couple of companies whose people had literally fed some of their best data into ChatGPT, like company strategy docs, to summarize it. That was a great example of super proprietary company content getting dropped in. And I'm sure small businesses do it all the time and don't even think much about it. They're like, this is such a great tool; I'm going to use it to help me write, to help me summarize, to help me create. But they didn't have an enterprise version; they didn't have anything partitioned off. They were just literally using the public application. That's just one example. So you have accidental data leaks with generative AI, and that certainly opens the door for some attacks. You have model injection, where people can do things to models to mess them up. We saw the early days of bots that were created and then hacked. There's so much going on. But if you're a CISO, how are you thinking about creating governance and ethical use? Because you want your people using AI; I think companies that don't have their people using AI are putting themselves at significant risk of falling behind. But you need them to use it intelligently. So what's the CISO policy strategy that Mimecast is recommending?

Marc van Zadelhoff: Yeah, I think a lot of it is actually quite similar, even though the technology is so much more advanced and sophisticated. The CISO policies we recommend really are the same ones we recommend around data security generically. Anytime you're using a cloud provider and you're going to put data in the cloud, you're going to ask a whole bunch of questions. And also the policies around shadow IT. Right? For years we've been obsessed with shadow IT, often because we're worried that people are just going to be spending money with corporate credit cards: boy, I need Canva to do my job, so I'm going to swipe a corporate credit card, and before you know it, a thousand people have Canva. Or Splunk, back in the old days; they would put Splunk in place. So I think those two mindsets, data security and shadow IT governance, are what you need to combine for AI. We have a product that detects usage of AI, and to your point, not just ChatGPT. The week DeepSeek came out, we saw in a number of our customers that their employees were taking confidential client documents up to DeepSeek just to see how good it was, experimenting with it. That's shadow IT, and that's a data security violation. So on the data security side, we recommend you ask the usual questions of your AI provider: What are you doing with the data? Are you training the model on it? Is the stuff I upload going to be kept in my domain or shared broadly? It's all those data questions. And on the shadow IT side, you need a product like the one we have that can detect the use of shadow IT and block it. If you say we're using ChatGPT but we're not using DeepSeek, you have to be able to enforce that with a nudge or with a block. So those are the two policies. You don't really have to reinvent the wheel on this; you just have to apply them to this AI problem and enforce them, because it is more viral than any other shadow IT problem I've seen.

Daniel Newman: Yeah, it sounds really substantial. You know, a lot of work has been done to help companies get value from the tools they invest in and buy for enterprise use, to drive adoption. This turns that a little bit on its head: the whole idea is, how do we make sure people are using things correctly? And, you know, the idea of basically prompting people along: yeah, you shouldn't download that, or hey, don't run that application on this piece of hardware.

This stuff's important, because like I said, I think what people don't know is probably one of the biggest challenges here.

Marc van Zadelhoff: Yeah.

Daniel Newman: Your point about the small number of malicious actors: those people are going to be very creative, and they're going to try to work all around and up and down your software. But the ones doing things by mistake, which can create huge vulnerabilities, hopefully those are the ones you can move along. Let's wrap this up, Marc, with a broad question. If you're a board of directors or a CEO, maybe you're not even in the tech industry. Maybe you're in manufacturing. Well, every company is a tech company now, but the tech isn't your every day; you're not waking up every day thinking about tech and AI. What is the best piece of advice you're giving to organizations to navigate all this complexity? Because it all happens so fast that it feels almost impossible to ever be out in front of this.

Marc van Zadelhoff: Yeah, I guess my mantra lately on all these fronts has been: focus on the people, focus on your people, focus on the humans. Even when it comes to adopting AI across a function so that it becomes more sophisticated, forget about our product, a lot of people think that's a technical problem; it's a human problem. If you want a whole department to start coding with AI, or doing customer support with AI, or doing marketing with AI, a lot of that is a human issue, not a tech issue. How do you train them? How do you make them understand the risks? How do you get them productive and not feeling threatened by it? Let's face it, most employees out there right now are super scared their job is about to be lost to AI. You have this conundrum of what we're doing to our employees while expecting them to follow along. So I sat down with my amazing head of human resources here and said, don't forget this AI thing. You probably think it's a CIO problem; it's an HR problem. And I think it's the same when it comes to risk: focus on the humans. What is the human risk? Again, people are not breaking in; they're logging in through the humans who make mistakes or are malicious, and they're taking the data by logging in and moving laterally. So how do you train your people, or secure around your people, so that they don't become the point of entry for hackers leveraging AI?

Daniel Newman: Marc, it was great chatting with you. There have been so many conversations about this, and I'm glad you brought up the human challenge, because frankly this is going to be one of the most profound times. I stay optimistic; I hope you do too. With AI we will see growth in productivity, and every industrial revolution in the past has created more jobs. But I've never seen such a confluence of impact, where you have blue-collar, white-collar, knowledge, entry-level, and advanced roles all being somewhat targeted at one time by technology. I hope it raises human capacity, the human condition, toward identifying and finding the next big opportunity for growth. It's an encouraging time, but it is definitely going to be interesting.

Marc van Zadelhoff: I'm optimistic, but as a cyber guy I'm also paranoid. It was great to discuss all this with you, Daniel.

Daniel Newman: It was really great to talk to you, Marc. I really appreciate you joining us; let's have you back on the Six Five very soon. Thanks for joining me here at the Summit. All right everyone, a big thank you to Marc and the Mimecast team for joining us. As AI continues to drive both innovation and risk, it's clear that empowering people and understanding their role in cybersecurity has never been more critical. Stay with us for more insights here at the Six Five Summit.

Disclaimer: Six Five Summit is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Speaker

Marc van Zadelhoff
Chief Executive Officer
Mimecast

Marc van Zadelhoff joined Mimecast as the company's second CEO and a member of the board in January 2024, leading all aspects of the business and guiding the company as it enters its third decade of operation. He is an accomplished executive with 25 years of experience in the cybersecurity industry. Prior to joining Mimecast, Marc was the CEO of Devo Technology, a cloud-based logging platform that he led through an intense growth phase in which it raised $350M and tripled revenues. He also spent time as COO at LogMeIn and was GM/CEO of IBM Security, an IBM business unit that he co-founded. Under his leadership, IBM Security grew to 8,000 employees and $2.5 billion in revenue. Marc is a curious and committed leader working alongside Mimecast's 2,200+ employees to ensure its customers can secure human risk. He was born in the Netherlands and currently resides in the Boston area with his wife and three daughters. He holds an MBA from The Wharton School of the University of Pennsylvania and a B.A. from Bowdoin College.