Welcome to the Data Defense Forum: Defining DLP in the AI Era
Aman Sirohi: Welcome everyone. We're kicking off our Data, data Defense Forum, and in this forum, our first session is going to be on defining DLP in the AI era. All right, gents, we've been around the block a few times, and in this journey we've seen the software world. Mm-hmm. The cloud world. Sure. And now the generative AI world.
And back in the software world, I think one of the famous quotes that came out for Mark and recent was software is eating the world. I think we can all agree that generative AI is going to take over eating the world and much more, right? I mean, the way we're headed, this is gonna be a whole new space that we're all entering.
John Loya: Yeah.
Aman Sirohi: So the way I look at this is the modern data security platform is evolving in ways that we have never seen before. Vectors are coming out that have not been defined, and there are the good guys and the adversaries, and there's gonna be a battle on who's going to outsmart the outsmart, the other one.
Meet the Panel: Wiz & AWS Perspectives on Modern Data Security
Aman Sirohi: So today we have two of our friends, distinguished guests, who are going to help me decipher this as we go through the, as we go through the session. Nick, do you wanna introduce yourself?
Nic Acton: Yeah. Hey everybody. I'm Nick Acton. I am a senior solutions engineer at Wiz, primarily focused on the public sector.
Clayton Smith: Awesome. Yeah, I'll go. So I'm Clayton, I'm with AWS, I'm a principal security specialist within the Worldwide Specialist organization, so thanks for having me, appreciate being here.
Aman Sirohi: Uh, of course. A great view, great place. Absolutely right. Alright, so let's kick off the first, the first prompt, as I call it. We have about six to seven prompts that we'll go through and, uh, as we go through them, we'll just have a live discussion about this.
Clayton Smith: Sounds great.
Aman Sirohi: Alright.
The Big Mindset Shift: Cloud + GenAI Scale Changes Everything
Aman Sirohi: In your experience, what do you think will be the most important mind shift that security practitioners and leaders are going to have to make in protecting their data?
Nic Acton: Yeah, uh, I mean, I think you have to give yourself permission to understand that maybe what you're doing today isn't going to cut it in the landscape where we've got cloud that is growing both in like, you know, size as well as breadth.
Yep. We're doing more in the cloud, we're doing more networking. We're doing more compute, we're doing more data and AI in the cloud. And then of course with Genai that's growing exponentially as well. So we have more agents, we have more non-human users that need access to the eye in order to be or need access to the data in order to be valuable And.
Like I said, let's just give ourselves permission today to say we need to take another look at this problem and understand that maybe what we're doing today isn't gonna cut it for this massive new problem that we have.
Clayton Smith: Yeah. I, I think look, data is gravity, right? Yeah. And, and data is the currency of a lot of the businesses that I'm sure you and I work with kind of on a daily basis.
I think it really still starts with, I think people tend to try to attack this problem first. You know, let's protect everything before they've actually had an opportunity to inventory it, to even know what data they have. So it's very common, at least in my experience, to go and talk to CISOs, talk to security teams and just notice they don't even know where their confidential data is.
They don't know where their PII is. So I, I think especially as we start talking about AI, having a really true understanding of what is that protected data and having those unprotected data classes, I think that's imperative because to your point on ai, we're going to continue to see this go much more quickly.
And as a result, that data exfiltration problem can become very big, very quick. If you're not pointing it at the data, you're not doing the right. Im principles that go around along with that. So I think first step, and this is what we talk about a lot, is it's not necessarily, you know, sexy or interesting, but it's like, catalog your data, make sure you have data discovery, right?
Aman Sirohi: Absolutely. You know, it's funny, I was thinking when I was looking at these questions and deciding, you know what to say, I came up with my own answer too. And my answer was the number one priority I think for leaders is gonna be on the architecture. Yeah. Right. The legacy architecture is not gonna be able to keep up with what you both said.
The modern architecture is built on something that we've not seen. So I think the legacy architecture is gonna struggle with keeping up with where data is resigning, where it's going, how it's been used, and I think in the next couple of prompts will kind of unwind and unpack some of this conversation.
Clayton Smith: Well, and I mean, data's growing. Yeah. I mean, I, I don't know the exact industry rate, but I mean, to say exponential I think would probably be fair.
Aman Sirohi: Fair. Yeah.
Clayton Smith: So I, I think that's the other thing that we're dealing with.
What DLP Means Now: From Detection to True Prevention (and Access)
Aman Sirohi: Alright, so the next prompts that, that you know, that I came up with was I, what does the data loss prevention mean to you today?
And how do you think the definition will evolve in the world of generative ai?
Clayton Smith: You know, I always, when I, when people talk about DLP, I always love to focus on the P part, right? It's, it's that prevention part, right? And, and that's, that's the area where I think a lot of, a lot of folks are failing today. So, um, I mean, I, I think step one is to start to, you know, engage with a company that can actually do that, that can actually put the prevention.
'cause I, we see a lot of technologies that are good at detection.
Cameron Galbraith: Yeah.
Clayton Smith: But the difference between detection and prevention is, is pretty massive. It's a pretty massive gap. I mean, just simply knowing someone has done something, guess what? They've already got it, right? So, so I think that it's, it's really that the p and the DLP story,
Nic Acton: right?
I mean, I think the P is definitely gonna be the important part. I think also the d the data is gonna be an important part as well. I think, um, almost maybe what we're saying is like, let's go back to why we are trying to solve this problem in the first place. And it is less, let's buy some fancy thing, let's go, you know, try to make something look fancy for the executive suite or whatever.
And what is it that is actually powering our business? It's going to be our data, it's gonna be our understanding of the customer, it's gonna be our understanding of the marketplace. And when you focus on that goal, instead of, you know, let's get distracted frankly by all these other things that are going on in the industry that allows you to strategize and focus and.
Ultimately take this all, all this noise and focus on a strategy that's gonna be a lot more clear.
Clayton Smith: Well the other one is accessibility, right? Because, you know, the perfect security would be lock all your data in a vault and don't let anyone have access to it. Right? Access. Yeah. But then, you know, then you can't do the things that make your company special.
Aman Sirohi: Yeah.
Clayton Smith: And so that data has power, that data for many is really their true business. So I think when you couple all those things together, you've gotta have a plan that prevents the data from getting out, yet still allows it to be accessible in the ways that make your business special. Right.
Aman Sirohi: So the phrase that I came up with when I was thinking about this was, we gotta stop, stop stating the obvious and stating the news.
Mm-hmm.
Aman Sirohi: We gotta understand where the data comes from and we gotta protect it. Yeah. Right? Because today we just say, oh, we have data and well, it's a lot more than that now as we as move as we move forward. Alright.
New Exfil Vectors in GenAI: Accidents, Attackers, and “Volunteered” Data
Aman Sirohi: If we keep going down this path, um, and we talk about data protection and we define data, where do you see the most, uh, unexpected or concerning vector in the data XFL process in the generative AI world?
Nic Acton: So I thought a lot about this when, when, you know, we kind of pre-brief before and got an idea of how to think about it. So, uh, interesting model that I've thought of is up to this point, there have really been two vectors. That people have really been focused on when it comes to data loss. The first one being you've got a malicious actor.
They wanna get the data, so they're gonna try all these malicious actor things. They're gonna try to attack your APIs, they're gonna try to get in through the internet, get that data. That's one vector. Another vector is kind of what I call the accidents vector. It's the, someone got phished, someone, um, you know, from some of my customers in previous lives.
Uh, someone printed things out, put it in a folder, walked in the cafeteria, came back and it was gone. So that's unfortunate. Accidents. The third vector that I think is becoming a little bit scarier is what I call, like the volunteering of the data. It's, you know, I think that there's some new gen AI capability or something that is really gonna make my job easier.
All it needs is for me to give a little bit of this data, and I'm more focused on the goal than any of these data, you know, protections or anything. So that's what I think is most concerning to me, is those instances of, oh, this seems easy enough, or this seems, um, harmless enough. I'm just gonna volunteer my data to it, and.
Not really care what happens to it.
Clayton Smith: Yeah, I, I agree with all of those. I, I think what I would add to it is the insider threat component. Mm-hmm. So, you know, with ai, oftentimes companies, one of the first things that they've been doing, at least the pattern that I've noticed, is they go out and they create internal chat bots.
And again, it gets back to that, if you don't know what data you're feeding to your AI tools, then you don't know what confidential data now an employee can get out. So, you know, the, the stereotypical one is we use, uh, like a hospital. So yes. Could a hospital have a chatbot that makes a meaningful difference in their life?
Absolutely. Does that mean the front desk person should be able to look up what your blood pressure is? No. So, but we got, we have to start by understanding where is that data and then putting in the IM principles that actually go along with that. So I think the insider threat is, is kind of a big thing because then you start looking at let's keep going down this path of the medical company, you know, being able to then extort money from you because I now have your information.
I'm making, you know, minimum wage working the front desk. It's really tempting if suddenly I ask AI a question and it gives me back a response and I'm gonna ask it some more questions and try to figure out the way to, to take that and extort money from someone. I think that that's, I think that's a real possibility.
And that's why we've gotta be careful not just simply trusting everyone that's inside the organization. Right.
Aman Sirohi: No, I think, I, I, I would tell, I would agree with you wholeheartedly on this because I think threat actors are becoming more, uh, intelligent and not going after people like us. But going after employees who have certain access to the data who might be a minimum job or might or might have a discreet, uh, job that's not really classified as important and they're going after them and basically doing data XL through them because for them that a hundred thousand dollars or whatever value you wanna put, it's so much more meaningful.
And I think it's gonna become more and more important because Data XL is gonna become something that we're not prepared for.
Nic Acton: We're, we're even finding those situations where, you know, the nation state actors are trying to participate in the economy by just finding people that can get an American job.
Aman Sirohi: Yep.
Nic Acton: Say put, you know, some kind of remote, you know, desktop or something, or some kind of remote connection to it, and then we'll just do the job. So they're getting money, they're getting access, they're getting data, they're getting intel, and unless you know this then and are tracking these behaviors, then that's very, very hard to identify.
Clayton Smith: It's a great, it's a great vector for corporate esp.
Aman Sirohi: Yeah, it's, we've, we've seen this a couple of stories recently too.
Clayton Smith: Yeah.
Why Traditional DLP Falls Short: Velocity, Context, and Milliseconds
Aman Sirohi: Alright, so we've talked about data as data and where it's generated from the protection side of it, where we think it's going, what's important to us. Let's take a step back, let's go a little bit back now.
Why do you think the traditional DLP solutions are falling short? And what impact is that gonna have happen in any organization?
Clayton Smith: It really just goes to velocity. It goes to the, the speed. I mean, if you think about what you're trying to do, um, in being able to identify data, classify the data, understand where it's going, into whom it's supposed to go to, and to then add a prevention layer on top of that.
Like, it's just, it's a lot. And I'm not saying it's impossible. Obviously companies like yours are doing a great job of solving this, but um, it, it, it's a lot more difficult problem than it just sounds. But if I had a big stack of paper here and I said, Hey, there's some secrets in here, then you take 'em away from me and say, don't use those then,
Nic Acton: right?
Yeah.
Clayton Smith: But that's not the same thing. And oftentimes we find that even when customers, so I'll, I'll use outta my experience being other to AWS Wolf customers quite frequently that say, oh no, this, this has three bucket can be public, or this doesn't have, um, confidential information in it. We'll run a scan against it and say, yeah, it does.
Like, and, and it wasn't because they were bad practitioners, they didn't do anything wrong. It's just what happens when we're dealing with these, I mean, absolutely massive data sets. And I think, you know, for the viewers out there, you know, just think about your own data footprint, how big it is, and now, you know, look at that across the industry.
Like it's just, it's almost impossible to imagine. Yeah,
Nic Acton: I mean, I think absolutely velocity is gonna be key. You need a solution or a strategy that is gonna keep up with the amount of data coming in. You know, putting side gen, ai, just cloud, and even just the. Volume of data that could be coming in for customers, for anything else that's going on, you need to be able to keep up with that.
But I think, you know, what we say is context is key. Context is queen to the situation. And you know, the reality is that your business is ingesting and manipulating and utilizing that data at the same velocity that it's getting it in.
Josh Stabiner: Mm-hmm.
Nic Acton: So you need a solution that is gonna be able to, you know, you talk about accessibility, actually provide value to your customers, value to your internal customers and everything without blocking everybody and basically just taking in all the signal and saying, Hey, this is all valuable.
No, that's not true. It's the instances where, you know, this is a far better indicator of compromise or far better indicator that you have an xFi event.
Clayton Smith: You know, I, I wanna jump on something you said, and this is actually a story from inside AWS We did have a solution, and I won't get into all the details about it, to where data would be posted and then we were going to do a malware inspection against it.
To your point though, even that, like second or two that we were talking about running, that the customers came back to us and said, no, we need access that data sooner. We, we, we can't, we can't leverage like the milliseconds and everything. So I mean, when we talk about data at speed or business at speed, like we are literally talking about like millisecond based decisions.
So I, I really, I love that point that you're bringing up. Like literally when the data comes in, it has business value and the business wants to utilize it.
Aman Sirohi: So let's peel this question. Let's go to the next layer down on this.
Lineage, Behavior & Intent: ML Baselines and the Unstructured Data Explosion
Aman Sirohi: So we've talked about context, we've talked about lineage, we've talked about user behavior, we've talked about intent, right?
How do you think these three or four criteria are going to come into play when you're making decisions within your organization and how are you gonna enforce that? So the, lemme I repeat a couple of things again. Data lineage, user behavior and intent, right? Thoughts around that?
Nic Acton: Yeah, I mean, so it's funny, we're talking like data lineage, relatively like easy problem to solve.
We know where it's coming from, everything, the user behavior, the context, and all those other pieces. Those are very difficult problems to solve if you're looking and utilizing older architectures where everything is just flat tables or you know, just kind of old mechanisms to look at all this data or, you know, analyze it.
So, and, and even with new mechanisms and everything, you're taking a technology problem that is actually also kind of a psychology problem as well. So really what I think I'm trying to say here is like, think about solutions, think about strategies and everything that allow you to look at these problem sets differently and recognize that there probably isn't going to be some like.
Turnkey solve all your problems. You do need to think intelligently, utilize your business context to apply into the system and ultimately, you know, make it work for you and make it work for your customers.
Clayton Smith: Yeah. I might take this question a slightly different angle. I, I hope that that's okay, but I think the other thing that we've gotta consider is there's baseline behaviors and there is machine learning that can actually leverage this.
So, um, one of the things we've had a lot of success with at our company, and, and I know I'm sure you guys have as well, um, has been baselining the, you know, average day in the life of a user, a workload, um, a data set, a, a volume, whatever you wanna say. And then saying, okay, when we see something that's out of balance from there, let's try to get some signature on that as to what that might very well be.
And so getting back to the really, the heart of data exfiltration, you know, there are times where something may not look nefarious. Like you going in and grabbing something from accounting may not look very bad, so hey, no big deal. But what if your job title changes? Now you go in there and something has changed about that.
Now, now we're saying there's a behavior that's different and now there's a mismatch on the rule. So I think that as solutions continue to go, continue to grow, you know, I know AI is the really sexy term to use, but frankly like. ML is playing a huge role in what we're doing in the security space right now.
Yes.
Nic Acton: And these are reliable performing models that have been battle hardened over the past decades. Oh
Aman Sirohi: yeah, of course.
Clayton Smith: Absolutely.
Nic Acton: I I
Clayton Smith: almost feel like ML'S become a bad word. It's, it's really not like it serves a really specific purpose.
Aman Sirohi: I almost feel like that's the foundation that everything's built on.
Clayton Smith: Agreed.
Aman Sirohi: Right. So, so when I thought about this too, when I talk, when I thought about lineage intent behavior, one thing I think all three of us agree on is gone are the days, days of structured data. It's the world of unstructured data that resides everywhere. It's in your slack, it's on your message board, it's on your email, it's on some text group.
Right? And that's what's getting really interesting when you're talking about data XL to the reason you just gave, I'm a finance person, I'm gonna take down some documents, I'm gonna send it to both of you. You are part of my finance team and you are part of the marketing team now that intended use changes, right?
And that's unstructured data going from different places and us, for us to be able to find that digital footprint, that digital breadcrumb trail becomes more and more important. Because that leads to what you said before, protection.
Rinki Sethi: Yeah.
Aman Sirohi: How you can protect it. And then you said it, I mean, you both said it, access, right?
So I think this world of unstructured data is going to become something that we all gonna be. Blindsided with if we don't get ahead of it.
Clayton Smith: No, you're absolutely right. And confidential data is spread all over the place. Like, I mean, just think of your Sure. The, the viewers out there. Like, think of your daily life, think of the things that you share about customer relationships, about, you know, um, accounting and all those sorts of things.
You, you brought up Slack. Like, I mean, Slack's a goldmine of confidential information. Right. And I don't think that we, as a general rule, I don't think the general public thinks of it that way.
Nic Acton: I, I think, I think you bring up an excellent point, which is like, this problem has already been difficult before Gen AI came out.
Clayton Smith: Yeah.
Nic Acton: And with Gen ai, a lot of people I think, are looking at this problem and they're thinking, okay, how do, how do I get this capability into the hands of everybody here as quickly as possible? Well, let's just give it a lot of access. Let's just give it everything it needs. There are strategies and there are architectures and everything emerging to safely bridge that gap to make sure that people are using the lms, using the Gen ai, but only on the data they need.
And that's hopefully where individuals like us are coming in to educate and provide these capabilities to our audience.
Clayton Smith: Well, data exfiltration has become more dangerous as a result of AI as well. You know, before, if I, let's use your slack. If I had somehow gotten all of the Slack messages for a company trying to find that needle in a haystack, which could be that confidential information I wanted.
I just upload it into AI and ask it a simple question like, you know, give me the top five customers that this discussed. Right? Yeah, exactly. And so, so I do think that becomes really important. 'cause now we're even starting to look at, you know, data exfiltration today. What can it even, even if they can't get into that data now?
So we talk a lot about this with like post quantum, uh, cryptography. Cryptography. Yeah. You know, if they can't gain access to it now, can they gain access to it in a year, three years, four years? Just think about the capabilities of AI against that data as well. And kind of the same way, like what type of needle on the haystack could I find if data's exfil today and it's leveraged tomorrow.
Aman Sirohi: That's true.
Guardrails vs. Blockers: Coaching Users and Building a Security Culture
Aman Sirohi: Alright, so I'm gonna do an audible, I'm gonna take two of our prompts and kind of combine them because I think that's where we're headed.
Clayton Smith: Okay.
Aman Sirohi: Let's talk about trends in, in, in two ways. One is we have blocking capabilities, alerting capabilities, real time coaching capabilities, and then how does that impact our culture, right?
Our technology culture, and the people that we live with, right? So I think we've talked a lot about these two different topics, but I think it's important to kind of look at what is the customer impact, the user impact. When you're trying to do a blocking and alerting and real time coaching, and what's that impact on our cust uh, uh, culture as as, uh, employees?
Clayton Smith: Yeah, so I, I do a lot of conversations actually on culture of security within AWS and, um, I can tell you what CISOs are struggling with right now is they want to protect their data.
Aman Sirohi: Yeah.
Clayton Smith: They want to protect their users from doing something accidentally that they will do, but they definitely don't wanna be seen as a blocker to what's going on out there, because what's going on out there?
I, I think oftentimes we take AI and kind of set it aside and go, wow, this is cool, but it has like a real business application. And for, for two businesses that are in the same industry, it could actually be that thing differentiates them, right? It could be the, the winner versus the loser is who may figure out how to leverage AI more quickly and more effectively.
So the CISOs don't want to be seen as that blocker within the business. They want to be seen as business enablers. So culture's kind of a big thing, and I think this is where communication really kind of steps in. And so where we've seen the best organizations be able to build the culture, it actually starts from the top down.
Nic Acton: Top down. Yeah.
Clayton Smith: And, and it, and when I say top down, I think oftentimes people think, ah, the ciso No, no. I mean the CEO. Yeah. Like the CEO has to think about this from a data protection standpoint and be able to push that through the organization that this is important, something we need to pay attention to, and the way that we get to the customer outcome that we want is by doing it securely.
Nic Acton: Yeah. No, I mean, I think, I think the culture is absolutely gonna be important and I think. You know, speaking from my experience with companies that saw Gen AI and were thinking about how to interact with it and talking with customers, I mean, I think let's, let's talk about a bad way culturally to look at it.
I think a bad way to look at it culturally is going to be being just flimsy about it. Like, oh, let's, let's explore some gen AI here and there. We're really excited about it. Let's dabble all this other, all these other things. I mean, to be honest, if you all take a look at Gen AI and you say, you know, our risk profile lends to minimal adoption of this technology.
Be clear about that in your communication with the company. And if it's the other way, fantastic. And in fact engage with vendors like us, we're here to help. That's, you know, not just here to, you know, get our technology into your hands. And, you know, once you have developed that strategy, once you have developed that idea of where you want to go, communicate that to the company, clearly these are the tool sets we approve.
We understand that there's fancy tools that do a little bit better, do not use them. We are very clear about that, and I think that is a strategy culturally that I've seen work very, very well.
Clayton Smith: So, I'm not gonna let you get out of this without you answering a question. Okay. So you've been a CISO at a couple different organizations.
How are, how the boards responding to this conversation though about, you know, the need to balance security along with this desire to like, have velocity and, and make sure that the business succeeds in the AI world?
Aman Sirohi: So I'll take it right back. When we first started this conversation, we talked about the software cycle, the cloud cycle, and the AI cycle, right?
There were a number of leaders, CEOs, CIOs, board members that were not on the cloud wave. They were like, no, no. OnPrem, we're gonna stay on-prem. You could cloud people, do your cloud stuff, and the ones who adopt the cloud took off. Right? Right. It was like, it was like groundbreaking. This time around, everyone is saying, we're not gonna be left behind.
We are gonna adopt generative AI in any form, in every form at, so our velocity doesn't slow down. Our productivity is good and we're gonna keep up with the market. Right.
Clayton Smith: As a security practitioner though, aren't you a little concerned about the pendulum swinging too far? The
Aman Sirohi: other I have. So I mean it's, but I think as a security practitioner, I think you have to embrace it.
You actually have to take the view of like, look, we as security practitioners have to work on how do we enable this technology safely, right? And how do we do this in a way where the user is able to do it also in a safe manner? Sure. Because we're moving at such a fast speed, sometimes I feel like users will make errors that they're not even prevalent to.
Right. So the way I look at it is the user experience has to be as important as the technology we're putting into place. And this is where I think the ones that are gonna beat out are the ones that are gonna teach you in real time. When you make a mistake or you're about to make a mistake, right?
Because we're gone are the days of this is the unfragmented data going at lightning speed. And if any one of us is like, oh, I took this data and I'm gonna put in this LLM, oh wait, this is the personal bot that I, I, I should be using. We have a enterprise level that we should be using, we should be coached right away before we make the mistake.
Right? So I think the board members are like, look. We can't be left behind. I don't wanna be the board, and I don't wanna be the CEO, the CIO leader that was saying, Hey, I took too long to adopt generative ai. But at the same time, I think it's on us to make sure that we, as secretary practitioners put in the guardrails, but ensure that the user experience protects them from making the, making a mistake.
Nic Acton: Yeah. I, I think you hit on the key term that people should be like keeping in the back of their head, which is guardrails, right? You give your developers, you give your software development teams, your, uh, everybody in your organization, these fancy new laptops, you get 'em on the cloud. You got, you pay all this money for all these licensed tools and everything.
You know, it would be like putting someone in F1 race car. And if you just have gates everywhere, then it's like, all right, go a hundred feet, then we stop you. Then we go another a hundred feet, then we stop you again. But with guardrails, like, I don't know, a single F1 racer that's like, Hey, don't put any guardrails on the track because when I fall off the track, I want to go hit, you know, a whole crowd of people and everything.
The guardrails are wanted by everybody. They want to go fast. They want to deliver with velocity, but they don't want to leak your data. They don't want to introduce CBEs. So, you know, I, I think that a lot of people think there has to be that trade off and security and all these things have to be this like kind of gate.
And the reality is with, with clever technology, with new ways of thinking, with new architectures, that's not entirely the case anymore. Gates are certainly valuable where they, where they need to be, but. It doesn't have to be throughout the entire process.
Clayton Smith: Yeah. I think, I think throughout the whole cloud industry, we talk about guardrails as being a place that, you know, your developers are creative, right?
And, and we want them to be creative. So it's about setting up an environment to where they're not gonna get themselves in trouble. And I think that's, that's, that's a hard thing to do. Yeah. But, um, yet that's, you know, that's what our industry and that's what our, our companies are all trying to do.
Aman Sirohi: That's where we're headed.
So,
Clayton Smith: yeah.
Measuring Success in Modern DLP: Frameworks, Metrics, and the Talent Gap
Aman Sirohi: Alright, second to last prompt. What do you both believe success, success will look like in the Modern Data Loss Prevention protection program that leaderships are going to enable in their organization? What is the success that you think will look like and is it measurable?
Clayton Smith: Nick, here, here's the crystal ball.
Are you ready? Do you see it?
Nic Acton: I
Clayton Smith: do. You
Nic Acton: got it? I, I see it. We, no. Hey, good news, no leaks this year. I can tell. I, I mean, you pretty much nailed it. I, and that's the, that's gonna be the hard thing for anybody in the cybersecurity role, in the cybersecurity sphere is that, uh, you unfortunately don't get, you know, extra credit for all the things you prevented because you don't have the crystal ball.
Mm-hmm. Um, you know, I, I think certainly look at, look at what all, all of our vendors are doing. Um, we're putting out a lot of great frameworks for how to think about how to build your practice. Mitre, dsa, um, CIS, nist, all have great frameworks to think about the problems and everything and start to measure everything against.
And then in terms of measurement, I mean, you know, what I like to say is, is there's a lot of great measurement tools out there, like meantime to response, meantime to detection. Uh, there's gonna be other kind of frameworks and other things to apply on top of it. The thing that I would caution is to blind is to not blindly follow the numbers because, you know, an example that I, I've seen out in the wild is like if I went and said that, hey, our, our, you know, data X fill rate is, is effectively zero this month, then I'm like, well, either you are really, really good at your jobs or you are not finding a whole lot of data xFi that's occurring in our environment.
So I don't know whether to promote you or fire you, quite frankly.
Clayton Smith: Yeah.
Nic Acton: So think about the context, think about the, you know, nuances in your strategy for that.
Clayton Smith: His crystal ball is different than mine, which is good. Yeah. So what my crystal ball says is that I'm gonna take a very optimistic approach of this.
I think AI as it pertains to security, to data loss prevention. I think we could very well be looking at a way to finally solve this lack of security professionals. You know, and, and the problem for those of that don't know it is we've got far more security jobs than we have security professionals. Yeah.
What I wanna find are problem solvers, maybe not necessarily coders. And so before you know that that ecosystem has been fairly closed to a small group of people, I think AI can really open our aperture to new people to bring into the industry that we can use now AI to do some of the really kind of technical heavy lifting.
Yep. And let people start looking at this from a problem solving capabilities. So for instance, one company that I was at, we had threat researchers and we, we found that they were linguists, they were musicians, they were, I mean there was this whole like very dynamic group of people. The reason why is 'cause they were problem solvers.
I go back to that and that was years ago. I go back to that and I think, gosh, what if they had had AI and they hadn't had to go and like learn all the hard technical skills, but instead bring like that, that problem solving. You know, like artists for instance are really great at seeing in the abstract.
Cole Padula: Abstract. Yeah.
Clayton Smith: What if we could put them into a situation where they didn't have to know all the nitty gritty technical details, but they could kind of help look at the abstract there. So I think it's a really exciting time for security practitioners.
Aman Sirohi: Well, we're gonna see Lucas. It's gonna get more and more interesting.
Alright. Before we wrap it up, I thought I'll throw out a totally different prompt that we didn't really discuss. Nothing to do with,
Clayton Smith: we're out of tokens though.
Aman Sirohi: Oh. If you're
Clayton Smith: gonna go with this prompt. Yeah. Yeah. We've, we've exceeded our rag.
Aman Sirohi: I dunno. We, we'll, we'll, we'll, we'll figure, we'll, we'll, we'll figure out who pays for it.
Predictions: Personal AI agents & the rise of your “digital twin”
Aman Sirohi: Uh, if you had one prediction on anything that you think where generative AI is gonna be disruptive, it doesn't have to be the data, it doesn't have to be to do with the basic one data to data things. Uh, I'll kick it off, so I'll give you guys some time to think about it because I came up with this on the fly is, I think in the next 18 months, we will have a personalized agent that we will download as an app or something, or we'll be integrated into our phones that will know things about me, know where I need to be, know how to respond to my spouse's texts or emails, uh, my kids' needs and wants.
Uh, basically a personal assistant will live in my day-to-day being, and that is gonna change how we operate because it's gonna take away some of the menial tasks that you might forget or you have to set out time to do. We'll now be done for you. Right. So I generally do believe in the next 18 months we are either gonna have phones that are being integrated this way, or we'll be downloading apps that will be your personal agents.
Clayton Smith: So like a digital twin of a month.
Aman Sirohi: Yeah, pretty much. That's scary, isn't it?
Clayton Smith: I, how many copies are we gonna make? I gotta be really careful with this. Your, your company really needs to know what they're in for on this one.
AI everywhere… and the coming backlash (guardrails, trust, adoption)
Clayton Smith: Um, I think my prediction is, is kind of similar. I just, I see it already happening. AI is weaving its way into my daily life.
Yeah. You know, I, I don't think a day goes by where I'm not using some form of ai. I'm using it in my professional life. Um, my wife loves to plan trips. It's planned almost every one of our trips.
Aman Sirohi: Nice.
Clayton Smith: Um, we were overseas this summer. Every restaurant we decided to go to was researched using ai. I mean, I think it just becomes this big part.
The other prediction though, that I'll make is I do think we're gonna see some significant backlash against it. I, I think that we've went so far and I'm already starting to hear it, you know, some of those guardrails, they're not necessarily working all the time. So I think there's gonna be a little bit of a lull as we have some backlash against it before it really kind of takes off.
And of course, a GI is, what's the next step that I Yeah,
Nic Acton: yeah. I, I remember getting this question and I, I, uh, you know, went a little kind of funny, like, be careful what you wish for. So.
Black Mirror future: perfect recall via wearables + cloud acceleration
Nic Acton: Gen ai, LLMs are getting more powerful, but as kind of a secondary effect, all of the supporting infrastructure around it is getting even more powerful.
Yeah. Storage is getting faster and cheaper. The kind of, you know, semantic search engines and everything that are powering, rag or getting more powerful, more optimized. And, you know, third piece, we're seeing more of the glasses with the cameras, with the mics and everything. So, uh, we're kind of heading to that black mirror, like, Hey, we get perfect recall for everything we've seen, said, every conversation we've had, et cetera.
And I think it sounds on its face like, oh, that's interesting. I can, I can prove everything I've said, and I want you to honestly think, like, would it really improve your life to have factual evidence of every argument you've ever had with your family? Would it in fact make it better, or would it just make things like worse and more fight?
You can see
Clayton Smith: a lot of torn apart families. Yes. I, I will say, and obviously being at A-W-S-I-I, I know this is a little, you know, AWS slanted, but I also do think that AI is gonna continue to drive. More folks to the cloud. I, I just think it's very difficult to build on-prem infrastructures that can move at the velocity that we're talking about.
So I really do think that, you know, the cloud providers are gonna play such a pivotal role in this transition.
Aman Sirohi: Jent, thank you so much for spending the afternoon with me. Uh, we're gonna kind of summarize it. Uh, if I miss any other key highlights, feel free to jump in if something resonated with you, because I think it would be great for the audience to take away some key things, uh, to their own organization.
Sure.
Key takeaways: mindset shift, unstructured data, and user context
Aman Sirohi: Um, so the first thing I, I thought about when, uh, when we were discussing this was, uh, there's gonna be a meaningful mind shift that each organization, each security leader, each practitioner, is going to have to internalize as we move into the world of generative AI and how it relates to our data that we are owning within the organization.
Right? So that's one of the ones I think the meaningful mind, mind shift each of us will have to do. Second one is, I think gone on the days of traditional tagging, labeling of our data because we're in the world of unstructured data. It is not going to change. It's only gonna, that problem is gonna amplify, uh, even more, right?
So the essence of knowing your digital breadcrumb trail of your data is gonna become more and more important for every industry, whether it's medical industry, finance industry, retail industry, legal technology. It's gonna impact everyone. And the last one that I thought that's gonna be very, uh, important for us to kind of take, take key takeaway is going to be context to the user, right?
Because we want them to learn, we want them to understand, we want 'em to understand the contextual decisions they're making with their data with the world. We, we are gonna enter, I don't know if I'm missing something you wanna add on, but, uh, those are the kind of two or three things I picked up.
Clayton Smith: I think actually one of the important things that you said is you compared it to the move to the cloud.
I'm actually gonna go even further back like this is when, when the internet actually first started and companies were trying to decide, should I have a website? Should I not have a website? Like now it seems kind of silly, right? And I think as we look back on this time, you know, the companies that don't embrace at least some ai, and I'm not saying turn their whole business into an AI business, but some, even if it's for a small part of the internal productivity, I think, um, I would encourage anyone that's out there to be, be looking for those areas of opportunity to just make themselves more productive.
So, no, I, this has been a great session. Thank you
Aman Sirohi: so
Clayton Smith: much.
Aman Sirohi: Appreciate it. All right folks. Uh, thank you for joining us and I look forward to seeing you in our next session. But please continue to move forward to our next session, which we'll continue after this.
New session kickoff: Building a security-first culture (boardroom to business units)
Meghana Dwarakanath: Hey everybody, my name is Magna Wna. I'm the VP of Customer Experience here at Cyber Haven, and today we have with us Payman Arman, the SVP and CSO at xper. And we'll be talking about building a security first culture, all the way from the boardrooms to the business units where it gets implemented. Uh, with that, let's first welcome our guest today, uh, Payman.
Arman Payman. Would love an introduction.
Payman Armin: Uh, thank you Magna for first of all, having me. And, uh, so Payman Arman, I, uh, the CIO as well as the CSO for xper and the previous role. And historically I came from the infrastructure and software development and, uh, not security. And later on in the career got into the security.
So it gives me a perspective of both side of the challenge, both from the user point as well as the security standpoint.
Meghana Dwarakanath: Thank you for that. And with that coming from your background. What is your perspective around building a security first culture in the organization?
Culture in practice: trust, education, and balancing security vs employee experience
Payman Armin: The, the security first culture is truly enabling, uh, the employee and making them understand what the securities and why we are doing it.
Uh, without it just from a purely driving policy or, and expecting them to do it, it just doesn't work. Once they understand it, then they, or they become partners, they becomes, it becomes part of their day-to-day routine and think about it.
Meghana Dwarakanath: And with that, there is always this. I guess balance, right? And you having played the roles of CIO and cso, you very well know that balance there is the need for security, but at the same time, the employee experience needs to maintain a certain level as well.
How do you balance that? How do you control access and mitigate risk while making sure they have what they need?
Payman Armin: It's, I do. I, I really do because I'm, I'm sure that early on in my career as, as when I was trying to implement technology, I put drove many of my CISOs crazy because I was pushing the limit.
But, uh, over time I think it's easier to, uh, get to that balance. The tools are far more, uh, far more easier to manage and still, you know, insecurity still do it, uh, least, uh, at least access policy, uh, but still enable them to do it without adding additional work.
Meghana Dwarakanath: That is great to hear that technology is making the right strides to make it easier for everybody.
Board & governance: regulation, security councils, and measuring outcomes
Meghana Dwarakanath: And with that, you know, there's the business unit, there's the operational part of it, and then there's the board part of the conversation. Right. Uh, in terms of the strategic planning and the budgeting, how is security represented at those conversations?
Payman Armin: Uh, so I'll start with the, the latter with the board side from top down.
I think the board over time has, is easier, especially for public company if with any board member with tenure one way or another, over the last, uh, 10 years they've been through a security event. Also, there is so much regulatory stuff has come into play, uh, with the introduction of the SEC eight K ruling as well as the Biden executive, uh, order.
The board is very much aware of it. Now, as I said, these are all very much public driven or as you're dealing with the federal government ruling it much more stringent. When you get on the smaller boards and, uh, non-public, uh, unfortunately you're gonna end up getting into positions that the board's attention to security sometimes is as good as the duration from the last security event.
Meghana Dwarakanath: I'm sure a lot of our listeners are gonna relate to that because I'm sure they're going through similar conversations with their boards as well. And with that payment in terms of, you know, you've got the board buy-in, you have this program. And then you start rolling out your security program. What do you see as the biggest cultural or almost behavioral gaps that you have to overcome?
Payman Armin: It's building the trust and education of the business. It's, uh, it's gonna be key. And we have implemented, uh, we have a quarterly security council that we meet with a cross-functional, they are part of the core council of the, the security. And they know what my roadmap is and they know what, I know what their roadmap is.
And so we, there is that alignment at the top business function. So that becomes, that's a key, but then you have to continue it and build that trust and build the, the education. So they are, you are invited, you are invited as part of the, you know, the security by design part of the initial phase of any of their initiatives.
So you can ask the right question. And, uh, this becomes, even nowadays is so important. You look at from a data and data privacy, uh, there is so many regulatory stuff around the globe. Every country has their own, the flavor of it.
Meghana Dwarakanath: Thank you. Thank you so much for that. And then, you know, you've set up your program, you know, you've talked to your people.
How do you measure that this is working, right? How do you measure it as, hey, this is the success. Uh, do you measure it at a policy level? Do you measure it at a outcome level?
Payman Armin: Uh, it's more of the outcome level, more of events more. And you do training, you do exercises, you know, the sim I'll pick on this most simplest one that
Clayton Smith: Yeah,
Payman Armin: all of us do fishing, exercise.
How often do you do it? And, uh, you know, and we, you know, we have a, a, you know, we actually measure how many times how, you know, sub, you know, how many times our frequent clickers are. Uh, and the thing, and the reality is the more you train, the more you talk about it, and at all, all levels of the organization, the outcomes becomes better.
And, uh, and, and, and that's the most simplest one. Uh, then, you know, then when you talk about whether it is in product or any kind of a, uh, on the business side, we hold customer data. Those are actually, you know, as important as employees. You know, it's just, uh, the, we are custodian and we have to protect that, and we have to educate the employee base.
Yeah. Why is that important?
Meghana Dwarakanath: That is true.
Securing AI adoption: vendor assessments, privacy, and supply-chain risk
Meghana Dwarakanath: And, and when you talk about data, there's another buzzword that's going around today, and that's ai, right? Everybody wants to use AI or they're building AI into their products. How has your security education and governance evolved to kind of encompass this?
Payman Armin: Yeah.
We actually do have a, a secure, uh, AI council. We started that because every single technology out there, whether they do or not, they have AI in their, uh, product line. So, uh, so we actually partner, we, we use a security, privacy AI assessment with, for every vendor coming in. Mm-hmm. And it is very important.
It is not, you know, the AI is not just about. The outcome of the ai. And we all know what are the, you know, the bias and all the other things that can go with that, but it's how are they using the data that you're giving them? How are they keeping that secure? How are they using it for training purposes?
Uh, what the, which one of the major ai uh, uh, AI powerhouses they're using. So the, you need to ask those questions. Uh, and, uh, there is a lot of innovative, fabulous AI technology out there on the newcomers. The challenge is, you know, or do they have the security posture for them to be the custodian of the data that you're sharing with them?
And that's important.
Meghana Dwarakanath: Absolutely. It's almost like we all have to update our risk assessments to make sure the AI technology we are bringing in through the supply chain is as secure as we want them to be. Yeah. And, and with all of these changes and, uh, this rapid evolving of AI and really how we access and use data, uh, do you have any pivotal moments that you learned from that really influenced you in the recent past?
Payman Armin: Uh, I'm sure there is and I'm, uh, sure that, I'm hoping there will never be, but, uh, I, I, I think there is events happening within the, within the industry, literally on a daily basis. There is, uh, there is, and uh, what we do and, uh, from a security practice, we actually, whether it is reading the, you know, SCC eight K for the, uh, for any, uh, uh, announcement of the, of security events or any of the publication, those all can be learning because, you know, uh, we are, no, no, no, security is perfect.
No security is a Fort kn. So understanding what is the new trend of the, you know, what security vector, attack vector they're using and, uh, being, trying to mitigate, be ahead of it is a key. So that's, that's I think is an ongoing, is not one event is ongoing maturity and security. Posture maturity.
Meghana Dwarakanath: Absolutely.
Driving alignment: gamification, insider risk, and just-in-time training vs “Big Brother”
Meghana Dwarakanath: And talking about coming back to the business units again, uh, I'm sure you have your security oriented, favorite business units in the organization, and let's say you have business units. That need a little more push to adopt certain postures. Right. How do you bring that cross-functional alignment across these business units?
Payman Armin: Uh, as I mentioned before, actually, uh, I'm privileged, uh, that in my current, we started the security council early on. Uh, I've been with the company, uh, for, uh, three years and we've been doing it and we are maturing as a security council. But that is truly a cross-functional, and it is both, you know, it is not just about the engineering and development is practices that we do in, in our sales, in our, uh, uh, finance and all aspect of the company.
And, uh, having that alignment and it's important. And, uh, as you have the buy-in from different function, it feeds on each other. Everybody, you know, we are aiming for the same goal.
Meghana Dwarakanath: Yep, that is true. Do you gamify at all? Do you, uh, have rankings for your business unit and post them?
Payman Armin: I have had in the, in the past, uh, I have had the wall of shame and wall of fame at the executive level and, uh, made it a competition.
Meghana Dwarakanath: Awesome.
Payman Armin: It it works. Yeah. Competition brings the best.
Meghana Dwarakanath: It does. It does. And, uh, especially in the tech space, we are all super competitive. So, uh, well, with that, uh, payment, I wanted to ask you, looking ahead, right, what is the next frontier for you in security? What do you want to incorporate next into your organization?
Payman Armin: Uh, as, uh, you know, our bad actors keep on evolving. We need to evolve and we need to, uh, I actually, when I, uh, you know, brings up a, uh, the. E-E-S-P-N-I believe, uh, had a interview with Bruce Lee, uh, a few years back and it says, be like water and we have to be, you know, and I like that saying because, you know, we are adaptable.
We have to be adaptable, uh, to the, what the trends are happening and how the new technology, introduction of new technology evolves and we have to adapt to that. But at the same time, you know, just like water, it has to be strong enough to, you know, to smooth out the edges, to, to be able to, uh, you know, move mountains.
Meghana Dwarakanath: That's a great point. You were being piman about bad actors evolving. Are there any nuances in terms of how you handle external bad actors versus internal bad actors?
Payman Armin: Yeah, actually, uh, it's a good point. Uh, you know, external is more publicized and it's, it's easier to see, uh, or react to the internal and is less about malicious insider threat or insider, uh, from an employee is more about, you know, in all good intention, they will ex they will make the wrong decision.
And again, phishing is the wrong one. Now, you look at the, the introduction of all the new AI technology, they will try, they will click and, you know, they will by, you know, because they wanna write the best email, they will put that some, you know, internal data just for, uh, purpose of review. External and those are the things that we have to educate and we have to be able to enable them to, you know, and educate them what's good, what to use and what not to use.
Meghana Dwarakanath: That's a great point. You bring up our man in terms of, hey, most of our people are well intentioned, they just don't know how to do the right things. Definitely our executives are very well incentivized to consider security and put thought into security for their business units. How do you bring that same responsibility and accountability to the individual contributors who, like you said, are just trying to do the right stuff and uh, probably doing things that are potentially unsafe.
Payman Armin: So, uh, you know, we talked about education and all that sort of thing that I will say two things around that. One is that security first culture will completely break down if at the individual contributor they see two different rules. People at management have different rules compared to at the ic, so is one.
And from a security standpoint, and I hope that you know, the employee base as a company, we are all aiming for one goal. We are unified. Doesn't matter in what we do, whether it is in our go to market, whether it is on, uh, supporting the employee base or in the security, we, at the end of the day, we have one goal that we are working.
Meghana Dwarakanath: Forman. Uh, you talked a lot about education and training being a huge part of a successful security culture. And now when you think of training, there can be a couple of ways you can go about it, right? There's the proactive training that you're doing as a part of your regular training exercises, and then there is the just in time training, which is, you know, somebody is about to do something wrong and they immediately get a nudge.
What are your thoughts on that?
Payman Armin: Actually, I mean, so that concept has been around for the longest time. You look at. 20 years ago, we had the proxy servers and as it, you know, and security, we banned going to gambling sites. But the reality is that the sites are now becoming harder to detect, harder to know if it's safe or not, and what is their ranking.
I mean, there is just so many KPIs that you can measure even at the site level that they are somebody's going to. So that's where the, you know, you, the technology and I, I, my recommendation, don't, you know, and we talked about this big brother, not big brother, you know, you don't want to end up being a forceful blocking, but at least educate them, this is what the ranking is.
This is looks suspicious, double check it. And once that education is that, then you can have that system to become actually more pro, uh, proactive and actually doing the right thing.
Meghana Dwarakanath: That makes sense. A crawl, walk, run.
Payman Armin: Yes.
Meghana Dwarakanath: Um, and with that, with all of these security tools helping you and the employees do the right things, there is again, that balance between the visibility that you want versus employees feeling that they're being monitored all the time, or that there is a big brother watching them.
How do you balance that?
Payman Armin: It's, it is truly a balancing and you will teeter-totter between the big brother and the, but truly is, uh, you know, using the tool to just in time education rather than having, uh, you know, and that's gonna get them into a motor partnership rather than, oh, you're gonna get an email, you did something wrong, now you're gonna do this.
That's usually after the fact that the action has already happened. So doing that, uh, you know, bringing them along for the journey. Being part of it is not that, you know, the last thing you, you wanna be is. Uh, you know, they have made this great strides great, something that they, they want to put it in the company or release it, whatever that might be.
And then you don't want security to be the afterthought because usually tends to be, oh yeah, security is stopping that progress. If, if you have bit, if you can build that culture of partnership being part of, you know, being at the table, being their advisor becomes the key that, you know, by the time they are there, you're not the one helping them.
You just help them make a better product, better for better experience for their customers, internal or external.
Meghana Dwarakanath: And with that, thank you so much for joining us today, Payman. That was a lot of great insights on security culture and how we should be looking at security across business units. Uh, and with that, uh, we come to a close of today's session for the Data Defense Forum.
Thank you so much for joining us, and do stay tuned for more upcoming sessions. Thank you.
Sponsor segment: Why legacy DLP is failing—and how data lineage fixes visibility
Cole Padula: Legacy DLP tools were built for a different era. When data lived in static files or behind firewalls on corporate servers, back then scanning content and applying labels felt like control. But today, that illusion is costing companies more than they realize. Data now flows across SaaS applications, chat platforms, personal devices, and unsanctioned tools.
It's copied, pasted, reshaped, and shared in ways legacy systems were never built to track. And while your DLP dashboard shows you that policies are being enforced, the truth is most sensitive data movement is invisible. And what you can't see will hurt you
in the real world. Employees use tools that it hasn't approved. They access data from coffee shops, send client details via personal email or drop IP into a teams chat and legacy DLP, it doesn't see any of it. Insider threats go unnoticed. Regulatory exposure increases and SOC analysts drown in alert fatigue, all because your tools are watching the wrong things.
Visibility isn't a nice to have anymore. It's the only way to stop data from slipping through the cracks.
Ignoring what your DLP can't see isn't just a technical oversight, it's a business risk, a security risk, a compliance risk. When sensitive data moves beyond your visibility, you're exposed. Breaches can go undetected until it's too late. Investigations will drag on, and remediation costs will skyrocket, and the damage to your brand and customer trust, sometimes that's irreversible.
Then come the regulators, GDPR, hipaa, CCPA. They all require proof that you're in control of your data. Legacy DLP with its gaps in guesswork can't deliver that, and the result, fines failed. Audits and reputational fallout. But even without a breach, there's a quiet cost. Security teams wasting hours, triaging false positives, burnout, frustration, and a growing disconnect between it and the business.
It's meant to protect. Ultimately, staying blind doesn't keep you safe. It just keeps you vulnerable.
Modern data security isn't about scanning more files or labeling more documents. It's about understanding how data moves and why. That's where data lineage comes in. It tracks the full journey of sensitive data from the moment it's created through every app, message, and user interaction. Combined with behavioral insights, lineage gives you the context to separate real threats from routine work and stop them instantly.
It's smarter, faster, and far more effective than Legacy DLP, but don't just take my word for it. You should request a demo today and see what legacy DLP has been missing. You'll quickly see that Cyber Haven gives you real time visibility, context, rich insights, and control over the way data moves in the real world.
All you need to do is visit the link below to get started.
New session kickoff: Insider threats with Rinky Sathi (context is everything)
John Loya: Hi, I'm John Loya. I'm the VP of Sales Engineering here at Cyber Haven. Today we're gonna be talking about insider threats and, uh, joining me today, today is rinky sat, uh, rinky. Would you like to introduce yourself?
Rinki Sethi: Absolutely. Thanks for having me here. Hi everyone. I'm Rinky Sathi. Uh, I'm currently the Chief Security Officer at Upwind.
I've been at Upwind for six months, uh, 21 years in cyber. Started my career at pg e. Went to Walmart, eBay, Intuit, Palo Alto Networks, did a short stint at IBM, took my first CISO job at Rubrik, second one at Twitter during the crazy times, um, left before the Musk takeover, and then spent the last three [email protected] as CISO and CIO.
And also the founding partner of Lockstep, a venture firm focused on investments in cybersecurity, excited to be here with you.
John Loya: Um, so what's kind of unique about insider threat kind of compared to external threats? What's, what's the the, the sort of driving factor behind?
Rinki Sethi: I think the interesting thing is like, you know, in the past I was always saying that like a threat is a threat, internal or external.
Um, I think my viewpoints on that changed, um, in the last few years as ciso, that the mo most important thing in differentiation is the context and the internal knowledge that an insider might have, whether it's ac that, whether the incident is accidental or not, that the threat is a little bit different and the movement, you may need additional checks to see if it's actually valid or not in what that user is doing.
So, um, I would say that's the major difference, but I still think of the threat as still a threat, whether it's an external or internal factor.
John Loya: Okay. And how do you kind of differentiate between those, those really sort of, um, malicious intent users that are insiders and the ones that are just kind of accidental usage?
Rinki Sethi: Um, it's actually a difficult thing sometimes to differentiate. Um, and a lot of times there's really good tools out there that can help you say that this user behavior seems normal and, um, is something that they, you expect them to do on their day-to-day job. Um, in other cases you might see that. Hey, this user has this access, but this activity shouldn't be performed.
Um, and so I think that having the right tooling and products and automation will help you understand whether somebody's doing something intentional or accidental. And sometimes you actually have to rely on interviewing the employee, right, to understand whether or not something's malicious or not.
John Loya: Okay.
So you're interviewing those employees kind of after the fact, or you interview. Are you interviewing departments kind of ahead of time to understand what's expected behavior?
Rinki Sethi: Oh, you have to do it before. If you're gonna build able to an insider threat program, you do have to understand what is acceptable usage.
And in fact, in that process you might unders, you might say, oh my gosh, we need to lock certain things down and put in additional controls, uh, so that when you roll this program out, you get more efficacy from the implementation of it. Um, but I think that once you implement the program, implement your tooling, implement the automation, you should be at a point where your noise is low.
But as we know, most of us practitioners that run these programs, the noise is not low. Um, and so you do sometimes have to go follow up and interview on why something may have happened.
John Loya: Okay.
Signals & noise: tuning alerts with SIEM + AI, and balancing privacy across regions
John Loya: And then in terms of early signals, like what, what are you looking for early before an actual, um, incident occurs?
What are you looking for in terms of user behavior or access or, or, or those sort of things?
Rinki Sethi: I think if you see some kind of like, let's say someone has is, uh, is, you know, escalating privileges and it's something that you're expecting that to happen. I think having those kind of alerts and signals baked in so that your team knows that, Hey, when you see this kind of escalation or you seeing this kind of data movement, we should always follow up.
We should always chat to make sure that it was actually intended. Yeah. Um, and so making sure that you understand what your high risk behavior that you wanna monitor is within your environment, what the data is that you're actually protecting, um, and if something is touching that data that you don't expect, this kind of behavior that you've got really good alerting set up, um, and fine tune that over time, such that like, you actually know that this is valid and that you need to go follow up on it.
John Loya: Okay, perfect. When you talk about fine tuning, a lot of what I hear from, from customers is that there's a lot of noise generated by tools, right? Because they're trying to monitor everything. They're trying to identify where the risk might lie. Are you using like a, like a, like a sim or some other sort of solution where you're pumping data in from these different tools and then you have signals to kind of work off of?
How do, how do you kind of approach that and how do you reduce noise within that environment?
Rinki Sethi: We do. We, in, in the past environments I've worked in definitely had a sim, um, where we aggregate all the data from different tools, maybe have our own correlation that, um, we may drive, and then that would then tell us whether or not an alert is fired off or not.
I think this day more and more the expectation is that with ai. There should be some auto kind of behavioral fine tuning that happens without you having to man manually go and intervene. And I'm seeing products in the market that are starting to like drive that kind of improvement and where users can just say that was a false and then you kind of have something that's, there's still a human in the loop, but there's some adjustment that happens automatically.
John Loya: Okay. How do you kind of balance, uh, employee privacy? Alright, you, you, you've got tools, you're monitoring user activity. How do you kind of balance that with, with, uh, regulations or how do you balance that with like a remote workforce?
Rinki Sethi: I think that's a really, it's some that can be really difficult. So I have to like go back in my career, I remember I was working for a company, um, where we had to roll out MDM for the first time.
You remember like mobile device management? Yeah. And I swore after that that I'm never going to go to a company where I have to roll out MDM. It's either gotta be implemented or it can't be an initiative. And one of the big reasons was it wasn't, back then the privacy laws were not as like stringent, right?
Yeah. So it was like you could kind of go and do what you wanted to. But there was an employee uproar where they were like very concerned about privacy and they're like, you're gonna put something on my phone, but it's my personal phone. Do will you have access to my photos? And all that kinda stuff. And at that time it was like making sure the employees understood why we were doing this.
I think now fast forward we're like rega like. Depending on what region you may live in, whether you're in the US or whether you're in like certain areas in Europe, expectation of privacy is very different. Like I think there's less expectation of privacy in the US than other countries. But I think working with your privacy team very carefully on how do you do this?
How do you communicate, how do you do that within the laws within the country? It's still possible to roll these things out. It's just gotta follow a process and there's gotta be transparency. I think that's key, uh, in how do you communicate and be transparent that we're going to be rolling out this program.
It's in alignment with our acceptable use policy and everything we're doing is for protection of our data and for protect, making sure our customers can trust what we're building. And so I think it's really important to drive that back to the employee base and then work with privacy teams to make sure you're doing things within constraints and laws that might be in place.
John Loya: Okay. And then in terms of, of, of that, obviously location is one of the factors, but you've gotta look for insider threat from multitudes of different departments and levels within the company. Do you get to a, do you get to a point where you're treating different groups differently, like executives versus contractors?
Versus, um, people that are working on a short term project. Is, is there a differentiator or are there different signals that you're looking for within that?
Rinki Sethi: Uh, in, in the two programs that I've kind of built around this mm-hmm. We've had, it's all based on behavior. So it's less based on like, Hey, we are not gonna do this for executives.
It's like, if we see certain behaviors and we see certain kind of data leaving the environment, certain types of behaviors or certain types of privileges that are being escalated or whatever that might be that we're concerned about certain types of data being accessed, that's what we would start monitoring less from a rule perspective.
Um, I think where we had to be careful is when you start detecting things that might be like someone's tax file or something like that, that how do we make sure we're not starting to look into things that we have no business looking into that wasn't actually malicious.
John Loya: Okay. Perfect. Understood.
Program impact story: finding hidden gaps (fake hire + non-enterprise GitHub/GitLab)
John Loya: And as a result of kind of these programs and, and the visibility that you have, have you ever made any changes to like access control or other configurations within the environment to kind of reduce risk in the future?
Rinki Sethi: A hundred percent. So, um, what's so interesting about programs like this is that as you're starting to implement them, you actually find gaps in your environment. You think you had a control in place, you think you had an application or a system locked down, or you thought that you had an enterprise account for a certain application and you find out you didn't?
Um, I had a, I have a, a, a story on this one. We actually had an employee that somehow passed an employee. With a fake identity. Okay. That passed the interview process. Never showed on video or anything like that. Okay. Ended up that there was no employee of that. There was no person of that name. And they had hired someone in a country that we didn't allow, um, we didn't allow employees to be in.
Yeah. And we noticed that it was going through a GitHub or a GitLab account that was not an enterprise account. And we thought that business, business that we had acquired was using our enterprise GitHub and turned out it wasn't. And so they uploaded it into some like, accounts that we weren't tracking, and we caught that through our insider program, um, and through the tooling that we had and that, and then it was like, what is going on here?
Hiring Fraud Exposes Process & Identity Verification Gaps
Rinki Sethi: And that uncovered everything. I just shared that. Oh my gosh. And it, it wasn't just gaps in tech, right? Because okay, of course we need to get these guys and get us a single sign on and get them on an enterprise account. But also what happened in that interview process that somebody was able to get in and I were there like identity verification.
Did we do all these things? Did someone interview and ask this individual to be on video, like management? Like, does management need training? It was like all these gaps, business process gaps, all these gaps that were uncovered. And, you know, we went back and said like, there's some work we need to do.
John Loya: That's wild. What, what sort of changes kind of came out of that? Um, um, from that incident?
Rinki Sethi: Yeah.
Fixes After the Incident: Manager Training, Video Interviews & Enterprise Accounts
Rinki Sethi: One, it was like. Our engineering managers need to have additional training on get your people to show up on video. Maybe we fly people in for the first interview to make sure they're like real people. Did we do a background check?
Um, and how did that all pass? So it went and we went and drove improvements in all the HR processes and kind of like training our employees on this, especially like this happened to happen under a first time manager who hadn't been trained. Yeah. And so like, how do we make sure they're trained and then all the way to let's do an assessment and make sure that all of our enterprise, uh, technology and enterprise app, let's go and make sure even for the acquisitions that we've made, let's make sure that they're using enterprise accounts where there are gaps in the due diligence process.
Like what happened there. Yeah. So it kind of led to doing kind of an audit and then locking things down that we discovered. So lots of proactive work then that happened out of an incident that was discovered through this program.
John Loya: I think what's really interesting about that point is that what you're kind of talking about underneath that is, is you made a change in company culture because of that.
Right? You're now more personable. You're now doing these interviews in person.
“Big Brother” vs Risk Appetite: How Culture Shapes Insider Programs
John Loya: Are, is it, is it difficult to kind of, you know, work within the culture of a company and kind of shape the culture of a company while you're running this sort of insider, insider program?
Rinki Sethi: It's, it's, it's it, like I've been at companies that I've just said, you're not gonna run a program like this.
You're not gonna do that. It feels like Big Brother and we're not gonna do it. Yeah. And so you can't like, and so then finally I went to a few companies that were either already had the program or had the appetite to put more controls in place because of the number of. Incidents and whether they were mistakes or intentional things that had happened.
Clayton Smith: Yeah.
Rinki Sethi: And it's, it was interesting 'cause I just, it was like, it's so from HR team to HR team in different companies or legal teams, the appetite for risk is so different. And it's like we're willing to accept the risk because we don't want our employees to feel like they're being watched. And there's others that are like, you have no expectation of privacy when you're using a company device.
And we put that in our acceptable use policy. Um, and so, and then there's companies that are kind of in between somewhere. And so I think like if you start with the information that here's the incidents that we've had. At the end of the day, you need to protect the company and the data, and we need to drive this.
And it's not a big brother. It's like really kind of monitoring our assets and there's, how else are you gonna do it? Right? Otherwise you're leaving the company with a gap and someone has to accept that risk. And so I think getting that, like risk culture, I mean, we talk about this in cyber all the time, is that you want like a very good, our, our job as security people is not to hold onto all the risks and remediate all the risks, is to drive awareness and accountability of those risks.
And, and so at the end of the day, that's what the discussion is, is what's the risk appetite? And that's like actually ties really closely into the culture of a company.
John Loya: Okay. Yeah, I hear that a lot from, from different, uh, security departments, right? Sometimes they want to kind of rule the rule the, the, the company with like an iron fist and you can't do this.
You can't use USB devices, you can't print. And then I talked to other companies and they're just wide open in terms of what they can do and they encourage. Creativity and they see it as a way to kind of, you know, uh, um, you know, develop and, and beat up petition. It's really interesting that, that it comes down to the individual culture of that co.
Getting Buy-In: HR, Legal/Privacy, and Executive Sponsorship
John Loya: Um, you talked about a little bit about other departments and HR departments. It sounds like you need a lot of buy-in from different departments. I'm imagining HR is one of them, but what are sort of the other departments that you need to get buy-in from whenever you're implementing a, um, a program?
Rinki Sethi: HR is definitely an important one from a culture aspect and like making sure they're on board with what we're doing, uh, what you're doing.
Um, I think legal team privacy are definitely key, um, stakeholders and honestly I'll say like the executive team needs to be bought into this, um, and supportive of it. And so like your, your e team or your C team needs to really kind of be, be advocates and champions of what's hap what, what, um, the programs that are in place.
Yeah. Um, I think it's interesting like, um, when once we did put this in place, it was actually the legal team and HR teams that would come to security all the time saying like, can you look into this? Or Can you look into that? And became like a really useful tool for them, right? And, um, for them to go and drive investigations when needed.
And if you don't have something like this in place, it becomes so difficult to go back and look at. Was something initially, uh, was something accidental or intentional? Was it malicious? Like what happened as also when you have like. We know, like after the pandemic and even late pandemic, right? We had like lots of companies, there was like a spur of like layoffs that happened, right?
Um, and having these things in, in place to make sure like, you know, employees, I don't know if you wanna call it intentional, but like they're nervous and as they're leaving a company and they feel like they need to save their data and the work that they did and
Joe Sullivan: yeah.
Rinki Sethi: Um, having some of these controls in place help prevent, right?
That like kind of prevent really like incidents from happening where data was leaving our environment and things like that.
John Loya: Yeah. Yeah. We see that a lot where there's um, specifically like engineering departments or anybody that is creating content, they feel ownership of that while they're working in an organization.
And so whenever they're leaving, they feel like that's part of their portfolio and that's something they can take with
Rinki Sethi: That's right.
John Loya: Yeah. Yeah. Yeah.
Rinki Sethi: And, and there's an integration right? Between work and life and people do store photos and things like that. Yeah. And tooling, the right tooling can actually help you enable employees taking the data that they need and want and like making sure that they can have that not triggering alerts.
And then you're also monitoring if there was some kind of accidental movement that shouldn't happen.
John Loya: Yeah.
Why Tooling Matters: Faster Investigations & Preventing Data Walkout
John Loya: You talked about, um, kind of, you know, some of the companies you've worked at that you would not implement a program like this due to culture and then others where you have implemented it or it's already in place.
How can you, can you kind of walk me through the difference in how you respond to an insider threat, um, incident at a place that doesn't have tooling in place versus a place that does have tooling?
Rinki Sethi: You literally have to write your own rules for the things that you care about. And so you're building your own kind of like insider threat capability and there's a lot that gets missed through that.
Right. Um, and so I think like in those environments you have to go then dig into logs, get onto user machines, perhaps do imaging for things that could actually like be prevented. Like could you just look at what, especially now in the environments we live in where it's mostly like SaaS cloud and stuff like that, you shouldn't have to do that.
And so I think that's the big point in that when you have an understanding and a program in place, there's really good processes that are established. Um, everything from what the security analysts or security engineers should be looking into and not looking into what the, um, when do you report this to legal and privacy?
There's generally like a very good understanding and partnership across teams. When you don't have all that stuff, everything's an incident and every investigation take can take like days if not longer to figure out
Clayton Smith: days
Rinki Sethi: and Yeah.
Clayton Smith: Okay.
Rinki Sethi: And stitch what an individual might have been up to. Right. And I remember like back in the day when we had to do that and it would take, especially with an insider, it would take 10 times as longer.
'cause there's like, they know how, like if it's intentional, they knew how to hide things and stuff so it would take forever. Whereas like the tooling and automation that exists today, like you can catch a lot of these things pretty easily even prevent something bad from happening.
John Loya: Okay.
Real-World Catch: Admin Exfiltrates a Full Slack Export
John Loya: Do you think you can walk us through maybe an example of where you caught an incident and it just wouldn't have been possible to even detect it if you didn't have any sort of pulling in place?
Rinki Sethi: Yeah, there was a, uh, this would've been really hard to catch, uh, without tooling, but it was when, um, we had an IT admin who had privileges that took an entire dump of everything that was happening in Slack.
John Loya: Yeah.
Rinki Sethi: Um, like every message the entire, like Slack file, the master file, um, and then took that and went and uploaded into a personal account.
John Loya: Okay.
Rinki Sethi: So like you think about that like super high risk. Right. And what we were, we got alerted when that file was downloaded.
John Loya: Yeah.
Rinki Sethi: And let that we knew that the individual was doing them and watching it. So it was like we could have prevented it, but we needed it to happen to be able to kind of make sure that it was actually malicious.
'cause there was, we expected there was gonna be malicious intent, but the fact that we were able to catch that, because technically an admin should be able to like, do some of these things even though maybe one would argue that would need like a secondary control in place. But that's a really hard thing to catch without any kind of tooling.
John Loya: Yeah.
Rinki Sethi: But if you say, I wanna be triggered on someone doing this activity specifically, even if it's an admin, that's where I think Wing gets really important.
John Loya: Yeah. That's really hard when somebody has access and they have necessary access to, to, to data. And then it's all about kind of what are they doing with that data and is that within reason.
Executive Departures & Privacy Concerns: Making the Case to Leadership
John Loya: One of the things I've seen in the past is, you know, we'll get brought in whenever an executive has left the organization. Right. And it could be all the way at the top of, of the chain, and they'll do this as well, right before they leave for, uh, you know, another position. They'll back up all of their email or something like that.
Is, is that something that you see occurring across kind of your experience as well?
Rinki Sethi: I've seen, I've seen that at all levels. Um, where, and again, it's like people wanna save their work, they've worked hard, they wanna like save their work, you know, and sometimes it's malicious, sometimes it's not. It's just them wanting to keep their work a hundred percent.
I've seen that at different levels in the company. Um, I think at the executive level, the access you may have to, like business intel and stuff is more kind of critical and so it becomes like a bigger risk to the company. A lot of times you see companies saying that the day you decide to exit someone or they give notice, they may just cut access right then and there, although that might also be late because they knew that they were leaving way ahead of time.
Yeah. So, yeah, we do see that.
John Loya: Previously you talked a little bit about how you get, um, kind of, you know, different departments and their buy-in and their support. How do you kind of get that from the executive staff? Do, do they understand the value of these sort of programs?
Rinki Sethi: You know, like there's, um, what I've seen is like people are very opinionated around their privacy and how, uh, that's where it kind of comes in where it's like, why do you need to look at that?
Why, what is, there's a lot of questions around like, what is the security team going to do with this? Kind of like, how do you know the security team are good people are not gonna abuse the data that they have access to? 'cause you have access to everything in this particular context. So there's a lot of questions like that, that I've had executives ask, like, what does a security team have access to and how are you making sure that's audited?
Um, but generally saying like the, the executive team wants to explain the risk and what this is there to go and solve for, most of them will get on board. I think the, the big question is around privacy of the data that I have on my devices that are not like company, maybe I have a picture of my family on there or something like that.
Yeah. And whatever that might be. And I think again, it's like company cultures vary. That some companies are like, we issue the device, it's ours. Like if you don't want us to know about it, don't put it on there. And there's other companies that are like a little more liberal in that, in that way. So it just depends on risk appetite.
John Loya: Okay. Perfect.
Modernizing Insider Programs with AI (and Avoiding AI-Washing)
John Loya: And then, um, obviously today it's, it's a very different workplace than it was five, 10 years ago. Um, how do you kind of, uh, uh, like what sort of advice would you give to other practitioners out there that are looking to start a program or enhance an existing program? Right. You, you, you've got gen AI now both as a threat and as a tool.
Um, you've got a various tools from network to endpoint to email. How, what, where would you advise 'em to kind of start and what sort of pitfalls would you kind of advise 'em to look out for?
Rinki Sethi: I think if you're creating a program today, you're lucky. I've been saying this like in every conversation I have that like, and I, I feel this way even now and I felt this way in my last CISO rule that you, if you're vintage 20 24, 20 25 companies, like companies that are like really innovative today or even PE companies that have been able to pivot right and are like really leveraging AI in a meaningful way.
As security leaders, we need to re rethink our programs. Like it's time to be like, we should look at the whole tech stack and get, leverage AI in the right ways. 'cause it can drive efficiency, it can drive like more accuracy. There's so many interesting things that's might have been my biggest advice is like if you have an existing program, go look at the innovation that's out there.
Look at the companies that are. Doing things with efficiency and can scale and like our think thinking about ai, not as a buzzword, not as like, oh, it's another ai, but like, really are they kind of helping you, like reduce noise, help you with remediation, all those kinds of things. Um, and I think it's a really interesting time for that.
And so that's my whole thing is like, it's really time to re-look at programs, whether you're, if you're building from scratch, you're kind of lucky. You get to go look at all the newest things that are out there. If you have an existing program, I still think it's time to look at how do we think about this end to end?
And if you have multiple two tools, it, there may be tools that can maybe like help you consolidate some of that and make it easier for the team too.
John Loya: Okay. Uh, beforehand we're talking about, you know, kind of being down on the floor at Black Cat. I think everything there had some sort of AI enhancement or, or AI sort of advertisement.
Whenever you're looking at tooling, how do you kind of, um, you know, how do you kind of judge their AI usage within that tool in terms of, is there an actual value here or is it just something that they've added on that allows you to query your search?
Rinki Sethi: Yeah. Is
Clayton Smith: it just like an
Rinki Sethi: open AI integration backend?
Yeah, exactly.
John Loya: Yeah.
Rinki Sethi: I think it's like, to me it's all about what are you solving for at the end of the day and how are you gonna help me? Like, and how is that AI feature or whatever gonna actually do something different. And I think like it's, it's when those, what's been interesting to me is when companies are building micro models right?
And like really focusing in on some of the, um, more unique data sets that they may have. Yeah. That to me is when you're gonna solve for this at large scale. And, um, I think also like who's building it? Like, are they just folks that have no knowledge about machine learning and ai and they're, or are they folks that are deep in that space that are really gonna be able to build something at a unique, in a really unique way?
Yeah. Um, and I think that's, that's how I would look at it. Um, I, you know, again, like you can tack on ai, everybody's an AI company now.
John Loya: Yeah. Yeah.
AI Data Use Policies: Keep Training Local (For Now)
John Loya: And how do you approach, uh, kind of those AI based tools in terms of how they're utilizing your data? Do you allow them to put into a larger model? Do you only allow them to use it locally within your own company?
What's, what's kind of your your, your take
Rinki Sethi: on it? My, my, my take is that I think you do have to restrict it to your, like, training within your own company, at least for today. Right. I think like until we figure out proper security around models and all the stuff that we need to be thinking about, we're still early days in that.
Um, and we're trusting these companies quite a bit on making sure they have good practices. I think like training on your own models is probably the safe way to go and that's kind of what we were doing.
John Loya: Awesome. Great. Well thank you so much Krin. I appreciate you, uh, you, you taking the time to chat with us today.
Um, everybody, uh, we do have a couple of sessions after this, so please stick around and thank you again. Appreciate your time. Thanks.
Rinki Sethi: Thanks
John Loya: for having
Rinki Sethi: me, John.
John Loya: Alright.
Sponsor Segment: Beyond Labels—Data Lineage as Modern DLP
Cole Padula: For those of you familiar with data security, you know that data loss prevention or DLP meant one thing. Label sensitive files then enforce rules based on those labels, and it worked well when data stayed put. That is, but today, data doesn't sit still. It flows across apps and lives in places traditional DLP cannot see, and this is a major problem considering that labels are static, they fall off, they get applied inconsistently, and most importantly, they fail to reflect the context of how data is actually being used, which means your legacy DLP solution is making decisions outdated or missing information, and that creates risk for your organization.
Unlike labels, data Lineage is the ability to track a piece of data from its origin through every point that it travels across apps, formats, users, and systems. It's not just seeing a file at at one moment in time, it's understanding the entire journey, where it came from, how it was changed, who touched it, and where it ended up.
Think of it like a GPS for your sensitive information, whether that data was pasted into a message embedded into a slide deck, or split across multiple tools, lineage shows you the full past. This context is what makes modern data protection possible. It's how you distinguish normal behavior from risky activity, even when the content itself looks different.
The bottom line is Lineage closes the visibility gap. Imagine that a confidential file gets copied into Slack, then pasted into a notion document, and eventually upload it into a cloud tool. Traditional DLP loses track after step one. Lineage sees the entire chain and understands the risk At every point.
When a developer pulls source code into a Team Wiki or a sales rep, paste customer data into a shadow application, labels will not catch it, but Lineage will because it follows the data even when it changes form, even when it blends into new content, even when it looks like something else entirely. This persistent context means fewer false positives, faster investigations, and stronger protection that actually fits how people work.
Analysts will get fewer alerts with better signal users aren't forced to classify everything manually, and compliance teams gain a clearer auditable trail of how the data is handled.
If you want protection that evolves with your business, it's time to look beyond labels and follow the data. But seeing is believing and Cyber Haven can show you how. I encourage you to schedule a live demonstration with us and experience how lineage reveals what's happening inside of your environment.
No guesswork, no noise, just clear, real time visibility and control over your most valuable data. Don't settle for blind spots. See the whole story with Cyber Haven. All you need to do is just visit the link below to get started.
New Talk Begins: Beyond Compliance—Securing Financial Data in the Age of AI
Cameron Galbraith: Thanks everybody for joining us today. My name's Cameron Galbrath. I'm the Senior Director of product marketing here at Cyber Haven. Uh, we're glad to have you join us for the Day of Defense Forum, and I'm excited to, uh, have a conversation now with Josh about beyond compliance, securing financial data in the age of ai.
And so with me, please give a warm welcome to Josh Staber. He's the CISO at Vista Equity Partners. Vista, of course, has, uh, over a hundred billion dollars in assets under management. They've played a major role in the growth of many of the most iconic companies around. And so Josh, I'll give you a few minutes to introduce yourself.
Josh Stabiner: Yeah, thanks Cameron. I'm, uh, excited to be here, uh, and, and talk about this, uh, very, very relevant topic. So, as you mentioned, I'm the, I'm the CISO at Vista, uh, equity Partners. It's a, a private equity firm based in, uh, Austin, Texas with the offices all around the us. Um, prior to Vista, I was CISO for General Atlantic, another large, uh, PE firm.
Uh, and prior to that the CISO at Pioneer of Capital Management, I, I started my career pen testing and consultant, uh, and somehow ended up the, the CISO for alternative investors.
Cameron Galbraith: That's awesome. So you, you've probably seen from bottom to top all the, the challenges and concerns as the industry has evolved.
Josh Stabiner: Yeah, it's, it's been quite a ride the last, uh, what, 25 years almost, uh, spent my whole year, my whole career in cybersecurity and, uh, yeah, I've seen all the ups and downs.
Cameron Galbraith: That's awesome. Well, great. Well, let's dive right in. I know we've got about 30 minutes total. Um, lots of great topics I want to get to.
I'll start with a question that's, um, maybe a little bit provocative, but we'll, we'll sort of kick the conversation off with this, which is, why is regulatory compliance no longer sufficient for protecting financial data in this new era of ai?
Josh Stabiner: So, I don't know that regulatory compliance was, was ever sufficient in and of itself to, to protect data, right?
I mean, if you, if you go back to some of the big breaches of, of your, like target for example, um. Target was PCI compliant when, when that, when that breach happened, they were doing card scanning and and whatnot. So I, I think you have to look at the regulations and the spirit of the regulations as a really good guideline, a place to start.
I think they do capture sort of the protective control framework that we ideally like to have. Um, but it's certainly not the end all, be all. Um, I don't think AI has really, really changed that. Uh, I think that's just sort of how it's always been.
Cameron Galbraith: Hmm, that's a very good point. So then maybe I'll ask, uh, a related question.
AI Adoption Risks: Shadow AI, Entitlements, and Data Fidelity
Cameron Galbraith: So how does, well maybe to frame the question, you know, every industry has seen major adoption of various AI tools from in-house, sort of homegrown stuff to using off the shelf tools. So that increase in adoption, uh, how does that AI adoption create maybe new risks for financial institutions?
Josh Stabiner: Yeah, so that's certainly more provocative, right?
The, the ai the way it's, it's the pace at which it's being rolled out today sort of reminds me back in like 2007, 2008, when, when the iPhone first came out and every executive wanted to get email on, on their iPhone, uh, and all the security professionals were like, whoa, whoa, whoa. Hold on. We need to figure out how to make this safe.
Uh, and. For all the security professionals who are watching this right now, you know, we lost that, that race, right? The, the executives wanted that email. They wanted access to corporate resources, and they got it. And we had to layer on security after the fact. And so we ended up in the boat that many of us are still in today with sort of these wonky MDM solutions that aren't really implemented the way we'd love them to be.
Um, and AI reminds me of that. There's, there's a lot of promise in AI and what it's able to do in terms of things like personal productivity, uh, automating business processes, uh, assisting with, with, uh, software development, um, eventually making business decisions and the exploration of AI and the realm of the possible is moving at this pace that is far faster than security's able to keep up.
And so we've been sort of forced to look at this holistically and say, what are all the risks? What are all the threats? And how do I using my existing tool base address as much as I can? And then what are the gaps that remain where we need new innovative security products to come into the market to address that?
Um, so it's, it's exciting. Um, but it's also a little scary. You, you know, that you're sort of, you're, you're in un uncharted waters here.
Cameron Galbraith: It sounds almost like, um, you know, in a way it's sort of a another wave or iteration of, so you mentioned like the iPhone, so the Bring your own Device challenge. I think we had a similar challenge with Shadow IT and SaaS and, um, everything's sort of cloud ten-ish years ago, maybe a little bit before that.
Now it sounds like you're sort of in the era of shadow AI almost.
Josh Stabiner: I think that's one piece of it, right? So look, there's, there's a tremendous number of tools that are available both to enterprise and consumer. Um, and I think your employee base at any company, um, they want to play with, you know, shiny new toys.
Especially when there's this promise of, oh man, I don't have to write that 30 page document. I can just sort of tell the model I want A, B, C, and, and it spits it out for me. And then I just do a little bit of editing at the end and Wow, what a time savings. Um, so there's, there's sort of this, um, excitement not just from the technology professionals who obviously want to use it, but from everyone, right?
Even people who aren't super tech savvy can use Chat GPT or Microsoft Copilot or, or Claude. I mean, they can, they get a lot of benefit from it. Um, and the new sorts of risks that come out of it are, I mean, mostly data related, right? It's a way to get around and not necessarily in a deliberate fashion, but to get around a lot of the access controls and entitlements that we define to data.
So sort of which users should have access to which data elements, um, what they're allowed to know versus what they're not allowed to know. Um, and there's also a data fidelity issue, right? Especially if you're using AI to make decisions. Um, you need to know that the data the AI is analyzing is correct, that it's high fidelity data, uh, that it's not using, you know, if, if it's using my inbox for example, it's not scanning all the spam in addition to all the corporate info that I really want it to be using, right?
Because it's gonna make bad decisions. Um, so there's, there's a bit of that data fidelity issue. There's a bit of an an entitlements access permission issue, uh, and being able to inherit permissions that exist if you've set them correctly. Leverage things like sensitivity labels and have the models respect those.
Um, and you need sort of the models to be able to interpret that. If I take this piece of data, uh, from, you know, this folder that user A has access to and this other piece of data from this other folder that user A does not have access to. Right. And combine them into something. I create a new data element that perhaps user C shouldn't know about, but I've now created this, uh, determination or this, uh, derivation of the data.
Um, so there's a lot of, you know, how do you control that? How do you control that? The AI is, is synthesizing and, and making conclusions and presenting those conclusions when those conclusions may contain information that the user who made the request is not really privy to. Right. Mm-hmm.
Cameron Galbraith: It sounds like it's a bit of the, there's the classic garbage in, garbage out just for quality.
Josh Stabiner: Yep.
Cameron Galbraith: There's the scope and permissions like you're talking about. Um, you mentioned too, like how it sort of lowers the bar or the barriers to entry rather for any employee to do lots of interesting things with information.
AI as an Insider Threat Vector & Why Data Governance Is Now Urgent
Cameron Galbraith: So in that context, how do you think then about insider risk and how people are using data within the organization use misuse?
I'd love to hear your thoughts on that.
Josh Stabiner: Yeah, I mean, so AI definitely presents a new vector, a new threat vector for insider threat. Right. And sort of as I was alluding to before, if I want to try to get access to, I don't know, bonus and salary information, right. I might ask, Hey, chat, GPT, you have access to all this data at the corporate backend.
Um, what was my boss's bonus this year? I'm, I'm curious, uh, and chat. GPT might rightly say you do not have access to that information. Um, maybe because it inherited permissions or you've got your MCP server set up properly, or there are many, many controls that you can layer in. Um, but I might not ask that question as, right?
So here's where the insider threat becomes, um, more pronounced is that there are many, many other ways I can ask that question. Maybe I can ask, what's the average of my bonus and my boss's bonus? And maybe chat, GPT will respond to that question, right? It, it shouldn't because it knows it needs to access this data that I'm not entitled to in order to do that calculation, right?
But we've seen all sorts of, uh, you call them exploits. I don't know if exploit's the right word, but, but these ways of leveraging the model or the GPT or, or whatnot to query information that maybe you're not really supposed to have now, that's part of what makes the AI so powerful. Right, is that it can do that sort of synthesis and it can think like a human being.
And if it needs to get a piece of data in order to answer a question, it knows where to find it and it goes and gets it. And that's, that's wonderful. But it creates these problems for us on the security side of things is, man, it was much easier when all I had to do was protect that spreadsheet that had all the bonus information in it.
Now I've gotta protect against this entity that is gonna query all different sorts of, of ways of trying to get that information into the hands of the person who requested it.
Cameron Galbraith: So then from a security standpoint, so what role does data governance play in securing these AI systems and AI driven systems?
Josh Stabiner: Yeah, so it's more important than ever, right? Because look, in the old model, before, before we were connecting our data stores to all this advanced ai, uh, data lineage, data governance, all of that was important. You had to know what your crown jewels were. You had to know where they were stored. You had to know, um, who had access to them and who should not have access to them.
And the only way to do that is to be very, very organized about your data, in particular your unstructured data, which tends to sprawl. Um. Now with AI in the mix, as I mentioned, we're adding all different ways of querying that data. You need to be able to build an entitlement model that the AI will respect.
And the only way to do that is to know where the data is, where it came from, what's the most up-to-date version of it, who should have access to who shouldn't. So all these things that before we sort of said, yeah, that's a project, it's such a big long effort. Um, we'll, we'll take our time with it. Now all of a sudden it's like, if you don't have that done, you're playing catch up.
Right. Um, and look, it, it, it's, like I said before, it's, it's very exciting to, to have this ability and, and the, the, the newfangled, um, way of sort of consuming all the data. 'cause there's a lot of it, but these dangers are real.
CISO Playbook: Hygiene, Reuse Existing Tools, Then Add New AI Security
Cameron Galbraith: And then what are some of the, so okay, we sort of identified like there's these major challenges.
What are some of the strategies that CISOs, security leaders, folks in your kind of role. Can adopt to strengthen their defenses, particularly against like insider misuse of financial data or, or some of these other sensitive uses of data.
Aman Sirohi: Yeah.
Cameron Galbraith: Um, you know, how, what are some of the, the, those strategies, how should they approach these problems?
Josh Stabiner: So it's, it's a, a multi legged stool here, right? The, the very first thing is hygiene, right? Make, make sure, as I said before, you've got a good handle on data governance, right? And so everything we just talked about in terms of access management entitlements, um, potentially sensitivity labels, but knowing where your data is, knowing what's important to you, classifying it, and having ways to sort of, um, protect it in a traditional sense, right?
That's the table stakes. Then the next thing is look at your existing capabilities and see what can be used to protect AI or, or protect, I don't know, protect with ai, protect against ai. Um, but I think that's a step we often miss, right? It's, it's, uh, one, one of my favorite movies, Apollo 13. There's a, there's a, uh, part where they're, the astronauts are suffocating, or sorry, um, the astronauts are trapped in, in, uh, in the module, the, the, um.
The command module and they need to use an engine and, and they need to correct course correct. And the only engine available is on the lunar module that was supposed to land. And, and Ed Harris, who's the, he's playing the, the mission commander is asking the person who designed the module, Hey, can we use the engine to course correct the ship?
And he says, well, that's not what it was designed for. And, and Ed Harris comes back with the best line, which is, I don't care what it was designed to do, I care what it can do. We have a huge number of security tools in our environment that were designed to do one thing but can be used more, much, much more broadly.
So I think it's, it's important to take stock of that and look at what you're using for DOP, what you're using for E-D-R-X-D-R, um, what you're using for potentially CASB or, um, you know, your, what, what's your, your internet backend, um, uh, you know, if you're using like a private network or, or something like that, then those tools can all be adapted.
You can, you can write additional rules, you can write additional signatures, you can, um, you can run scans like there, there are many, many things you can do to protect your data in the midst of all this AI to see what's going to, and from the models to inventory, which models are being used to sort of gather this information about the problem.
Without doing that, you're, you're sort of blind, right? And then the last part of the, the three-legged stool is this is where the new shiny objects come in. I think that's the, the stuff that we're gonna start to see in the security space. In fact, we've already seen every, every product has AI built in, but even more so, there's a whole suite of products now that are two, quote unquote secure ai.
Um, I think without doing the first two things, without getting the hygiene right first, and then two, taking stock of your existing environment to see which capabilities you already have. If you go right to the shiny objects, you're gonna leave gaps behind. The shiny objects are, should be to fill whatever gaps exist based on what, what you already have.
Um, but every conference I go to, every, every booth along the way is some new AI security product and, and a lot of 'em do really, really cool things. Uh, but you gotta do the first two bits first.
Cameron Galbraith: Yeah, I, I think it is so important to not neglect the fundamentals and the foundational basic kind of stuff.
And, uh, I love that story about Apollo 13, the need to, like, sometimes you have to adapt and you have to adapt very quickly. Um, so to touch on one of the things that you mentioned at the very end of, um, your remarks there. So, uh, third party AI vendor risks.
AI Third-Party Risk: Same Fundamentals, New Threat Models
Cameron Galbraith: So sort of an emerging area, but you're saying you need to look at like what are the tools that are being adopted from a security standpoint, but also in the environment.
Um, so how should firms go about assessing and managing that third party risk with whether it's a new AI chat tool or a AI enabled coding, IDE or, or anything like that? Um, is there a different approach and if so, what is it?
Josh Stabiner: I, I don't think there is, right? So I, as new and exciting as, as AI is, it is just another piece of technology and we treat it like another piece of technology.
It's, it's no different from any, you know, of the big, um, groundbreaking tech advances of the last couple years when, you know, we start, I said mobile devices, we talked, it was big data. It was a thing for a while. Like, um, machine learning, like all of that needs to be secured. But the way we go about it doesn't really change.
You still need to do your architecture reviews. You still need to do your threat modeling. You should still do your third party risk management. Make sure the vendors you're working with are reputable. You should still make sure you've got B-C-P-D-R. Um, this is all like sort of traditional third party risk.
Um, I don't see anything about AI that makes it different other than your threat models are gonna be different and the architecture or uh, review that you do is gonna look like a different architecture, but you're still going through those processes. Um, the processes themselves don't necessarily change.
Cameron Galbraith: I, that probably answers, um, what was gonna be my next question, which is how can security teams embed protection at every stage of AI adoption? It sounds like what they're doing today, assuming they're doing the fundamentals well, but continue following the process and the fundamentals.
Josh Stabiner: Yeah, I agree. And, and that's, that's why those are called fundamentals, uh, because they scale.
Cameron Galbraith: So then what, um, so if so much of it is, you know, it, we have to take the same approach as we've taken in, you know, prior waves of, uh, changes in the environment, changes to the security landscape, whether it was the bring your own device or move to cloud, um. What are, in your view, like, in your view as a ciso, what are some of those emerging, like truly new and emerging AI enabled threats, uh, that would pose a risk to alternative asset managers like yourselves?
AI’s Real Difference: Exploding Attack Surface Across Every User
Josh Stabiner: Yeah, I mean, so it comes down to the use cases and it comes down to how widespread it's right. I think those are, those are the two, uh, variables that, that really determine how threatening or how risky something is, uh, when it comes to a new piece of technology. And what's unique about ai, if you, if you think about all the other, uh, pieces of, of new tech that, that you just mentioned, right?
Mobile devices, bring your own device, um, you know, big data, machine learning, they were all relatively contained, right? So when it came to mobile devices, every user's got one and they're all pretty much the same when it comes to, to big data. It's really your tech and engineering teams who are working with that.
When it comes to things like chat, GPT, it's, every user in the org is using it. They're all using it on a different set of data. 'cause they all have it, they're all entitled to different bits of data. Um, and in many cases it's not just chat GBT, some users prefer. Anthropic, some users prefer Microsoft. Some users are using very specific pieces of AI that are embedded in the, in the technology that they have, uh, access to.
So product-based stuff like Einstein, et cetera. Um, so the attack surface is, is much, much, much larger, right? That's, that's number one. Um, so that's what sort of makes this different. Um, if it's, if it's, you know, there's one thing that really makes it stand out. Um, and then, so I think focusing on, on that tends to be the thing that security professionals look at.
First.
Stop ‘Search & Destroy’: Build Guardrails and a Scalable AI Governance Framework
Josh Stabiner: It's, okay, let me get a handle on the, quote unquote shadow it bit of ai. I only want my employees using sanctioned ai. And at our company we only allow maybe copilot, right? And so I wanna, you know, search and destroy every chat GPT connection, every Gemini connection, every clawed connection. Um, I think we'll lose that battle quickly if that's the approach that we take.
We'll, we'll lose that battle pretty quickly too, because what's, what's gonna end up happening is one, users tend to find a way. Right. So if I can't use it on my work pc, I'll use it on my phone or, or whatnot. And now I have to get data from the corporate environment onto my phone because I really want to use this particular piece of AI or this hugging face app that I downloaded or, or whatever.
Um, so you don't want to, you don't wanna put yourself in that situation. Um, two, they'll just become sanctioned, right? So, so
Nic Acton: you're
Josh Stabiner: gonna end up with users making requests for various different business cases to use those different models. So you're gonna end up in the same boat anyway. Um, so it's better to think more holistically about AI governance and how you're going to put up the guardrails, right?
That means instead of saying this particular GPT or this particular AI is approved, and this one is not, say, look, anyone that accesses these resources must have these controls built in. It must log to my sim. I must be able to access some sort of compliance API where I can make sure users are using it in an appropriate way.
Um, I must be able to leverage my DLP tools so that I know what, what types of data are being uploaded to it. Um, and I can build alerting around that. And so when you build a framework to say anything that matches this framework is good by us, and anything that doesn't, we've gotta have a conversation about.
You make not only your life easier, but you make it easier for the users to start exploring sort of what's possible with AI when it comes to financial services. Um, I don't know that it's any different from any other industry, it's just that what's, what are the crown jewels is different, right? So for an asset manager or an alternative investor, your investor lists your positions, your, you know, that's what's sensitive.
Um, if you're a manufacturer, it might be the recipe for whatever it is you're manufacturing, right? But broadly speaking, it's the same problem. It's just the data itself is what, you know, what differs. Um, and so I think focusing on how do we build the gar, like I said, the governance problem more, more broadly, um, it, it puts you in a position to move fast.
And I, like I said at the very beginning when it came to mobile devices, it was the speed that that hurt us. It was the fact that the users outpaced the security professionals. Um, and so now we're at this point where the security professionals need to really think of themselves as enablers of the business and support that speed instead of trying to tamp it down.
And in order to do that, you've gotta give your users a framework that they can work with to say, look. I'm happy for you to use any AI tool that you, you want, as long as it meets these criteria, we're good to go. Um, users I, I find will respect that, right? They, they appreciate that, Hey, we're, we're trying to work together here.
We're trying to give you that bump. Um, versus if you just tell them no, they will find a way around, right? They will, they will find a way.
Cameron Galbraith: It's, uh, it's, I mean, to do another movie reference, it's like the Jurassic Park where you goes life. Yeah, I was thinking that
Josh Stabiner: too. I
Cameron Galbraith: just didn't wanna say hide the way, right.
That's right. Yeah. Um, I, I love that idea where you're talking about putting some guardrails in place, having that process. Um, who are some of the other stakeholders that you bring into that? Mm-hmm. So, like, as an organization, I'm guessing you've probably gotta get some buy-in for some, from some other folks as to what those guardrails are.
How do you go about creating that from a, you know, just, just sort of operationally getting something like that in place?
Josh Stabiner: Yeah.
Getting Buy-In: Compliance, Privacy, Legal + Use-Case Intake
Josh Stabiner: So for us in, in the world of financial services, the first org, the, the first business unit you have to work with is compliance. I think any, any, um, highly regulated industry, you're gonna have security threats.
You're also gonna have compliance threats. You, you need to make sure that the users who are using whatever product you're trying to, to deploy are using it safely, and they're using it in the right way. Right, appropriately. Um, so number one is to come up with a list of requirements along with your compliance team, your privacy team, um, and legal in general, right?
Just to make sure that you're sort of doing everything by the book. From there, then it becomes sort of use case collecting, right? Who wants to do what with these tools? HR is gonna have different use cases than your finance team. They're gonna have different use cases than the front office folks. Um, so you need to sort of gather a, what are you gonna use this for?
Because you also want to embed security into the process, not just into the tech, right? Um, you know, there's, there are I think some organizations that will roll out AI to everyone as sort of a science experiment and say, Hey, figure out what you can do with it, and then come back and tell us what, what looks good.
And then there are other orgs that say, no, we will only approve it for a use case that's been reviewed and signed off. Um, and then of course there's all sorts of flavors in between, regardless of where you lie on that spectrum. I think it's still important for the security folks to understand the use cases, right?
Even if you're in the science experiment world, you gotta get a handle on. What, what's the value that each of the various business units is getting out of this? Why are they getting it? And I can embed security into the process, which look, when you end up with 50 or 60 use cases down the road, which is not too far away, um, you're gonna be happy you did that.
Cameron Galbraith: Yeah, that's, that's great advice. I mean, and to your point, when we, um, when we did a study earlier this year on AI adoption and you know, the number of tools that are, um, popping up and coming into the environments, I mean, we're now tracking the risk on over like hundreds of different tools, tools, not even counting the ones that are embedded in other ones.
And so I think to your point, trying to keep track of all the individual ones, you know, that could be a wild use chase, but if you've got the framework for how to evaluate new ones, it sounds like that's a much more scalable and effective process.
Stuck Between Lockdown and Wild West? Start Small and Learn by Doing
Cameron Galbraith: Um, so, so maybe last question, 'cause I know we're coming up on time.
Um, you mentioned, you know, some organizations there are one of the spectrum where like nothing is allowed. They've locked everything down somewhere on the other end of the spectrum where it's kind of the wild west and everything is allowed. But in either case, they're probably struggling to take a step forward and sort of move towards that balance of security and innovation.
So what would be your advice to those organizations that are maybe struggling to act? They know they gotta do something, but they, they're not even sure where to, to start.
Josh Stabiner: Um, yeah, that's a really good question. So, look, I've seen some amazing use cases with, with ai. Um, I mean, if I'm thinking just on my security team, our, our head of architecture and engineering built a GPT to do threat modeling, right? You throw an architecture diagram, it, it spits out a threat model, leveraging the Stride framework.
Um, that was really cool and that, that saves a lot of time. Is it as good as if we threw it up on a whiteboard and, and put four or five of our heads together and, and came up with a threat model? Maybe not, but it's, I've gotta say it's probably like 80, 85% of the way there, and it gives you a good starting point to, to refine, right?
Um, so for those orgs that are sort of like struggling with, with that next step, what I would say is pick some small, repeatable task and just try it, right? So now maybe I'm, I'm moving more towards the science experiment, end of the end of the spectrum, but just try it. Um, you'll find that one, as a user of these tools, it gives you a better appreciation for how you're gonna protect them, right?
So you can't be afraid of it. You've gotta, you've gotta be a user as well as the protector. Um, but two, you'll get some legitimate gains. From it. Right? Um, you know, is it this, the pania that we've all been promised? I don't think so, not yet. But you can see the inklings of that. You can see the beginnings of that, and there will be those killer use cases that come out, whether it's three months from now or three years from now.
Um, there's gonna be something where you say, oh my goodness, this is what everyone was talking about when they said, Hey, I was gonna change the world. Right? Um, and so start playing with it now. Try to do something interesting. It's actually not that difficult. Um, and yeah, you'll, you'll just get a new appreciation for it.
So, um, yeah, I don't know if that really answered the question, Cameron. Like, but I do think for those that are struggling, it's just, you just gotta go. Right? It's, I'll quote another movie and say, you know, stop preparing. Just go.
Cameron Galbraith: Well, yeah. You know, as they say, the, the journey of a thousand miles starts with a single step. Right. So just take, take that first action. Uh, well, wonderful.
Closing the Forum + Transition to Shadow AI Keynote with Joe Sullivan
Cameron Galbraith: Well, I know we're up at time, so Josh, super appreciate your insights, um, sharing your experience with us here. Thank you all that are watching for joining us for the Data Defense Forum and, um, looking forward to the next one.
Thank you so much.
Josh Stabiner: Thank you, Cameron.
Nishant Doshi: Hello everyone. Uh, welcome to the closing keynote session for the Data Defense Forum. Alright, um, I'm very excited, uh, to have you with me, Joe Sullivan. Uh, Joe needs no introduction. Uh, everyone in the security world knows Joe, and, um, but I thought it it would be a good idea to have Joe just say a few words, uh, uh, and, uh, and we can start the session.
Joe.
Joe Sullivan: Hey, thanks for having me on. Uh, I think this is an important, important conversation. Uh, I've been involved in cybersecurity since the 1990s and, uh, started out, uh, with the US Department of Justice, but, uh, then moved on to build security organizations pretty much from scratch at Facebook, then Uber, then CloudFlare.
Uh, and I, I currently run a security consulting business, uh, and do some advising on the side and support, uh, a VC on investments. Uh, but the thing I've been doing for the longest time is, is thinking about how do we secure organizations from the latest risks that are coming at us? And, uh, so I'm excited to join you for this conversation.
Nishant Doshi: Thank you, Joe.
Shadow AI in the Enterprise: Balancing Innovation with Control
Nishant Doshi: Today's session is, uh, talking about shadow AI in the enterprise. How do you balance access and control? Um. The rapid rise of, uh, generative AI tools in the workplace has given a new challenge, shadow ai. Um, today's conversation explores how unsanctioned AI usage introduces data security and compliance risks, uh, in your organization.
Um, we are gonna be talking about not just shadow ai, but a lot of different types of, um, actually no, I'm, I'm talking from the wrong, uh, document. Let me just close that. Okay. Hold on. Let me, can we just do this again? Um, so, um, okay. So today's session is gonna be about shadow AI in the enterprise. How do you balance access and control?
The rapid rise of generative AI tools in the workplace has given a new challenge for security teams. Shadow AI has become a big challenge amongst all the other AI challenges like model security, data access control, and others. This session, uh, we are gonna be talking about the risk of AI usage, how it impacts data security and compliance, and how enterprise and security teams can balance innovation while maintaining control and implementing guardrails.
So, very excited to jump into our first question, Joe, what are the different types of risks and challenges that, that you see enterprises struggle with? When they introduced AI usage in their environment,
Joe Sullivan: I think this is the, this is important starting part. We as security leaders and security professionals, uh, we're not just sitting here thinking about risk.
We're thinking about opportunity for our companies and we need to build guardrails to allow the company to have the ability to innovate, to move fast. And if you step back and you look at the world of 2025 going into 2026 that we're in right now. The world is changing really fast. It's, it's actually the most amazing, exciting time I've ever seen in technology.
And that innovation is happening on a weekly basis. If we stop paying attention for two weeks, AI has made big leaps. And so, um, we need to set up an environment where the people inside our corporations, our enterprises, our businesses, can move really quickly, uh, using the new technology, experimenting and figuring out the ways that they can use it well.
Because if we don't give them the freedom to do that experimentation, our company's gonna fall behind, uh, to others that are being more aggressive. So we start with that premise that we need to create an environment where our people can explore and innovate. And that means we've gotta think about all of the risks of implementing ai.
We need to recognize that our employees are going to go hear about something on the news and download it and start playing with it. We've gotta be able to manage the risk associated with that. So it's, it's a scary time and it's an exciting time. Um, when I step back and think about the risks of AI in, in the enterprise, the, the first thing we need as, as security is visibility into what's happening.
So we can't put up policies that say no use of ai. We what we need to kind, kind of promote transparency in the use of AI as, as a starting principle. And then obviously we have to trust but also verify. And that means we need to run some tooling to figure out where the AI is actually being deployed and what it is accessing in inside our environments.
Nishant Doshi: Thank you, Joe. Uh, this, you know, this segues into, uh, my next question, which is, um, what, you know, how do you mentally map, uh, these different types of risks when it comes to ai? You know, there, there's a whole spectrum of risks, the whole spectrum of AI usage. How, how do you sort of mentally map this?
Mapping AI Risk: Outside-In vs Inside-Out (Prompt Injection & RBAC Breakage)
Joe Sullivan: Yeah, I think that, um, it's interesting from a technology standpoint, AI introduces some new challenges, some concepts that we didn't really have to internalize in the past.
But from a risk standpoint, uh, like we can think of AI as a version of software. It's a ver it's code. It's executing and it's doing things. And what we know from, you know, many, many years of doing information security work across lots of different contexts is we have to worry about outside in attacks and inside out attacks, to put it simply.
Mm-hmm. Um, we need to be thinking about like. Does, does the AI create a, an expose exposure to the outside world? Uh, and so the new version of an attack that we, we talk about now from an outside in attack is something called a prompt injection attack. So we have to be worried that if we have a, an AI that's customer facing, that it'll be subject to prompt injection attacks.
And on the inside we need to be worried about the AI have, uh, that we deploy, having access to and kind of blowing up our, our model of role-based access control at the highest level. Those are the two ways I think about it. And then, you know, there are lots of examples of, of smaller vulnerable areas around ai, but we have to start with, uh, is the AI accessing internal data that's sensitive and can we make sure that it's protected internally?
And then as it is exposed to the outside world, uh, can somebody essentially game the code to get access to something they shouldn't have?
Nishant Doshi: Yeah.
Lessons from Cloud/SaaS: Discovery, Data Access, Config, and Logging
Nishant Doshi: As you sort of look back, uh, at, at, um, at cloud and SaaS security, are there lessons that we can learn from, um, from protecting, uh, and, and, and, uh, protecting that world that can apply in the AI world?
Joe Sullivan: I think very much, uh, the, the journey we've gone through on cloud infrastructure and SaaS apps, both are, are very kind of. Um, insightful in terms of what we, we should be thinking about. Um, like I said, there are some, you know, some, some new kind of threats that come up with ai, you know, like prompt injection I mentioned and, and data poisoning or risks.
Uh, but fundamentally we're, we're typically talking about rolling out a third party piece of software that we don't have, know the whole providence of, and we don't know all the risks of when we, when our company first starts deploying it. And so if, if we go back a decade and, and start thinking about what were we doing as security leaders when, uh, when cloud infrastructure started rolling out and when, uh, SaaS app adoption kind of went through the roof?
Well, the first thing we focused on was discovery. Uh, and then we, uh, and then we started thinking about the vulnerability and the code of those apps because so many of them were facing the internet. And then third, we started thinking about kind of what, what, what data they have access to and how do we, um, how do we figure out and make sure that, that they're only accessing the right amount of data.
And then finally, uh, you know, there was, there was a bunch of effort around configuration and making sure that if there is a, a compromise that we have good logging in place so that we can, uh, kind of detect the attack as early as possible.
Nishant Doshi: That's great. Can you, can you talk a little more about.
Data Minimization & Fixing ‘Fake RBAC’ Before AI Indexes Everything
Nishant Doshi: You know, the role data access controls and data minimization play when it comes to controlling AI usage?
Joe Sullivan: For sure. Uh, you know, I, I've always thought in security that we should be paying more attention to, um, what data our organization collects and who has access to it. It's one of the hardest challenges that I see in companies. Um, like historically, we, we've deployed tooling that we called role-based access control.
And more and more I believe that the deployment of new ai, uh, implementations inside enterprises is showing us that we never, ever did role-based access control. Well, um, you know, I give you like a, a simple high level example of that. Um, a lot of times when we, uh, when we use a document, uh, system, say like Google Docs or, you know, uh, a Microsoft version of it, um, we share documents internally inside our enterprise.
And a lot of organizations set the default as, okay, anyone inside the organization can open up that link. Uh, there was probably a 90% of those documents, things that most people inside that corporation shouldn't see. But there, there was a sense of, of privacy and security that came from the obscurity of you needed to know the actual URL to go find that document.
It wasn't, uh, you know, you didn't have an AI tool that was scraping through all of your internal documents, organizing them and making it really easy for someone to, you know, in natural language. Conduct a query. Uh, now, you know, we've seen real examples of employees going into their internal AI and saying, tell me about the, um, upcoming layoffs.
And, you know, a document that the HR team might have been working through with finance on upcoming layoffs, uh, would've been completely invisible to the rest of the company five years ago. But because of AI indexing and, and making it accessible, uh, sensitive internal things can get exposed inside the corporation that that just weren't before.
Uh, because it turns out the, the, the levers we were using for role-based access control were just a little too blunt and not sophisticated.
Nishant Doshi: That's, that seems like a hard problem to solve. Would you be?
Joe Sullivan: It is. Um, and that's why we, we've probably never done it really well in the past. Um, like the, the optimist in me says, well, if the AI can help us find it, the AI can also help us secure it.
Uh, and so we have to teach the ai, you know, more modern concepts of role-based access control. We need to, uh, use the AI to get visibility, uh, and then kind of, uh, ratchet it down appropriately before we use, uh, you know, expose that AI tooling to the, to the whole populace of our employees.
Gaining Visibility into Shadow AI: Turn Up ‘Good AI’ + Endpoint Coverage
Nishant Doshi: A lot of enterprises are struggling with getting, gaining visibility into shadow AI usage.
Um. How should, how would you think about this problem and what should security teams, uh, pay attention to?
Joe Sullivan: Yeah, so I think first of all, we should go in with the assumption that our employees are using AI every day. Um, you know, studies show that there's a lot of experimentation going on, and I believe that humans are, you know, essentially very curious about this.
And if I, you know, as I sit at my desk and I'm doing my work, I'm constantly reminding myself I should be using ai, I should be trying to figure out ways to kind of reduce the toil in my workload. The beauty of a lot of AI for us as workers is that it can take away the repetitive tasks that we don't love and it can free up our time to do other things.
So I, I think we have to assume that our employees are gonna be experimenting with. Um, and so we need to encourage our companies to not shut down ai, but turn up. Good ai. So instead of, you know, our employees having to go try the free versions, uh, that don't have enterprise controls on them, we should be encouraging our company to offer paid versions of the best AI products for our employees.
You know, spend a little money up front to get the, the enterprise versions with the right controls, the right, um, policies around, um, absorption of, of enterprise data and things like that. And then, um, and then we need to invest in making sure we have full visibility into what's happening. If the AI tools are integrated into our single sign-on, if they're part of our enterprise kind of suite, it it, it gives us a lot more visibility.
Um, but at the same time, there's so much experimentation going on right now. There's so many products rolling out on a weekly basis. We have to ramp up our ability to get visibility into all the different tooling that our employees are using. And so partially we can get that visibility through the same tools that we're using in the past to, to, to keep track of the shadow SaaS apps.
Uh, you know, it's looking at, uh, the expenses, the, you know, the licenses it's looking at the software that's running on endpoints. Uh, it's, it's a whole kind of combination of things. Uh, and so, you know, I think that we as security leaders need to think holistically and try and, um. Think a little bit outside the box.
Sometimes we have to use security products in ways that we didn't think that they were built for, but that do help us in these, in these new, uh, risk areas.
Nishant Doshi: That's great. How important do you think having some protection on the endpoint is, is pertinent protecting ai?
Joe Sullivan: Yeah, look, uh, you know, we, uh, we talk about concepts like zero trust all the time.
Uh, we, we talk about, you know, the vast majority of the compromises that do happen, uh, come through employee machines. So I, if, if we didn't already need, uh, visibility into endpoints with the introduction of these new AI tools that are running on the endpoints, you know, most of the time they're desktop applications.
Many times they're running in browsers as well. Uh, and employees have the choice. But it's really important for us to get a level of visibility into what's happening on the endpoint in terms of new applications, uh, and, and how employees are putting data and other sensitive things into them.
Nishant Doshi: In a world that is changing every week, how should security teams think about keeping up with a AI and all the advances that AI bring to the table, all the new tools at the same time, all the new risks that come with.
Joe Sullivan: Yeah, it's, uh, you know, in, in some ways as a security person, you can look at this new world of AI and wanna pull your hair out because it, the change is happening so fast and it's not like we were perfect at security before the new risks came along. And so it, it feels like a bunch of new work, uh, if we look at it that way.
But on the flip side, um, I think we need to be optimistic at this time. We need to believe that, um, well, I, I put it this way. We need to be the kind of people who embrace change. We have to be the kind of people who wanna experiment with AI ourselves, like the, the, the bad guys are always gonna be early adopters.
We need to be early adopters as well. And AI changes the game. So fundamentally it should be something that we get excited about, that we wanna learn about, that we want to experiment on, that we want to understand really well. Because like, I, I fundamentally believe in the world of, of offense versus defense that we talk about all the time in cybersecurity.
AI can help the defenders even more than it can help the attackers. As long as we embrace using it and thinking about it and experiment with it and push it, uh, to its capabilities.
Nishant Doshi: That's great.
Who Owns AI Governance? Shared Risk Across Security, Business, and Employees
Nishant Doshi: In your view, who should own AI governance, IT security compliance or the business? And how do you foster cross-functional alignment?
Joe Sullivan: Look, no security risk has ever been well managed. When the security team went off in the corner and said, this is our problem and our problem alone. Um, the companies that do best on managing risk are the ones that think about it in every corner of the corporation. Uh, as a security leader, I often view it as my job not to take on the risk, but to help make sure that the rest of the company understands that we together are taking on this risk.
And then because any strategy to manage the risk of AI has to start when you're thinking about deploying the ai. Security never works as a belt on belted on, bolted on after the fact solution. Um, you know, we have to be involved at the earliest conversations about deploying AI in the company. We have to stay involved all the way through, but we, it's not our job to own the risk alone.
It's our job to make sure that collectively the organization owns the risk. Because there has to be, uh, risk management in, in the deployment, in the implementation, in the training of employees and how the employees think as they sit there on their own and use the tooling. Like if, if we can't teach our employees how to own some of the risks themselves, uh, we're, we're, we're gonna set ourselves up for failure.
Nishant Doshi: That's great. How, how do you think about data exfiltration and insider risk in the context of both sanctioned and unsanctioned ai?
Joe Sullivan: Yeah, so it, it's interesting. I, like, I, I think that data exfiltration, um, goes through like periods in cybersecurity where it's cool to talk about it, uh, concepts like insider threat and other times it's not cool to talk about 'em.
But what we've seen is that, um, the whole concept of insider threat has gotten, uh, blown up into such a bigger risk than it, than it was historically understood. And so I think this is a fundamentally important area of security. You know, we, we, we can't turn around without seeing the stories about the North Korean IT workers who are inside of our corporations.
Uh, and I've, and I've worked on a couple of those investigations and cases myself and can say they're really scary when you do realize that your organization is accidentally onboarded somebody who, who doesn't have the best of intent. And so we, we, in every organization we have to ensu assume that we have that risk.
And I'll tell you, uh, in the past, my insider threat tools have also detected outside attackers because, um, the reality is that if you're an outside attacker, the first thing you do once you get inside an enterprise. Is try and take over an insider's account so that you can escalate your privileges. Uh, so we have to think holistically about, uh, securing the data inside the enterprise.
Uh, and, and so, you know, we talked a little bit earlier about role-based access controls. AI really threatens our fundamental, you know, kind of like structure around role, historical, role-based access control. And so we've gotta double down on that area, and we need to make sure that employees only have access to the data that they need.
Uh, and, and recognize that AI is a really powerful magnifying glass for finding data that is accessible to the employee.
Nishant Doshi: What are the key elements you believe every organization should include in their AI usage policy?
Joe Sullivan: So first I, I think that we want to come, like I said earlier, come at this from an optimistic standpoint.
We should be embracing the use of AI inside our organizations. And so that means we, we need to set up policies that are designed to promote healthy use of ai. So the first thing we want is transparency. You can't secure it if you don't know it's happening. And so the security organization should be, uh, working with everyone else on policies that, that, uh, make employees want to come forward and say, Hey, I'm experimenting with this new piece of ai.
Hey, I deployed this SaaS tooling and it looks like it has AI built into it. And that SaaS tool has a bunch of access to sensitive data. Like we need all of that surfaced as quickly as possible. So we gotta do that partially through, you know, voluntary programs, encouraging, you know, creating an environment where employees wanna come brag about the cool things that they're, they're doing with AI and share their best practices with other employees, not, not squash it.
Second of all, we, we, we, we, we definitely have to use tooling to verify, uh, what's going on and we're gonna. We also need to start thinking about implementing solutions that will, if the AI is going to, you know, get access to data that it shouldn't, or be vulnerable to prompt injection tax attacks or something like that, we're gonna have to start putting like that, that layer of protective security technology on top.
So I think it's a, it's kind of like a three step process transparency, uh, uh, and then validation and visibility through tooling. And then the third thing is kind of managing the risk through software.
Procurement & Third-Party Risk: SaaS Apps Adding AI Without Opt-In
Nishant Doshi: So today a lot of companies, um, a lot of procurement teams are concerned about how their data, their enterprise's data is being used by third parties, especially in the context of ai.
How do you feel that world, uh, will evolve and, and what can companies do, uh, to ensure that they, they provide that transparency?
Joe Sullivan: Yeah, this is a tricky one I've been hearing of, of, of a lot of gotchas in this context. Uh, what we see is that, you know, most companies have literally hundreds of SaaS apps already deployed in their environment.
And those all went through some kind of, uh, third party, uh. Risk assessment before they were onboarded, right? You know, there was probably a vendor questionnaire. There was some scrutiny before the product was integrated, it was implemented in single sign-on, et cetera. Uh, but then what's happening is that all these SaaS apps are competing with, you know, other SaaS apps that do the same thing as them.
And they're all rushing to put AI into their product solutions. So you might have done a security assessment for a SaaS app six months ago, and the SaaS app had just, has just fundamentally changed. And a lot of these SaaS apps are, um, implementing AI with a, at best high level notice to you. No, and, and not doing any kind of opt-in.
And so, uh, you might have a bunch of sensitive data inside a SaaS app that all of a sudden the SaaS app provider tells you they're gonna start using for training of their AI models. And you gotta be like, wait a minute, I never agreed to that. So, um, our third party risk programs have to be dialed into this evolution of risk.
Uh, our third party risk programs need to, uh, keep an eye on the existing apps and see what's happening. You might need to do, uh, more frequent questionnaires. You might need to, uh, manually do some auditing. Uh, you might need to put into your contract terms, uh, prohibitions on, on, on things around AI training, uh, upfront.
Uh, but it's, it's, it's definitely a challenging area right now.
Agentic AI Is Coming: Treat Agents Like Untrained Employees with Superpowers
Nishant Doshi: How should, uh, security teams think about protecting agentic ai? The world of agentic AI is coming to us, uh, if not in weeks, at least in months. How should companies think about that?
Joe Sullivan: Companies are already starting to think about it because it is a slightly different risk than everything we've been talking about so far.
You know, to, I, I think from a mental model standpoint, we need to think of an agent as the equivalent of another employee inside our environment. And as a mental model, it should not just be another employee. It should be an employee without any training and any boundaries in terms of, uh, constraints on, on what they're willing to do.
Uh, because, you know, everything we've seen so far with agents suggests that agents, you know, like a lot of these new products are like creating an agent that will do a task for you, but they're stacking agents. And when you start stacking agents or allowing agents to create, uh, additional subagents, um, a lot of the constraint, like I've seen too many situations already where we put 10 constraints on an agent, but then the agent is allowed to create subagents or additional agents.
And we're seeing that when the additional, you know, when the agent creates another agent, they're actually, um, removing some of the constraints that were imposed on themselves to, because they decide to make the agent that they will help them be more efficient. So. Age world, I think is still in its early days.
Uh, but it, it is introducing a new set of risks and a different mental model than, you know, a pure, you know, SaaS app that happens to have AI built into it. Uh, and so we, we have to start thinking about ag agentic, identity insider organizations. Uh, we need to be able to monitor what do those agents have access to and what are they doing with that data, uh, in the same way we would, uh, any employee.
Nishant Doshi: How far do you think we are from the world of, uh, enterprise becoming more agent tech, uh, and more digital identity?
Joe Sullivan: Well, I, I, I mentioned at the beginning, I, I spend a, a decent amount of time, uh, working in the venture capital space, looking at new, uh, startups. And I'll tell you that we rarely ever see a startup that doesn't talk about how it is going to deploy agents to do, uh, uh, the work of, uh, a human, uh, particularly focused on what I call toil.
Uh, you know, a lot of the times in our security organizations, for example, we have jobs that people don't love to do. Sifting through, uh, vulnerability reports and alerts, triaging them and, and removing, you know, the, the, the noisy ones. Uh, having AI agents do that for us sounds very appealing to me and just about every person on a security team.
But when we deploy those agents, we have to remember that they're not trained to the same level as employee. They don't have a conscience. They don't have, uh, a lot of the constraints and history that an employee has. Uh, they're given a very specific task, uh, but they may, you know, flow like water and find a hundred different ways to complete that task that we never thought of.
And so we really do need to have good visibility and monitoring on, you know, the sensitive data that they have access to and, uh, what they might do with it.
A Sustainable AI Governance Strategy: Iterative, Flexible, Visibility-First
Nishant Doshi: What does a sustainable long-term strategy for governing AI in the enterprise look like to you?
Joe Sullivan: You know, it's, uh, it's, it's evolving so fast right now that I think the, um, the most important thing, uh, that I wanna see in a policy right now is flexi, is flexibility and visibility, I guess are the two things that I want.
I, I want, I want the there to be an ongoing, iterative process around securing ai. Like we don't fully understand all of the risks around ai. We have a, a long laundry list of risks that we think are the most significant risks, but we won't really know until a few years down the line, which of the areas of risk are the ones that are, where the bad guys are really hitting hard, where the, um, vulnerabilities are most extreme.
And so we, we, we have to be very holistic right now and we have to be very cautious, uh, but in a way that doesn't stop the company from rolling out these new AI tools.
Nishant Doshi: Thank you, Joe. Uh, that was a great answer and thank you for spending time with us and shedding light on this very important topic, uh, to all those who dialed in and participated in the Data Defense Forum.





.avif)
.avif)
