
Cyberhaven Unlocked: AI Data Security


November 19, 2025



Introduction and Welcome

Volodymyr Kuznetsov: Hello, everyone. Welcome to the Cyberhaven Unlocked series of webinars. We are excited to have you join and learn about the latest and greatest in cybersecurity, data security in particular, and the Cyberhaven product.

Today's Topic: Security for AI

Volodymyr Kuznetsov: So today's topic is going to be security for AI. In terms of introductions, I'm the CTO and co-founder of Cyberhaven, and security for AI is my absolute favorite topic.

It's evolving very fast. It is hard for any of us to keep up and follow. But at the same time, it's unlocking a lot of innovation, potential, and productivity in our organizations, and that's something we are embracing here at Cyberhaven. So today we are going to talk about our view on how security for AI could be done today.

What matters, and what are the key pillars of it? We are also going to show you some very cool and exciting demos of functionality we released in our products that all of you can use today and access in our consoles, giving you visibility and control over what applications your company is using and what data goes to gen AI applications.

And we'll talk a little bit about the future, what's coming, and what you will get in Cyberhaven over the next few months.

Understanding Enterprise AI Security

Volodymyr Kuznetsov: So let me start by walking you through how we at Cyberhaven view what enterprise AI security is about. I like to think about it in terms of three pillars.

Pillar 1: AI Usage Control

Volodymyr Kuznetsov: So the first is AI app usage.

So employees of our companies are adopting AI apps very fast, and sometimes without a lot of enterprise visibility or control. It does remind me of how SaaS apps were adopted by enterprises in the early days of the cloud. However, with AI apps, I feel there is a big difference.

First, AI app vendors are really incentivized to store and retain the data that users put into these apps, because the data is what enables them to train their models, improve their apps, and make them better. It's kind of the oil of the AI world. So, unlike old-school SaaS, AI apps often have terms of service that allow them to store and use the data, and that particularly affects free tiers or personal tiers of these apps.

The second big difference from SaaS is that enterprise users, employees, are being heavily pushed to use as much AI as possible by their managers and CEOs, top down, because it increases productivity. So they end up finding apps and starting to use them as quickly as possible, oftentimes without really working with the security team to make sure the apps are safe to use.

All of that results in AI apps being adopted much faster, with less control and a lot more chaos around it. So your security program must include controls that give you visibility and also the ability to control and limit the usage of AI applications. That should include the ability to discover the apps that are in use,

to do the inventory, to control whether employees are using personal accounts or company accounts with apps, and to get visibility into shadow AI applications that may be used without enterprise knowledge. Besides control and inventory, it's very important to get an accurate risk assessment of these applications.

Questions like: what's the reputation of the app vendor? What compliance standards, like SOC 2, do they have or not? What is their privacy policy and general data use policy? Are they going to train their models on your data or not? Getting that risk assessment is crucial to understand which AI applications should be allowed versus blocked in enterprises.
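To make this concrete, here is a minimal sketch of what such a risk-assessment record could look like. This is purely illustrative; the field names and structure are assumptions, not Cyberhaven's actual schema.

```typescript
// Illustrative sketch of an AI app risk-assessment record.
// Field names and levels are assumptions, not Cyberhaven's actual schema.
type RiskLevel = "very low" | "low" | "medium" | "high" | "critical";

interface AiAppRiskAssessment {
  appName: string;               // e.g. "DeepSeek"
  vendorReputation: RiskLevel;   // reputation of the app vendor
  certifications: string[];      // e.g. ["SOC 2 Type II", "ISO 27001"]
  trainsOnCustomerData: boolean; // does the vendor train models on your data?
  dataUsePolicySummary: string;  // summary of the vendor's privacy/data use policy
  overallRisk: RiskLevel;        // rolled-up five-level rating
}
```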

So that's your first pillar, AI usage control.

Pillar 2: Data Security

Volodymyr Kuznetsov: The second pillar is around data security. So, what data is going to AI apps? AI is very data hungry, and generally users are incentivized to put as much data into AI as possible, because it helps them with summarizing, understanding, and analyzing data, driving actions, and so on.

But depending on the app in question, that may or may not pose security or compliance problems. So we need visibility into what data goes where: what types of data, what kinds of regulated data, go into which AI applications, and the ability to enforce governance, like preventing PII from going into AI applications that are not approved for those use cases.

Data security around AI is actually becoming more complicated now as we move into the agentic world, as AI tools offer connectors to data, including MCP connectors and other types of connections. Today, all major AI app vendors like ChatGPT, Gemini, and Claude allow you to connect them directly to your data silos: your Google Drive, your Office 365. And with MCP servers, users can connect them even to databases and SaaS-type applications. Visibility into that is very important to be able to control the data flows in the AI world.
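As a hedged illustration of the kind of governance rule described here, a sketch like the following captures the idea of blocking PII bound for unapproved AI apps. The names and structure are hypothetical, not a real Cyberhaven API.

```typescript
// Hypothetical governance rule: block PII flowing to AI apps
// that are not on the approved list. Illustrative only.
interface DataFlowEvent {
  dataset: string;        // e.g. "PII"
  destinationApp: string; // e.g. "DeepSeek"
  action: "paste" | "upload" | "mcp_connect";
}

const APPROVED_AI_APPS = new Set(["ChatGPT Enterprise", "Gemini Enterprise"]);

function evaluate(event: DataFlowEvent): "allow" | "block" {
  const isPii = event.dataset === "PII";
  const isApproved = APPROVED_AI_APPS.has(event.destinationApp);
  // Block regulated data headed anywhere that is not an approved app.
  return isPii && !isApproved ? "block" : "allow";
}
```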

Those are the first two pillars, and today you will see demos of both of those functionalities in Cyberhaven, including a dashboard that gives you that visibility.

Pillar 3: AI Agents and Security

Volodymyr Kuznetsov: So agentic AI is actually something that we at Cyberhaven are looking at and building security around. The questions in agentic AI are around inventory: you need to know what agents are in use. Agent inventory changes much faster than app inventory; people spawn agents and turn down agents very quickly.

We need to have visibility into the access and permissions of each enterprise agent. We need to be able to control prompt injection, because it's the primary vector for attacking AI agents. And finally, we need visibility into what agents actually do, the analytics around their behavior. I'm going to talk a bit more about this, and I'd like to make the point that AI agents in many ways are similar to humans, and dealing with AI agent security is not so different from dealing with insider risk management.

Of course, there is traditional application posture management around it, and data security, and so on. But in terms of how agents behave, they actually behave like an entity of their own. So the classical threats we have with insider risk management, like malicious insiders, negligent users, and misled users, map very well to what we see with AI agents.

Malicious insiders become malicious agents. For example, an agent could be running in your environment through some supply chain attack, or planted intentionally with the goal of doing something bad like stealing data. There could be misbehaving agents; misbehavior could be caused by misalignment, hallucinations, or just bugs in the agent.

And there could be compromised agents, because of prompt injection, which is unsolved and won't be solved in the near future. Prompt injection allows attackers to really take control of agents running in your environment and make them do unexpected things like stealing your data. That argues for the worlds of DLP, IRM, and agent security to merge.

And that's how we are thinking about AI agents at Cyberhaven, and that will dictate our future product roadmap in this area.

Exciting Demos of Cyberhaven's Functionality

Volodymyr Kuznetsov: So with that, I would like to hand it over to Sean, who will show you some cool and exciting demos of today's functionality.

Sean Daly: Awesome. Thank you very much, Lova. Let me share my screen.

All right. Hello everyone, and thank you again for joining our webinar. My name is Sean Daly, and I'm a customer success manager here at Cyberhaven. I've been able to see firsthand how AI adoption has exploded recently, along with the need to secure AI data. Our customers' biggest challenge is visibility and control.

Our Security for AI dashboard is designed to give you that clarity, focusing on three pillars of AI risk: apps, data, and users.

Navigating the Security for AI Dashboard

Sean Daly: First, let's find out how to navigate to the Security for AI dashboard. I'm in there now. Usually when you come in, you'll be on the risk overview page. If you click on the home icon and then Dashboards in the top left over here, and then you click on Security for AI, it'll bring you right into the Security for AI dashboard.

One thing that I'm going to do as well is adjust the lookback to 90 days. To do that, go here in the top right, click on 90 days, and click apply. That is going to change all the tiles to a 90-day lookback. You also have the option for dark mode or light mode, whatever your computer preferences are.

I'm going to leave it in light mode for the purposes of this webinar. All right, let's start with apps. This is where the journey begins: understanding which AI applications your employees are actually using and what risk profile they carry. Speaking of risk profile, it is important to understand what this is.

The risk profile is generated by Cyberhaven's AI-powered Risk IQ for gen AI apps. This evaluates each app using a defined set of security factors and assigns a five-level overall risk, from very low to critical. We train the model based on publicly available data, scanning AI websites and other public research data on each application.
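For intuition only, rolling per-factor scores up into a five-level rating might look like the sketch below. This is an assumption for illustration; the actual Risk IQ model and its thresholds are not public.

```typescript
// Illustrative sketch of rolling factor scores up into a five-level
// rating. An assumption for illustration, not the actual Risk IQ model.
type RiskLevel = "very low" | "low" | "medium" | "high" | "critical";

function overallRisk(factorScores: number[]): RiskLevel {
  // factorScores: one 0-100 risk score per security factor; higher is riskier.
  if (factorScores.length === 0) return "very low";
  const avg = factorScores.reduce((a, b) => a + b, 0) / factorScores.length;
  if (avg >= 80) return "critical";
  if (avg >= 60) return "high";
  if (avg >= 40) return "medium";
  if (avg >= 20) return "low";
  return "very low";
}
```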

And we have 800+ gen AI apps that we are continuously monitoring within the Security for AI dashboard. You'll see the risk profile and the number of apps on the tile on the left over here. So you'll see the number of apps right here, and the risk profile: critical, high, medium, low, very low.

I can click on any of these to filter to just that risk profile if I'd like. And then if you click back on the tile you just selected, it'll bring you back to all apps and all the timeframes you're looking at here.

You also see the actual risk profile within the right tile. So you're looking at the actual app name over here on the right and the risk profile right here. One thing you can also do is click "view more" here at the bottom and get a larger view of the most used AI apps. In doing so, you're going to see the AI application name.

You're going to see the risk profile for that application, the number of users using that application, the amount of data flowing between the user and that application, and the usage over time. And then this will open up into the risk overview page as well. One thing I like to do is filter this to critical only.

So if I wanted to see only the critical apps, or the most critical apps within my environment, I can click on this little icon here to filter, click on critical, click apply, and then I'm getting a view of only the critical applications.

Deep Dive into AI App Risk Profiles

Sean Daly: Now that I'm here, if I want to deep dive into one of these apps, as an example, I'll deep dive into DeepSeek right now.

If we click on DeepSeek, it's going to bring over a slide-over, and that slide-over is going to give you the information for the risk summary, which here is rated critical. I can hit "read more" here and read everything that I want to on this page, but what I can also do is click "view full assessment" and get the page in one view.

When I do so, it's going to bring you into a page like this. What I can do here is read what's going on and the reason for the critical rating. So DeepSeek's overall risk profile is critical, with significant systemic weaknesses identified across all five risk categories. Those risk categories are data sensitivity and security,

model security risks, compliance and regulatory risks, user authentication and access controls, and security infrastructure and practices. If I want to see any of those, I can drop down into any of those categories using the dropdown arrows. If I want to read the risk summary and the reason for the critical rating that we saw at the top, just know that you can go to any of these and see the dropdown. And if you want to export this as a PDF, I can do so here.

The way that I look at this: if you're interested in a new AI app as a company, or you want more information about it, or you even want to send information to anybody within your teams, you can do that by exporting as PDF.

One thing that we have as well here, under the AI apps, under DeepSeek, is I can go into the risk overview page. Clicking on this shield icon here is going to bring me into a page looking like this. When I go into this page, I'll see the risk overview here on the left, I've got the datasets, I've got all of the matching events,

I've got the destination, which right now is only DeepSeek, I've got the AI apps as a category, which DeepSeek falls under, and then I have users over here at the bottom right. As an example here, if I wanted to go and look at one person, say Steven, I can click on Steven's name.

I can click on events, and then I'm taken only to the events related to Steven. As you see here, we have the sensitivity, the dataset, the severity, and the policy. One item that I'll come back to is PII. Here I'll show you another tile where I can deep dive into Steven again, looking at just a certain dataset, if I want to only look at one.

All right.

Data Security and Enforcement Actions

Sean Daly: Next, let's move on to another critical component, which is the data. We need to know what sensitive data is being shared and what new data is being created. This top-left chart is a major red flag for any security team: it tracks the unprotected sensitive data that violates your existing DLP policies and is being sent to generative AI apps.

This includes things like source code and PII, and you can hover over this graph here to see all the information related to those applications. This top-right chart tracks how much sensitive data is being created by the gen AI apps and shows where it is going, and you can see all that information again by hovering over the line right here.

Let's scroll up. This bottom-left chart gives you a real-time picture of your enforcement actions. The dashed line shows the number of events and incidents observed. The shaded areas give you a breakdown of the security response. The red area shows the incidents and full policy violations.

Here at the bottom, the darker gray shows the policy responses, where Cyberhaven's platform is automatically blocking, warning on, or monitoring the data based on your rules. Being able to see the volume of block actions proves that the platform is actively protecting your environment. Finally, this tile on the bottom right shows you what kind of data is at risk.

You can also view this in a larger state. I'm going to click "view more" here at the bottom, and I'm going to get more information within this tile. So I've got the dataset, the sensitivity, and the AI application used, which I can see by hovering over the eye icon here. I've got the users,

so I can hover over the eye icon to see the user. I've got the usage over time. And then again, I've got that shield icon that we saw before. So like I said when we were looking at DeepSeek, if I want to deep dive into just a certain category or a certain dataset, and this one's based on PII, I can do the same thing and click on this shield icon.

Clicking on that shield icon will bring you back into the risk overview page. I can see all the datasets on the left here. I'm looking at just PII, I'm looking at the matching and/or corresponding policies, I'm seeing all the locations here, and then again I have those users. So if I want to go back into Steven again, I can click on Steven's name and click on the events.

I can drill down to the PII events related to that dataset under Steven's name. When Pat comes on for his session after me, he's going to deep dive a bit more into the data and the information that we're seeing here.

User Risk and Interaction with AI

Sean Daly: Alright, our final pillar is based on the users. Technology doesn't leak data, but people do. We need to identify who the key risk actors are. The user trend here on the left shows the total number of users interacting with AI, separated by the risk level of the applications in use.

The red band highlights the users interacting with critical- and high-risk apps. One thing you can do on this dashboard is hover over these. You can also click on "view all users".

I can see all the users here on the list, and clicking on any of these users is going to bring you into the insider risk page. This table on the right is your prioritization list of users: it ranks the top users by their interaction and their risk score. I can click "view more" like I did above to see this in a larger view.

So now, similar to what we just saw before on the tile to the left, we're seeing the user, the amount of data, and the apps used by hovering over the icon. Again, we're seeing all the events related to that user, and then the most used dataset for that user. And again, if I click on any of these users, it will bring me right into the insider risk page here.

So this last pillar answers the question of who is driving the adoption and risk, so you can conduct targeted training or intervention. Last but not least, one of my favorite parts of the dashboard: if I want to look at only certain tiles here, two of my favorites are the most used AI apps, which I can star using this icon,

and, if I come down here, the most active users for AI apps, which I can star as well. Now, anytime you go into the dashboards page we saw before, it's going to bring you right to those favorited tiles. If you click on this dashboard icon here, it's going to bring you into the Security for AI dashboard if you want to go there, or back to the bookmarks; I'll go back into the Security for AI dashboard again. In summary, the Cyberhaven Security for AI dashboard provides an unparalleled 360-degree view, turning the abstract threat of AI risk into actionable intelligence across apps, data, and users.

We move you from "I don't know what is happening" to "I now know the high-risk apps, I see the sensitive data flowing in and out, and I can identify those high-volume users." This allows you to enforce targeted, data-centric policies that enable productivity while ensuring security. Thank you very much, and I'll pass it over to Pat to deep dive even more into the data within the Cyberhaven platform.

Patrick Collier: Thank you, Sean, for going through the data protection for AI dashboards.

Just waiting here to share my screen.

All right, perfect. So thanks again, Sean. My name is Pat Collier, and I'm one of the data protection analysts here at Cyberhaven.

Creating Policies for AI Security

Patrick Collier: What I'm going to cover next focuses on the risks overview page, where we'll look at the new generative AI application visibility within events. We'll also take a look at creating policies and datasets using our new gen AI conditions.

We'll also identify enterprise versus non-enterprise AI applications in your environment and use that distinction inside policy logic. So we'll walk through a few real examples using my own user activity. But first, I want to start back on the data protection for AI page here. As Sean showed you earlier, there are a few sections in the dashboard where you can directly pivot to the risks overview page, where we can see all of our historical file operation event logs.

So there are several options in the console here for pivoting to the risks overview page. If we take a look at the most used AI applications, we can open up this widget, and as Sean walked through, we can use this badge icon here to open the risks overview. And I'll do that here for one of our applications, Google Gemini.

So if I open this page, what it's going to do is automatically build a destination-specific query using our gen AI app name condition here in policy, and it's going to filter specifically for Google Gemini. What you'll also notice in the locations panel is we have this new AI apps categorization as a location, and we can also see Google Gemini listed there below.

If we pivot to events, within the event metadata we can see all of the data within the last 30 days flowing to Google Gemini specifically. And within this metadata we can also see the generative AI app name, which is Google Gemini. We are also looking at some of our other regular metadata, including the URL, which is gemini.google.com, as well as the authenticated account for that web application session.

So what I want to do with these two new attributes, the generative AI app name and the AI apps category, is walk through how we can curate some policies internally to look at sanctioned and unsanctioned generative AI, as well as enterprise versus non-enterprise AI. So if I pivot to this view, I have a few policies that I want to go through, and we are looking specifically at my user activity.

What you'll notice in the location panel on the top right is that I have been experimenting with quite a few different generative AI applications over the last 30 days. Whether this is me testing out different models, just exploring, or comparing tools, I have evidence here that I have moved data to a variety of different tools.

Now, what you'll also notice here is we are making a distinction between enterprise tenants or enterprise workspaces within some of these applications and the public or personal workspaces as well. So you'll see an enterprise flag for applications like ChatGPT. You'll also see this for Google Gemini Enterprise versus the regular, public instance of Google Gemini.

And we are also able to measure those flags for Claude AI, Copilot, and Perplexity, all of which have enterprise and public or personal offerings. We're able to do that with some of our latest browser extensions: they read a runtime global state variable to determine whether the active session the user is moving data to is an enterprise workspace versus a public workspace.
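For readers curious what that kind of detection could look like, here is a minimal content-script sketch. The global variable name (`__APP_STATE__`) and its shape are hypothetical assumptions; the actual extension logic is Cyberhaven's own and is not public.

```typescript
// Hypothetical content-script sketch of enterprise-workspace detection.
// The global variable name and shape are illustrative assumptions.
interface WorkspaceState {
  plan?: string;          // e.g. "enterprise" | "free" | "plus"
  accountEmail?: string;  // the authenticated account for the session
}

function classifySession(): { enterprise: boolean; account?: string } {
  // Read a runtime global the web app may expose on the page, if present.
  const state = (window as any).__APP_STATE__ as WorkspaceState | undefined;
  return {
    enterprise: state?.plan === "enterprise",
    account: state?.accountEmail,
  };
}
```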

And you'll see that distinction here in both the sources and destinations of events. Now, this distinction is very important, because historically we could only see flows to chatgpt.com or copilot.com. We couldn't distinguish whether these were our specific tenants or potentially secure environments.

And as I go through this, I'll use Cyberhaven as an example. Here at Cyberhaven, our corporate AI security policy allows us to utilize ChatGPT Enterprise and Google Gemini Enterprise. We purchase these services, we pay for these services, because they state that they're single tenancy, they're encrypted, they have valid security controls and infrastructure, and these enterprise versions do not use our data to train public LLMs.

So we want to ensure our employees are using these enterprise versions and our tenants. We do not want them using all of these other AI applications, because we don't want personal or public gen AI tools exposing sensitive data to public LLM training models.

So what we can do for policy, and the first one I'll go through, again using Cyberhaven as the example, is monitor the usage of our sanctioned enterprise applications. I have a policy here called "Flows to sanctioned generative AI". This is primarily for visibility and for auditing.

We can see which users in our environment are leveraging our sanctioned tools, how often they're using them, and what types of activities or workflows generative AI tools are useful for in our environment. So with the "Flows to sanctioned generative AI" policy, I can open this and edit it. I'll open it over here.

You can see we're looking at flows to sanctioned generative AI. The description, or the purpose, of this policy is to monitor flows to Cyberhaven's enterprise ChatGPT or Gemini instances, and I'll show you how I build this policy using four conditions. These four conditions will stay consistent across all the policies I'll demonstrate today, so I'm trying to keep it very simple.

But when we think about policy, we are talking about data moving to a specific destination, or a specific action occurring on data. And in this case, again, we're looking at what counts as sanctioned generative AI for us, which is Gemini Enterprise and ChatGPT Enterprise. So with that new AI app categorization, we can specify a location type of AI app: the destination for our data is an AI app. We can also use that gen AI app name condition, which you can see here in the available policy conditions and attributes; generative AI app name is the name of the generative AI app, and we can specify the ChatGPT Enterprise and Gemini Enterprise instances.

In addition to that, our corporate policy requires us to use our Cyberhaven tenant on these enterprise applications. So we can also use our cloud active user attribute, which is the account authenticated to that web application at the time of the session, the logged-in user, to validate that our users are logged in to our enterprise tenants with their cyberhaven.com account.

And in addition to that, we'll look at specific actions. We're looking at copy-and-paste and upload actions, which are the traditional actions we'll see going to web-based tools. So with a policy like this, we get visibility into all of our user activity going to enterprise tools that are sanctioned within the environment; here it's just specific to myself.
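Expressed as a sketch, the four conditions just described might be encoded like this. The object shape is a hypothetical illustration, not Cyberhaven's actual policy syntax.

```typescript
// Hypothetical encoding of the four conditions described above.
// A sketch, not Cyberhaven's actual policy syntax.
const flowsToSanctionedGenAi = {
  name: "Flows to sanctioned generative AI",
  conditions: {
    destinationLocationType: "AI app",                         // 1. destination is an AI app
    genAiAppName: ["ChatGPT Enterprise", "Gemini Enterprise"], // 2. sanctioned apps only
    cloudActiveUser: { endsWith: "@cyberhaven.com" },          // 3. logged in to our tenant
    actions: ["copy_paste", "upload"],                         // 4. traditional web actions
  },
  response: "monitor", // visibility and auditing only
};
```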

So in the destinations here, you'll see ChatGPT Enterprise and Google Gemini Enterprise. We can pivot to these events, and what you'll see is that over the last few days, I have been doing some regular workflows utilizing these sanctioned generative AI tools. You can see I've copied and pasted data from Microsoft Visual Studio Code into ChatGPT Enterprise.

You can see that I was moving data from my Notepad endpoint application to Gemini Enterprise, and some content from Postman to ChatGPT Enterprise. So within the event metadata here, again, we can make that distinction: the browser extension is letting us know that this is the enterprise workspace application and also that the user is signed in with the appropriate account.

That denotes that all of this activity is sanctioned in accordance with our security policy. So a policy like this, where we're just logging flows to sanctioned generative AI, allows us to see who's using sanctioned tools, how often they're using them, and what types of actions they're using them for.

Now, from a DLP perspective, sure, the sanctioned usage here is definitely insightful. It's great for visibility, and it's also great for monitoring return on investment for large AI licenses. But the real value here is preventing, or at least monitoring, unsanctioned usage.

So we do want to take a look at all of those other tools, the shadow IT, the shadow AI, in our environment that users are moving sensitive data to. We can use very similar policy conditions to create a policy looking at flows to unsanctioned generative AI. This is going to be any AI application that is not Google Gemini Enterprise or ChatGPT Enterprise within the Cyberhaven tenant.

So I can edit this policy as well and show you how I've built it. Again, we are looking at flows to unsanctioned generative AI applications. We are looking to monitor flows to all gen AI apps where the user is not logged in to Cyberhaven's enterprise ChatGPT or Gemini tenants. And so I can expand these policy conditions again; I'm using those same four conditions we used before, with a little bit of variance here.

So we do want the destination for data to be an AI app. We also, in this case, want to specify ChatGPT Enterprise and Gemini Enterprise, but for instances where the user is not logged into our tenant. It is possible for users, if they're part of an educational institution or some other third-party organization where they have an enterprise license to another tenant, to log into that tenant on their corporate device.

We want to ensure that users are only logging into the Cyberhaven tenant for these services. So we can say that when the user is not authenticated with Cyberhaven on these applications, we want to know about it, because that is unsanctioned. In addition to that, we can use an OR condition to also catch all AI applications that are not our sanctioned applications.

So the gen AI app name does not contain ChatGPT Enterprise or Gemini Enterprise, plus all of the copy-and-paste activity to those locations. When we filter down and take a look at this policy, you can see here that none of these destinations contain the enterprise tenant that is sanctioned within our environment.
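Before moving on, here is how the OR logic just described could be sketched, again in hypothetical syntax rather than Cyberhaven's actual policy language:

```typescript
// Hypothetical sketch of the unsanctioned-flows logic described above.
// Not Cyberhaven's actual policy syntax.
const flowsToUnsanctionedGenAi = {
  name: "Flows to unsanctioned generative AI",
  conditions: {
    destinationLocationType: "AI app",
    or: [
      // Branch 1: enterprise apps, but the session is NOT our tenant.
      {
        genAiAppName: ["ChatGPT Enterprise", "Gemini Enterprise"],
        cloudActiveUser: { notEndsWith: "@cyberhaven.com" },
      },
      // Branch 2: any gen AI app that is not a sanctioned enterprise app.
      {
        genAiAppName: {
          notContains: ["ChatGPT Enterprise", "Gemini Enterprise"],
        },
      },
    ],
    actions: ["copy_paste", "upload"],
  },
  response: "monitor",
};
```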

All these tools are considered unsanctioned tools, and these are unsanctioned data flows to generative AI. And if we take a look at some of these flows, what we can see here is the browser extension making that distinction: it knows that this is a personal or public workspace, for ChatGPT in this case.

It's also giving us details of the logged-in account for that session. So I'm logged in with a personal account here that does not have a licensing agreement with ChatGPT. There's potential here for my data to be exposed to public training models, and we can prove that here with the generative AI app name and the logged-in account.

Now, there are multiple applications that we can see here. There are instances where I am unauthenticated to Perplexity AI, instances where I am authenticated to Perplexity AI but it's still not the enterprise version, and the same thing for Claude and some other applications here.

So looking at data flowing to these unsanctioned applications, irrespective of size, all of it represents potential data leakage risk: the capability for models to continually learn from small amounts of information over time, and the risk these models pose when they're run without proper security infrastructure, leaves organizations open to a great deal of data leakage. That is why it's very important for us to monitor the flows to all generative AI tools, especially unsanctioned ones.

Now, another policy I do want to go over is our flows to critical-risk generative AI apps. As Sean showed you before, we perform risk assessments, utilizing deep research from our own internal AI models, to evaluate other AI models and their security risk.

So we assign these risk profiles, and as Sean showed you before, we can look at the applications within the environment over the last 30 days to which users have moved data and that we are denoting as critical-risk AI applications. Again, that can be for a variety of reasons: historical data leakage, not having the right compliance frameworks in place; there's a variety of criteria we're using to evaluate the risk profile.

But we can also leverage this risk profile in putting together stricter policies within our environment. So depending on an organization's risk appetite, we can block all unsanctioned AI if we want to. But if we want to ensure that we're only blocking users from using critical applications, and maybe low- and medium-risk applications are okay, we can do that specifically within policy.

So what I've done for the flows to critical-risk AI apps policy, if we open this up, is use those same conditions. I can specify those three critical-risk applications we saw before, listed here, to target them specifically, and in this case I can create incidents, while all other AI tool usage we simply monitor.

In the case that a user moves data to DeepSeek or DeepSwap, our SOC team gets an alert and can triage it immediately. In addition, if we wanted to block or warn on movement to any of these applications, we have that flexibility to configure within the policy itself. So if we take a look at this policy specifically, it's going to filter down all the way to the critical-risk applications I've moved data to within the environment.
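A sketch of that escalation logic, again in hypothetical syntax; the app list and response values are illustrative assumptions (the third critical app named in the demo is not specified in this transcript):

```typescript
// Hypothetical sketch of a risk-tier response, as described above.
// App list and response values are illustrative assumptions.
const flowsToCriticalRiskAi = {
  name: "Flows to critical risk generative AI apps",
  conditions: {
    destinationLocationType: "AI app",
    genAiAppName: ["DeepSeek", "DeepSwap" /* third critical app elided */],
    actions: ["copy_paste", "upload"],
  },
  // Escalate: create an incident so the SOC can triage immediately.
  // Could equally be "warn" or "block", depending on risk appetite.
  response: "create_incident",
};
```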

So making that distinction based on risk assessments can be very beneficial in targeting stricter policies within the environment. Now, so far we've looked at data flowing to AI applications, but there's another equally important risk, which is data coming from AI tools: the output that those tools generate.

One of the examples I'll go through is especially important for something like AI-generated code, because as we know, AI models can be inaccurate, they can be incorrect, and they can hallucinate. So when development teams are generating code via AI tools, these models can introduce embedded vulnerabilities.

They can introduce errors within the logic, unvetted dependencies, or a whole plethora of issues that have the potential to be introduced into that code. Without proper quality assurance before some of that output enters your internal systems, your DevOps pipelines, or your production environments, there is great risk in utilizing AI output.

So we are able to monitor that usage here as well. Using datasets, we can use those same gen AI app conditions to monitor data coming from AI applications and going to your other internal apps or systems. What I've done in this case is create a dataset looking at AI-generated code. We can open up this dataset as well, and we can use that location type we used before for AI apps.

So this is going to look at all data coming from AI applications and moving to other locations within the environment. With that, we can also combine Cyberhaven's ability to perform content inspection on clipboard data or downloaded content. We can specify that we want to utilize content inspection policies that match the AI-generated content against specific programming language syntax.

So combining both the AI app location type and these content attributes for source code, we can easily identify source code being generated by these AI applications and view where that data is flowing within the environment. For example, with that AI-generated code dataset, I've got a simple policy here looking at flows to different endpoint applications.
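As a rough sketch, the dataset described here could be expressed like this. The field names and the sample regexes are illustrative assumptions, not Cyberhaven's actual dataset syntax or detection patterns.

```typescript
// Hypothetical sketch of the AI-generated-code dataset described above.
// Field names and regexes are illustrative assumptions.
const aiGeneratedCodeDataset = {
  name: "AI generated code",
  conditions: {
    sourceLocationType: "AI app", // data originating from a gen AI app
    contentInspection: {
      // Match common programming-language syntax in clipboard/download content.
      regexAny: [
        "\\bimport\\s+[\\w.{}]+",     // Python / Java / JS imports
        "\\bfunction\\s+\\w+\\s*\\(", // JavaScript function definitions
        "#include\\s*<\\w+>",         // C/C++ includes
      ],
    },
  },
};
```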

And what we can see within the events here is that there are multiple instances where I have gone to a variety of different AI tools. We can see the sources here, different AI tools, where I am asking those tools to generate specific code and inputting it into an IDE or a development-related app.

So in this case, I've asked DeepSeek to generate some code here testing for CVEs, specifically Log4j, and I've copied and pasted that into Microsoft Visual Studio Code. You can see the same thing for Gemini: this is some JavaScript related to an API request that I'm utilizing in Postman.

And there are also other instances here where I am using the Gemini Enterprise instance to create an uninstall script and put it into Google Docs. So this provides us, or provides organizations, visibility into the types of AI output that users are utilizing within regular workflows.

This can help prevent unreviewed or risky AI-generated code from entering production or DevOps pipelines. So, all in all, we reviewed identifying AI applications across the environment using policy and dataset conditions and some of those queries, and we distinguished between sanctioned and unsanctioned generative AI applications.

We built some policies for enterprise, non-enterprise, and critical-risk applications, and we monitored both the AI-generated output and the outbound data sent to AI. All of these capabilities are part of Cyberhaven's Security for AI platform. You can incorporate any of the content inspection, warning, blocking, and monitoring capabilities with all these policy configurations to look at AI tool data movement, and the hope here is to give organizations deep visibility and control over how their users interact with generative AI.

If you're interested in hearing more about some of the developments here for AI-SPM, feel free to reach out to your CSM.

Conclusion and Q&A

Patrick Collier: But that wraps up my portion of the demo, and I think we can hand it back for any questions we may have to answer live here.

No questions? All right, well, I think this wraps up our webinar. Thank you all for joining. We appreciate you listening to us talk about AI security, and we look forward to developing these capabilities moving forward. Thank you all.