Welcome & Introductions
Cameron Galbraith: Alright, cool. I think we are live now. I'm gonna bring my colleague, Dave Stuart, on stage to make sure you can see all of us, and we will get started here in just a second, folks.
All right. Thank you everybody for joining. I see we've got a few more that are hopping on, so we'll give them just a minute and then we'll get started.
Alright, it's 11:02 Mountain Time, so we'll go ahead and get started. Thank you everybody for joining. I see we've got probably thirty-some folks who have joined, many of you customers, so we're grateful for your partnership and for your time today. My name is Cameron Galbraith. I'm the Senior Director of Product Marketing, and I'm joined by my colleague
Dave Stuart. Today we're gonna go through some interesting data on AI adoption that our Cyberhaven Labs team put together, with really interesting insights into how AI is being adopted in the enterprise. And then we'll talk a little bit about some solutions we've put together to help address the challenges that have come up
through the research. So Dave, I'll let you say a quick word to say hi and we'll dive into it.
Dave Stuart: Yeah. Hey everyone, great to have you here. As Cameron said, my name is Dave Stuart. I'm on the product team here at Cyberhaven, and really excited to tell you about some of the exciting new capabilities that we've been delivering recently.
Cameron Galbraith: Cool. All right, so we'll dive in.
2025 AI Adoption Report: What Cyberhaven Labs Measured
Cameron Galbraith: So, AI adoption trends. I'm sure many of you joined because your organizations have either had a mandate to adopt AI or there's just been bottom-up interest from your frontline workers and really from everybody across the organization. And, you know, research is a big part of what we do here at Cyberhaven.
We were founded by a group of PhD candidates, now doctors in computer science, so deep research is a big part of our DNA. Our Cyberhaven Labs team took a look at some anonymized but actual AI usage patterns in the enterprise, and we compiled the findings into our 2025 AI Adoption and Risk Report, which some of you might have seen was published recently.
I'm gonna give you some of the highlights from that report, some interesting stats to be aware of, and then we'll dive into the implications of those.
AI Usage Is Exploding: Two-Way Data Flow Becomes the Workflow
Cameron Galbraith: So first of all, it's no surprise that AI usage at work has been growing exponentially. You can see in this graph it's been continually going up, year over year, over the last few years that we've been measuring this.
And that little bit of variation, that's the weekends, holidays, the things you would expect in a work schedule. But the clear trend is upward. And what's really interesting is the two colors on this graph: they represent data being copied to AI but also data being copied from AI.
So it's not just a one-way thing, it's not just a matter of data egress. This really shows that AI is now clearly part of the workflow at so many enterprises. It's really interesting to see this overall.
Who’s Adopting AI? Industry-by-Industry Growth
Cameron Galbraith: When we dug in and looked at it from an industry standpoint: where are we seeing the most adoption?
Who is adopting these AI tools? The trend across the board has been a big leap, a year-over-year jump in who is using AI and how it's being used in the enterprise. Not surprisingly, you see at the bottom of the graph that the technology industry was clearly an early adopter.
But really, it cuts across every industry now. We see a huge increase in adoption in retail, in pharmaceuticals and manufacturing, healthcare, and certainly financial services. And we think this is indicative of how broad it's become: you've got off-the-shelf AI tools, you've got in-house models being trained, and again, every organization we talk to seems to have AI as some kind of mandate or initiative or major project
that they're embarking on, because of course the potential for productivity gains is just huge.
The Risk Reality: Sensitive Data Going to AI
Cameron Galbraith: Now, from a security standpoint, that has some interesting implications. When we looked at how much of the corporate data being sent to AI is sensitive, there's also a clear trend, and it's somewhat, we'll say, concerning, or at least something to be aware of.
Just over a third of the data that is going to AI solutions, and this could be ChatGPT, it could be Gemini, it could be some in-house thing, et cetera, just over a third of that in the last year has been sensitive data. To us, this is indicative of AI becoming more integral to very mission-critical workflows, right?
People are using AI to help with analyzing customer data, with doing all kinds of things like that. But of course there are some security implications with doing that.
Which Sensitive Data Types Are Being Shared (Code, PII, HR, Finance)
Cameron Galbraith: So we drilled down into it and looked at the sensitive corporate data being sent to AI by type of data.
Not surprisingly, like I said earlier, the technology industry was an early adopter, so source code is a pretty big part of that, along with R&D. But again, not only does it cut across all industries, it cuts across all functions within a company. Source code, of course, would come from the development team and R&D, but then there are sales and marketing files,
things like strategy documents or customer lists or forecasts; corporate messaging; health records; HR and employee records; certainly for manufacturing, things like graphics, CAD designs, et cetera; financial files; and then sort of everything else that would be bucketed as other. So really, again, it cuts across the entire enterprise and shows just how widespread this adoption has been, and that it's not just one group within the company that is
handling sensitive data, potentially exposing it or handling it in a risky way. It really cuts across the entire company.
AI App Risk Levels & the Shadow AI Problem
Cameron Galbraith: So then we took a look at: alright, what kinds of apps, what kinds of AI solutions are they sending that data to, and how risky are they? Something we'll show you in a little bit is this AI Risk IQ that we've put together to analyze how risky the apps are.
And again, we saw some pretty interesting, potentially somewhat concerning findings. Almost a quarter had critical risk, meaning the way they handle the data, or maybe they train their models on that data, or the way that data might be accessible to other people; there are a few versions of that.
So that's critical risk, and then half of it was high risk. You've got basically three quarters of all the AI apps being used that are risky in some way, right? They don't have the enterprise controls in place, they don't have the SLAs, the things you would need to make sure they are safe, secure, and compliant in their handling of data.
And again, this is probably indicative of employees just seeking a way to use AI to be more productive, to advance the mission of the company, to just get the job done. But it means there potentially need to be some guardrails on how they're doing that, steering them toward safer usage of AI.
So to marry a couple of those stats together, it's no surprise then that the majority of enterprise data going to AI is going to apps that aren't enterprise-ready. Breaking this down by how much of that data is risky and how risky the apps themselves are, you've only got about 16% of that data going to enterprise-ready apps, right?
So that's the corporate version of a ChatGPT or Gemini or Perplexity or something like that. Really, most of it is going to probably personal accounts or ones that have just not been vetted by the organization. It's a big, what we would call, shadow AI problem.
Key Security Challenges in the AI Era (Visibility, Egress, AI-Generated Data)
Cameron Galbraith: So those are some really interesting stats. Taking a step back to look at that, we boiled this down to a few big challenges around securing data in this new AI era.
The first is that last one I just talked about: the identification of shadow AI usage. So just having an understanding and the visibility into what is being used today: who is using that data, who's using those tools, what is in the environment? This is very analogous to, if you remember, 10 or 15 years ago when the shadow IT concerns really started, right?
When you had the rise of cloud and SaaS apps, and employees were using those, again in furtherance of productivity and supporting the company's goals, but doing so with tools that maybe were not fully vetted and not fully approved for usage. Then there's the two-way movement as well. Of course there's the challenge of egress:
where is sensitive data going, is it going into AI, and how do we protect the sensitive data that might be going to some of these risky tools? But then going the other way too, right? If AI is becoming a bigger part of our workflow, and data's flowing in and out of AI, and there's this derivative data and fragmented data coming out, well, then we need to track the AI-generated material as well,
to understand how it's being used in the enterprise and how people are consuming that data. If we're making business decisions based on that data, there are a lot of things we have to consider and look at. And having an understanding of where that material came from, and whether it was AI-generated, can be pretty impactful.
So that's an overview of the findings from this report. I encourage you to go to the website and download the full thing. It's about 20 or 30 pages, there are some other really interesting insights in there, and you can dig into that data.
Introducing ‘Security for AI’ (Release 25.07 Overview)
Cameron Galbraith: Then we also looked at: okay, what kinds of controls can we help our customers with today?
What can we put in place to help them understand that shadow AI usage, understand the data that's moving to and from their AI tools, and how risky those tools are in their environment? That's where we put together a solution that we're packaging up as Security for AI. This goes above and beyond the existing controls that have been in Cyberhaven for a long time around monitoring the use of data and where it goes, whether it's going into a personal or corporate ChatGPT or other generative AI account, things like that.
It covers four big areas. First is that shadow AI discovery, the visibility into what you have. Then there are the AI usage insights: not just discovering it, but understanding how that data is flowing. I mentioned earlier the AI Risk IQ, understanding how risky some of those apps are and what risk they potentially introduce into your environment. And then the AI data flow control.
For customers today, all of these capabilities, everything we're wrapping up into Security for AI, will be available in the 25.07 release, so very soon. I'll run through a quick overview of each of these and then hand it over to Dave to talk about some of the other things we're doing with AI at Cyberhaven.
Shadow AI Discovery: Building an AI Tool Registry
Cameron Galbraith: So the first element of Security for AI is the shadow AI discovery. Very simply, this is going to uncover the AI that's being used by employees. What it does is create an exhaustive registry of AI tool usage, and it continually discovers new tools as they emerge. The goal here is to give you visibility into not only the standalone AI tools, but also the embedded AI functionality within some of your existing SaaS applications.
If you think about many of the SaaS applications you use today, they're adding an AI assistant or a copilot or something like that, integrating AI to make that data store smarter. That's something you need to be aware of, because it's a potential egress channel; it's a potential source of information.
You want visibility into which of those applications have that. And again, we really believe the visibility you get here is the foundation of an effective AI governance program. You could think of it as answering that critical question: what AI tools do employees use in our environment today?
That's a great starting point.
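To make the idea of a discovery registry concrete, here is a minimal sketch of what such a registry could look like conceptually. The field names and logic are hypothetical illustrations, not Cyberhaven's actual schema.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical registry entry; field names are illustrative only.
@dataclass
class AIToolEntry:
    name: str
    standalone: bool           # standalone AI tool vs. AI embedded in a SaaS app
    host_app: str | None       # the SaaS app hosting the AI feature, if embedded
    first_seen: date
    users: set[str] = field(default_factory=set)

def record_observation(registry: dict[str, AIToolEntry], tool: str, user: str,
                       standalone: bool = True, host_app: str | None = None) -> None:
    """Add a newly discovered AI tool, or attribute another user to a known one."""
    entry = registry.setdefault(
        tool, AIToolEntry(tool, standalone, host_app, first_seen=date.today()))
    entry.users.add(user)

registry: dict[str, AIToolEntry] = {}
record_observation(registry, "ChatGPT", "alice")
record_observation(registry, "Notion AI", "bob", standalone=False, host_app="Notion")
for entry in registry.values():
    kind = "standalone" if entry.standalone else f"embedded in {entry.host_app}"
    print(f"{entry.name} ({kind}): {len(entry.users)} user(s), first seen {entry.first_seen}")
```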
AI Usage Insights: Tools, Data, and High-Risk Users
Cameron Galbraith: Now, once you've done that discovery, that's where we get into the AI usage insights, reports on AI adoption that, again, will be out very soon. This looks at it from three perspectives: the tools, the data, and the users. Our research team analyzes data from millions of users to identify those AI usage patterns.
We've incorporated those findings, things like you saw in the AI Adoption and Risk Report, to help you understand the tools being used today, the usage patterns today, and whether those are risky or not. We can also identify the most active AI users across the organization. In this example on the screen, you can see Emily Johnson.
This is all anonymized demo data, of course, but Emily Johnson stands out as a critical-risk user. In this case, she's accessed risky AI tools like DeepSeek and BetterGPT and, perhaps more concerningly, is regularly sharing PII and other sensitive data with these tools. This is a great example of what you might want to see: there could be employees handling sensitive data,
with access to sensitive data, and maybe they have the best of intentions; they're trying to do a better job by using some of these tools. But the way in which they're doing it could put the organization at risk. So it's a great opportunity to identify that, reduce that risk, and be able to move forward and actually safely enable that kind of productivity by introducing some safer tools.
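As a rough illustration of the kind of per-user rollup described above, here is a minimal sketch that flags users who repeatedly share sensitive data with risky tools. The event fields, threshold, and risk labels are invented for the example; they are not the product's actual logic.

```python
from collections import defaultdict

# Hypothetical event records: (user, tool, tool_risk_level, data_was_sensitive).
events = [
    ("emily.johnson", "DeepSeek",           "critical", True),
    ("emily.johnson", "BetterGPT",          "high",     True),
    ("bob.smith",     "ChatGPT Enterprise", "low",      False),
]

def user_risk_rollup(events):
    """Flag users who repeatedly share sensitive data with high/critical-risk tools."""
    counts = defaultdict(int)
    for user, _tool, risk, sensitive in events:
        if sensitive and risk in ("high", "critical"):
            counts[user] += 1
    # Illustrative threshold: 2+ risky sensitive shares marks a critical-risk user.
    return {u: ("critical" if n >= 2 else "elevated") for u, n in counts.items()}

print(user_risk_rollup(events))   # {'emily.johnson': 'critical'}
```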
AI Risk IQ: Scoring Tools Across 5 Risk Dimensions
Cameron Galbraith: So to that point, the next big element of Security for AI is AI Risk IQ. AI Risk IQ creates a comprehensive risk profile for each AI tool and assigns risk levels from very low to critical for every one of these tools. That risk evaluation framework evaluates the risk of every tool across five key dimensions:
data sensitivity and security, model security risks, compliance and regulatory risks, user authentication and access controls, and finally their security infrastructure and practices. Each risk profile will include a very clear, human-readable summary that articulates each dimension and the strengths and weaknesses on that dimension.
You can further drill down to understand how we constructed that risk profile, where we obtained the source information, and what you might need to be aware of in terms of whether that matches your organization's risk tolerance or is something you want to take action on and remediate.
So how do we bring all this together? Well, we've got our agentic AI engine with some pretty deep research capabilities, and this powers basically a proprietary risk scoring system that incorporates all of these dimensions. It enables your teams to make informed decisions about which tools can be safely used, for which kinds of data and which kinds of workflows.
So you'd be able to get that granularity, to drill down into the usage and the risk of the tools, and to align your risk appetite with that usage.
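The talk names the five dimensions but not the scoring math, so the aggregation and thresholds below are purely illustrative, not Cyberhaven's proprietary scoring system. A minimal sketch of collapsing five per-dimension scores into a single risk level:

```python
# The five dimensions named in the talk; the averaging and the score-to-level
# mapping below are invented for illustration.
DIMENSIONS = [
    "data_sensitivity_and_security",
    "model_security",
    "compliance_and_regulatory",
    "authentication_and_access_controls",
    "security_infrastructure_and_practices",
]

def risk_level(scores: dict[str, float]) -> str:
    """Collapse per-dimension scores (0 = safe, 1 = worst) into one risk level."""
    avg = sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
    for threshold, level in [(0.8, "critical"), (0.6, "high"),
                             (0.4, "medium"), (0.2, "low")]:
        if avg >= threshold:
            return level
    return "very low"

example_tool = dict.fromkeys(DIMENSIONS, 0.7)   # uniformly weak tool
print(risk_level(example_tool))                 # high
```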
AI Data Flow Control: Policy Enforcement for Data To/From AI
Cameron Galbraith: So then the final thing I'll mention on Security for AI is the AI data flow control. Remember back to that first graph, where I showed we've got an exponential increase in data moving to AI, but also from AI.
It's become a big part of enterprise workflows, and AI data flow control monitors and controls that sensitive data moving to and from the AI tools. It utilizes our core reimagined DLP and IRM technology to inspect the data interacting with AI, but with this feature, security teams can apply policies based on each tool's risk profile.
So using the data from that AI Risk IQ I just showed, you can help prevent sensitive data from being shared with high-risk AI systems. The same approach that prevents sensitive data from reaching the AI tools can also be leveraged to stop AI from revealing confidential information to unauthorized individuals in this world of fragmented and derivative data. Understanding and controlling that flow of data to and from AI is super important, and we're giving you the visibility tools to be able to see it, identify it, and take action on it.
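A minimal sketch of the kind of policy gate this implies, assuming a tool's AI Risk IQ level and a content-sensitivity flag are available at decision time. The action names and thresholds are hypothetical, not the product's actual policy engine.

```python
# Hypothetical policy gate: decide what to do with an upload based on the
# tool's risk level and whether the content was classified sensitive.
BLOCK_AT = {"high", "critical"}

def decide(tool_risk: str, content_sensitive: bool) -> str:
    if content_sensitive and tool_risk in BLOCK_AT:
        return "block"          # sensitive data must not reach risky tools
    if content_sensitive:
        return "warn"           # allow, but coach the user toward sanctioned tools
    return "allow"

print(decide("critical", True))   # block
print(decide("low", True))        # warn
print(decide("high", False))      # allow
```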
So to that point, I'm gonna hand it over to Dave, who will talk a little bit more about Linea AI, which some of you may have heard of, and some of the ways we use AI to help you protect that data.
So, Dave, I'll advance the slides for you. Just gimme the word. And... perfect.
Dave Stuart: Will do. Yeah, thanks so much.
How Cyberhaven Uses AI in the Product (Dave’s Segment)
Dave Stuart: So let's talk a little bit about how we at Cyberhaven are actually using AI within our product to help discover previously unknown risks, as well as to help make your review of all the risks within the platform as quick and efficient as possible.
So, yeah, Cameron, go on to the next slide. As quick background, we set out about a year ago to start innovating and doing research around how we can use AI in our product to address two use cases, two challenges we've heard repeatedly from customers.
The first one: we know policies can have blind spots. They only know to look for the rules and terms you've used to define them, and so you have these hidden risks, these truly unknown unknowns. We wanted to investigate how we could integrate AI within our product to autonomously monitor all events coming in and surface risks that you may not have known to look for.
I'll talk about how we're doing that and give you some examples of that here in a second, but that was one of the core challenges when we set out a year ago in building this AI capability. And then separately, we also wanted to address the challenge that teams have limited time. SOC teams often run lean.
They're often managing multiple different security consoles, so their time is very limited. When they go in to do the triage and review of incidents, they really want to know which ones are the most important. And when they investigate a particular incident, they want to get the context of what happened as fast as possible, to speed up and aid in the time it takes to review it.
So again, I'll talk about how we're using AI and give you some examples here in a second to really address that challenge. Next slide, please.
Meet Linea AI: Faster Triage, Unknown Risk Detection, Fewer Incidents
Dave Stuart: And so what we've done is we've developed Linea AI. It's a product we released about six months ago, and we call it the AI agent for superhuman insider risk management.
In particular, there are three really powerful benefits that Linea provides. The first is that it helps customers resolve incidents faster. We've actually measured this with customers as they've adopted Linea, and we've seen a five times faster incident resolution process. By using Linea's automated severity assessments and natural language summaries, they get quick prioritization of which incidents are the most important to review first.
And when they review an incident, they have all the context of what happened in order to make the decision about what to do next. I'll show you an example of that shortly. Secondly, Linea can also help you detect unknown risks. I mentioned this challenge of policies only being as good as the rules and terms used to define them;
you have these hidden risks, these truly unknown unknowns. We use Linea to automatically detect risks that you might not have known to look for. And what we've observed from customers that have started to use Linea is that up to 40% of the high and critical risks they see in their environment were uniquely captured by AI.
That really underscores that there are these hidden gaps they may not have known to look for. I'll show you some examples and talk about how that works in the next slides. And then lastly, we don't just want to create more incidents
with AI detections. We want to pair that with a capability to reduce the total number of incidents that require manual review. So we have a capability that allows you to only generate incidents when AI has truly validated that the particular event is a high or critical risk, even when it matches your existing policies.
And what we've seen from customers that have used that is they can reduce the total number of incidents requiring manual review by over 90%, by still allowing them to define the rules but having AI validate that those particular events are truly their high or critical risks.
And so next slide.
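Conceptually, that validation gate might look like the following sketch, where a stub classifier stands in for Linea's actual severity assessment (which is not public): only policy matches that the AI rates high or critical become incidents.

```python
# Hypothetical gate: a policy match only becomes an incident if the AI
# severity assessment rates it high or critical. The stub below is a
# placeholder, not Linea's real assessment.
def ai_severity(event: dict) -> str:
    # Stub: pretend only architecture diagrams rate as high severity.
    return "high" if "architecture" in event["content_tags"] else "low"

def should_open_incident(event: dict) -> bool:
    return event["policy_matched"] and ai_severity(event) in ("high", "critical")

matches = [
    {"policy_matched": True, "content_tags": ["architecture", "aws"]},
    {"policy_matched": True, "content_tags": ["cat_meme"]},
]
incidents = [e for e in matches if should_open_incident(e)]
print(f"{len(incidents)} of {len(matches)} policy matches became incidents")
```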
How Linea Detects Anomalies with Data Lineage + Deep Analysis
Dave Stuart: So I'll just talk a little bit in depth about how the AI detections work. It really takes advantage of the fact that Cyberhaven is collecting lineage information, which we can use to build a semantic model of what an expected workflow is within a given organization.
Here's how that can work. Imagine, on the left side of the screen here, we have this notional data flow where the CFO creates an executive equity award document, shares it with a corporate accountant who downloads it as an Excel file, and then they go and copy and paste from that into Telegram.
Well, we have built a model for that customer, using their historical lineage data, to truly understand what is expected in that environment as far as natural workflows, and what might be an anomalous or suspicious workflow. Every event Cyberhaven collects, we analyze through that model.
And we might say, in this case, given the context of that flow, that John, now that he has that downloaded Excel file, might upload it to corporate storage; that would be very expected. He might email it to other executives; that would be expected. But given the context of that flow, and the history of what we've observed in that environment,
copying and pasting it into Telegram would be very unlikely, and therefore anomalous, and would require further investigation. But we don't just want to surface these anomalies and then require customers to do that further evaluation. We can actually do that within the Linea AI product itself.
On the next slide: once something is flagged as an anomaly, we kick off a more thorough analysis of that event, where we look at the source, the destination, and the full flow of the lineage, including insights into the content, to truly validate whether or not it is a high or critical incident.
If so, we create it within your incidents table and encourage you to review it manually, because we believe we've identified something that may have been missed by your policies but nevertheless would've been high or critical. You can see here an example of that natural language summary; it gives you a quick explanation of what happened.
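A toy version of that idea: learn which (document category, destination) transitions are common from historical lineage, and flag rare ones for the deeper analysis just described. The data and probability threshold are invented for illustration, not how Linea's semantic model is actually built.

```python
from collections import Counter

# Hypothetical historical (document_category, destination) transitions.
history = Counter({
    ("equity_award", "corporate_storage"): 120,
    ("equity_award", "executive_email"):    45,
    ("equity_award", "telegram"):            0,
})

def is_anomalous(category: str, destination: str, threshold: float = 0.01) -> bool:
    """Flag transitions rarely (or never) seen for this document category."""
    total = sum(n for (cat, _), n in history.items() if cat == category)
    seen = history[(category, destination)]
    return total > 0 and seen / total < threshold

# Anomalous flows would then get the deeper source/destination/content analysis.
print(is_anomalous("equity_award", "telegram"))           # True  -> deep analysis
print(is_anomalous("equity_award", "corporate_storage"))  # False -> expected
```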
I'll show you on the next couple of slides more live screenshots from the platform, so you can see what that looks like integrated within the product. So, next slide.
Live Examples: Severity Scoring, Summaries, and AI-Only Detections
Dave Stuart: So this first use case here is really just helping with that incident prioritization. If you look at the top here, here are two incidents side by side that are essentially the exact same workflow:
the user took a screenshot on their corporate device and uploaded that image to their personal or unsanctioned webmail. Now, just looking at the event data, these will look nearly identical, right? It'll be an image file uploaded to personal webmail, and on a Mac they're going to be named exactly the same by the default screenshot application.
So it's really hard to differentiate between these two and understand which might be the highest risk, if this occurs a lot in your environment. And in practice, this does occur a lot in customer environments. What we can see here on the left is that Linea has already done its severity assessment of these events,
so you can distinguish between the first one being a high risk and the second one being a much lower, informational risk. And how did it do that? Well, it's looking at the content here. In this example, the one on the left is an AWS architecture diagram
that had all these infrastructure details, security measures, development, testing, and production environments, et cetera. That was going to personal email, which does represent a high or critical risk, and Linea automatically did that review for us. It generated that severity assessment and, before we even needed to view or preview the content, it wrote that natural language summary so we can immediately prioritize the highest-severity risk first.
We can open up the details, get that one- or two-sentence summary, and we know exactly what happened. If we want to view the content, and we had it captured within the evidence bucket, we can do that as well. But it's performing that analysis for us automatically, right as these events are coming in.
Then just compare that to the complete other end of the spectrum there on the right. It's just a cat meme that the user took a screenshot of and then uploaded to their personal email. We already saw from the severity assessment that it was low; we can read that quick summary to validate why we believe it's low,
and if we ever wanted to view the content, we could do that as well. This is what we're really getting at when we talk about the efficiencies our customers see as they adopt Linea: they can prioritize which incidents are the highest risk first, and then, reviewing each of the details,
they get this quick, concise summary to help them fully contextualize what happened in each one of those incidents. Next slide.
Then, just to show you an example of AI detection: in this case, Linea identified an event that did not match any existing policy and might previously have gone unnoticed given the volume of data being collected, and surfaced it as a potential high risk for the customer to review.
And again here, from the incident table itself, that first little snippet of screenshot at the top is giving you the information to say this is the highest severity of risk, even though it didn't match any existing policy. Linea is actually trying to contextualize what it believes would be a notional dataset or policy name for it.
Then it's similarly giving you that natural language summary, a quick explanation of what happened, what was in the data, and where it was going. Collectively, that is what led it to believe this was a truly high-severity risk.
All three of these examples, by the way, just to note, happen to involve images. It would do just as well for text or spreadsheets or Excel documents or any other type of file being moved; I just wanted to highlight the image ones here for the slides.
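Putting those pieces together, the review experience amounts to sorting incidents by the AI severity and reading the attached summary. A minimal sketch, with the two screenshot examples from the talk recreated as hypothetical records:

```python
# Hypothetical incident records pairing an AI severity with a natural-language
# summary, mirroring the two screenshot examples from the talk.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3, "informational": 4}

incidents = [
    {"severity": "informational",
     "summary": "Screenshot of a cat meme uploaded to personal webmail."},
    {"severity": "high",
     "summary": "Screenshot of an AWS architecture diagram with infrastructure "
                "details uploaded to personal webmail."},
]

# Review queue: highest-severity incidents first, each with its one-line context.
for inc in sorted(incidents, key=lambda i: SEVERITY_RANK[i["severity"]]):
    print(f"[{inc['severity'].upper():>13}] {inc['summary']}")
```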
I believe, Cameron, that might be it for me. I think we're next into the Q&A, and
Cameron Galbraith: Yes,
Dave Stuart: we'd love to hear from people on the call and answer any questions that we can.
Cameron Galbraith: Yeah, that's right.
Q&A: Tuning/Training, Enablement, Blocking, Exceptions, Mobile/Unmanaged
Cameron Galbraith: So if you have any questions, just drop them into the Q&A panel there. I see there are a few that came in,
so I'll start going through some of these; we've got a few minutes for those. So Dave, the first one will be for you, related to Linea. The question is: are customers able to tune or train Linea AI models for their particular orgs? For example, when closing an incident where the destination is not a risk,
can you tune or whitelist that destination from triggering it in the future? I know we've got a few ways that we can tune things, so why don't you talk about that?
Dave Stuart: Yeah, great question. So, right now, the way Linea works, I talked about how we build that model with historical lineage within the organization.
We actually retrain that every month, because we want to take into account any recent changes in the environment and keep an up-to-date view of the expected workflows within the environment. There is a capability to provide feedback to our team: when reviewing incidents, there's a "mark as incorrect" option
that helps provide feedback to our team. On that particular question about closing incidents out as valid or not valid, and having that help retrain the model, that's on our roadmap. It's an area we want to address, because we do want to get those signals from customers
around how they are evaluating incidents and how that might change what we evaluate in the future. It's not being done yet, but we do that automated retraining, so we're constantly learning the expected workflows within the environment and keeping that knowledge up to date.
And we are working toward more of that direct user feedback that you might give through your actions, like closing incidents out.
Cameron Galbraith: Very good. We'll take an easy one next: how do you enable Linea AI for an existing customer? We will reach out;
basically, we will chat with your CSM, so we'll connect with you, and we can enable it for a trial and go deeper from there. Okay, one more question. Let's see here. Okay, a question for you, Dave: how can we enforce block policies for incidents generated by Linea? Can it block inline, or can we convert a Linea finding to a policy or dataset?
Dave Stuart: Yeah, great question. It's currently the latter. The primary goal of Linea, initially, is surfacing insight into risk that you didn't know to look for previously. The process for Linea detecting these incidents is a slightly asynchronous process;
it can take two or three minutes for the AI severity assessment to come in, which is not currently fast enough to support blocking actions. So what we recommend, and how we position the value of the detections, is that they surface areas of risk you may not have been aware to look for.
And then, if that is an area of high concern for you, you can adopt new policies or update existing policies to turn that into blocking actions. That step, from starting with a Linea detection to manually updating policies or creating new ones,
is something we're actually working to automate on our roadmap, to give you further AI suggestions around the appropriate next steps, which could include creating policies or updating existing policies.
Cameron Galbraith: Very good. Here's a good one related to Linea and in general:
has this POC been used on obfuscation techniques as well, i.e., a PDF with sensitive data hidden within it? If I'm understanding the question correctly, then the answer is yes. One of the things that is really valuable about the underlying core technology we have with data lineage is that it is not just relying on the content inspection element.
Because we know where the data came from and how it's been modified, forked, copy-pasted, et cetera over time, if we identify sensitive data early on, then even if it gets into, let's say, a renamed file or another part of a file, it remains identifiable because it was identified as sensitive at some point earlier.
So even if you encrypt that file, we still have the lineage and therefore the knowledge that it is sensitive data. I hope that answers the question; we can follow up offline if there's a more specific case. But yes, one of the major advantages of data lineage is that many of the most common obfuscation techniques you might use to circumvent a traditional DLP simply don't work with Cyberhaven, because we have that lineage. It's not something you can get around.
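The mechanism Cameron describes can be sketched as label propagation over a lineage graph: sensitivity attaches where the data is first identified, and derivation edges carry it forward, so renaming or encrypting the file later doesn't hide it. A toy illustration, not the actual implementation:

```python
# Toy lineage graph: each file points to the file it was derived from
# (renamed, forked, copy-pasted, encrypted). Purely illustrative names.
derived_from = {
    "report_final.pdf": "customer_list.xlsx",
    "notes.txt":        "report_final.pdf",
    "archive.zip.enc":  "notes.txt",          # renamed *and* encrypted
}
labeled_sensitive = {"customer_list.xlsx"}     # identified at origin

def is_sensitive(path: str) -> bool:
    """Walk the lineage chain back to origin; one sensitive ancestor taints all."""
    while path is not None:
        if path in labeled_sensitive:
            return True
        path = derived_from.get(path)
    return False

print(is_sensitive("archive.zip.enc"))   # True, despite encryption and renaming
```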
Let's see, I think we've probably got time for one more quick question. So, Dave, on Linea: the question is how to make sure that Linea is correctly correlating with lists and excluding exceptions while detecting.
Dave Stuart: Got it. Yeah, that question sounds like it's potentially from a customer that might already be using Linea, or running a POC of Linea.
For background, for everyone else: there is a capability within Linea on the exception side, where you may know certain situations where you don't want AI to be alerting on incidents, or even reviewing those events.
A common use case for that: I mentioned how we retrain that model every month, but there could be, let's say, some subsidiaries or partnerships, or maybe acquisition talks underway that are at the early stages, so the model doesn't have a good understanding of them.
It's seeing legitimate sensitive details going to new external domains, or ones previously not seen at high frequency, and it will be understandably concerned about that and might trigger incidents off of it. So there is a capability to use exceptions, with similar rule definitions to what you can do with datasets or policies, to say: don't generate Linea incidents when things are going to this domain,
because I know I've just started talking with this company last week, we might acquire them, and so we're sending them sensitive data and that's okay. So there is a manual capability to fine-tune and restrict things, and it does support the use of lists.
And certainly, if there are issues they're running into, if they're using Linea already, we'd be more than happy to take that offline; we can go into detail, take a look at their environment, and make sure it's working.
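A minimal sketch of such an exception rule, assuming a simple domain allowlist that suppresses Linea incidents; the domain and function names are hypothetical:

```python
# Hypothetical exception list: destinations (e.g., an acquisition target's
# domain) where Linea should not generate incidents. Names are illustrative.
exception_domains = {"target-acquisition.example.com"}

def should_generate_incident(destination_domain: str, ai_says_risky: bool) -> bool:
    if destination_domain in exception_domains:
        return False            # known-legitimate flow, suppress the incident
    return ai_says_risky

print(should_generate_incident("target-acquisition.example.com", True))  # False
print(should_generate_incident("unknown-domain.example.net", True))      # True
```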
Cameron Galbraith: Yeah, that's great. Okay, I think we're probably at time, but let's do one more and then we'll follow up on the rest. One more on Linea AI: can Linea AI capture interactions using a personal account as well? And then a follow-up to that: can the product capture interactions on mobile devices?
Dave Stuart: Yeah, two great questions. I'll answer them separately. On that first question: absolutely. In fact, all the examples I showed there, at least the first two, were files uploaded to personal webmail, and so we're able to look at the acting-user information, the signed-in account information, in that case the Gmail account, to determine that it was a personal Gmail account and not a corporate Gmail account.
And understandably, the risk assessment that Linea does would be completely different if it was uploaded to a corporate account, because the data would be staying in a corporate environment, versus being uploaded to a personal account. So that is
part of the evaluation: the severity assessment that Linea does looks at all the lineage information that Cyberhaven is generating, including information about the acting user, to help tell apart
the personal use case versus the corporate use case, as well as looking at the content. Because no one cares about cat memes going to personal webmail; they do care about corporate sensitive information going to personal webmail. On the question about whether it can capture interactions on mobile devices:
I'll talk more broadly about unmanaged devices, and how some of our recently developed capability, the cloud sensors, can really help there as we continue to roll out more cloud sensor integrations. OneDrive is the great example right now.
We just recently released a really powerful OneDrive cloud API sensor. It can detect data movements from OneDrive to unmanaged devices, and Linea can actually surface AI detections for that: where it sees what it believes is sensitive data, through its AI analysis, going to an unmanaged device, that can by itself be an AI detection that gets surfaced.
When I was creating the slides yesterday, there were a couple of different examples I was considering, and I thought the AI detection example might actually have been an unmanaged device, but I just looked back and it was to a generative AI service.
But it absolutely can do that, in that case for OneDrive. And we're rolling out further cloud API sensors, for SharePoint and Google Drive and others, that will support that same capability as well.
Cameron Galbraith: Awesome, thanks Dave. Yeah, we've got some really exciting things coming up along those lines that we'll be talking about publicly in the next month or so,
so stay tuned for further updates.
Wrap-Up: Next Steps, Report Download, and Contact Info
Cameron Galbraith: We're gonna end it there. I know there are a few questions we didn't get through, but we'll follow up individually on your individual use cases. The last thing I'll leave you with: if you want more information on some of the product capabilities we showcased,
or more information on Cyberhaven generally, access to the report, et cetera, feel free to get in contact with us. If you're a current customer, contact your customer success manager; if you're a partner, or we're not working together yet, we'll follow up, because we'd love to start that conversation with you.
So I'll end it there. Thank you so much, everybody, for joining. Hopefully this was really informative for you. We're going to be sharing a lot more insights from our Cyberhaven Labs team on a more regular basis, because there's really some cool stuff they're coming out with. So with that, I'll leave it.
Thank you very much, and everybody have a great day.
Thank you.