
Cyberhaven Unlocked: Linea AI for Analysts


October 30, 2025

1 min


Introduction and Webinar Overview

Amy Gurell: Okay, welcome everyone. By way of introduction, my name is Amy Gurell. I'm the Director of Professional Services here at Cyberhaven, and I want to thank you for joining our first webinar in our new series, Cyberhaven Unlocked.

Understanding Linea AI

Amy Gurell: Today we're gonna show you how Cyberhaven's Linea AI helps security teams move from manual review and policy tuning into a world where AI agents handle the heavy lifting: finding the unknown risks, explaining what happened, and helping you respond faster.

I'm gonna review a little bit about Linea AI, and then I'll pass it off to one of our senior data analysts, Nick Dobbs, to walk through the console with some live examples. At a high level, Linea AI delivers three outcomes for every customer: it helps analysts resolve incidents faster by summarizing what happened and why; it detects unknown risks that policies might miss, using our data lineage intelligence to see how information normally flows; and it prevents future incidents by giving you context to refine policies so that you're not chasing the same alerts twice.

And when we say AI agent, what we really mean is a set of capabilities that act on your behalf. They never sleep, they never forget, and they're tuned into how your organization actually moves data.

Key Components of Linea AI

Amy Gurell: Under the hood, these results come from three interlocking components. Cyberhaven's Large Lineage Model, or LLIM, is unique to each customer; it learns your data flows and flags behaviors that don't fit. The policy engine uses your rules and datasets to define what matters. And the LLM analysis layer reviews both policy incidents and anomalies to assess severity and produce a clear, natural-language summary. The key takeaway here is that Linea AI operates within your tenant: no data leaves your environment, and nothing is used to train external models.

We describe Linea as having two AI agents working side by side: the detection agent, which reviews every event and applies what we call semantic risk detection, reasoning about the meaning of user actions to find real threats and cut false positives; and the analyst agent, which automatically investigates confirmed incidents, analyzing context, screenshots, and behavior history to tell you what happened, why, and what to do next.

Together, they automate the manual work of a tier-one analyst. So those are the three pieces working together: detect the unknown, analyze the known, and learn from every case to make policies smarter. Now, to take us into the product for examples of how those agents actually work day to day, and most importantly, how to use Linea's agents as an analyst to investigate, tune, and hunt in real time, I want to introduce Nick Dobbs, one of our senior analysts here at Cyberhaven, to dive in. And just a note: if you have any questions as Nick goes through this, please put them in the Q&A, and we'll have time for them at the end.

Nick Dobbs: Thanks Amy.

Live Console Walkthrough

Nick Dobbs: So let me transition us into the console. One moment. Alright. What you're looking at right now is one of our Linea demo tenants, an environment that we use internally to test new features.

Incident Examples and Best Practices

Nick Dobbs: From here we'll walk through several incident examples: one where Linea triages a normal policy incident, one where it finds something completely outside of policy, and one where it helps make a noisy policy smarter. We'll also cover some best practices, architecture theory, and a couple of upcoming features towards the end.

My overall goal here is to give you a clear picture of how Linea's agents support you and each other throughout your data protection journey. So right now we are looking at the Risks Overview page. This is where I like to land in any new tenant to get my bearings. This view gives you a clean mapping of data sets to the policies that are monitoring them, along with the volume of raw events under each.

Think of it as a pulse check, a quick way to see which areas of your environment are busy and where your attention might be needed. You'll notice that some policies, which I've highlighted here for ease of viewing, have a little shield-and-star icon. That means Let Linea Decide is enabled.

For these policies, Linea's analyst agent is going to automatically review each event behind the scenes and only promote the ones it deems high or critical risk to the Incidents tab. It's basically policy refinement in real time. If you've ever had one of these monitoring policies that floods your queue with routine matches, like in this case every download to an endpoint, this is what keeps those from turning into a wall of meaningless alerts.

Everything the AI suppresses still exists here as a raw event, so you never lose audit visibility; you just don't have to triage it unless you choose to. So again, this page gives you a quick sense of where Linea is active across monitoring policies and which high-volume flows might require some deeper attention.
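The promotion behavior Nick describes, every raw event retained for audit while only AI-rated high or critical events surface as incidents, can be sketched roughly as follows. This is an illustrative sketch only; the event fields (such as `ai_severity`) are invented for the example and are not Cyberhaven's actual schema.

```python
# Sketch of "Let Linea Decide" promotion logic. Field names are
# assumptions for illustration, not Cyberhaven's real event schema.
PROMOTE_AT = {"high", "critical"}

def triage(raw_events):
    """Keep every event in the audit log; promote only high/critical
    AI-rated events into the incidents queue."""
    audit_log = list(raw_events)  # nothing is ever dropped
    incidents = [e for e in audit_log if e["ai_severity"] in PROMOTE_AT]
    return audit_log, incidents

events = [
    {"id": 1, "ai_severity": "low"},
    {"id": 2, "ai_severity": "critical"},
    {"id": 3, "ai_severity": "medium"},
]
audit_log, incidents = triage(events)
# audit_log retains all three events; only event 2 is promoted
```

The point of the shape is the asymmetry: suppression affects only what reaches the triage queue, never what is stored.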

All right. We're now gonna go into a couple of these incidents and take a look at what the analyst agent actually produces for you every time a policy is matched. Cool. So what we're looking at here is a policy incident that matched our Flows to Unsanctioned Cloud Storage rule, from the Customer Data data set, which triggered our Linea analyst to build out the assessment that you're seeing on your screen.

Before we dive in, it's worth calling out that what you're seeing is included in the Advanced tier of Cyberhaven's license. Once you have it, every single policy violation in your environment automatically gets this same level of AI triage. So up top we have the AI-assessed severity: critical.

Next we have the general summary, which is your "what happened here" statement. It always calls out both the provenance and the destination of the data. We'll revisit how those tags are generated when we get into data labels; it's pretty cool stuff. In this case, our provenance is corporate employee records and the destination is personal cloud storage and documents.

Together, these tags anchor Linea's severity judgment. In this example, you can see that the user Derek OV uploaded a zip archive containing sensitive client data to a personal Gmail account. That's what pushed this to the critical level of severity. Next we can see the data summary. This is where Linea explains how it reached its conclusion about the data's provenance and risk.

Because this policy includes content inspection, Linea is able to combine the lineage context that you can see here with the actual content of the file, plus some event metadata, to reach its conclusion. And again, since we have content inspection, we're able to view the full content.

As you can see here on your screen, this is important because it allows you to sanity-check the AI's reasoning, i.e., was this really PII, or is it an edge-case false positive based on some strange regex match? Underneath all that we have the destination summary, which is going to show you where that file was headed, be it a corporate system, personal cloud, or in this case a personal Gmail account.

You'll also see whether the transfer succeeded or was blocked, again depending on your policy configuration. One quick clarification here: Linea itself is not performing enforcement. It can't block or warn on its own, and that is intentional. Linea's job is to analyze and prioritize, not surprise your end users.

We leave the blocking and popups to the policies that you manually create. Under all this is usage insights. This is where Linea exposes the underlying Large Lineage Model data that it used as part of this assessment. It generally gives you a one-sentence snapshot of historical behavior, like in this example: in the past 90 days, the company regularly moved data to drive.google.com, but not this user in particular. And we can click into that.

So in general, what you get here is transparency into the AI's reasoning, and it includes information about the user, the department the user is assigned to, and how the overall company uses this destination. Again, this is informed by directory mappings, and we'll get to that in the architecture section.

The next and final piece is the escalation suggestions. You get three options here for every event: ask the user to explain the incident, ask the manager to review it, or notify HR. Clicking any of these will open a popup where Linea can draft a context-appropriate message for you, pulling from the same summary that you're observing.

So if we do that real quick, give it a moment to generate.

Cool. So you can take this summary and simply copy-paste it into your email or ticketing system to contact the end user. When you think about this from the analyst perspective, the summary is telling you the story, and the provenance and destination are filling in the ownership and direction.

The data and destination summaries show the what and the where, usage insights explain whether it's normal activity, and the escalation actions allow you to remediate quickly. Cool. So now that we've seen how to validate Linea's reasoning, let's use that context in practice. This is a filtered view of policy incidents where the human policy severity is set to critical or high, but Linea has updated that severity to medium or lower after its assessment.

Now, you can also pretty easily see this in reverse, where you have lower human severity and higher Linea-assessed severity. I would also note that what we're looking at is a demo environment. In production environments, we typically see a much higher proportion of severity downgrades by Linea.

So just keep that in mind as you're looking at these examples. Now, a general rule of thumb here is that when Linea is consistently downgrading a policy, it's a cue to narrow the scope of that policy. And when it upgrades the severity, it's probably a good idea to investigate that and perhaps tighten up your controls a little bit.

So that's one way that you can start using Linea as a feedback loop and not just a filter. Next, we're gonna look at what happens when Linea Detect gets involved and finds something with no policy involved at all. That's where it really starts to shine.
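That downgrade/upgrade rule of thumb lends itself to a simple metric you could compute over exported incidents. This is a hedged sketch with made-up field names (`policy`, `human_severity`, `ai_severity`), not a Cyberhaven API.

```python
# Sketch: per-policy counts of AI severity downgrades and upgrades
# relative to the human-set severity. Field names are illustrative.
RANK = {"informational": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

def severity_deltas(incidents):
    """Tally how often Linea disagrees with the human severity, per policy."""
    stats = {}
    for inc in incidents:
        s = stats.setdefault(inc["policy"],
                             {"downgrades": 0, "upgrades": 0, "total": 0})
        s["total"] += 1
        delta = RANK[inc["ai_severity"]] - RANK[inc["human_severity"]]
        if delta < 0:
            s["downgrades"] += 1
        elif delta > 0:
            s["upgrades"] += 1
    return stats

incidents = [
    {"policy": "Downloads to Endpoint", "human_severity": "high", "ai_severity": "low"},
    {"policy": "Downloads to Endpoint", "human_severity": "high", "ai_severity": "medium"},
    {"policy": "Cloud Uploads", "human_severity": "informational", "ai_severity": "critical"},
]
stats = severity_deltas(incidents)
```

A policy with a high downgrade share is a candidate for narrowing; one with frequent upgrades is a candidate for investigation and tighter controls.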

So again, this detection came straight from a non-conforming event, given the context of the Large Lineage Model, which is unique to each tenant. These live under Linea AI incidents on the Incidents page. Just a quick note on this: Linea Detect and Let Linea Decide represent the next tier of the Cyberhaven license and are thus part of the Enterprise package.

So it's additive, on top of the Linea analyst. Now, you might notice that each one of these incidents has a data set and a policy with a little three-star icon. This is Linea's way of showing you that it did its own classification on this event, and which data set or policy it would recommend to cover the gap that it's identified.

So again, it's not enforcing rules; it's showing you where your blind spots are. Something else to note here, on the subject of the LLIM: from the moment that your environment goes live, Linea is going to start training your tenant's model with whatever lineage data it has available.

So for new tenants, the model starts learning right away. You can expect to hit peak accuracy once you have about 180 days of data history. Of course, it's gonna surface meaningful insights before that point, but we like to leave that as kind of a safeguard. For existing tenants, if you have any historical data at all, it'll begin training immediately on the full history of your environment, and from there it retrains monthly, adding each new month's worth of data on top of its existing model. So in effect, it's an additive enhancement.

So the LLIM is going to learn what is normal for your organization, which users, apps, and destinations interact, and then flag movements that break those patterns. When it finds something non-conforming, however, it's not going to just alert you.

It's going to feed that event to the same Linea analyst agent that we observed earlier. That analyst agent will read the full lineage, classify the provenance and destination, and generate a severity score and a written summary. Then, only if the severity that the Linea analyst generated crosses the high or critical threshold does the event get promoted into your incident queue. This is the reason that Detect incidents can be lower in volume, but are almost always worth taking a look at. So this one's a great example. In this particular incident, we have the user Alison Fellas uploading a text file named Encrypted Key.txt to a personal Google Drive account.

Right away the AI summary is calling out the essentials. We see provenance tagged as corporate credentials, and we see the destination tagged as personal cloud storage and documents. The AI severity here is again critical, and when we open the data summary, it becomes clear why: you can see here that we have a straight-up private key in plain text.

The destination summary confirms that the upload went to a personal Gmail account, while the usage insights note that although the company as a whole often does send data to Google Drive, this user in particular never has. And as always, you've got the same escalation options below: ask the user for an explanation, escalate to the manager, or notify HR.

So it's this deviation, which is a factor of content plus context, that gives Linea high confidence that this was an event of real risk and not just noise. And here's how I like to tell analysts to think about Detect incidents: the regular policy incidents that we just looked at confirm what you already know.

They validate your written rules. Detect incidents show you what you didn't consider: movements between those rules. And that's great for two main use cases. The first is threat hunting: Detect gives you pre-filtered leads that already carry good narrative context. The second is policy discovery, which is the process of turning repeat patterns surfaced by Detect into new data sets or policy rules.

And I'd also say that even if you only check Linea AI incidents once a month, start with anything marked high or critical on AI severity. From there, if you find something legitimate, that's a great opportunity to tighten up a policy or define a new data set, or perhaps both.

And over time this will make your model much more efficient, because as you increase the coverage of those policies and data sets and make them specific, Linea Detect will have less surface area to cover and therefore give you more accurate insights into your blind spots. Okay.

The third and final type of incident that we're gonna look at is AI-plus-policy incidents. These happen when a normal policy match, on a policy set to monitor mode, also gets an AI severity assessment through Let Linea Decide. These are interesting because Linea is not finding anything new here; it's instead reevaluating something a rule already caught.

But through the lens of full lineage and context. I like to think of it like getting a second opinion from a really sharp analyst, one who's already seen this same pattern a hundred different times across your environment. When you enable Let Linea Decide on a monitoring policy, the AI doesn't just look at the file name or content that triggered it, although of course it will fall back to that if it lacks other context.

Instead, it's looking at the entire data flow: who touched it, what it was about (i.e., provenance), where it went (destination), and then asks, is this really worth surfacing? So in practice it's rescoring existing policies based on context. This example shows the user Taj via Willis downloading a file called Entity Risk Intel from Google Drive to a local file system.

The policy that triggered here is Downloads to Endpoint, which fires anytime someone pulls data from any external location down to a workstation, i.e., quite noisy. But if you look at the AI summary up top, Linea assigned a high AI severity, while the base policy itself is set to informational. That delta matters; it's Linea telling you, hey, this rule caught something real.

In this case, our provenance tag is corporate security reports, and the destination is an unmanaged endpoint, meaning it's outside of company control. This combination is why Linea bumped the severity: the rule itself was broad, but the context of this specific match was determined risky. If we look at the usage insights, we can see that this user has never moved data to this unmanaged endpoint.

It looks like a mobile device, from the IP address there. So, if we consider the inverse: had this been a managed endpoint, it's highly likely that Linea would've recognized it as expected and quietly suppressed it. And this is what we mean by policy refinement in real time. The real power of Let Linea Decide is that it empowers you to take a high-volume audit rule and turn it into a clean, SIEM-ready incident feed,

while every raw match remains fully auditable within the Cyberhaven tenant if you need it. In this demo tenant, for the Downloads to Endpoint policy alone, we saw roughly a 73% reduction in surfaced incidents, which is very much in line with the results that we get in real customer environments. And that's what makes it such a powerful tool, right?

It gives you the capability to keep your compliance trail intact, strip out the noise before it hits your detection stack, and make sure that whatever does land in your downstream SIEM or SOAR is already triaged and ranked by risk, not just raw logs.

Amy Gurell: Yeah.

Nick Dobbs: In short, Let Linea Decide is quietly doing the hard work of connecting policy detection, AI context, and, potentially, downstream automation into one clean flow.

Alright. One other thing that I wanna show you here is that you have the ability to look at all of these AI-plus-policy incidents. You can see 39 of them here for the Downloads to Endpoint policy, out of 144 actual hits against it. So, you know, pretty good results.

Data Labels and Classification

Nick Dobbs: Now I want to talk about data labels, because this is where all of the context actually originates. This is a new feature for Linea, and it forms the backbone for everything in Linea, and even for the broader DSPM platform that we're building towards. It's what lets us move beyond static, keyword-based classification into something that is truly context-aware.

One where the platform actually understands what your data is, where it came from, and, more importantly, who it belongs to. There are a few different modes of classification that live under this umbrella, but they all share the same goal: make data labeling fast, flexible, and human-readable. So let's start with data types.

These are the classic "what kind of document is this?" labels, things that you can see here like legal contracts, customer feedback, or code. But instead of having to build massive training sets or write complex regex rules, you can define them in plain English. In the case of code, for example, you can simply define it as "software code, scripts, algorithms, code repositories," and note that it must be actual code or a photo of code.

Now, the model understands that naturally because it's backed by a large language model with broad world knowledge. It knows what code looks like, how it's structured, and what distinguishes it from something like a security report. You can get quite granular with this: you can specify particular languages, potentially even certain kinds of functions, stuff like that.

So again, the real power here is the semantic classification. And the best part is that once you define a label, you can test it right here in the console. So if we open this Test button here and upload some sample data, we can let that run for a moment. What's cool about this is that it's designed to be quick and iterative; you don't need to be a data scientist, right?
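Conceptually, a plain-English label is just a description the model reasons over, plus a define-then-test loop. The toy sketch below substitutes a crude keyword check for the LLM purely to illustrate that loop's shape; nothing here is Cyberhaven's implementation, and all names are invented.

```python
# Toy stand-in for plain-English data labels. The real system classifies
# semantically with an LLM; here a keyword check stands in, just to show
# the define/test iteration an analyst would do in the console.
labels = {
    # The plain-English definitions the model would actually reason over:
    "code": "software code, scripts, algorithms, code repositories; "
            "must be actual code or a photo of code",
    "legal contracts": "agreements, terms, and clauses between parties",
}

def classify(text, keyword_hints):
    """Return the first label whose hints appear in the sample, else 'other data'."""
    lowered = text.lower()
    for label, hints in keyword_hints.items():
        if any(hint in lowered for hint in hints):
            return label
    return "other data"

# Crude stand-in signals for the two labels above:
hints = {"code": ["def ", "class ", "import "],
         "legal contracts": ["whereas", "hereinafter"]}
sample = "def transfer(amount):\n    return amount * rate"
result = classify(sample, hints)  # -> "code"
```

The "other data" fallback mirrors what Nick sees next: a sample that matches no label definition, which is itself a useful signal that the label text needs refining.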

In this case, it got classified as "other data," which is interesting. But it's a good way to validate whether or not you've written good labels. The next thing we want to talk about is data provenance. This is where Cyberhaven is really gonna start differentiating itself, because we're not just looking at what a document is, but where it came from and who it's about.

It's what helps the AI reason about ownership and intent. A simple example: every tax season, you've got employees who are going to send themselves their own W-2s, usually from a work machine to their personal email. Traditional tools will simply see a financial document going to a personal destination and panic, flagging it as data exfiltration.

But with provenance, Cyberhaven can distinguish that the W-2 belongs to that same employee, that the content of the W-2 is about that user, and that it's being handled by its rightful owner. Therefore, it lets it go. However, if that same person were to send someone else's W-2, then the provenance won't match.
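The W-2 reasoning just described, allow when the document's subject matches the person handling it, flag otherwise, reduces to a small ownership check. This is a hedged sketch; the field names (`subject_user`, `acting_user`, `destination_type`) are invented for illustration.

```python
# Sketch of provenance-based ownership reasoning for the W-2 example.
# All field names are illustrative assumptions, not a real schema.
def assess_transfer(event):
    """Allow a user handling their own document; flag someone else's
    document headed to a personal destination."""
    going_personal = event["destination_type"] == "personal"
    owns_content = event["subject_user"] == event["acting_user"]
    if going_personal and not owns_content:
        return "flag: corporate data leaving the environment"
    return "allow"

own_w2 = {"acting_user": "amy", "subject_user": "amy",
          "destination_type": "personal"}
someone_elses_w2 = {"acting_user": "amy", "subject_user": "nick",
                    "destination_type": "personal"}
# assess_transfer(own_w2)          -> "allow"
# assess_transfer(someone_elses_w2) -> "flag: corporate data leaving the environment"
```

The same financial document and the same destination produce opposite verdicts; only the provenance differs, which is the point of the feature.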

And that's when it flags it as corporate data leaving the environment. And that's the magic of context, right? It brings real-world reasoning into your classification. And we're just getting started. The next piece that we're gonna roll out, it's not in this build, but it's coming soon, is automatic taxonomy, which will group data into thematic clusters.

Things like projects, case files, or initiatives, automatically. And instead of asking analysts to hunt through hundreds of files to figure out what's important, Cyberhaven will simply surface it for you. So you can think of it as data discovery meets project awareness. Tying all of this together will be the also-upcoming AI assistant.

It'll be the interface layer that lets you create, test, and explore labels using natural language. The intent is to enable you to ask questions like "show me all of the data labeled customer contracts that's left our environment this week," or even "create a label for training data sets related to ML models."

The goal is to turn complex data governance into a simple conversational process, so everyone, not just power users, can benefit from the full platform. So again, when you think about data labels, I want you to think of them as the foundation that powers everything else within Linea and the broader Cyberhaven ecosystem.

They're the connective tissue, really, between content, context, and classification. They're what make context-aware AI possible, and they're the key to scaling risk detection without drowning in endless manual rules. Alright, now let's switch gears a little bit.

Architecture Best Practices

Nick Dobbs: I want to talk more broadly about some architecture best practices.

Because, you know, we get this question a lot: can I simply flip on Linea everywhere? And the short answer is yes, you can. But I would argue that the smarter move is some kind of phased rollout. So here's what I would recommend. The first step is to ensure that you're feeding Linea clean data.

The analysis it does is only as good as the signals it can observe, and that means keeping both lineage and directory data healthy. On the lineage side, you need to make sure that your endpoint agents, browser extensions, and cloud connectors are all up to date. If any of them go dark, you might start seeing broken event chains, and that'll limit the accuracy with which Linea can reconstruct events.

The other half of the equation is directory context. Linea uses your identity provider, typically Microsoft Entra or the equivalent, to understand user relationships and org structures. That context is what powers the activity insights and usage insights that you see within incidents.

Things like we saw earlier, right, where this department has never sent data to this domain. Without that directory link, those insights will still appear, but they'll be far less informative; you might be lacking peer-group comparisons, for one. So think of it as feeding both halves of Linea's brain, right?

The lineage data shows what happened; the directory data explains who it was and how unusual it was for that identity. When both are healthy, Linea's summaries are sharper and its findings are easier to act upon. The next step is fairly typical, but we recommend that you start with policies that protect your crown-jewel data.

And those are typically gonna be warn or block policies that you already know carry real business impact. From there, Linea's automated analyst will step in on those existing policies to evaluate them, summarize them, and assign severity. You'll see fast payoff there and speed up triage.

At the same time, we recommend enabling Let Linea Decide on your broader monitor policies. It's a quick way to see how much noise Linea can actually filter out for you, and it'll clean up your queue downstream. And while all that's happening, the Detect agent runs in parallel, quietly learning your normal data flows and surfacing non-conforming movements that aren't tied to policy.

So it watches your back even as you're fine-tuning. Thirdly, you need to consider integrations downstream. When you connect Cyberhaven to something like Splunk, Chronicle, or Cortex, right, any of these SIEM or SOAR platforms, you'll want to account for how incidents are gonna flow out of the platform.

And so that's going to be over here.

Nope. Integrations. There we go. All right, a couple things here. You come into Settings and look at the integrations, let's say Tines in this case, and we'll select incidents, because that's what you're gonna want to export. You'll need to include AI summaries, so check that box, and also subscribe to incident updates.

With these settings enabled, you'll get an initial log as soon as the policy triggers and creates an incident, and a second one a few minutes later, after Linea finishes its analysis and adds the summary. That way you can ensure that you're getting fully enriched events downstream, but you do need to account for the fact that you'll get two events per incident.
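On the receiving side, a common way to handle those two events per incident is to merge them by incident ID, keeping the later, enriched record. A minimal sketch, assuming invented field names (`incident_id`, `updated_at`, `ai_summary`) rather than Cyberhaven's actual export schema:

```python
# Sketch: collapse the two exports per incident (initial + AI-enriched)
# into one record in a downstream SIEM/SOAR pipeline. Field names are
# illustrative assumptions, not Cyberhaven's documented schema.
def merge_incident_events(events):
    """Keep one record per incident, preferring the later event, which
    normally carries the AI summary after Linea finishes its analysis."""
    merged = {}
    for event in events:
        key = event["incident_id"]
        current = merged.get(key)
        if current is None:
            merged[key] = event
            continue
        # ISO-8601 timestamps compare correctly as strings.
        newer = max(current, event, key=lambda e: e["updated_at"])
        older = min(current, event, key=lambda e: e["updated_at"])
        combined = {**older, **newer}
        # Don't let a missing summary on the newer event erase an earlier one.
        combined["ai_summary"] = newer.get("ai_summary") or older.get("ai_summary")
        merged[key] = combined
    return merged

events = [
    {"incident_id": "INC-1", "updated_at": "2025-10-30T10:00:00Z",
     "severity": "informational"},
    {"incident_id": "INC-1", "updated_at": "2025-10-30T10:04:00Z",
     "severity": "high",
     "ai_summary": "Corporate data sent to personal cloud storage."},
]
result = merge_incident_events(events)
```

Whether you merge like this or simply key your downstream alerts on the incident ID, the goal is the same: one enriched record per incident instead of a duplicate pair.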

Additionally, starting with platform version 25.09.01, Cyberhaven introduced a bidirectional incident API that closes the loop between Cyberhaven and external tools. You can see that right here: come into Administration and look at the API specification. This will depend on your Cyberhaven tenant version.

So if you don't see this, you would need to upgrade your tenant. What it does is allow your SOAR, SIEM, or ticketing system to change incident status directly within the Cyberhaven console. Using a simple PATCH call, you can do things like update the status, set a close reason, add notes, assign ownership, and a few other things.
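To make the shape of that PATCH call concrete, here is a hedged sketch. The endpoint path, header names, and JSON fields below are illustrative assumptions, not Cyberhaven's documented contract; check your tenant's API specification under Administration for the real one.

```python
# Hypothetical sketch of the bidirectional incident API described above.
# URL path and payload fields are invented for illustration.
import json
import urllib.request

def build_incident_patch(tenant_url, api_token, incident_id, fields):
    """Build a PATCH request updating one incident. `fields` might carry
    status, close_reason, notes, or assignee (names assumed, not documented)."""
    return urllib.request.Request(
        url=f"{tenant_url}/api/incidents/{incident_id}",
        data=json.dumps(fields).encode("utf-8"),
        method="PATCH",
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
    )

# Example: a SOAR playbook closing a triaged incident as benign.
request = build_incident_patch(
    "https://tenant.example.com", "TOKEN", "INC-42",
    {"status": "closed",
     "close_reason": "benign_confirmed",
     "notes": ["Verified with the user; rightful owner of the document."]},
)
# urllib.request.urlopen(request) would send it; omitted here.
```

Driving this call from your ticketing system's "close" hook is what keeps the two queues in sync, as Nick describes next.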

This is how you're gonna keep Cyberhaven and your external systems in sync, right? Eliminate duplicate queues, remove stale open incidents hanging out in the console, and when it's all connected, it'll make your workflow feel much more seamless. And it has the downstream effect of ensuring that your analysts are spending less time bouncing between tools and more time remediating.

Step four is to use exclusions intentionally, right? And we'll get to that right now. This is the Linea exclusions page. This is where you decide what Linea should ignore outright, and it's really the governance side of the house. This exclusion engine is the same policy engine you would use to build data sets and policies.

So for those of you familiar with it, you can get quite granular. Most teams in the wild are using it to exclude trusted partners or subsidiaries, or to honor privacy or compliance carve-outs where AI review simply isn't appropriate. Think of it as your tuning layer, right? If you're seeing some noise from Linea, you can come in here and exclude whatever it's looking at that you know to be a false positive.

And one final thing on this: exclusions apply globally, to both Linea Detect and the automated analyst. So if you exclude something here, neither agent will analyze it, even if it does generate a valid policy match. So do be intentional, because you're defining the edges of what the AI system can see. All right.

Before we wrap up the hands-on part, I wanna zoom out even further and close with how to think about Linea day to day, because we see it as valuable in two very different ways. Tactically, think about Linea as a time multiplier for your team, right? It's there to speed up triage and investigation.

It's there to highlight unknown unknowns, and it filters out noise from broad audit or monitor policies with Let Linea Decide. On the strategic layer, Linea becomes your feedback loop, right? It shows you where your policies and data sets might need tuning when you compare human-defined policy severity to AI-assigned severity, and it helps you discover new data sets and policies and close coverage gaps based on what Detect is surfacing.

It gives your SIEM and SOAR richer context, so alerts downstream already come with a story, and it helps you maintain a healthy governance model: clear policies, intentional exclusions, and clean lineage. So put simply, Linea isn't there to guess, right? It's there to maintain a living map of how your data behaves and help you separate what's truly unusual from business as usual.

And that's the quick tour of Linea in action. So, I guess, final closing thoughts: just keep in mind that Linea is improving in accuracy month over month, and again, you get better outcomes as you tighten up what it can and can't see, and how your data sets and policies are configured.

And if you haven't already, I recommend that you talk to your CSM about enabling Linea on a trial basis within your tenant.

Q&A Session

Nick Dobbs: Now we'd like to open things up for some questions.

Amy Gurell: Thanks, Nick. There are a few questions in the chat, so I'm just gonna read you the first one here: does Linea automatically generate incidents for block policies?

Nick Dobbs: So it depends, but yes, right. It's going to generate incidents for block policies if you have Linea analyst, right? So, any policy match at that point.

Amy Gurell: Okay. Two quick questions here. One, you mentioned keeping agents up to date. Will it work if you're running on N-1 or N-2? I'll ask the second part after that.

Nick Dobbs: yes, it should.

I believe the endpoint agent version itself won't affect whether or not Linea is working, but it will potentially affect the quality of the event data that Linea is basing its analysis on. So the more up to date your endpoint sensors, and really all of your sensors across the board, the more reliably we can expect Linea to perform.

Amy Gurell: Great. And the second part of that question is: you mentioned an AI chat interface. Can this be used to help create and implement protection policies?

Nick Dobbs: That's a good question. I'll have to go back and talk with product about that, but I believe that's the intention. I just don't want to pigeonhole them.

Amy Gurell: Okay. We have someone from product on the call who might be able to help us too, so we'll circle back on that question. And then: how do we define what is considered internal data?

Nick Dobbs: Gotcha, yeah. So that's where the provenance tags come in. You should be able to find pieces of internal data and run tests on them in the Data Labels tab within your tenant to determine whether or not they'll be flagged as internal.

Linea does a pretty good job of it out of the box, though, so really it shouldn't be a huge factor as long as it has the ability to inspect the content.

Amy Gurell: Great. How are you measuring improvement over time?

Nick Dobbs: Yeah, I think that's a great question, and I'd like to go back to product and get some very specific details on that.

But in short, we do have some usage metrics and stats that are tracked on the backend. Beyond that, it's a little bit above my pay grade.

Amy Gurell: Mm-hmm. If we enable Let Linea Decide, can we lose visibility on our lower-severity events?

Nick Dobbs: No, you'll still retain full auditability on all events that Let Linea Decide is observing.

The main difference is... okay, let me back up a little bit. We see this a lot in production environments: customers will enable a monitoring policy to always generate incidents because they want to export all of those audit events into a downstream SIEM or SOAR platform and retain the audit record there.

With Let Linea Decide, think about it a little differently: instead of simply sending every auditable event of that class, you're now only sending the ones a human might actually want to look at. You still retain the rest within your console; you can go into the risks overview page, navigate to that policy, and see those events there just fine.

So you're not losing data by enabling Let Linea Decide. You're really just paring down that total volume into something much more manageable for your on-the-ground analyst team.
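To make that routing model concrete, here's a minimal sketch in Python. This is purely illustrative, not Cyberhaven's implementation: the event dictionaries, the `ai_flagged` field, and the `partition_events` helper are all hypothetical stand-ins for Linea's severity assessment and the downstream SIEM export path.

```python
# Hypothetical sketch of the "Let Linea Decide" forwarding model:
# every event stays in the console's audit record, but only the
# events the AI flags as review-worthy are exported downstream.

def partition_events(events):
    """Split audit events into a SIEM-bound subset and the full record.

    `events` is a list of dicts; "ai_flagged" is a stand-in for
    Linea's severity assessment, not a real API field.
    """
    forwarded = [e for e in events if e.get("ai_flagged")]
    retained = events  # the full audit record is always kept in-console
    return forwarded, retained

audit_events = [
    {"id": 1, "action": "upload", "ai_flagged": True},
    {"id": 2, "action": "copy", "ai_flagged": False},
    {"id": 3, "action": "share", "ai_flagged": True},
]

to_siem, in_console = partition_events(audit_events)
print(len(to_siem), len(in_console))  # 2 events forwarded, 3 retained
```

The point of the sketch is the asymmetry Nick describes: the console-side record keeps every event, while the downstream platform only receives the flagged subset.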

Amy Gurell: Great. There's one more question in here: are the best practices documented, or will this training be available to watch later?

Yes, we are recording, and we will provide that link for you after the call. There's another question in here: it looks like some of you might be missing your Data Labels tab. We will find out who your CSM is and have them reach out to you to make sure we can figure that out for you.

And then there's another question on Linea classification: I assume it can scan content and then try to classify the type of document?

Nick Dobbs: Yep, it can do that. It has a number of capabilities that I didn't mention explicitly, including things like optical character recognition.

So it'll actually do a very good job of picking things out of even just screenshots. I saw a couple of examples within this demo tenant where a very small part of a window was screenshotted that contained a one-time passcode, and Linea was able to pick that out and call it out within the summary.

So yes, it is looking at content wherever possible, but in the case of a policy match it does depend on whether or not you have content inspection enabled. There are some caveats there, but in general, if you can see the content, Linea can also see it and will base its assessment on that, or at least factor it in.

Amy Gurell: I think that might partly answer this question as well, but one more: can we pass in multiple files to train Linea on a new data type? For example, to recognize the format of an Excel spreadsheet for a new label?

Nick Dobbs: That's a great question. So you can totally run tests on sample files through the Data Labels tab.

You would want to make sure that you define the label you want beforehand, of course, and then test it there. If it's not matching, refactor the label description you've given it. This is a feature that I think will see enhancement as time moves forward. Again, these are the early days of data labels, so we encourage you to submit feedback wherever possible and help us help you.

Amy Gurell: Okay. We might have covered this a little bit already, but the last question here that I haven't asked yet is: can Linea be configured to generate incidents solely on unmatched traffic?

Nick Dobbs: Yes, so that's going to be Linea Detect. Linea Detect looks explicitly at events that are not matched by a policy.

And only at those kinds of events. So yeah, that's baked in, but it does require the enterprise-grade license.

Amy Gurell: Great. Those are all the questions so far in the chat. If you have additional questions, please let us know.

Closing Remarks and Announcements

Amy Gurell: Just to close things out here, a couple of quick announcements.

So, as a reminder, we're launching our DSPM beta program, so please mark your calendars and attend our DSPM Early Access Program announcement on November 4th. If you're interested in hearing more about that program, I believe we'll be sharing a link where you can click to sign up, or you can reach out to your CSM to get added as well.

There are limited spots available, so act quickly if you're interested. Additionally, we're going to have another Cyberhaven Unlocked on November 19th to discuss security for AI. In just the first 24 hours after ChatGPT Atlas's release, over 500 endpoints across 57 of our customers saw traffic to it.

And with the AI landscape evolving at lightning speed, it's a really important topic right now, so please join us to hear Cyberhaven's approach to security for AI. And finally, you'll receive a survey after the call where you can suggest additional topics you'd like to hear more about in our Unlocked series.

So please provide your feedback for us so we can make this series as valuable as possible for you. Thank you again for joining, and if you want to see your first 30 days with Linea mapped out, your customer success manager can get that rolling for you. Thank you all so much, and have a great rest of your day, everyone.

Nick Dobbs: Yeah, have a great day guys.