Most security programs have more visibility than ever. Dashboards are full. Alerts are firing. And incidents are still happening.
That contradiction is not a coincidence. It reflects something most security vendors have quietly avoided saying out loud: Visibility and control are not the same thing, and for a long time, the industry has been selling one while calling it the other. The reason is architectural: effective data security requires presence at the endpoint, the lineage to understand what data is doing, and the AI context to turn that understanding into action. Most programs have fragments of one. Few have all three.
Visibility Creates Accountability. Control Is What Lets You Meet It.
There is a version of the visibility argument that sounds reasonable on its face. If you can see risk, you can manage it. More dashboards, more telemetry, more findings, more awareness. That is progress.
The problem is that awareness without the ability to act does not reduce risk. It documents it.
Once your security program can see a finding, your organization is accountable for it. That changes your legal exposure, your regulatory posture, and the board conversation after a breach. Awareness creates liability. It does not reduce it.
The distinction matters because visibility and control require different elements architecturally. Visibility is about collection: pulling telemetry from endpoints, cloud environments, SaaS applications, and network traffic and surfacing it somewhere a human can look at it. Control is about action: being positioned at the right layer of the stack to intervene, in real time, before data leaves the boundaries where it belongs.
Those are not the same capability. And vendors that have built primarily around one tend to underdeliver on the other.
The organizations feeling this most acutely right now are the ones that have invested heavily in monitoring tools. They have visibility. They have findings. And they are discovering that a dashboard full of risk you cannot respond to is not an asset. It is a liability.
The Endpoint Is Where Risk Becomes Real, and Where Programs Go Blind
Security tools tend to watch the parts of the environment that are easiest to instrument: cloud storage, network traffic, SaaS APIs. Those are reasonable places to look. They are not, however, where most data risk materializes in the AI era.
Data risk does not crystallize in storage. It happens at the moment someone acts on data: copies it, transforms it, moves it, pastes it somewhere it was never meant to go. That moment of action, and the context around it, is almost always at the endpoint.
Consider the scenarios we described in our earlier post on endpoint AI agent security risks:
- A developer copies proprietary code into a personal repository
- A finance analyst pastes forecast data into an AI tool to summarize it
- An AI agent aggregates internal documents and redistributes content across a collaboration platform without any human initiating the action
None of these appear cleanly in cloud logs or network traffic. They happen on devices, in local processes, across clipboard events and application interactions that most security stacks were never designed to observe. Cloud-first tools see data where it lands. They miss the moment it moved and, critically, they miss the context around why.
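To make the information gap concrete, here is a minimal sketch comparing what a cloud audit log typically records against what an endpoint sensor can observe. The event shapes and field names are illustrative assumptions, not any vendor's actual schema.

```python
from dataclasses import dataclass

@dataclass
class CloudLogEntry:
    # What a cloud audit log typically records: the landing point.
    timestamp: str
    storage_path: str
    actor: str

@dataclass
class EndpointEvent:
    # What only an endpoint sensor can observe: the action and its context.
    timestamp: str
    user: str
    source_app: str        # e.g. the process that held the data
    action: str            # e.g. "clipboard_copy", "file_move"
    destination_app: str   # e.g. a browser tab pointed at a genAI tool
    file_origin: str       # where the data originally came from

cloud = CloudLogEntry("2025-06-01T10:02Z", "s3://finance/forecast.xlsx", "analyst@corp")
endpoint = EndpointEvent("2025-06-01T10:01Z", "analyst@corp", "excel.exe",
                         "clipboard_copy", "browser: genai-tool",
                         "s3://finance/forecast.xlsx")

# The fields that describe the moment data moved simply do not exist cloud-side.
context_fields = {"source_app", "action", "destination_app"}
assert context_fields.isdisjoint(CloudLogEntry.__dataclass_fields__)
assert context_fields.issubset(EndpointEvent.__dataclass_fields__)
```

The point of the sketch is structural: the cloud record describes data at rest, while the endpoint record describes the action and the context around it.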
The second failure mode is equally common, and it shows up in both legacy DLP (content-inspection-only tools) and standalone DSPM deployments. Legacy DLP was built around static rules: file types, destinations, keyword matches. When data is constantly being reshaped and moved between tools, those rules cannot distinguish between a routine workflow and a genuine incident. An analyst copying a report to an approved drive looks identical to one copying it to a personal account. Without context, the alert fires either way, or neither way, depending on how the rule was written. The result is the false positive volume that has made legacy DLP programs so difficult to sustain at scale.
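The rule-level difference can be sketched in a few lines. The rule logic and event fields below are illustrative assumptions, not any product's actual policy engine; the point is that the same trigger behaves differently once destination context is part of the decision.

```python
# A legacy-style static rule: fires on file type and keyword, regardless of context.
def static_rule(event: dict) -> bool:
    return event["file_type"] == "xlsx" and "forecast" in event["file_name"]

# The same trigger, but the decision depends on where the data is going.
def context_aware_rule(event: dict, approved_destinations: set) -> bool:
    return static_rule(event) and event["destination"] not in approved_destinations

routine = {"file_type": "xlsx", "file_name": "q3-forecast.xlsx",
           "destination": "corp-sharepoint"}
risky   = {"file_type": "xlsx", "file_name": "q3-forecast.xlsx",
           "destination": "personal-gdrive"}
approved = {"corp-sharepoint"}

# The static rule cannot tell these apart: it fires on both.
assert static_rule(routine) and static_rule(risky)
# With destination context, only the genuinely risky event alerts.
assert not context_aware_rule(routine, approved)
assert context_aware_rule(risky, approved)
```

Real policy engines weigh far more signals than a single destination check, but the asymmetry is the same: without context, precision and recall trade off against each other inside one brittle rule.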
Standalone DSPM solves a different problem and introduces a different version of the same gap. It can tell you where sensitive data lives across cloud environments. It cannot tell you what a user did with that data the moment it left storage, which application it moved through, or whether the action that triggered a finding was routine or genuinely risky. DSPM sees data at rest. The risk happens in motion. That lag between what the tool observed and what actually occurred is not a minor limitation. It means the context security teams need to act is already out of date before anyone sees the finding.
Both failure modes leave security teams in the same position: accountable for risk they cannot confidently act on.
Context Is What Turns Visibility Into a Decision
Raw telemetry tells you that something happened. It does not tell you what it means. And in security, the difference between those two things is the difference between an alert and an answer.
This is the second dependency in a three-part architecture. Endpoint presence tells you where the action is. Data lineage tells you what the action means. Without the second, the first gives you events you cannot interpret.
The questions that actually matter in any investigation are not about events in isolation. They are about relationships: Where did this data originate? Has it been modified? Who touched it before this moment, and in what context? Was the action that triggered this alert part of a routine workflow, or does it represent a meaningful deviation?
Data Lineage is what closes that gap. By tracking how data is created, copied, modified, and moved over time, rather than treating each event as a discrete signal, it becomes possible to build a continuous record that gives any alert its full context. That context is what makes the difference between a finding the security team can act on and one that gets added to a triage queue that never fully clears.
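As a simplified illustration of the idea, lineage can be thought of as chaining discrete events into a continuous record, so any alert can be traced back to the data's origin. The event shapes and parent-link model here are assumptions for the sketch, not an actual lineage implementation.

```python
# Each observed event links an artifact to the artifact it was derived from.
events = [
    {"action": "download", "artifact": "report.xlsx",
     "parent": "s3://finance/report.xlsx"},
    {"action": "copy", "artifact": "report-v2.xlsx",
     "parent": "report.xlsx"},
    {"action": "upload", "artifact": "gdrive://personal/report-v2.xlsx",
     "parent": "report-v2.xlsx"},
]

parent_of = {e["artifact"]: e["parent"] for e in events}

def lineage(artifact: str) -> list:
    """Walk parent links back to the artifact's origin."""
    chain = [artifact]
    while chain[-1] in parent_of:
        chain.append(parent_of[chain[-1]])
    return chain

# An alert on the upload is no longer an isolated event: its history shows
# the file originated in a sensitive finance bucket.
assert lineage("gdrive://personal/report-v2.xlsx")[-1] == "s3://finance/report.xlsx"
```

Treated as discrete signals, each of these events is ambiguous on its own; linked into a chain, the final upload carries the full context of where the data came from.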
The consequences of operating without it are predictable. Controls that lack context generate false positive volume, analysts tune policies to compensate, and genuinely risky events get suppressed alongside the noise. The cycle is well understood. It is also still remarkably common.
This is also where the architectural difference between endpoint-native and cloud-first approaches becomes concrete. As we outlined in our post on AI-native endpoint DLP, a cloud-first tool can surface where sensitive data lives. It cannot tell you what a user did with a file before it reached that environment, or what happened to it after. The endpoint is the only place where those questions can be answered, in real time, with the continuity needed to make the answer reliable.
An alert without lineage is documentation of an event. An alert with lineage is a decision.
Agentic AI Raises the Stakes
The shift from chat interfaces to agentic AI workflows has changed the scope of what security teams need to see and control, and it has done so faster and at a larger scale than most programs can adapt to.
User-directed genAI interactions, where a person types a prompt and reviews a response, at least preserve a human in the loop. The person is the actor and the tool is an instrument. That model is already difficult to govern at scale, but it is at least legible.
Agentic AI is different. Agents become the actor, accessing and manipulating data autonomously, often running in the background, across multiple systems, without a human initiating each step. They read files, summarize content, call APIs, and redistribute information across applications. And that activity, for the most part, does not register in security programs built for the previous model.
The only way to see what AI agents are doing with sensitive data is to have presence at the endpoint, where those actions actually execute. Cloud logs show you where data landed. Network traffic shows you that something moved. Neither tells you which agent acted on which file, in what context, with what downstream effect.
This is not a future risk. Cyberhaven Labs' research showed that nearly half of all developers (49.5%) were using desktop-based AI coding assistants by December 2025, up from roughly 20% at the start of the year. Agents are already operating inside enterprise environments at a scale most security programs were not designed for, and the same foundation that closes the visibility gap on the threat side is what makes AI-driven security operations possible on the defense side. Endpoint presence and Data Lineage are not just inputs for detecting what agents do. They are what makes automated investigation, anomaly detection, and policy enforcement accurate rather than noisy. You cannot use AI for security well if you have not built the foundation that gives it something real to reason over.
Securing AI and using AI for security are not separate initiatives. They depend on the same architecture, and the organizations that recognize that are building an advantage that compounds in both directions.
Having Both Visibility and Control Is Hard. That Is the Point.
It is worth being direct about why most organizations end up with visibility without control, or control without sufficient context. Doing both well is genuinely difficult, and the difficulty is underestimated by buyers and understated by vendors.
Endpoint agents that provide deep visibility without degrading device performance are hard to build. Operating systems have become more restrictive about what agents can do at the kernel level, with good reason. Apple's Endpoint Security Framework and Microsoft's equivalent have narrowed the access available to endpoint software. Working within those constraints while maintaining the telemetry quality and real-time enforcement needed for meaningful DLP is a systems engineering challenge that most vendors have not solved cleanly. The result tends to be either limited visibility or user-facing performance problems that erode trust in the program before it has a chance to demonstrate value.
That engineering problem is the prerequisite. Even if you solve it, the second challenge is harder.
Controls that are accurate enough to act on require context, and context requires continuity. You cannot build data lineage from snapshots alone. It requires tracking data as it moves and changes over time, correlating events across sessions and systems, and maintaining a model of how data relates to other data and to the workflows around it. That is not a feature you can add to a cloud-first or network-based architecture. It requires the endpoint as a starting point, because that is where the events that create lineage actually occur.
Most vendors optimize for one side of this equation. Some offer extensive visibility with limited enforcement capability. Some offer enforcement without the context to make it precise. The organizations that close the gap are not doing so by stitching together point tools. They are doing so because they started with a foundation that was built to handle both.
That architectural reality is also what creates a durable advantage for the organizations that get it right. Closing the gap between visibility and control is hard enough that it functions as a moat. The teams operating on the right foundation are not just better protected today. They are positioned to extend that protection as the threat landscape continues to shift, because their architecture does not require retrofitting every time the environment changes or a new technology is introduced.
The Gap Is Architectural. The Answer Has to Match.
The organizations that are ahead of this problem are not just monitoring more. They are operating at the layer where risk actually materializes, with the context to distinguish signal from noise and the controls to intervene before an incident becomes a headline.
That requires all three elements working together. Endpoint presence without lineage gives you events you cannot interpret. Lineage without AI context gives you a history you cannot act on at scale. AI context without presence and lineage gives you a model with nothing real to reason over. Miss any one and the architecture fails.
Visibility is a starting point. Control is the point.
If your current data security program gives you one without the other, the gap you are managing is not a configuration problem. It is an architectural one. And it does not close on its own.
Further explore the value of AI-native DLP with our on-demand webinar.
Understand how a unified data security program can offer both visibility and control with our ebook, “A Practical Guide to Modern DSPM.”




