AI has not-so-quietly shifted from a niche tool used by a small group of specialists into a mainstream capability embedded across enterprise infrastructure. Employees are now operationalizing AI for core business functions across departments.
This shift fundamentally changes how organizations must think about data security.
Cyberhaven’s 2026 AI Adoption & Risk Report examines how enterprises and employees are adopting AI, how these tools are being used, and the new data risks that are emerging as a result. One finding is especially clear: sensitive enterprise data is flowing into AI tools, and most organizations lack meaningful visibility into that activity.
The Invisible AI Problem: Where Employees Actually Use GenAI
Cyberhaven’s research shows that AI usage is fragmented, highly personal, and largely invisible to security teams. A significant portion of GenAI usage occurs through personal rather than corporate accounts, which creates immediate governance gaps:
- 32.3 percent of ChatGPT usage happens through personal accounts
- 24.9 percent for Gemini
- 58.2 percent for Claude
- 60.9 percent for Perplexity
For security teams attempting to understand AI usage and govern the flow of sensitive data, this behavior creates substantial blind spots. Personal accounts bypass SSO enforcement, centralized logging, enterprise retention policies, and controls related to data usage or model training.
In some cases, employees may be unaware that they are logged into personal accounts. More commonly, they are experimenting with new AI tools to improve efficiency or working around enterprise restrictions. Broad attempts to block AI usage rarely reduce risk. Instead, they push usage outside formal controls, further reducing visibility and oversight.
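To make that visibility gap concrete, the sketch below shows one rough way a security team might triage sign-in or proxy events for personal-account AI usage. It is a minimal illustration, not Cyberhaven’s detection logic; the event fields, tool domains, and personal-domain list are all assumptions.

```python
# Minimal sketch (not Cyberhaven's product logic): flagging likely personal-account
# AI usage from generic sign-in events. Field names and domain lists are assumptions.
from dataclasses import dataclass

AI_TOOL_DOMAINS = {"chatgpt.com", "gemini.google.com", "claude.ai", "perplexity.ai"}
PERSONAL_EMAIL_DOMAINS = {"gmail.com", "outlook.com", "yahoo.com", "proton.me"}
CORPORATE_DOMAIN = "example.com"  # placeholder for the organization's own domain

@dataclass
class SignInEvent:
    user_email: str   # identity used to authenticate to the AI tool
    destination: str  # hostname of the service being accessed

def classify(event: SignInEvent) -> str:
    """Return 'corporate', 'personal', 'unknown', or 'not_ai' for one sign-in event."""
    if event.destination not in AI_TOOL_DOMAINS:
        return "not_ai"
    domain = event.user_email.rsplit("@", 1)[-1].lower()
    if domain == CORPORATE_DOMAIN:
        return "corporate"
    if domain in PERSONAL_EMAIL_DOMAINS:
        return "personal"
    return "unknown"  # e.g. federated or alias domains that need manual review

# Example: this event would be flagged as personal-account AI usage.
print(classify(SignInEvent("jane.doe@gmail.com", "claude.ai")))  # -> "personal"
```

Even a coarse heuristic like this only works where the traffic is visible at all, which is exactly what personal accounts and blanket blocking undermine.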
Employees Are Feeding AI Core Business Data
The impact of AI adoption on data security is driven as much by usage patterns as by adoption rates. According to Cyberhaven’s findings, 39.7 percent of all AI interactions involve sensitive data. This includes prompt text, copy and paste actions, and file uploads.
On average, employees input sensitive data into AI tools once every three days. This level of frequency reflects a fundamental shift in how work is performed. Sensitive data and AI are now deeply embedded in daily workflows across the enterprise.
What Kinds of Sensitive Data Are Being Shared?
The volume and variety of data being shared with AI tools expands the enterprise attack surface across multiple business functions. The data spans revenue generation, innovation, and regulated environments:
- Sales and go-to-market data represents a mid-teens percentage of AI-bound data globally and nearly 30 percent of what sales teams send into AI tools
- Research and development content is the dominant category in industries such as healthcare, where approximately one third of AI-bound data is research related
- Technical and controlled assets include source code, internal project data, and regulated information such as health records that traditionally require strict access controls
AI related data exposure is therefore not confined to a single team or use case. It cuts across the organization.
These Actions Aren’t Malicious; They’re Rational Employee Behavior
Employees are not sharing sensitive data with AI tools to introduce risk or violate policy. Workflows are evolving, and employees are adapting accordingly.
AI is used to:
- Increase speed and efficiency
- Support complex problem solving
- Reduce cognitive load during knowledge intensive tasks
The resulting risk is not solely a user behavior issue. Rapid AI adoption combined with slow-moving security programs and legacy controls has produced gaps in governance. Many organizations now rely on third-party AI tools that store, reuse, or retrain on inputs while security guardrails lag behind day-to-day usage.
Why AI Agents Change the Risk Model Entirely
The risk profile becomes more complex with the rise of AI agents, including endpoint-based tools such as OpenClaw-style agents.
By December 2025, 49.5 percent of developers were using coding assistants, up from approximately 20 percent at the beginning of the year. At the enterprise level, nearly 23 percent of organizations have adopted agent-building platforms to create custom AI workflows.
AI agents introduce structural risk due to their design and operating model. Many include persistent context windows, local searchable memory of interactions, and direct access to file systems and clipboards. These tools frequently operate at the operating system level, bypass traditional network controls, and synchronize data to infrastructure outside security team oversight.
This creates long-lived, high-value data exposure directly on endpoints.
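As a rough illustration of why that exposure is long-lived, the sketch below inventories the kinds of local files an endpoint agent might leave behind. The paths and sensitive-data markers are hypothetical placeholders, not documented locations or detection rules for any specific agent.

```python
# Minimal sketch, not tied to any specific agent: inventorying local files where an
# endpoint AI agent might persist conversation history or "memory". The paths below
# are illustrative assumptions, not documented locations for any real product.
from pathlib import Path

CANDIDATE_MEMORY_PATHS = [
    Path.home() / ".example_agent" / "history",    # hypothetical chat/session logs
    Path.home() / ".example_agent" / "memory.db",  # hypothetical persistent memory store
]

SENSITIVE_MARKERS = ("BEGIN RSA PRIVATE KEY", "password=", "patient_id")  # toy patterns

def scan_agent_artifacts() -> list[str]:
    """Return human-readable findings for agent artifacts present on this endpoint."""
    findings = []
    for path in CANDIDATE_MEMORY_PATHS:
        files = [path] if path.is_file() else list(path.glob("**/*")) if path.is_dir() else []
        for f in files:
            if not f.is_file():
                continue
            text = f.read_text(errors="ignore")
            hits = [marker for marker in SENSITIVE_MARKERS if marker in text]
            if hits:
                findings.append(f"{f}: contains {', '.join(hits)}")
    return findings

if __name__ == "__main__":
    for finding in scan_agent_artifacts():
        print(finding)
```

The point is not this particular scan, but that agent memory accumulates on the endpoint over time and sits outside the network-centric controls most programs rely on.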
The Core AI Risk and the Path Forward for Security Leaders
AI related data risk is active and widespread. Sensitive data is flowing into AI tools on a daily basis across most enterprises. Organizations that move quickly on AI adoption are often exposed to greater levels of unmanaged risk.
At a structural level, this risk manifests as:
- Large-scale data leakage across teams and tools
- Expansion of AI usage beyond IT and security visibility
- Reduced control over where data is stored, who can access it, and how long it persists
- Concentrated exposure among fast-moving teams and AI-focused organizations
To address this shift, security leaders need to evolve their approach to governance and visibility:
- Establish visibility into who is using AI tools and how they are being used
- Identify what types of data are flowing into AI systems (a minimal classification sketch follows this list)
- Extend governance beyond browsers to include endpoints and AI agents
- Treat AI data flows as a core security surface
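As a starting point for the second item above, a lightweight, rule-based pass over captured prompts can give a first read on what categories of data are leaving the organization. The sketch below assumes prompt text is already being captured at the browser or endpoint; the categories and patterns are placeholders, and a production approach would need far more robust classification.

```python
# Minimal sketch, assuming prompts are already captured at the browser or endpoint:
# a rough, rule-based pass that labels outbound AI prompts by the kind of data they
# appear to contain. Patterns and categories are illustrative, not exhaustive.
import re

CATEGORY_PATTERNS = {
    "source_code":   re.compile(r"\b(def |class |import |#include|function\s*\()", re.I),
    "health_record": re.compile(r"\b(patient|diagnosis|ICD-10|mrn)\b", re.I),
    "sales_data":    re.compile(r"\b(pipeline|quota|ARR|opportunity|discount)\b", re.I),
    "credentials":   re.compile(r"(api[_-]?key|password|BEGIN [A-Z]+ PRIVATE KEY)", re.I),
}

def classify_prompt(prompt: str) -> list[str]:
    """Return the categories of sensitive data a prompt appears to contain."""
    return [name for name, pattern in CATEGORY_PATTERNS.items() if pattern.search(prompt)]

# Example: a prompt mixing code and a hard-coded credential gets two labels.
print(classify_prompt("def connect():\n    password = 'hunter2'"))
# -> ['source_code', 'credentials']
```

Even a coarse classifier like this can show which teams and tools account for most sensitive-data flows, which is usually enough to prioritize where finer-grained controls belong.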
AI adoption will continue to accelerate. The effectiveness of future data security programs will depend on an organization’s ability to see, govern, and protect data across the AI tools and agents already embedded in everyday work.
For a deeper analysis of these trends and their implications, read the full 2026 AI Adoption & Risk Report.