12/3/2020

Time To Value

Alex Lee

Traditional DLP solutions can take six months or more to deploy, but Cyberhaven can be up and running in just a week, providing record-setting time to value.

Time to Value (TTV) is the amount of time it takes a customer to realize value from a new purchase. It differs from Return on Investment (ROI) in that it’s concerned with the desired result of using the product (however the customer defines it), not just its effect on the bottom line. Depending on the product, customers may realize value immediately, or they may have to wait a while to truly meet their goals.

For example, if your hands are cold and you pop into a store to buy a pair of mittens, the time to value is immediate. Your hands feel warm right away; you got the desired effect. But if you’re addressing your cold hands by installing a new heating system, you need to wait through the installation, the time it takes for the house to heat up, and so on. Your TTV will be a lot longer.

The same is true for deploying a new DLP solution. Traditional DLP typically has a significant, lengthy TTV (and there’s no DLP equivalent to just buying mittens).

Cobbling Together the DLP Stack

First off, if you’re a modern enterprise, you’ll have data stored in many locations, such as endpoints, network shares, and SaaS applications. A holistic DLP solution therefore requires several components. For on-premises data protection, you’ll need

  • email DLP to prevent data exfiltration through corporate emails and attachments,
  • endpoint DLP to prevent data exfiltration for any endpoint application, and
  • network DLP to prevent exfiltration of data in transit.

Then, for SaaS applications you’ll need a Cloud Access Security Broker (CASB) solution. While CASB products can provide visibility into corporate cloud usage, they lack visibility once data leaves the SaaS environment. To fill this gap, CASB products are often integrated with traditional DLP, with alerts from both captured into a Security Information and Event Management (SIEM) solution. Even then, you’ll get a complete picture only of data egress events, not of data ingress from outside your network or interoffice data movement.
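
As a concrete illustration, integrations like this usually normalize alerts into a common format such as CEF (Common Event Format) before shipping them to the SIEM. Below is a minimal sketch in Python, not any particular vendor’s integration; the collector address and all field values are assumptions:

```python
import socket

# Hypothetical SIEM collector address; adjust for your environment.
SIEM_HOST, SIEM_PORT = "siem.example.com", 514

def send_dlp_alert(product: str, event_id: str, name: str,
                   severity: int, extensions: dict) -> None:
    """Forward a DLP or CASB alert to the SIEM as a CEF message over UDP syslog."""
    ext = " ".join(f"{k}={v}" for k, v in extensions.items())
    # CEF header: CEF:Version|Vendor|Product|Version|EventClassID|Name|Severity|Extension
    cef = f"CEF:0|ExampleVendor|{product}|1.0|{event_id}|{name}|{severity}|{ext}"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(cef.encode("utf-8"), (SIEM_HOST, SIEM_PORT))

# Endpoint DLP and CASB alerts arrive in the same normalized shape:
send_dlp_alert("EndpointDLP", "100", "Sensitive file copied to USB", 7,
               {"suser": "jdoe", "fname": "roadmap.docx", "act": "blocked"})
send_dlp_alert("CASB", "200", "External share of confidential doc", 8,
               {"suser": "jdoe", "app": "GoogleDrive", "act": "alerted"})
```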

This is no trivial task, as each of these “solutions” requires its own purchasing cycle—vendor agreements, third-party security reviews, and so on. In addition, you may have to set up and manage on-premises servers for databases to support them, which require further OS and software licensing. And after all that comes testing, deployment, configuration, management, and training.

Defining the Prerequisite Policies

Once you’ve got past the hurdle of purchasing and deploying three or more products, you’ll need to build your data loss prevention policies. Creating DLP policies requires you to predict the future—you need to know what the data that you are trying to protect looks like (i.e., what patterns it contains), which channels will be used for exfiltration, and under what conditions the data is allowed to be shared externally.

As an example, consider a seemingly simple scenario: preventing documents containing sensitive IP, such as designs for a new device or a new vaccine formula, from being shared externally.

Step 1 is to identify the textual pattern—you might look for “confidential” markers or the IP’s codename in all document types supported by DLP (MS Office, PDF, text, etc.).
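
In practice, this step usually comes down to keyword lists or regular expressions run against text extracted from each document. A minimal sketch, where the markers and the codename “PROJECT-FALCON” are hypothetical examples:

```python
import re

# Step 1 sketch: content matching against extracted document text.
# The markers and codename below are hypothetical placeholders.
SENSITIVE_PATTERN = re.compile(
    r"\b(confidential|internal use only|PROJECT-FALCON)\b",
    re.IGNORECASE,
)

def matches_sensitive_ip(document_text: str) -> bool:
    """Return True if the extracted text contains any sensitive marker."""
    return bool(SENSITIVE_PATTERN.search(document_text))
```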

Step 2 is to figure out which channels are allowed to send this info outside the organization, by which users, and under what circumstances.

Step 3 is to determine how to deal with exceptions. There are always legitimate scenarios for sharing data externally, such as for an upcoming product launch or when collaborating with supply chain partners, and you do not want to trigger an incident alert each time there is a content match. Thus, one exception might be based on allowing a particular user group to share externally, and another on the volume of matched documents in a particular email or web form post. And with every change in business workflow (e.g., new project code names added to the same sensitive IP or a new cloud application for editing files), you will likely have to create a new exception.
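
Putting Steps 2 and 3 together, the policy ends up encoding allowed channels, exempt user groups, and volume thresholds as explicit conditions. Here is a rough sketch of what such a rule set might look like; every name and threshold below is an assumed placeholder, not a real product’s syntax:

```python
from dataclasses import dataclass

@dataclass
class EgressEvent:
    """Hypothetical egress event; field names are illustrative."""
    user: str
    user_groups: set[str]
    channel: str        # e.g. "email", "web_form", "usb"
    matched_docs: int   # documents matching the Step 1 pattern

ALLOWED_CHANNELS = {"email"}        # Step 2: channels permitted to carry this IP
EXEMPT_GROUPS = {"product-launch"}  # Step 3: groups allowed to share externally
VOLUME_THRESHOLD = 3                # Step 3: tolerate small, incidental matches

def should_alert(event: EgressEvent) -> bool:
    if event.matched_docs == 0:
        return False  # no content match at all
    if event.user_groups & EXEMPT_GROUPS:
        return False  # exception: exempt user group
    if event.channel in ALLOWED_CHANNELS and event.matched_docs < VOLUME_THRESHOLD:
        return False  # exception: low volume on an allowed channel
    return True       # otherwise, raise an incident
```

Every change in business workflow (a new codename, a new cloud editor, a new partner) means another entry in one of these sets, which is exactly the maintenance burden described above.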

With increased collaboration, it’s harder than ever to know with certainty the content of the sensitive data to protect and how to tune DLP policies to achieve a low rate of false positives. More importantly, DLP policies are very hard to test. The only knobs provided are essentially the content of the file and the exfiltration channel; the absence of additional context makes policies very complex, as they must overcompensate to differentiate between legitimate sharing and a data breach. Moreover, the security team has neither a bird’s-eye view of data flows nor the ability to look at historical user and data activity in order to test policies before deploying them.

Furthermore, since DLP products only evaluate egress events, you’ll need to wait and see whether your policies are working and adjust them as needed. A typical DLP implementation can take six months or more of fine-tuning before going live, and often the resulting policies are purposely relaxed to avoid triggering too many false positives. It’s no surprise that the difficulty of keeping policies up to date with the pace of business change is the most frequently cited gap in standard DLP implementations.

Cyberhaven: Record-Setting Time to Value

Using a proprietary technology dubbed data tracing, Cyberhaven monitors all your data across on-premises and cloud environments. This speeds up your TTV in several ways:

  • Cyberhaven automatically finds all types of high-value data. To protect your data you must first know where it is. Cyberhaven does the work for you by analyzing the provenance, content, and user context of all your data to automatically identify what is of high value.
  • Cyberhaven enables you to automatically detect improper handling without the hassle of tagging or classification. Did someone copy IP to the cloud? Did a document with customer PII get copied to a shared folder? Cyberhaven extracts and records metadata from each user interaction so that you can inspect any data flow in real time or retrospectively with just a few clicks (an illustrative event record is sketched after this list).
  • Data tracing finds risk across the lifecycle of your data. Most data has a long life, constantly being copied, modified, and shared across a wide variety of formats and applications. Cyberhaven tracks and records all the many ways your data can proliferate, showing the who, how, and when each time data is shared, pinpointing risky apps and behaviors that can lead to a loss.
  • Cyberhaven flattens your DLP stack. The flow of sensitive data is monitored and controlled across SaaS apps, endpoints, network shares, and email, from creation through egress.
  • A single lightweight sensor plus an API integration is all the infrastructure a Cyberhaven client needs. This eliminates the need for SPAN ports, network taps, or proxies, so clients have visibility into data movement regardless of network segment, even when no network traffic is involved, such as when saving content to a USB drive.
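
To make the “who, how, and when” concrete, here is a purely illustrative sketch of the kind of metadata a data-tracing event record could carry. The field names are assumptions for illustration, not Cyberhaven’s actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DataFlowEvent:
    """Illustrative lineage record; fields are assumptions, not Cyberhaven's schema."""
    timestamp: datetime  # when the interaction happened
    user: str            # who performed it
    action: str          # how: "copy", "upload", "share", "edit", ...
    source: str          # e.g. r"\\fileserver\designs\v2.docx"
    destination: str     # e.g. "personal-dropbox", "usb:E:", "mail.example.com"
    content_id: str      # stable ID linking derivatives of the same content

# A chain of such records reconstructs a document's full lifecycle,
# from creation on an endpoint to egress through any channel.
```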

Cyberhaven can be up and running in as little as a week’s time. If your goal is effective data protection, that’s a dramatically shorter TTV than standard DLP can offer.