What is the cost of a breach and how can we prevent it?

The recent breach at Experian prompted a lot of press, but most of the questions being asked are the wrong ones. There is no magic product that can prevent all attacks. Target, Snowden, TJX, Sony…pick your favorite example: we’re all vulnerable, but we can do better.
Given enough time and resources, a determined hacker will get in. In an article last year, CSO Online published results showing that no matter which vendors and products we deploy, we are all vulnerable.

“In testing all the major technologies used by corporations, NSS found that some of the 1,800 serious malware it tested always managed to get through, no matter what combination of products was used.” – CSO Online, Antone Gonsalves

I’m as disbelieving as anyone when it comes to what I read, especially when published by pundits. While I don’t have access to the source tests, having seen and verified NSS Labs results in the past, I know the organization to be one of integrity that tests equipment rather than just “reviewing” it. In my own lab, and at all too many customer sites, I’ve seen the same results.

There is no magic bullet. No product or combination of products will make your organization invulnerable. So the question becomes: “How can we detect this type of activity and prevent it before it leads to an unacceptable loss?”

Having one’s public web site defaced, or having a single account compromised, won’t get you in the papers, so long as you fix the leak or the site quickly. We don’t have to stop everything, but we do have to detect and prevent the loss of thousands of records or significant intellectual property.

I know some out there want to say that any loss is unacceptable, but all businesses operate with some loss expectation. Retail stores have shoplifting and loss-prevention programs. Restaurants and grocery stores throw out food after its expiration date. Our favorite sports teams don’t win every game. We all have to minimize loss and aspire to always be on top, but we also have to plan for some level of loss.

“The enemy of a good plan is the dream of a perfect plan.” ― Carl von Clausewitz

In the real world, no business can afford to be perfect. We have to spend appropriately to get adequate coverage. If we take as a given that attackers will get in, then the appropriate response is to assume they are in our networks, and watch for misuse of the data we keep.

ISC2 has published formulas on loss and risk for over a decade, built on the principle that the cost to protect an asset should not exceed its value. The unstated but implied converse is true too: we have to spend appropriately to protect the information we have.
Data loss is expensive. The Sony breach lost an estimated $1 billion in market capitalization. In 2011, the Epsilon breach may have exceeded it.

“The total cost of the Epsilon breach could eventually run as high as $3 billion to $4 billion, given that compromised email addresses could be used by hackers and phishers to gain access to sites that contain consumers’ personal information, according to CyberFactors.” – Fahmida Y. Rashid, Epsilon Data Breach to Cost Billions in Worst-Case Scenario

One problem most organizations face is not knowing what to protect, or how to assign value. Computers themselves are generally not physically damaged when infected; there is simply a cost to clean or re-image the system (typically about two hours of an IT staff person’s time, at a cost to the company of about $100/hour). The real cost is in the loss of data.

A Ponemon Institute study published by Symantec showed that each lost record cost US businesses $188 in 2013. – Ponemon 2013 Cost of a Data Breach Report

Common data containing personally identifiable information (PII) that we have to protect includes:

  1. Employee records – human resource documents, current and former employees as well as contractors
  2. Customer loyalty data
  3. Medical record data
  4. Supplier and business partner information

A good rule of thumb is that the number of records will be 3X the number of employees (including ex-employees and contractors), plus 3X the number of customers.
Intellectual property (IP) is often of even higher value, but its value may be subjective, and where it’s stored and how it’s accessed is often difficult to trace.
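As a back-of-the-envelope illustration, the record-count rule of thumb above can be combined with the Ponemon per-record figure to estimate worst-case exposure. The company sizes below are hypothetical, chosen only to show the arithmetic:

```python
# Rough breach-exposure estimate: records ~= 3x employees + 3x customers,
# priced at the Ponemon 2013 US average of $188 per lost record.
COST_PER_RECORD = 188  # USD, Ponemon 2013 Cost of a Data Breach Report

def estimated_records(employees: int, customers: int) -> int:
    """Record count: 3x headcount (incl. ex-staff/contractors) + 3x customers."""
    return 3 * employees + 3 * customers

def worst_case_exposure(employees: int, customers: int) -> int:
    """Dollar exposure if every estimated record were lost."""
    return estimated_records(employees, customers) * COST_PER_RECORD

# Hypothetical example: a 2,000-person company with 500,000 customers
records = estimated_records(2_000, 500_000)  # 1,506,000 records
print(f"~{records:,} records, ${worst_case_exposure(2_000, 500_000):,} exposure")
```

Even a mid-sized firm ends up with a nine-figure worst case, which is why assigning value to data up front matters.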

Both PII and IP need to be considered when building a security program. Prophylactic tools like IPS, anti-virus, and firewalls are all useful, but they are not enough. We need visibility into what our people and systems are doing, and we need to look for unusual activity.
Verizon’s Data Breach Investigations Reports consistently show several key points:

  • 92% of all breaches were by external attackers – Verizon 2013 Data Breach Investigations Report
  • 84% of all lost records were a result of stolen login credentials – Verizon 2012 Data Breach Investigations Report
  • “…almost all victims have evidence of the breach in their logs.” – Verizon 2010 Data Breach Investigations Report

The percentages change slightly from year to year, but the facts have been pretty consistent. We have the logs; we just don’t have the analytic tools, people, and processes to find the threats before they become a problem.

The era of watching for insider threats, and compromised accounts is here. We can no longer just look for attacks, we have to monitor for abnormal behavior.

Many companies have linked the next generation of threat monitoring to big data. The data sets are truly monstrous, but the solution lies in better analytics, not in bigger data.

“Average organizations are storing approximately 11-15 terabytes of security data a week, a figure that Gartner expects to double annually through 2015.” – Nicole Henderson, WHIR post

Unfortunately our staff sizes aren’t increasing and there are too few qualified people to hire, even if we had the budget to build a bigger security team.

“According to ESG Research, 55% of enterprise organizations (i.e. those with more than 1,000 employees) plan to hire additional security professionals in 2012 but they are extremely hard to find. In fact, 83% of enterprises claim that it is “extremely difficult” or “somewhat difficult” to recruit and/or hire security professionals in the current market.” – Jon Oltsik, Network World, The Security Skills Shortage Is Worse Than You Think

The tools need to become smarter: as data sizes rise, the problem gets bigger, and staffing can’t keep up. The traditional solution is to have smart IT security people write rules and create complex network and asset models to look for exceptions. Unfortunately, a rules-based approach to log analysis is akin to signature-based security tools like intrusion prevention and anti-virus. If signatures worked, we wouldn’t need log analysis.
We can do better.

We need learning algorithms and techniques that can profile “normal behavior” for a user or computer, and detect and warn on “weird”.
Whether the use case is data loss detection, high privileged account monitoring, fraud prevention, or advanced persistent threat detection, the ability to discern normal from weird in an unknown ever increasing network and sea of events will mark the next generation of security analytic tools.
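To make the “learn normal, flag weird” idea concrete, here is a minimal sketch using a per-user baseline of daily event volume and a standard-deviation threshold. Real products build far richer profiles (login sources, transaction types, time slices); the baseline numbers below are hypothetical:

```python
# Minimal per-user anomaly check: profile "normal" from history,
# then flag any day that sits far outside that baseline.
from statistics import mean, stdev

def is_weird(history: list[float], today: float, threshold: float = 3.0) -> bool:
    """Flag `today` if it deviates more than `threshold` std devs from the mean."""
    if len(history) < 2:
        return False  # not enough history to profile "normal"
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu  # perfectly flat baseline: any change is weird
    return abs(today - mu) / sigma > threshold

baseline = [100, 110, 95, 105, 98, 102, 99]  # a user's normal daily volume
print(is_weird(baseline, 104))   # ordinary day -> False
print(is_weird(baseline, 5000))  # sudden mass download -> True
```

The same comparison generalizes to any per-user metric: the model is learned from the user’s own history, so no analyst has to write a rule in advance.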

Securonix is one vendor leading the pack with peer based analytics. Security intelligence modules ingest data from existing log collection platforms, overlay identity data, add application events and build a history per user per time slice of what normal looks like. Never before seen login sources, transaction types or abnormal volume can be detected on a per user basis. Then by comparing the individual against his or her peers (same job, title, division, department, manager, machine function, application…or other similar attributes), the system can detect and alert on “weird” behavior by one object compared to other similar objects.
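The peer-comparison idea can be sketched in a few lines. This is an illustration only, not Securonix’s actual algorithm; the user names, attributes, and volumes are hypothetical, and each user is compared against peers in the same department, excluding themselves:

```python
# Peer-based outlier check: flag users whose activity volume deviates
# sharply from others sharing the same attribute (here, department).
from statistics import mean, stdev
from collections import defaultdict

users = {
    "alice": {"dept": "finance", "records_accessed": 120},
    "bob":   {"dept": "finance", "records_accessed": 95},
    "carol": {"dept": "finance", "records_accessed": 110},
    "dave":  {"dept": "finance", "records_accessed": 4800},  # the outlier
}

def peer_outliers(users: dict, threshold: float = 2.0) -> list[str]:
    """Return users whose volume is > `threshold` std devs from their peer group."""
    groups = defaultdict(list)
    for name, u in users.items():
        groups[u["dept"]].append((name, u["records_accessed"]))
    flagged = []
    for name, u in users.items():
        # Compare against same-department peers, excluding the user themselves
        peers = [v for n, v in groups[u["dept"]] if n != name]
        if len(peers) < 2:
            continue  # too few peers to establish a norm
        mu, sigma = mean(peers), stdev(peers)
        if sigma and abs(u["records_accessed"] - mu) / sigma > threshold:
            flagged.append(name)
    return flagged

print(peer_outliers(users))  # ['dave']
```

Excluding the user from their own peer baseline matters: a single massive outlier would otherwise inflate the group’s standard deviation and mask itself.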

The data sizes, the products, and the math behind the analysis are crazy and complex, but the goal is simple: “learn normal and tell me when something weird happens so I can correct it fast enough to keep losses down and keep us out of the papers.”
The takeaways:

  1. Despite massive log collection efforts and IT spend, we’re all still vulnerable
  2. We need tools that can learn what normal looks like in an automated manner
  3. Increasingly we need to monitor user behavior for insider threats
  4. It’s not about malware, it’s about data; losing data is expen$ive with a big $

If you’re not already in the market, start looking for new tools, processes and people that can spot weird behavior by trusted accounts and systems, or be prepared for costly repercussions.
