
Information overload: Finding signals in the noise

Steve Ragan | May 30, 2014
Sometimes it's possible to have too much threat data

However, an active spam bot that's been observed sending mail at high enough rates can cause the organization's external IP address to be blacklisted. Thus, employees could be prevented from sending legitimate communications, resulting in a hit to performance, the organization's reputation, and its bottom line.

Organizations collect all kinds of data that they have no clear intention of using, such as netflow or syslog for every session encountered by a given network appliance, login data from various application servers, or syslog messages for deny rules on the firewall.

The same is true of syslog messages for each blocked URL, which almost always end with the assumption that a user violated policy, not that the event was triggered by a bot or remote actor.

"Many of these pieces of information are collected and dumped into repositories with some hope (usually unrequited) that someone in the org will find use for them in terms of regular operational cadence. More often than not, they're only looked at during post-breach forensics," Tavakoli said.

So the trick isn't to collect as much data as possible; it's to collect the right data at the right time. Tavakoli suggested a four-step strategy to accomplish this, including prioritizing the data that the organization intends to monitor.

The first step is to know where the organization's most important assets are located, how they're accessed, and what's required to protect them. After that, identify the primary attack surfaces, and consider the likelihood of a breach.

For example, certain employees' machines could be an attack surface, while in other cases contractors pose an outsized risk. Other examples include guest wireless networks and Internet-facing portals that access internal systems or accounts.

From there, it's important to monitor those systems for anomalous behavior as they communicate, not just among themselves (such as the application server talking to the database), but with other systems as well.

If these systems are part of the identified attack surface, any alert registered between them should be considered and investigated. This means tuning these security systems to send alerts on abnormal traffic only, so a clear understanding of the baseline is required.
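To make that idea concrete, here is a minimal sketch of baseline-driven alerting, assuming flow records are available as (src, dst, bytes) tuples; the function names and the three-sigma threshold are illustrative assumptions, not features of any particular product.

```python
# A minimal sketch of baseline-driven alerting, assuming flow records are
# available as (src, dst, bytes) tuples; names and thresholds are illustrative.
from collections import defaultdict
from statistics import mean, stdev

def build_baseline(flows):
    """Aggregate historical byte counts per (src, dst) pair."""
    history = defaultdict(list)
    for src, dst, nbytes in flows:
        history[(src, dst)].append(nbytes)
    return history

def is_anomalous(history, src, dst, nbytes, sigma=3.0):
    """Flag traffic that deviates sharply from the pair's historical volume."""
    samples = history.get((src, dst))
    if not samples or len(samples) < 2:
        # No baseline yet: an unseen pair on the attack surface is itself notable.
        return True
    mu, sd = mean(samples), stdev(samples)
    return sd > 0 and abs(nbytes - mu) > sigma * sd
```

The point of the sketch is the workflow, not the statistics: the baseline is built only for the systems on the identified attack surface, so alerts fire on deviations from their normal behavior rather than on every event.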

Finally, it isn't enough to monitor inbound traffic for anomalies. Outbound traffic needs to be monitored as well, because this is where command and control communications take place, and ultimately, exfiltration activity can be observed.
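One simple way to watch the outbound side, assuming per-flow summaries of the form (internal_host, external_destination, bytes_out), is to compare each host's recent outbound volume against its own history; the 10x ratio and the helper names below are illustrative assumptions.

```python
# A minimal sketch of outbound monitoring; record format, ratio, and function
# names are assumptions for illustration, not a specific vendor's method.
from collections import defaultdict

def summarize_outbound(flows):
    """Total bytes each internal host has sent to each external destination."""
    totals = defaultdict(lambda: defaultdict(int))
    for host, dst, nbytes in flows:
        totals[host][dst] += nbytes
    return totals

def flag_exfil_candidates(today, baseline, ratio=10):
    """Report hosts sending far more data outbound than their historical norm."""
    alerts = []
    for host, dests in today.items():
        sent_today = sum(dests.values())
        # Hosts with no history default to a tiny baseline, so they flag easily.
        sent_before = sum(baseline.get(host, {}).values()) or 1
        if sent_today > ratio * sent_before:
            top_dests = sorted(dests, key=dests.get, reverse=True)[:3]
            alerts.append((host, sent_today, top_dests))
    return alerts
```

A spike in outbound volume is only one signal; in practice it would be weighed alongside the destination's reputation and whether the host belongs to the identified attack surface.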

Other types of observable anomalies

Reconnaissance can be fast (and noticeable), or slow and stealthy, Tavakoli explained, offering examples of other types of observable anomalies.

"Slow reconnaissance can be detected when a host on your network is contacting a large number of internal IP addresses that have not been active in the recent past. This type of scan occurs over longer periods (e.g. hours or days) than port scans (e.g. minutes or hours). Effective detection requires ignoring contact with systems that do not respond to the scanning host, but which are otherwise active."

 
