Event log management: stop security threats by turning your data detective

Davey Winder

Davey Winder explores how the predictive power of patterns can turn event log management into the detective of the server room.

Log management is, without a doubt, one of the most boring subjects to set before even the most hardcore of IT admins. Seriously, just the mention of analyzing event logs is enough to send a geek to sleep. Unless, that is, the geek happens to understand that these logs have the power, if not always to stop a potential security breach before it starts, then certainly to stop it before it succeeds. Think of log management, and the alerting capabilities that come attached, as the Agatha Christie of the server room, or perhaps more appropriately the Hercule Poirot: this is where data turns detective!

The predictive power of patterns

How so? Well, it's all to do with the predictive power of patterns, and the ability to spot anomalous items within the huge day-to-day flow of event information. Anomaly, or outlier, detection (here comes the science bit: an outlier is an observation that deviates from other observations in a sample) is key to spotting the things that simply don't fit the normal patterns within that logging dataset.

Still with me? Good, because it's actually not that complicated at all. You can sum it up as the ability to understand what is normal, overlaid with the ability to spot what isn't, and to alert the business to those anomalies in a timely fashion.
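To make that idea concrete, here's a minimal, illustrative sketch of the "learn what's normal, flag what isn't" approach: build a baseline from historical hourly counts of an event type, then flag any new count that strays too many standard deviations from the mean. The event type, the numbers and the three-sigma threshold are all assumptions for illustration, not any particular product's algorithm.

```python
from statistics import mean, stdev

def flag_anomaly(baseline_counts, new_count, threshold=3.0):
    """Flag a new hourly event count that deviates too far from the baseline."""
    mu = mean(baseline_counts)
    sigma = stdev(baseline_counts) or 1.0   # guard against a perfectly flat baseline
    z_score = (new_count - mu) / sigma
    return abs(z_score) > threshold, z_score

# Example: failed-logon counts per hour over recent history, then a sudden spike
history = [3, 5, 4, 6, 2, 5, 4, 3, 6, 5]
is_outlier, score = flag_anomaly(history, 48)
if is_outlier:
    print(f"ALERT: failed-logon count deviates from normal (z = {score:.1f})")
```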

Detection in action

Want a real-world example of anomaly detection in action? You try to buy something with your credit card and are told the card has been declined for security reasons, which is quickly followed by the bank calling you to run a security check. That's because the clever software has spotted an anomaly in your normal pattern of use which could potentially be a fraud about to be committed. Log management and analysis provides the same kind of protection, watching out for potential attacks on the network infrastructure.

The credit card fraud protection example is a good one for another reason: the types of 'outlier' often detected in both bank fraud and network intrusion are not the unusual ones that stick out like a sore thumb at all. Instead, it can be unexpected bursts of expected activity that raise the red flag. It's a pattern that predicts the problem: a pattern outside of the norm is the anomaly, not just an anomalous action in isolation. Spotting this would be pretty much impossible, even for an IT admin suffering from such an extreme case of obsessive compulsive disorder that it merits a tabloid TV show special, were it not for specialist software that can apply the right algorithms in real time to that data flow.
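Spotting a "burst of expected activity" is much the same trick applied to rate rather than value. As a rough, hypothetical sketch: keep a rolling window of recent timestamps for an event type and raise a flag when the window fills several times faster than the normal rate. The rates, window size and multiplier below are made up for the example.

```python
from collections import deque

def burst_detector(typical_per_minute, multiplier=5, window_seconds=60):
    """Return an observe(timestamp) callable that flags bursts of an expected event type."""
    recent = deque()  # timestamps (in seconds) of events inside the rolling window

    def observe(timestamp):
        recent.append(timestamp)
        while recent and timestamp - recent[0] > window_seconds:
            recent.popleft()  # drop events that have fallen out of the window
        if len(recent) > typical_per_minute * multiplier:
            return (f"Burst: {len(recent)} events in the last {window_seconds}s "
                    f"(normal is ~{typical_per_minute}/min)")
        return None

    return observe

# Example: file-access events normally trickle in at ~10/minute; 600 of them
# inside a single minute is expected activity at a very unexpected volume.
observe = burst_detector(typical_per_minute=10)
alerts = [observe(t * 0.1) for t in range(600)]
print(next(alert for alert in alerts if alert))
```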

Business continuity

Once you accept that security is really all about business continuity, you should care less about threat tracking or attack attribution and focus more on network visibility, in order to define 'normality' and spot the indicators of compromise before an attack can succeed. The companies that best understand their IT environment are the ones best placed to defend against a breach, no matter where that breach originates.

Indicators of compromise can be many and varied; however, if you boil them down, the core ones are likely to be unusual network traffic as infrastructure penetration is explored, followed by unorthodox outbound traffic as the breach is monetized through data extraction. Privileged user behavior anomalies are also high on the list, along with IP irregularities. Oh, and don't forget changes to system files and configuration, which attackers may attempt in order to gain backdoor access into the network. Even the presence of unexpected or out-of-schedule system patching can be indicative of a breach, as an attacker could already be inside your system and closing down the holes that might let 'the competition' (other hackers) gain access.
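On the system file and configuration point, the usual mechanism is a baseline-and-compare check: hash the watched files while the system is known to be clean, then re-hash them on a schedule and alert on any difference. A minimal sketch follows; the watched paths are hypothetical placeholders, and a real monitoring agent would cover far more than two files.

```python
import hashlib
from pathlib import Path

# Hypothetical watch list: the system files and configuration an attacker might tamper with
WATCHED = [Path("/etc/ssh/sshd_config"), Path("/etc/passwd")]

def snapshot(paths):
    """Record a SHA-256 hash of each watched file as the known-good baseline."""
    return {p: hashlib.sha256(p.read_bytes()).hexdigest() for p in paths if p.exists()}

def changed_files(baseline, paths):
    """Re-hash the watched files and report anything that differs from the baseline."""
    current = snapshot(paths)
    return [p for p in paths if baseline.get(p) != current.get(p)]

# Taken once when the system is known to be clean, then re-checked on a schedule
baseline = snapshot(WATCHED)
# ... later ...
for path in changed_files(baseline, WATCHED):
    print(f"ALERT: {path} has changed since the baseline was taken")
```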

Misconceptions of event log analysis

Of course, event log analysis is nothing new, but it has not been that commonplace towards the smaller end of the enterprise spectrum courtesy of cost and complexity perceptions; or should that be misconceptions? The truth is that, far from being a budget burden, security event monitoring of this kind can actually be very cost effective, providing meaningful analysis that leads to pro-active protection of the infrastructure and the data within it. Given that the threat surface is effectively expanding all the time, and the security landscape is becoming increasingly dynamic, simply increasing the spend on perimeter and host-based defense systems would appear to be something of a false economy.

Over 12,000 MSPs and IT support companies rely on MAXfocus to pro-actively monitor and maintain their clients’ servers and workstations, and integral to that are event log management, alerting and detective controls. Think of it as a four-step protective plan (see the sketch after the list):

  1. Scan all the things
  2. Count all the things
  3. Spot the anomalies
  4. Apply policy accordingly
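As a purely illustrative toy (not MAXfocus's implementation), those four steps might map onto something like the following: gather events, count them per source and event ID, compare the counts against a baseline, and hand anything anomalous to a policy action. The event IDs and thresholds are made-up example values.

```python
from collections import Counter

def protect(events, baseline, threshold=3.0, policy=print):
    """A toy pass over the four steps: the events were scanned elsewhere (step 1),
    here we count them (step 2), spot anomalies against a baseline (step 3),
    and hand each anomaly to a policy action (step 4)."""
    counts = Counter(events)
    for key, count in counts.items():
        expected = baseline.get(key, 0)
        if count > expected * threshold:
            policy(f"{key}: saw {count} events, expected around {expected}")

# Example with made-up volumes of Windows logon events per (server, event ID)
events = [("server01", 4625)] * 40 + [("server01", 4624)] * 12
baseline = {("server01", 4625): 5, ("server01", 4624): 10}
protect(events, baseline)
```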