If nobody looks at the alerts in the SIEM, are they really alerts?

It’s amazing how often I hear this from organizations. The motivation is good – better detection. Sometimes it involves implementing the latest behavioral detection, or setting the IDS to alert on anomalies, or maybe even just adding a rule that fires on deviations from the mean character length of the User-Agent string (sketched below)… Whatever it is, the next step is always the same:

  • Overwhelmed by the sheer number of alerts
  • “50 alerts in under 5 minutes”
  • Ignore the rest because it’s “low priority”
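
To make the User-Agent example concrete, here’s roughly what such a rule looks like – a toy sketch in Python, with invented field names and thresholds, not any particular product’s rule language:

```python
# Toy sketch of the rule described above: flag requests whose User-Agent
# length deviates too far from the mean. Field names ("user_agent") and
# the z-score threshold are invented for illustration.
import statistics

def find_ua_outliers(events, z_threshold=3.0):
    """Return events whose User-Agent length is a statistical outlier."""
    lengths = [len(e.get("user_agent", "")) for e in events]
    mean = statistics.mean(lengths)
    stdev = statistics.pstdev(lengths) or 1.0  # avoid division by zero
    return [
        e for e in events
        if abs(len(e.get("user_agent", "")) - mean) / stdev > z_threshold
    ]

# On a real network this fires constantly: every unusual but benign
# client (updaters, SDKs, scanners) becomes yet another "alert".
```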


This is not a failure of the detection solutions; this is exactly what they are designed to do. The problem is with us, people, and the lack of tools to help us. We’re very good at detecting patterns and, as a result, very bad at noticing details, especially when overwhelmed with similar data.

Just imagine how the investigation goes: you’re looking at a spreadsheet of a million rows. Each row has enough columns to fill your entire screen with small print… Your job is to figure out whether anything malicious is happening that demands immediate action; but as you scroll further and further the lines begin to blur – are they even different?! You jump a hundred rows, then a thousand at a time – but nothing stands out. Your heart rate rises and you begin to doubt yourself. You wonder if you’re missing something, squinting to look harder and stay focused. But it all looks the same. It’s one creepy forest of logs…

If you’ve worked with that much data or used a SIEM then you know the feeling. We humans are very bad at “big data”. Remember Target? Despite the detected alerts, multiple teams, and escalations, the alert was still dismissed. That’s our confirmation bias kicking in when we try to manually interpret meaning from data, and it gets worse in proportion to the size of the data – but more on biases in cyber security later!

While the attack was in progress, monitoring software (FireEye) alerted staff in Bangalore, India. They in turn notified Target staff in Minneapolis but no action was taken (Elgin, 2014).
– Case Study: Critical Controls that Could Have Prevented Target

Over the last year my team and I have made incredible progress on this problem. Our hand-picked partners are already seeing the benefits of automatically clustering and summarizing billions of rows – all the existing events & alerts – and transforming them into a handful of actionable reports. No alert is left behind, and every report carries the context it needs; no noise!
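
I can’t share the actual pipeline here, but the general idea is easy to illustrate: group near-duplicate alerts so an analyst reviews a handful of clusters instead of millions of rows. Here’s a toy sketch using scikit-learn – the vectorizer, distance metric, and parameters are all illustrative assumptions, not what we actually run:

```python
# Minimal sketch of the general idea: cluster alert texts by similarity
# so near-duplicates collapse into one group. Illustrative only.
from collections import defaultdict
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import DBSCAN

def cluster_alerts(alert_messages):
    """Group similar alert messages; returns {cluster_id: [messages]}."""
    vectors = TfidfVectorizer().fit_transform(alert_messages)
    labels = DBSCAN(eps=0.7, min_samples=2, metric="cosine").fit_predict(vectors)
    clusters = defaultdict(list)
    for label, msg in zip(labels, alert_messages):
        clusters[label].append(msg)  # label -1 = unclustered "noise"
    return clusters
```

Even this naive version turns thousands of repetitive alerts into a short list of groups, plus a small “noise” bucket that’s actually worth a human’s attention.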

More to come! Meanwhile, I have to go review the results of the latest model!
