Today’s hackers have easy access to sophisticated tools that enable them to launch extremely stealthy attacks at very low cost. These attacks can bypass traditional security mechanisms such as firewalls, anti-virus software and endpoint detection. In an attempt to fight these new threats, the cybersecurity industry has developed an array of detection technologies that no longer look for heuristic signatures or indicators of compromise (IOCs), but instead seek out subtle behavioral indicators. The downside of this methodology is that it creates a lot of false alerts; in essence, you lower your threshold to pick up weaker signals but end up picking up a lot of noise. The result is that technologies such as user and entity behavior analytics (UEBA) and anomaly detection are prone to producing multiple false positive alerts that overwhelm security operations center (SOC) teams.
The work of an SOC analyst usually begins with an alert. Whenever a potential threat matches a predetermined signature or rule, the analyst must undertake an investigation and act. But what if an analyst encounters hundreds or thousands of alerts per day?
The term “alert fatigue” describes a phenomenon in which operators who are bombarded with alerts and notifications gradually lose their ability to consume and digest them, which erodes their ability to act. A recent survey revealed that over 30 percent of IT professionals admit to sometimes ignoring security alerts because of high volumes of false positives.
Ignoring alerts is bad on its own, but chasing alerts all day can also cause stress, anxiety and reduced cognitive capability. Abundant academic studies dating back to World War II demonstrate the gradual reduction in cognitive ability experienced by radar operators over time, a trend that was later observed in CCTV control room operators and most recently in military drone operators. Studies have shown that nearly half of all drone operators suffer high levels of job-related stress, and about 25% have “clinical stress” — enough depression, anxiety or stress to hamper their work or family life. The evidence is overwhelming: humans are simply not designed to sit all day in operations rooms reacting to alerts.
False alerts are expensive.
Acting on false alerts doesn’t just cause cognitive strain and stress; it also wastes money. A lot of money.
Organizations waste approximately 395 hours per week chasing erroneous alerts, according to “The Cost of Malware Containment” report, published in 2015. Chasing false malware alerts can drain an organization’s resources as well, with an average of $1.27 million spent annually.
But what about the ability to act when a real alert comes through?
When a high severity alert occurs, it represents a real incident that must be investigated. But knowing that something has happened is only the beginning of the ordeal. SOC teams usually act on too little information, and waste hours compiling all the data related to the incident, including log files, the relevant threat intel feed, user behavior activity, etc. Given the difficulty of identifying true alerts and the time it takes to investigate them, it’s no wonder that the average time to detect a cyber breach is now 99 days.
It’s not a human problem, so why not automate?
Security automation and orchestration are now being celebrated by some as the solution to the alert flood problem. These processes allow automated workflows to handle most of the lower-level alerts, freeing SOC analysts to focus on the more menacing threats. While this makes perfect sense in theory, the reality is that there is only so much you can automate in terms of alert handling: a human analyst will always have to be in the decision-making loop. And even if this were a perfect solution, adversaries would surely utilize automation engines to multiply their attacks ten-fold, ultimately overwhelming these security systems. The real answer must come from smarter mechanisms that reduce the number of alerts and automate the information gathering and correlation, so that when analysts receive an alert, they will:
- Have higher certainty that it’s an actual threat
- Have all the information they need to investigate and mitigate the incident
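A minimal sketch of this division of labor: automation auto-closes indicators a human has already cleared and pre-gathers context for everything else, but the final call on an unresolved alert stays with the analyst. All names here (`Alert`, `KNOWN_BENIGN`, `triage`) are hypothetical illustrations, not any vendor's API.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    source: str            # e.g. "edr", "ids"
    severity: str          # "low", "medium", "high"
    indicator: str         # e.g. a suspicious IP or file hash
    context: dict = field(default_factory=dict)

# Hypothetical allow-list of indicators a human analyst has already verified.
KNOWN_BENIGN = {"10.0.0.5", "internal-backup-job"}

def triage(alert: Alert) -> str:
    """Automate the low-level steps; keep the analyst in the decision loop."""
    # Step 1: auto-close alerts on indicators already cleared by a human.
    if alert.indicator in KNOWN_BENIGN:
        return "auto-closed"
    # Step 2: enrich the alert so the analyst starts with context,
    # not a blank page (placeholders stand in for real log/intel queries).
    alert.context["related_logs"] = f"logs matching {alert.indicator}"
    alert.context["threat_intel"] = f"intel lookup for {alert.indicator}"
    # Step 3: anything unresolved is escalated -- a human makes the final call.
    return "escalated-to-analyst"
```

The point of the sketch is the boundary: steps 1 and 2 are mechanical and safely automated, while step 3 deliberately ends at a human hand-off rather than an automated verdict.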
AI/machine learning is key
Artificial intelligence is now integrated into security solutions to enable more accurate alerting and investigation. Big data engines are able to analyze the security log data of the entire network and group events that are significantly correlated and behaviorally unique into distinctive clusters. This greatly improves the accuracy of malicious activity detection. Once the detection process is cluster-wide, the algorithm can identify weak or hidden signals, leading to more accurate detection and fewer false positives. Cluster-wide detection ensures that security analysts receive a comprehensive attack description that enables them to mitigate the threat completely.
Unsupervised machine learning algorithms that continuously analyze the massive amount of network security log data for hidden and unknown security incidents deliver immediate results and require no changes to the network infrastructure.
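The clustering idea can be illustrated with a toy version: treat events that share an entity (a host, user or IP) as correlated, and group them into clusters with a union-find pass. This is a deliberate simplification — real engines correlate on rich behavioral features, not just shared entities — and the event data here is invented for the example.

```python
from collections import defaultdict

# Toy event stream: (event_id, entity) pairs. Sharing an entity is our
# stand-in for "significantly correlated" -- an assumption for illustration.
events = [
    ("e1", "host-a"), ("e1", "user-bob"),
    ("e2", "user-bob"),
    ("e3", "host-z"),
]

def cluster_events(events):
    """Group events into clusters via shared entities (connected components)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Link every event to the entities it touches.
    for eid, entity in events:
        union(eid, entity)

    # Collect events by their component root: one cluster per component.
    clusters = defaultdict(set)
    for eid, _ in events:
        clusters[find(eid)].add(eid)
    return sorted(sorted(c) for c in clusters.values())

# e1 and e2 both involve user-bob, so they land in one cluster; e3 stands alone.
print(cluster_events(events))  # [['e1', 'e2'], ['e3']]
```

An analyst then receives one cluster-level alert describing the whole chain of related activity, instead of three isolated alerts that each look weak on their own — which is exactly why cluster-wide detection surfaces the hidden signals.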
From defender to hunter and investigator
Using the same technology, organizations can now actively hunt for threats. Skilled analysts can proactively run queries to identify abnormal network activity and investigate further to determine whether it is malicious. Once a breach has been identified, the same technology can be used for much quicker investigation, including collection of all relevant information, analysis and remediation.
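One common shape of such a hunting query is a per-host baseline comparison: flag hosts whose activity today deviates sharply from their own history. The sketch below uses outbound data volume and a z-score threshold; the dataset and threshold are invented for illustration, and a real hunt would query a SIEM or data lake rather than an in-memory dict.

```python
from statistics import mean, stdev

# Hypothetical daily outbound-bytes history per host (illustrative numbers).
baseline = {"host-a": [120, 130, 125, 118], "host-b": [90, 95, 88, 92]}
today = {"host-a": 127, "host-b": 900}

def hunt_egress_anomalies(baseline, today, z_threshold=3.0):
    """Flag hosts whose outbound volume deviates sharply from their own past."""
    flagged = []
    for host, history in baseline.items():
        mu, sigma = mean(history), stdev(history)
        # A host is anomalous relative to itself, not to a global average.
        if sigma and abs(today[host] - mu) / sigma > z_threshold:
            flagged.append(host)
    return flagged

# host-b's 900 bytes dwarfs its ~91-byte average, so it gets flagged.
print(hunt_egress_anomalies(baseline, today))  # ['host-b']
```

Flagged hosts are a starting point for investigation, not a verdict — the analyst still determines whether the deviation is malicious, which is the "investigate further" step the text describes.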
Shifting your state of mind from reactive to proactive
Existing endpoint and perimeter defenses generate endless alerts, leaving any detected anomalies drowning in a sea of false positives. This costs organizations millions and leaves them exposed to advanced threats.
Organizations need to reconsider their security strategies and adapt measures accordingly. AI-based, full scope detection of incidents ensures that attacks are completely detected in a timely manner, thus preventing significant damage to the organization and its reputation.