Is Admitting You’re One Step Behind Attackers the Key to Getting in Front of Them?
By Arie Fred, VP of Product, SecBI
While cyber defenders work to protect their networks, hackers and more sophisticated nation-state attackers focus on developing plans (including business plans) to make money from enterprise assets. From crime syndicates to teenagers just “fooling around,” there are many levels of cyberattacks, along with an ever-increasing attack surface due to growing connectivity particularly in the Internet of Things (IoT) world.
Recent Breaches: LockerGoga, NotPetya, and WannaCry
Due to the recent Norsk Hydro attack by the LockerGoga ransomware, malware is at the forefront of the news. Unfortunately, we’ve read too many headlines like this one before, such as the Maersk attack by NotPetya, and probably will again in the future. These breaches brought ransomware to the forefront of public knowledge (and fear).
Going further back in time to the WannaCry ransomware attack, security managers at the time said that this massive ransomware attack exposed significant weaknesses in global IT systems.
Why am I comparing these three attacks: LockerGoga, NotPetya, and WannaCry? Besides the use of ransomware, in each case the impacted companies reverted to manual operations and were “lucky” enough to have backups, which is not always the case.
Although having a strong backup plan is great, there’s no reason to settle for backups alone. It’s a better strategy to have strong controls from the start to prevent such large-scale attacks. To prevent and protect against known and unknown large-scale ransomware attacks, security managers must have not only a backup plan, network segmentation, and a security policy covering gateways and endpoint security (which all too often fails to identify attacks), but also network security and visibility.
The ability to see everything, regardless of endpoint types and security deployment, is crucial in understanding the full scope of an unfolding incident. The best method to achieve complete network visibility is by looking at logs. However, unlike a typical SIEM deployment that collects and correlates only some logs (mostly alerts), it is best to use all of them with advanced analytics to gain complete visibility and the ability to act on the information found.
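As a minimal sketch of why analyzing all logs matters (not just the lines that already triggered an alert), consider secure web gateway logs in a hypothetical tab-separated format. The field layout, host names, and domains below are illustrative, not any product’s actual schema:

```python
from collections import defaultdict

# Hypothetical gateway log lines: timestamp, source host, destination, action.
# Format and values are illustrative only.
LOG_LINES = [
    "2019-04-01T10:00:01\thost-a\tupdates.example.com\tALLOWED",
    "2019-04-01T10:00:05\thost-b\tupdates.example.com\tALLOWED",
    "2019-04-01T10:00:09\thost-c\tupdates.example.com\tALLOWED",
    "2019-04-01T10:01:00\thost-a\tqz0x.example.net\tALLOWED",
    "2019-04-01T10:06:00\thost-a\tqz0x.example.net\tALLOWED",
    "2019-04-01T10:11:00\thost-a\tqz0x.example.net\tALLOWED",
]

def rare_destinations(lines, max_hosts=1):
    """Return destinations contacted by at most `max_hosts` distinct hosts.

    Rarity across the whole estate is only visible when ALL log lines are
    analyzed, not just the ones that already produced an alert.
    """
    hosts_per_dest = defaultdict(set)
    for line in lines:
        _, host, dest, _action = line.split("\t")
        hosts_per_dest[dest].add(host)
    return {d for d, hosts in hosts_per_dest.items() if len(hosts) <= max_hosts}

print(rare_destinations(LOG_LINES))  # {'qz0x.example.net'}
```

Note that none of these connections were blocked or alerted on; the rare destination surfaces only because every log line was examined.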
The Pitfall: Our Confidence Level in Traditional Approaches
People place a lot of emphasis on preventive measures. Once installed, they expect appliances such as firewalls and anti-virus solutions to fulfill their mission of stopping behaviors deemed as “malicious”. The pitfall of this approach is the blind trust in these measures and over-reliance on them.
In fact, the problem resulting from these appliances is largely ignored: how to deal with the multitude of alerts they issue. Alerts typically point to minor infractions such as compliance issues. Yet actual “malicious” activities, such as malware, seldom generate alerts because the “real attacks” are well designed to bypass preventive measures, leaving analysts blind until it is too late.
Problem 1: Lack of Visibility
Many enterprises have either too much or too little visibility, leaving them either flooded with alerts and false positives, or completely blind. A typical enterprise may see hundreds of alerts daily, generated by its SIEM from proxy logs of visits to blacklisted sites. Prioritizing and investigating these hundreds of alerts each day is an insurmountable task. Most enterprises simply don’t have the staff or time to investigate all of these alerts.
Eventually, this high volume typically leads to alerts not being investigated at all, on the assumption that the visits to blacklisted sites were already blocked. Leaving alerts uninvestigated creates gaps in knowledge and valuable information, making it obvious that most security operations teams need fewer false-positive alerts and better efficiency.
On the other hand, when threats are not detected, generate no alerts, or are falsely identified as benign, they can easily enter the system. Without a way to see what is happening within the network as a whole, and by viewing only anomalies, security teams are effectively blind, leaving the organization completely vulnerable to attack.
Problem 2: Packet Capture Network Traffic Analysis (NTA)
Network Traffic Analysis (NTA) tools have long been used to improve efficiency in enterprise networks, locate unused capacity and bandwidth, and eliminate chokepoints. NTA has also been employed as an arm of cybersecurity.
The problem is that while the logic of using traffic analysis in cybersecurity is solid, the reality is a bit different. In reality:
- The level of encrypted traffic keeps growing, leaving packet capture devices blind.
- Cloud-based services are growing in popularity, making packet capture impractical.
- When network traffic is internal and unencrypted, privacy issues arise because the payload is visible to the capture devices.
What is needed is a next-generation NTA solution that detects the types of cyberattacks that are growing in number and sophistication and provides a faster and more effective solution than hunting through individual packets.
A metadata solution, using unsupervised and supervised machine learning, is more scalable than one based on packet capture, yet provides the same, if not better, data quality. It can be deployed and up and running within hours and comes with a lower total cost of ownership than traditional NTA products.
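To illustrate the idea (and not any vendor’s actual schema), a metadata record might carry only NetFlow-style fields, never the payload, so the analysis works on encrypted traffic and sidesteps the privacy issue. All field names and numbers below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class FlowMetadata:
    # NetFlow-style fields only; the payload itself is never stored, so the
    # approach works on encrypted traffic and avoids payload-privacy issues.
    src_host: str
    dst_domain: str
    bytes_out: int
    bytes_in: int
    duration_sec: float
    connection_count: int

flow = FlowMetadata("host-a", "qz0x.example.net", 512, 128, 0.4, 288)

# Even without payload, the shape of the traffic is informative: many small,
# regular, repeated uploads to a single domain suggest beaconing.
suspicious = flow.connection_count > 100 and flow.bytes_out < 2048
print(suspicious)  # True
```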
The Solution: A Combo of Unsupervised & Supervised Machine Learning
In cybersecurity, two machine learning branches (supervised and unsupervised) can beautifully complement each other in cycles to improve network visibility, reduce false positives and improve efficiencies.
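A minimal sketch of that cycle, under the assumption that each host is summarized by two simple traffic features: a tiny pure-Python k-means groups hosts (the unsupervised step), an analyst labels each cluster once, and new hosts are then classified by nearest labeled centroid (the supervised step). The features, values, and thresholds are all illustrative:

```python
import math
import random

def kmeans(points, k=2, iters=20, seed=0):
    """Tiny pure-Python k-means: the unsupervised step groups similar profiles."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: math.dist(p, centroids[j]))
            clusters[i].append(p)
        centroids = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids

def classify(point, centroids, labels):
    """Supervised step: assign a new entity to the nearest labeled cluster."""
    i = min(range(len(centroids)), key=lambda j: math.dist(point, centroids[j]))
    return labels[i]

# Features per host: (connections per hour, avg bytes per connection).
# All values are made up for illustration.
points = [(2, 50_000), (3, 48_000), (240, 600), (250, 550), (4, 52_000)]
centroids = kmeans(points)

# An analyst labels each cluster once; future hosts are classified automatically.
labels = {i: ("beaconing" if c[0] > 100 else "normal")
          for i, c in enumerate(centroids)}
print(classify((245, 580), centroids, labels))  # beaconing
```

The point of the cycle is that the expensive human judgment (labeling) happens once per cluster, while classification of new entities is automatic from then on.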
Implementation of Machine Learning
What is needed is a method that helps analysts of all experience levels achieve the goal of incident investigation and response programs more efficiently by applying the right context on the alerts, investigating the high priority alerts with the necessary context, and consequently minimizing the risk to their organization.
Machine learning and behavior analytics have shown promise in accurately detecting advanced attacks. Unlike UBA solutions that utilize only statistical techniques, a combination of unsupervised, supervised, and adaptive machine learning with statistical techniques builds behavioral profiles that more reliably link anomalies with malicious intent. It is ideal to apply all analytics in parallel, on all data (with no sub-sampling, unlike many UBA solutions), and for all entities (i.e., users, hosts, devices, etc.).
Machine learning modules need to use global behavioral patterns: entity specific patterns around historical normal behavior and peer-based pattern analysis to determine the possible threats. As a result, there is detection over a broader range of anomalies than other UBA solutions, and with greater accuracy. Ideally, the machine learning solution would integrate analytics with forensics. This provides analysts with detailed supporting evidence, potentially dating back months. Forensics help analysts determine exactly what happened, when it happened, and who else was affected, making it very easy for them to work their way from detection to investigation to the closure of alerts.
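A simplified sketch of combining an entity-specific baseline with peer-based analysis, using plain z-scores; real products use far richer models, and all the numbers here are made up:

```python
from statistics import mean, stdev

def zscore(value, history):
    """How far `value` deviates from the behavior captured in `history`."""
    mu, sigma = mean(history), stdev(history)
    return (value - mu) / sigma if sigma else 0.0

# Hypothetical daily upload volumes in MB; all numbers are illustrative.
own_history = [100, 110, 95, 105, 98]   # this host's historical normal
peer_today = [102, 97, 111, 104]        # peers in the same group, today
today = 480                             # today's volume for this host

entity_score = zscore(today, own_history)  # vs. its own past
peer_score = zscore(today, peer_today)     # vs. its peer group

# Flag only when both views agree, reducing false positives from hosts
# whose individual "normal" simply differs from the group's.
alert = entity_score > 3 and peer_score > 3
print(alert)  # True
```

Requiring agreement between the entity view and the peer view is one simple way to cover a broader range of anomalies while keeping accuracy up.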
Cluster Analysis and Effective Automation
By using machine learning to group related behaviors, cluster analysis can be performed, followed by automated responses to malicious communications in the network. This reduces dwell time (TTM) from days to minutes and prevents damage to valuable data.
By allowing the system to compile a full-scope report of each incident, analysts will know the entities involved in the incident as soon as they are made aware of a threat. This allows them to quickly view the data and act accordingly. Simultaneously, this allows the automated playbooks to work effectively, as they will no longer waste time on false positives.
It is crucial to implement and automate playbooks for simple alerts, but if the system can also group anomalies and provide analysts with context to effectively decide how to handle complex threats, there is a drastic improvement. By analyzing massive amounts of network security log data, collections of events that are significantly correlated and unique in their behavior can be found and divided into distinctive clusters, ensuring detection as the clusters evolve. The clusters are based on changes in the network’s activity and enable faster response and planning of next steps.
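The idea can be sketched as follows, with hypothetical event records and playbook actions that do not reflect any specific product’s API: events sharing an indicator are grouped into one incident, and a playbook is chosen based on the incident’s full scope rather than on each alert in isolation.

```python
from collections import defaultdict

# Hypothetical normalized events; in practice these come from log analytics.
events = [
    {"host": "host-a", "type": "beacon", "dest": "qz0x.example.net"},
    {"host": "host-a", "type": "download", "dest": "qz0x.example.net"},
    {"host": "host-b", "type": "beacon", "dest": "qz0x.example.net"},
    {"host": "host-c", "type": "policy_violation", "dest": "social.example.com"},
]

# Cluster events sharing an indicator (here: destination) into one incident,
# so the analyst sees the full scope instead of four separate alerts.
incidents = defaultdict(list)
for e in events:
    incidents[e["dest"]].append(e)

def respond(dest, cluster):
    """Pick a playbook from the incident's overall shape (names illustrative)."""
    hosts = {e["host"] for e in cluster}
    if any(e["type"] == "beacon" for e in cluster):
        return f"block {dest}; isolate {len(hosts)} host(s); open investigation"
    return "log and close"  # simple misconduct: no analyst time spent

for dest, cluster in incidents.items():
    print(dest, "->", respond(dest, cluster))
```

Because the cluster already names every affected host, the response can be targeted at the full incident in one step instead of chasing each alert separately.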
If we agree on the simple fact that currently, we remain one step behind the attackers, then we must admit that no proxy/firewall can ever be updated enough to block what is unknown to it. If we cannot even manage to check the alerts, what are our chances of actually finding and investigating a suspicious incident that was overlooked?
After describing the conundrums of today’s security operations centers (SOC), I know that I am interested in a solution that:
- Is narrow enough to solve a specific problem (e.g. malware, beaconing, crypto-jacking, data exfiltration),
- Generates very smart alerts, including the full scope of what’s going on in the cyberattack or malicious communications, and doesn’t just throw an alert on every simple misconduct,
- Allows for unprecedented network visibility into one specific blind spot, making the life of a hunter much easier,
- Uses technology to present security analysts with the information they need to mitigate and remediate suspicious malicious communications within minutes of a breach, and doesn’t require additional manual investigation to acquire that information,
- Allows for targeted and automated response of the full scope of the incident and its specific complete narrative.