Artificial intelligence is the buzzword in the security world today. It is expected to transform cybersecurity and incident response, and even to ease the looming skills shortage. Industry forecasts project AI investment to grow by some 300% over the next couple of years. Companies such as IBM and Amazon are building powerful AI tools, as are a host of innovative startups, all employing AI in unique and novel ways.

With all that buzz, it’s difficult to assess whether AI is really a “thing” or just another buzzword that will be forgotten in a year or two.


What is AI?

Artificial intelligence is a subfield of computer science aimed at enabling computers to do things normally done by people, specifically things associated with acting intelligently. Computer scientist John McCarthy, later a longtime Stanford professor, coined the term in his 1955 proposal for what is now called the Dartmouth Conference, the 1956 workshop where the core mission of the AI field was defined.


Uses of AI in Cybersecurity

In a sense, cybersecurity has always relied on AI to identify and respond to threats. The original detection and prevention mechanism, the antivirus, utilizes a simple form of AI called signature analysis: the machine compares a file’s attributes (its “signature”) against a list of known malicious signatures, looking for a match. This process mimics the human mind’s basic pattern matching; think of how you teach a toddler to identify a picture of a horse.
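To make the idea concrete, here is a minimal sketch of signature matching in Python. The hashes and file contents are purely illustrative, not real malware signatures:

```python
import hashlib

# Hypothetical signature database: hashes of files already known to be malicious.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"malicious-sample-1").hexdigest(),
    hashlib.sha256(b"malicious-sample-2").hexdigest(),
}

def signature_scan(file_bytes: bytes) -> bool:
    """Flag a file when its hash matches a known-bad signature."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES
```

Exact matching like this is fast and precise, which is also why it fails on anything not already in the list.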

The problem with this simple AI algorithm is that it is easily bypassed by new and unknown threats. As such, more sophisticated mechanisms have been designed to complement it, including behavioral analysis. In this method, the file’s actions are inspected rather than its appearance. Continuing our horse analogy, imagine standing outside a horse training ground. Although you can’t see the horse, you can hear it running and smell manure, so your mind concludes that there’s a horse inside.
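A behavioral engine can be sketched as a set of rules over observed actions rather than file contents. The action names and the threshold below are invented for illustration:

```python
# Hypothetical catalog of runtime behaviors considered suspicious.
SUSPICIOUS_ACTIONS = {
    "modifies_boot_sector",
    "disables_antivirus",
    "encrypts_user_files",
}

def is_suspicious(observed_actions: set, threshold: int = 2) -> bool:
    """Flag a sample that exhibits at least `threshold` suspicious
    behaviors, regardless of what its bytes look like."""
    return len(observed_actions & SUSPICIOUS_ACTIONS) >= threshold
```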

Unfortunately, behavioral analysis engines (such as sandboxes) are also easily fooled by sophisticated threats: the malware simply remains dormant until it is released from the virtual environment. As a result, a third type of AI is now used: anomaly detection. This method concentrates on activity patterns over long periods of time; it develops a ‘baseline’ of normal activity and flags variations from that norm. For instance, say a certain horse gets up at dawn and runs across the track every day; the system identifies it as Horse-X. The moment Horse-X deviates from its routine (e.g., walking instead of running), the system raises an alert. The downside of this method is that it produces many false positives: alerts on activity that turns out to be benign.
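One common way to implement this baseline-and-deviation idea is a simple statistical test: learn the mean and spread of past activity, then flag values that stray too far. The numbers and the threshold here are assumptions for illustration:

```python
import statistics

def build_baseline(history):
    """Learn a baseline (mean, standard deviation) from past activity counts."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the norm."""
    mean, stdev = baseline
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold
```

The same mechanics also explain the false positives: any legitimate but unusual spike clears the threshold just as a real attack does.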


Next Gen AI: Machine Learning

Since traditional AI mechanisms are not sufficient to handle today’s threats, a more advanced form is required. “Machine learning” is defined by Stanford University as “the science of getting computers to act without being explicitly programmed.”

Unsupervised machine learning is now considered the forefront of AI development. Working at a scale unimaginable until a few years ago, ML enables organizations to rapidly analyze large quantities of data and uncover hidden patterns, without first having to learn a baseline activity pattern and look for deviations.

Looking at a large data set, ML algorithms can analyze a wealth of related, multidimensional information and identify indications of compromise, anomalies, policy violations, and much more.
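As a toy illustration of the unsupervised idea, the sketch below clusters one-dimensional data (imagine hypothetical per-hour connection counts) into groups with no labels and no predefined baseline; production systems use far richer features and mature libraries:

```python
import random

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Toy 1-D k-means: partition unlabeled data into k groups."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # Move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters
```

Run on a mix of quiet hours and a burst of activity, the algorithm separates the two regimes on its own, which is exactly the “hidden pattern” discovery described above.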

But even ML isn’t the final word when it comes to AI; deep learning and neural networks are also being utilized in cybersecurity, although to a lesser degree.


A Word of Caution

AI is here to stay. It will play a large part in our lives in the years to come (think smart homes and self-driving cars) and will be a pillar for all future cybersecurity technologies.

However, since AI and its subsets are available to everyone, there is a real danger that they will be adopted by criminals and fraudsters far more quickly than by the security industry and the organizations it is trying to protect. DARPA certainly recognized this hazardous potential when it launched its Cyber Grand Challenge, pitting AI against AI in a battle for cyber supremacy.

We shouldn’t be worried about Skynet-style attacks just yet, but AI is likely to play a crucial part in securing our cyber domains in the years ahead.