The concept of AI might feel a bit like futuristic science fiction, but in truth, simple AI has been around for decades, and modern, evolved AI is still rooted in the same essentials. One way to sort out the history of AI is to categorize its evolution into three waves:
- First-wave rules-based AI
- Second-wave supervised AI
- Third-wave unsupervised AI
From these simplified descriptors, it’s easy to conclude that today’s most sophisticated AI is the result of advances focused on reducing reliance on humans, a reliance that has always been affected by unintentional (and sometimes intentional) bias and unavoidable human error. Let’s take a closer look at each of these evolutionary phases in AI development.
Rule-based systems operate on observations. Early adopters used this form of AI to examine observations related to things like network traffic and user actions, applying rules manually created by human operators. For example, a user might establish a “Largest Outbound File Transfer” (LOFT) rule that triggers alerts when files exceeding a predetermined size exit the local network. Another typical rule might flag a larger-than-expected number of failed login attempts.
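In code, a first-wave rule engine amounts to little more than hand-written conditionals. The sketch below is illustrative only; the thresholds, event fields, and function names are hypothetical, not any vendor's actual rule format.

```python
# A minimal sketch of first-wave, rule-based detection.
# Thresholds and the event schema are hypothetical.

LOFT_THRESHOLD_BYTES = 500 * 1024 * 1024  # alert on outbound files > 500 MB
MAX_FAILED_LOGINS = 5                     # alert after 5 failed login attempts

def check_events(events):
    """Apply static, human-written rules to a list of event dicts."""
    alerts = []
    failed_logins = {}
    for ev in events:
        if ev["type"] == "outbound_transfer" and ev["bytes"] > LOFT_THRESHOLD_BYTES:
            alerts.append(("LOFT", ev["user"]))
        elif ev["type"] == "failed_login":
            failed_logins[ev["user"]] = failed_logins.get(ev["user"], 0) + 1
            if failed_logins[ev["user"]] == MAX_FAILED_LOGINS:
                alerts.append(("FAILED_LOGINS", ev["user"]))
    return alerts
```

Note that every number in this sketch had to be chosen by a human in advance, which is exactly the limitation discussed next.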
These rules make sense, and the alerts they generate can be helpful, but this approach is laborious, quite limited in scope, and apt to trigger large numbers of alerts for behaviors that are unusual but not truly dangerous. Analysts could never hope to create enough rules to account for the common, standard, acceptable behaviors that frequently arise at various points along busy enterprise networks.
For a LOFT threshold to be correct, for example, an analyst would need to know in advance that a specific user very rarely sends files larger than a specific size outside the local network and then manually examine traffic patterns and file sizes for every individual user on the network. Worse, these user patterns are prone to shift as job responsibilities shift for any number of reasons, many of which are wholly unpredictable.
Try as they might, analysts stuck working with first-wave AI would be hard pressed to reduce false positives; in fact, the alert load is more likely to increase over time as networks organically grow and change alongside the natural flow of business. First-wave AI is helpful, to a point, but as a comprehensive cybersecurity solution it is more helpful in theory than in practice, especially in the 2020s.
Later in the 20th century, AI became more beneficial as a cybersecurity tool thanks to advances in machine learning — a set of mathematical algorithms that enable detection of patterns in data. Several machine learning algorithms became prominent over the last few decades of the 20th century, including:
- Deep neural networks
- Support vector machines
- Bayesian learning
In contrast to first-wave, rule-based systems, machine learning AI uses historical data on the network to determine the thresholds for rules, alleviating much of the reliance on manual human input and analysis. Machine learning can also analyze deviations from previously observed network behaviors.
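The core second-wave idea, deriving a rule threshold from historical data rather than hand-picking it, can be sketched very simply. The mean-plus-three-standard-deviations rule below is a common illustrative choice, not a claim about any particular product:

```python
import statistics

# A minimal sketch of learning a per-user threshold from historical
# outbound-transfer sizes, instead of setting it by hand.
# The mean + k*stdev rule is illustrative only.

def learned_threshold(historical_sizes, k=3.0):
    """Set the alert threshold k standard deviations above the historical mean."""
    mean = statistics.mean(historical_sizes)
    stdev = statistics.pstdev(historical_sizes)
    return mean + k * stdev

def is_anomalous(size, historical_sizes):
    """Flag a new transfer that exceeds the learned threshold."""
    return size > learned_threshold(historical_sizes)
```

The catch, as the next paragraph notes, is that the historical data this threshold depends on may be stale by the time it has been collected and processed.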
The caveat — and it is a large caveat — is that the enormous amount of data flowing across a typical enterprise network requires a massive, computationally expensive learning effort that can take months or years to achieve. Worse, once the learning is “finished,” the dynamics of the network have likely shifted, rendering those months or years of learning incomplete.
Even if all goes to plan and a machine-learning-based cybersecurity solution is relatively comprehensive, the very nature of that learning process creates inherent, exploitable blind spots attractive to bad actors poised to look for those predictable vulnerabilities. In other words, bad actors learned how to use machine learning alongside cybersecurity vendors and have been able to quickly swoop in and attack these “protected” AI-secured systems.
Second-wave AI is head and shoulders above first-wave rule-based AI, but the limitations and vulnerabilities inherent to second-wave approaches have relegated this approach to legacy status. Today’s cybercriminals are much more sophisticated than they were a few decades ago, yet these solutions still appear frequently in the cybersecurity marketplace.
The dynamic nature of enterprise networking has only become more prominent since the early days of AI-based cybersecurity. It’s no wonder second-wave solutions have repeatedly failed in the face of attacks by lightning-fast bad actors wielding novel, zero-day exploits. The good news is that the third wave of AI is alive and thriving.
Third-wave AI is based on a generative model of dynamics created in an unsupervised environment. In simple terms, third-wave AI “lives” on a network and develops a comprehensive understanding of expected and unexpected yet acceptable behavior.
From the first 5 minutes after deployment, MixMode employs unsupervised AI to learn, without relying on historical data. The platform constantly adapts to dynamic changes in massive amounts of network and cloud data in a computationally efficient manner, based on mathematical algorithms invented in this century. MixMode represents a sea change in network security.
Supervised vs. unsupervised
As we’ve mentioned, MixMode represents an unsupervised versus a supervised machine learning approach. Let’s break down the difference:
- Supervised learning relies on labeled, historical data
- Unsupervised learning handles classification on its own
In a supervised machine learning approach, humans create labels as inputs to an algorithm that is then trained to recognize the labeled patterns (again, as determined by humans). The algorithm can only recognize newly acquired data as belonging to a type that was already present in the historical training data. A typical example is image recognition by deep neural networks, which can recognize objects in new images based on similar objects in the labeled images on which they were trained.
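A toy sketch makes the dependence on human-supplied labels concrete. Real image recognition uses deep neural networks, not the hypothetical nearest-neighbor classifier below; the point is simply that the model can only ever answer with a label a human already provided:

```python
# A minimal sketch of supervised classification (illustrative only).
# Humans supply every label in `training`; the model can only ever
# return a label that already appears in that training data.

def nearest_neighbor(labeled, point):
    """Return the label of the training example closest to `point`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labeled, key=lambda ex: dist(ex[0], point))[1]

# Hypothetical 2-D "feature" points, labeled by a human.
training = [((0, 0), "cat"), ((0, 1), "cat"), ((5, 5), "dog"), ((6, 5), "dog")]
```

Ask this model about anything that is neither a cat nor a dog and it will still answer "cat" or "dog"; it has no way to say "something new."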
By contrast, unsupervised learning classifies objects it “sees” in data with its own internal labels, using an approach quite similar to organic human learning. Consider that even a baby will recognize a passing cat as a thing that moves, differentiating it from a static background environment. Babies can even note specific features, such as the cat having four legs, whiskers, ears, eyes, and a long tail. Young babies aren’t aware that this object is related to a label called “cat,” but they are still learning a great deal about cats. The key here is establishing a baseline (the static background) and detecting the deviation (the motion of the object).
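The baseline-and-deviation idea can be sketched without any labels at all. The detector below (a hypothetical illustration, far simpler than any real third-wave system) builds its own running notion of "normal" from the stream and flags observations that deviate from it:

```python
# A minimal sketch of unsupervised baseline-and-deviation detection.
# No labels are supplied: "normal" is learned from the stream itself.
# Uses Welford's online algorithm for running mean and variance.

class BaselineDetector:
    def __init__(self, tolerance=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0          # running sum of squared deviations
        self.tolerance = tolerance

    def observe(self, x):
        """Return True if x deviates from the learned baseline, then update."""
        deviant = False
        if self.n >= 10:       # need a minimal baseline before judging
            std = (self.m2 / self.n) ** 0.5 or 1e-9
            deviant = abs(x - self.mean) > self.tolerance * std
        # fold the new observation into the baseline either way
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return deviant
```

Note that nothing here names what a deviation *is*, just as the baby has no word for "cat"; the detector only knows that something broke from the background.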
The modern threatscape includes a constantly evolving set of unlabeled deviations in the form of zero-day threats and modifications of well-known threats. Unsupervised AI makes it possible to get ahead of such threats, but too often, vendor offerings in this arena rely on insufficient, off-the-shelf algorithms (such as clustering algorithms) and still require enormous data stores to provide any measure of real-world protection.
MixMode harnesses the true potential of third-wave AI
MixMode’s unsupervised, third-wave AI computes patterns of interaction over many different timescales, contrasting each new 5-minute interval with what was seen previously. When patterns deviate, the platform assesses the security risk implied by the deviation and presents it to the user.
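To make the multi-timescale idea concrete, here is a toy illustration, emphatically not MixMode's actual algorithm, of scoring the latest 5-minute interval against baselines held over an hour, a day, and a week of 5-minute slots:

```python
# A toy illustration of multi-timescale deviation scoring.
# Window sizes count 5-minute slots; all names are hypothetical.

INTERVALS_PER = {"hour": 12, "day": 288, "week": 2016}

def risk_score(history, latest):
    """Score how far `latest` sits from the mean at each timescale."""
    scores = {}
    for name, window in INTERVALS_PER.items():
        recent = history[-window:]
        mean = sum(recent) / len(recent)
        spread = (sum((x - mean) ** 2 for x in recent) / len(recent)) ** 0.5 or 1.0
        scores[name] = abs(latest - mean) / spread
    # flag if ANY timescale finds the interval unusual
    return max(scores.values())
```

Judging each interval at several timescales at once is what lets a behavior that looks normal hour-to-hour still stand out against a weekly rhythm, and vice versa.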
Even if a threat is, indeed, a zero-day attempt, the unsupervised nature of MixMode’s dynamic learning algorithms can recognize it. And if the platform determines the risk is low, analysts aren’t bombarded with a deluge of intel and notices, eliminating more than 95% of the false positives typically triggered by less-evolved solutions.
MixMode’s third-wave AI makes its decisions on zero-day threats and false positives based on the intuitive, transparent concept of interaction among network elements over a variety of timescales — just as a human would — but utilizes its massive computational power to do so efficiently.