Anomaly detection is the “identification of rare occurrences, items, or events of concern due to their differing characteristics from the majority of the processed data,” allowing organizations to track “security errors, structural defects and even bank fraud,” according to DeepAI. Anomaly detection takes three main forms: unsupervised, supervised, and semi-supervised. Security Operations Center (SOC) analysts use each of these approaches with varying degrees of effectiveness in Cybersecurity applications.
Often, Cybersecurity vendors make bold claims about artificial intelligence (AI) and its role in their anomaly detection products. Though these vendors imply that their AI enhancements can identify anomalies on their own, the reality often falls short. Even when these systems can identify anomalies (with or without AI tools), anomaly identification is a far cry from actually swatting down threats.
AI in Cybersecurity has become a marketing tool. Systems built on conventional machine learning, whether supervised or unsupervised, may carry the “AI” label, but the underlying technology is dated. So-called first- and second-wave AI can be taught to recognize anomalies that deviate from an expected norm (a baseline established by human operators), but true self-learning AI is far less common among the options available in the Cybersecurity marketplace. Most available systems are limited by their dependence on human interaction.
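To make the "baseline established by human operators" idea concrete, here is a minimal sketch of how a first/second-wave system might flag deviations from a fixed, operator-set baseline. The function name, the z-score threshold, and the login numbers are all illustrative assumptions, not any vendor's actual method.

```python
def flag_anomalies(values, baseline_mean, baseline_std, z_threshold=3.0):
    """Return indices of observations deviating more than z_threshold
    standard deviations from an operator-established baseline."""
    flagged = []
    for i, value in enumerate(values):
        z_score = abs(value - baseline_mean) / baseline_std
        if z_score > z_threshold:
            flagged.append(i)
    return flagged

# Example: hourly login counts; the baseline was set manually by an analyst.
logins = [12, 14, 11, 13, 95, 12]
print(flag_anomalies(logins, baseline_mean=12.5, baseline_std=1.5))  # [4]
```

The weakness is visible in the signature: the baseline is frozen at whatever the operator chose, so any legitimate shift in behavior (such as a surge in remote work) shows up as a flood of flagged indices until a human updates the numbers.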
There’s no doubt that anomaly detection is helpful and necessary as a key component of Cybersecurity. Tools like User Behavioral Analytics (UBA) and Network Traffic Analysis (NTA) are based around anomaly detection. The key difference between Cybersecurity solutions that use these tools lies in what happens once an anomaly is detected.
The Battle with False Positives
Systems limited to supervised machine learning tend to flag so many potential anomalies that analysts are left battling an endlessly growing stack of false positive alerts. Anomalies can range from excessive login attempts to spikes in traffic between two points to unexpected behaviors like an unusually high number of remote log-ins. As we learned during the pandemic response in 2020, this latter “anomaly” was exactly what many organizations needed to keep business moving while workers were stuck at home. Bad actors, familiar with the popular Cybersecurity solutions on the market, were more than prepared to take advantage.
The central issue is that anomalies aren’t always indicative of malicious behavior, and differentiating between real threats and false alerts consumes a tremendous amount of analyst resources. Cybersecurity solutions smart enough to make most of these determinations on their own are a clear advantage to SOCs of any size.
Anomaly Detection Powered by Unsupervised AI
The MixMode solution does just that. The system uses predictive, real-time threat and anomaly detection powered by self-learning that triggers 95% fewer false positives. MixMode uses third-wave, unsupervised AI to establish a constantly evolving baseline of expected network behavior. While typical AI-enhanced systems take around 18 months to develop an understanding of a given network sufficient to identify anomalies and reduce false positive alerts, MixMode starts identifying anomalies in the first hour.
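The contrast with a fixed baseline can be sketched in a few lines. The class below maintains a baseline that updates itself with every observation using an exponential moving average, so gradual shifts in behavior stop triggering alerts. This is a generic illustration of the "evolving baseline" concept under assumed parameters, not MixMode's actual algorithm.

```python
class EvolvingBaseline:
    """Toy self-updating baseline: flags large deviations, then folds
    each observation back into a running mean/variance so the notion
    of 'normal' tracks drift without a manual update."""

    def __init__(self, alpha=0.1, z_threshold=4.0):
        self.alpha = alpha              # how quickly the baseline adapts
        self.z_threshold = z_threshold  # deviation (in std devs) to flag
        self.mean = None
        self.var = 0.0

    def observe(self, value):
        """Return True if value is anomalous relative to the current
        baseline, then update the baseline with the new observation."""
        if self.mean is None:           # first observation seeds the baseline
            self.mean = value
            return False
        std = max(self.var ** 0.5, 1e-9)
        anomalous = abs(value - self.mean) / std > self.z_threshold
        # Exponentially weighted update keeps the baseline current.
        delta = value - self.mean
        self.mean += self.alpha * delta
        self.var = (1 - self.alpha) * (self.var + self.alpha * delta * delta)
        return anomalous
```

Because the baseline is recomputed on every observation rather than at scheduled retraining intervals, a slow change in network behavior is absorbed as the new normal, while an abrupt deviation still stands out immediately.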
MixMode can even prevent zero-day attacks before damage is done because it constantly adapts to changing network conditions. Supervised learning systems are only as effective as their last manual update, giving zero-day attackers ample opportunity to swoop in.