How AI is Solving the False Positives Problem in Network Security
By Ana Mezic, Marketing Coordinator at MixMode
The term “False Positives” is trending in the cybersecurity industry right now, and rightfully so. Managing the overwhelming volume of alerts IT teams receive from their cybersecurity software is a problem that demands a solution as hackers and gatekeepers play tug-of-war with cutting-edge technology.
CSO reports that 37 percent of large enterprises worldwide receive more than 10,000 alerts each month. Of those alerts, 52 percent are false positives and 64 percent are redundant.
What is the False Positives Problem?
Before we get too far, we should define what the false positives problem actually is. At the core of the issue is the unique, dynamic nature of large corporate networks - networks that have tens of thousands or hundreds of thousands of IP addresses.
Tracking traffic anomalies in real time across a network this large is extremely difficult; processing that much data would require server farms. Most companies therefore fall back on rule-based systems: a set of rules and thresholds that trigger alerts, chosen to save cost and deliver results quickly. As you can imagine, this “one size fits all” approach yields a lot of alerts that are not actionable – i.e., false positives.
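To make the “one size fits all” problem concrete, here is a minimal sketch of a fixed-threshold rule engine of the kind described above. This is not MixMode’s implementation; the rule names, fields, and threshold values are all hypothetical.

```python
# Illustrative sketch of a fixed-threshold rule engine.
# All rule names, fields, and thresholds are hypothetical.

# Each rule is a (name, metric, threshold) triple applied to every host.
RULES = [
    ("excessive_dns", "dns_queries_per_min", 100),
    ("port_scan", "distinct_ports_contacted", 50),
    ("data_exfil", "outbound_mb_per_min", 20),
]

def evaluate(host_stats):
    """Return every rule a host trips; no awareness of that host's normal behavior."""
    alerts = []
    for name, metric, threshold in RULES:
        if host_stats.get(metric, 0) > threshold:
            alerts.append(name)
    return alerts

# A busy but perfectly legitimate internal DNS server trips the same
# global threshold a compromised workstation would.
dns_server = {"dns_queries_per_min": 450, "distinct_ports_contacted": 3}
print(evaluate(dns_server))  # ['excessive_dns'] -- a false positive
```

Because the thresholds are global, the only way to silence the false positive on that DNS server is to raise the threshold for everyone, which is exactly the tuning trade-off discussed next.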
This begs the question: why not simply tune the environment to eliminate the false positives problem? Setting aside the difficulty and time-consuming nature of tuning each environment individually (such tuning can take months and must be redone later), the solution is not that simple. Consider this: the more specifically an engineer or analyst defines potential attack vectors or signatures, the fewer false positives the system may produce, but the more likely it is to generate false negatives – which are arguably more dangerous.
The false positives problem begs for a solution, as alert fatigue is now a major issue in SOCs globally. Analysts sit down at their dashboards after a break to find another 25, 50, 100 or more false positive alerts to check manually – alerts that piled up in the 15 minutes since they last cleared their queue. It’s tiring, tedious, and labor-intensive.
Some companies are already claiming to have solved this issue, with articles boasting “zero false positives.” What they don’t tell you is that achieving zero false positives by itself is not the hard part. The hard part is reducing false positives while also keeping false negatives from skyrocketing. Essentially, you want as few of both as possible.
What if there was some way to minimize both false positives and false negatives thereby reducing labor costs and increasing stopped attacks?
Dr. Igor Mezic, MixMode’s new CTO and Chief Scientist, believes he has found a way to address this issue.
Dr. Mezic explains here what, in his view, truly makes a cybersecurity software company revolutionary, and how MixMode is on the path to building a fully automated network security system.
“So when you’re trying to get rid of false positives, the wrong thing to do is just to make the system produce very few alerts because you might be increasing your rate of false negatives pretty dramatically and then you are at risk of missing some significant events,” said Dr. Mezic. “So it’s really the balance that the AI system should be attuned to, this is called the receiver operating [characteristic] curve.”
According to Dr. Mezic, what separates MixMode from the pack is the AI’s ability to optimize the balance between false positives and false negatives.
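The trade-off Dr. Mezic describes can be seen by sweeping an alert threshold over a set of anomaly scores and watching the false-positive and false-negative rates move in opposite directions – the curve traced out by doing this is the ROC curve. The scores below are made up purely for illustration.

```python
# Toy anomaly scores: higher means "more suspicious". Values are invented.
benign  = [0.1, 0.2, 0.25, 0.3, 0.35, 0.4, 0.55, 0.6]  # normal traffic
attacks = [0.45, 0.5, 0.7, 0.8, 0.9]                    # real incidents

def rates(threshold):
    """Fraction of benign events alerted on (FP) vs. real attacks missed (FN)."""
    fpr = sum(s >= threshold for s in benign) / len(benign)
    fnr = sum(s < threshold for s in attacks) / len(attacks)
    return fpr, fnr

for t in (0.3, 0.5, 0.7):
    fpr, fnr = rates(t)
    print(f"threshold={t}: false-positive rate={fpr:.2f}, false-negative rate={fnr:.2f}")
```

Raising the threshold from 0.3 to 0.7 drives the false-positive rate to zero, but the false-negative rate climbs from zero to 0.40 – the system now misses two of the five real incidents. Simply producing fewer alerts, as the quote warns, just moves you along this curve.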
“By paying attention to the overall traffic of the network and selecting the events that differ from the normal operation, you’re not really tuning the system up or down, it will naturally select the events that deviate from the normal and therefore produce very few false positives and at the same time very few false negatives, if any,” Mezic said.
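A minimal sketch of the baseline idea in that quote follows, assuming a simple per-metric mean and standard deviation model. MixMode’s actual model is not public; the `Baseline` class, the 3-sigma cutoff, and the traffic numbers here are illustrative assumptions only.

```python
# Hypothetical baseline-driven anomaly detection: learn what "normal"
# looks like for one host's metric, then flag only deviations from it.
import statistics

class Baseline:
    """Learns a host's own normal behavior for one traffic metric."""
    def __init__(self, history):
        self.mean = statistics.mean(history)
        self.std = statistics.stdev(history)

    def is_anomalous(self, observation, sigmas=3.0):
        # Flag values far outside this host's learned behavior, rather
        # than comparing every host against one global rule.
        return abs(observation - self.mean) > sigmas * self.std

# A stretch of per-minute DNS query counts for a busy (but normal) DNS server:
history = [430, 445, 460, 450, 440, 455, 448, 452]
model = Baseline(history)
print(model.is_anomalous(450))   # normal for this host -> False
print(model.is_anomalous(1200))  # sharp deviation -> True
```

Note that the same 450 queries per minute that tripped a global threshold in the earlier rule-engine sketch is unremarkable here, because the detector compares the host against its own baseline rather than a one-size-fits-all rule.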
AI monitoring that learns from mistakes and a “baseline” understanding of a network’s entire system
Using MixMode’s PacketSled platform, analysts can monitor what the MixMode AI surfaces and investigate the events it has flagged. Out of all the indicators in the system, the AI selects a few events – aggregates of indicators – that analysts should be aware of and address first.
Inside the platform, the few events the AI system has surfaced are the types of activity security teams should prioritize. If they want to see all of the underlying data, and not just the AI’s curated results, they can dig into the indicators and see everything.
In that context, analysts can spend far more of their valuable time threat hunting and looking deeply into the events that actually matter, rather than wasting massive amounts of time on false positive alerts.
“It is giving anybody on the IT team the ability to spend their time in a more meaningful fashion. So if you have even a few people they can be extremely effective, rather than having a huge team or a small team spread thin,” said Dr. Mezic.
Looking Into the Future
Dr. Mezic stressed that the number of threats is only going to increase and a modern cybersecurity team needs to have tools that are effective and efficient to balance the increased volume of threats with the finite resources available.
He sees the software as useful to all companies, from family-owned businesses to large enterprises.
“I would say that enterprise-level companies that get attacked all the time would have a lot of need for this, for obvious reasons, and then small- and medium-sized companies would need it because of their limited ability to spend money on security professionals so it’s necessary for them to leverage quality software to bridge the gap,” said Dr. Mezic.
Although MixMode appears to have a particularly good grasp on the false positives problem, Dr. Mezic says there’s much more to look forward to in the company’s future.
“The system that we built is actually a lot more than just a false positive detector. It really is a brain that understands how your network works. That’s what we’re really building and we’re building one with memory, prior knowledge, hard-wired stuff— it really should be thought of as a very specialized little brain for network security. It learns a lot and it understands a lot about it,” said Dr. Mezic.
“There is a lot of room for learning for this AI system but the basic components are there and the hardest part is establishing a baseline. Which we’ve already done.”