5 Reasons Why Context-Aware Artificial Intelligence (CAAI) Is Needed in Cybersecurity
CAAI delivers understanding of the network baseline and reduces false positives
By Dr. Igor Mezic, CTO and Chief Scientist
Artificial Intelligence (AI) has surfaced as the technology of the day, in the same way the internet, personal computers, airplanes, and cars did in earlier eras. And, just like those at the beginning of their development, the seemingly infinite potential of AI is not always represented well in early products. The issues of interpretability, false positives, and dynamically changing data are all important. Papers such as Sommer and Paxson's (2010)1 issued an early warning about such difficulties in applying machine learning and AI to network security. Today we know that a move needs to be made from first-wave AI (expert systems, rule-based) and second-wave AI (statistics-based classification engines) to the third wave, which is context-aware, interpretable AI2.
Here are the five reasons why context-aware artificial intelligence is a necessary step forward to enable applications of AI in network security:
- False Positives: In the current deployment of (first- and second-wave) AI systems, network security analysts are plagued by false positives and fear false negatives. A CAAI system is needed to minimize both. This new system also needs an understanding of the network security team's resources and must learn from the team's actions.
- Big data: The problem of false positives is amplified by the fact that modern corporate networks have large numbers of IP addresses. The internal traffic between source and destination pairs -- which scales as the number of IP addresses squared -- also produces a large volume of temporal data. It is difficult, even for an experienced analyst, to understand the traffic between all the different parts of the network, and its context, within this big, dynamic data realm. The CAAI system needs to reduce this large amount of data to actionable intelligence for the network analyst.
- Predictive ability: The flip side of the big data coin is that it enables AI with predictive capability, provided the AI can understand the context of the dynamics on the network. Current systems act in a reactive mode: they alert when something of large magnitude happens. A CAAI system can alert based on a small amount of activity that leads to a large-magnitude action sometime later.
- Metadata: Metadata is important. Was this scheduled activity? Did this occur before, in the same context? CAAI needs to take into account not only the nature of the network data but also the context of the situation.
- Human-machine interaction: A system that does not learn from past false positives and context, both local and on other networks, spams the network analyst. In contrast, a learning AI-based system reduces that spam over time. We need an interactive AI environment -- one where human-machine interaction occurs seamlessly and learning is enabled both ways. The autonomy level of the system is important: at the beginning, the user might set level 2 autonomy, where much of the monitoring and command remains with the user (as in Tesla Autopilot), but over time the system steps up to level 3 or higher3.
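To make the "big data" point concrete, here is a minimal sketch of baseline-based anomaly scoring over source/destination pairs. All data, names, and thresholds are hypothetical illustrations, not MixMode's actual method: with N hosts there can be on the order of N² directed pairs, and each pair is scored against its own historical baseline rather than against a global rule.

```python
# Minimal sketch of per-pair baseline anomaly scoring.
# Hypothetical data and thresholds -- illustrative only.
from statistics import mean, stdev

# Simulated hourly byte counts per (src, dst) pair; with N hosts there are
# up to N * (N - 1) directed pairs, which is why pairwise traffic data
# grows quadratically with the number of IP addresses.
history = {
    ("10.0.0.1", "10.0.0.2"): [1200, 1100, 1300, 1250, 1150],
    ("10.0.0.3", "10.0.0.9"): [90, 110, 100, 95, 4000],  # sudden spike
}

def anomaly_scores(history, min_samples=3):
    """Score each pair's latest sample against its own baseline (z-score)."""
    scores = {}
    for pair, counts in history.items():
        if len(counts) < min_samples:
            continue
        base, latest = counts[:-1], counts[-1]
        mu, sigma = mean(base), stdev(base)
        scores[pair] = 0.0 if sigma == 0 else (latest - mu) / sigma
    return scores

scores = anomaly_scores(history)
# Reduce the pairwise data to a short, actionable alert list.
alerts = [pair for pair, s in scores.items() if abs(s) > 3]
```

Here only the second pair is flagged, because its latest sample deviates sharply from its own baseline; the first pair's fluctuation stays within normal bounds.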
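The human-machine interaction point can also be sketched: when the analyst dismisses an alert as a false positive, the system raises its alerting threshold for that context, so the same benign pattern stops generating noise over time. This is a toy illustration of the feedback loop, not an actual product mechanism; the class and parameter names are invented for the example.

```python
# Sketch of a human-in-the-loop feedback step: analyst dismissals of
# false positives raise the per-context alert threshold, so recurring
# benign activity (e.g. a scheduled backup) stops "spamming" the analyst.
from collections import defaultdict

class FeedbackFilter:
    def __init__(self, base_threshold=3.0, step=0.5):
        # Each context starts at the same base threshold.
        self.thresholds = defaultdict(lambda: base_threshold)
        self.step = step

    def should_alert(self, context, score):
        return score > self.thresholds[context]

    def record_feedback(self, context, was_false_positive):
        # Analyst feedback nudges the threshold for that context upward.
        if was_false_positive:
            self.thresholds[context] += self.step

f = FeedbackFilter()
first = f.should_alert("backup-job", 3.2)          # alerts at first
f.record_feedback("backup-job", was_false_positive=True)
second = f.should_alert("backup-job", 3.2)         # learned: stays quiet
```

The same score that triggered an alert before the feedback no longer does afterward, while unrelated contexts keep their original sensitivity.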
As we move into the era of autonomy, it is not just cars and drones that will require and acquire it. Network security systems will need to become autonomous, switching from passive tools to active participants in the security assurance process via AI-human interaction. For this, third-wave AI systems will need to be developed and deployed. These will be capable of adapting to a dynamically changing environment, learning in an unsupervised manner, and extrapolating from supervised learning on a small number of cases, just as humans do. The outcome will be a safer, better internet infrastructure, where AI cyber warriors have better platforms to keep threats at bay.
1 Sommer, Robin, and Vern Paxson. "Outside the closed world: On using machine learning for network intrusion detection." 2010 IEEE Symposium on Security and Privacy. IEEE, 2010.
3 Rise Of The Machines: Understanding The Autonomy Levels Of Self-Driving Cars. Robert J. Szczerba. Forbes, July 19, 2018.
Dr. Igor Mezic is CTO and Chief Scientist at MixMode. For more, visit us at mixmode.ai.