Before You Invest in AI Cybersecurity in 2022, Unravel Misleading Vendor Claims

Modern cyberthreats require modern cybersecurity solutions — in 2022, those solutions include the latest AI advances.

As Forbes reports in its recent article, “The 7 Biggest Artificial Intelligence (AI) Trends In 2022,” the World Economic Forum identified cybercrime as “potentially posing a more significant risk to society than terrorism” in 2022.

As we become more and more reliant on machines and advances in fields like the Internet of Things (IoT), cybercriminals have a nearly limitless pool of endpoints to choose from. “Every connected device you add to a network is inevitably a potential point-of-failure an attacker could use against you,” Forbes reports. “As networks of connected devices become more complex, identifying those points of failure becomes more complex.”

Enter AI.

The newest and smartest AI can analyze network traffic and learn to recognize anomalous behavior, alerting security teams to potential threats as they are uncovered. MixMode’s platform is a prime example: it employs third-wave AI to create a baseline of expected network behavior and analyzes traffic in real time. This approach enables MixMode to uncover vulnerabilities before they are exploited.
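The general idea of baselining and deviation alerting can be sketched in a few lines. This is an illustrative toy (a simple z-score check on request rates), not MixMode’s proprietary algorithm:

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Summarize historical traffic (e.g., requests per minute)
    as a (mean, standard deviation) pair."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations
    away from the baseline mean."""
    mu, sigma = baseline
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Requests per minute observed on a quiet network segment
history = [98, 102, 101, 99, 100, 103, 97, 100]
baseline = build_baseline(history)

print(is_anomalous(101, baseline))  # typical traffic -> False
print(is_anomalous(450, baseline))  # sudden spike   -> True
```

Real platforms model many more signals than a single rate, but the principle is the same: learn what “normal” looks like, then surface what departs from it.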

Not all “AI cybersecurity” is created equal, however.

When Human Workers Hide Behind the Scenes

Bloomberg examines vendor claims about AI in its article, “Much ‘Artificial Intelligence’ Is Still People Behind a Screen.” Here, the reporter reveals a disturbing statistic uncovered by research firm MMC Ventures: 40% of purported AI startups surveyed by the company in 2019 showed “no evidence of actually using artificial intelligence in their products.”

Instead, the article continues, human workers are conducting “cognitively intensive tasks” to overcome inherent limitations in the “AI” algorithms used by some log-based cybersecurity platforms. While vendors may claim humans are involved merely to provide “validation” or “oversight,” “some companies have fallen into the gray area between training and operating,” according to the article.

Other companies take pains to hide the humans working behind the scenes on manual tasks that feed information into machine learning platforms. So-called “microworkers” — for example, the freelancers on Amazon’s Mechanical Turk (MTurk) platform — often perform these tasks.

In their book “Ghost Work,” authors Mary L. Gray and Siddharth Suri explain that freelancers are part of what the article calls an “invisible workforce,” labeling, editing, and sorting the information cybersecurity platforms rely on. “AI doesn’t work without these ‘humans in the loop,’” the article claims, “yet, people are largely undervalued.”

MixMode: The First Self-Supervised AI for Cybersecurity

MixMode turns the typical “AI” approach on its head by utilizing authentic, third-wave AI that is truly self-supervised. The platform does not need to be trained by humans or given a set of rules to abide by. Instead, MixMode independently extracts patterns and trends and compares them with expected network behavior based on a continually evolving, generative baseline — no rule-defining or labeling required.
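A “continually evolving” baseline can be pictured as statistics that update with every new observation, with no labels or rules supplied. The sketch below uses an exponentially weighted mean and variance; it is an illustrative stand-in for the concept, not MixMode’s actual model:

```python
class EvolvingBaseline:
    """Toy rolling baseline: an exponentially weighted mean and
    variance that adapt to every observation, unsupervised.
    (Illustrative only; not MixMode's proprietary approach.)"""

    def __init__(self, alpha=0.05):
        self.alpha = alpha   # how quickly the baseline adapts
        self.mean = None
        self.var = 0.0

    def update(self, x):
        """Fold a new observation into the baseline."""
        if self.mean is None:
            self.mean = x
            return
        diff = x - self.mean
        self.mean += self.alpha * diff
        self.var = (1 - self.alpha) * (self.var + self.alpha * diff * diff)

    def deviation(self, x):
        """Standardized distance of x from the current baseline."""
        sigma = self.var ** 0.5
        return abs(x - self.mean) / sigma if sigma else 0.0
```

Because the baseline keeps updating, gradual shifts in normal traffic are absorbed automatically, while abrupt departures stand out as large deviations.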

While most AI takes between 6 and 24 months to learn enough about a network to provide valuable, actionable insight, MixMode can get up to speed within 7 days and deliver far more accurate, reliable network activity insights. Learn more about the MixMode platform, and set up a demo today.

MixMode Articles You Might Like:

Log4j: the Latest Zero-Day Exploit to Log Jam Cybersecurity

Video: The Challenges With Using “Out of the Box” Cloud Security Solutions

Phoenix CISO uses back-to-basics approach for cybersecurity

As Enterprises Embrace 5G, AI-Enhanced Cybersecurity Emerges as Top Security Priority

Healthcare Ransomware Attacks Persist