This is part one of a three-part series about improving the black box approach to cybersecurity.
Looking out across the vast Cybersecurity marketplace, it’s immediately obvious that organizations have more choices than ever before. Countless vendors promise powerful AI solutions that will solve fundamental issues facing modern organizations, yet many fall short of fulfilling those promises.
At first glance, it appears that the Cybersecurity landscape has kept up with advancements in networking, including cloud computing. Unfortunately, all that glitters is not gold. Expensive, glittery “solutions” that deliver limited insights or, worse, overlook threat behavior can lead to dire consequences for an organization.
To stay ahead of sophisticated threat actors, today’s complex network environments require truly advanced tools. Third-wave, self-supervised AI examines data of any type and format, including data in the cloud, and compares it against a constantly evolving forecast of behavior. MixMode utilizes predictive analysis to identify anomalies and give users true insight into why they were flagged. Unfortunately, many solutions on the market today fall far short of that necessary capability.
This three-part series will examine the current challenges facing organizations when it comes to cloud security and why many Cybersecurity solutions are frequently incapable of providing holistic insights.
The Challenges of Hybrid Cloud Security Approaches
It is increasingly common for organizations to adopt cloud computing in a hybrid fashion, keeping some on-premises infrastructure in place while moving select data to the cloud. While this approach may save on costs in the short term, hybrid solutions complicate organizational security postures. Because of this, many organizations are seeking solutions that will improve their confidence in the security of their cloud data.
One fundamental reason cloud data is a challenge to secure is simply that many tools and solutions in use weren’t created for the cloud. Cloud environments don’t expose the whole picture of what’s happening, leaving limited information available to analyze for security purposes. Organizations understandably feel they don’t have a good handle on the security of their cloud data.
Another factor at play is related to the vendor’s role in cloud security. In many cases, vendors are mostly focused on cost and optimization rather than true Cybersecurity solutions. When addressing client concerns about issues like regulatory data compliance or specific security concerns, vendors often refer to cloud provider security, placing the onus on services like AWS and CloudWatch. Effectively, vendors are saying, “Well, our security is AWS’s security.” In the meantime, cloud data still sits vulnerable to attacks.
The Shared Responsibility Model
Who owns security in the cloud varies depending on whether an organization takes an Infrastructure as a Service (IaaS) approach, a Platform as a Service (PaaS) approach, or uses a Software as a Service (SaaS) solution.
If we imagine IaaS on the left of a sliding scale and SaaS on the right, cloud data on the left is mostly secured by customers, and as we move to the right, the responsibility shifts toward the vendor. Customers also tend to have less visibility into what’s going on with SaaS solutions. With hundreds of different types of services and workloads at play, the introduction of the cloud has made the attack surface far more complex.
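The sliding scale above can be sketched as a simple mapping. This is an illustrative simplification only: the layer names and responsibility splits below follow a common rule of thumb for the shared responsibility model, not any one provider’s formal definition, and the exact split varies by provider and contract.

```python
# Illustrative simplification of the cloud shared responsibility model.
# The layers and assignments here are a common rule of thumb, not a
# formal standard; real contracts and providers vary.

RESPONSIBILITY = {
    # service model -> who typically secures each layer
    "IaaS": {
        "physical infrastructure": "provider",
        "operating system": "customer",
        "applications": "customer",
        "data": "customer",
    },
    "PaaS": {
        "physical infrastructure": "provider",
        "operating system": "provider",
        "applications": "customer",
        "data": "customer",
    },
    "SaaS": {
        "physical infrastructure": "provider",
        "operating system": "provider",
        "applications": "provider",
        "data": "customer",
    },
}

def customer_layers(model: str) -> list[str]:
    """Layers the customer is typically responsible for securing."""
    return [layer for layer, owner in RESPONSIBILITY[model].items()
            if owner == "customer"]

# Moving left to right on the scale, the customer's share shrinks,
# but data remains the customer's responsibility in every model.
for model in ("IaaS", "PaaS", "SaaS"):
    print(model, customer_layers(model))
```

Note what the sketch makes explicit: even at the far right of the scale, where the vendor secures almost everything, responsibility for the data itself never leaves the customer.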
Some security vendors do focus on cloud data security, providing host-level intrusion prevention at the container level. This approach can lead to more and better insights, but typically with a “look back” mindset, where security events affecting endpoints are deconstructed after the fact. Real- or near-real-time cloud data security is another story altogether, one the vast majority of vendors are unable to achieve, even with so-called second-wave AI solutions.
AI, as it applies to data analysis, is concerned with the output of computation to predict, detect, or label data from a source and possibly take action. It could be an AI capable of reading X-rays or MRIs, or the AI controlling a self-driving car. In either case, improper results could have disastrous consequences. It’s vital from an organizational standpoint to fully trust the AI guarding valuable proprietary and regulated data.
Next week we will continue the discussion by expanding on the limitations of rules-based Cybersecurity and the scale and scope limitations inherent in traditional Cybersecurity solutions.