The following is an excerpt from my recent Forbes Technology Council article, “Why Large Language Models (LLMs) Alone Won’t Save Cybersecurity.” To read the full article, click on the link below.

Why Large Language Models (LLMs) Alone Won’t Save Cybersecurity

The last couple of months have been a never-ending see-saw over whether AI will lead us to utopia or to ruin. There are sobering documentaries like AI Dilemma on YouTube and optimistic posts like Why AI Will Save The World by Marc Andreessen. Notable researchers like Geoffrey Hinton have stepped down from Google to go on a world tour highlighting the dangers of AI, and Sam Altman, the CEO of ChatGPT creator OpenAI, has openly spoken of how it could be used for disinformation and offensive cyberattacks. Goldman Sachs reports that 300 million jobs may be replaced by AI, hopefully to be succeeded by new, more fulfilling ones.

The stars of the moment are Large Language Models (aka LLMs), the foundation models that power ChatGPT. There are plenty of documented examples of truly impressive feats built on this technology, from writing reports to outputting code in seconds. At their core, LLMs ingest a massive corpus of text (think the Internet) as training data and are then fine-tuned with human feedback in a process known as reinforcement learning from human feedback.

The ways bad actors can wield this technology in cybersecurity have been fairly broadly noted. The ability to spoof emails and phone calls from family members or work colleagues will drive up the efficacy of phishing campaigns. The NSA recently noted the ability of LLMs to rewrite known malware so that it bypasses the signatures historically used to detect it. Even more dangerous is the advanced ability to discover new zero-day exploits in systems and develop novel attacks.

This narrative has been met with a corresponding “let’s fight fire with fire” mentality in the Cybersecurity market: a sort of Flex Tape™ moment of slapping ChatGPT onto legacy tools and then donning a “Generative AI” moniker. Said another way, many vendors are selling the idea that LLMs will save the day by detecting these ever-advancing attacks from hackers.

To be fair, LLMs may help lower the barrier to using some tools in Cybersecurity. If done securely, providing a natural language interface on top of tools that normally require extensive training to operate is not a bad idea. Broadly, this trend may make English the de facto programming language for many tools in the not-too-distant future.

Continue reading this article for more on:

  • The overpromises and shortcomings of LLMs
  • Reframing LLMs and the path forward

Other MixMode Articles You Might Like

eBook: The Inefficiencies of Legacy Tools – Why SIEMs Alone Are Ineffective At Detecting Advanced Attacks

Unleashing the Power of Self-Supervised AI: Insights from 451 Research Report on MixMode’s Dynamic Threat Detection and Response

Verizon’s Annual Data Breach Incident Report (DBIR) Shines Spotlight on Ransomware Trends & Insider Threats

Aligning an Organization’s Attack Surface to Detection Surface is Key to Adversary Defense in Today’s Cloud Era

Detecting Threats in AWS with MixMode AI

Top 5 Takeaways from the CISA 2023-2025 Strategic Plan That the Cybersecurity Community Should Know About