A new report from Microsoft and OpenAI shows how attackers are increasingly using artificial intelligence (AI) to improve their cyberattacks. The report found that nation-backed groups use LLMs for research, scripting, and phishing emails.

Microsoft and OpenAI have detected attempts by Russian, North Korean, Iranian, and Chinese state-backed groups to use large language models (LLMs) to improve their cyberattacks. The report details how these nation-state groups use LLMs for research, scripting, and making their phishing emails more convincing.

This aligns with recent findings from MixMode’s analyst team, which has uncovered increased nation-state activity from China and Russia.

Why are attackers using AI?

Attackers are turning to AI for several reasons. AI can automate tasks like generating phishing emails or scanning for vulnerabilities, freeing attackers to focus on other work, such as developing new attack methods.

AI can also make attacks more targeted and effective. For example, AI can personalize phishing emails for specific victims, making those emails far more likely to be clicked.

AI can also be used to create attacks that were previously impossible. For example, AI can generate deepfakes: videos or audio recordings manipulated to make it appear that someone said or did something they never said or did.


How are attackers using ChatGPT?

Specific examples of how hackers are using ChatGPT to improve their cyberattacks, as described in the report, include:

  • Researching targets and vulnerabilities: Attackers use ChatGPT to research publicly reported vulnerabilities and target organizations. The North Korean hacking group Thallium, for example, has used it for exactly this kind of reconnaissance.
  • Improving scripts: Attackers use ChatGPT to refine the scripts behind phishing emails and other malicious activities. A Chinese state-affiliated hacking group, for example, has used it to generate phishing emails.
  • Drafting content for phishing campaigns: Attackers use ChatGPT to draft more convincing content for phishing campaigns, as Thallium has also done.
  • Automating tasks: Attackers use ChatGPT to automate tasks such as file manipulation, data selection, and regular expressions. This can make their attacks more efficient and effective.
  • Translating languages: Attackers use ChatGPT to translate languages, which can help them target victims in different countries.


What types of attacks can AI generate?

AI can be used to generate a wide variety of cyberattacks, including:

  • Phishing emails: AI can create phishing emails that are more likely to be clicked on by victims. For example, AI can personalize phishing emails to specific victims, making them more likely to believe the email is legitimate.
  • Malware: AI can generate new malware that is more difficult to detect and remove. For example, AI can be used to create malware that can change its behavior to avoid detection.
  • Social engineering attacks: AI can create social engineering attacks that are more likely to be successful. For example, AI can be used to develop chatbots that can trick victims into revealing personal information.

How can organizations defend against AI-generated attacks?

Traditional security measures alone cannot defend against AI-generated attacks. Organizations need a layered defense that includes:

  • AI-driven threat detection solutions: These solutions use machine learning to identify patterns in attack data that are indicative of AI-generated attacks, allowing them to detect and block such attacks.
  • Security awareness training: Security awareness training helps employees identify and avoid AI-generated attacks. For example, employees can be trained to be suspicious of emails from unknown senders or containing unexpected attachments.
  • Multi-factor authentication: Multi-factor authentication can help prevent attackers from gaining access to systems, even if they have stolen a user’s password.
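The red flags called out above (unknown senders, unexpected attachments, urgent language) can be encoded as simple heuristics. The sketch below is a minimal, illustrative scorer; the rule set, weights, and risky file extensions are assumptions for demonstration, not a production phishing filter.

```python
# Hypothetical heuristic scorer for the red flags named above. All rules,
# weights, and extension lists are illustrative assumptions.
RISKY_EXTENSIONS = {".exe", ".js", ".scr", ".zip"}
URGENT_PHRASES = ("urgent", "verify your account", "password expires")

def phishing_score(sender_domain, known_domains, attachments, body):
    """Return a risk score; higher means more phishing indicators present."""
    score = 0
    if sender_domain.lower() not in known_domains:
        score += 2  # unknown sender
    for name in attachments:
        if any(name.lower().endswith(ext) for ext in RISKY_EXTENSIONS):
            score += 2  # unexpected/risky attachment type
    lowered = body.lower()
    score += sum(1 for phrase in URGENT_PHRASES if phrase in lowered)  # urgency cues
    return score

# Lookalike domain, .exe attachment, and urgent wording all add to the score.
print(phishing_score("examp1e-bank.com", {"example.com"},
                     ["invoice.exe"], "Urgent: verify your account"))  # 6
```

In practice these hand-written rules would be one input among many; the AI-driven detection described above learns such patterns from data rather than requiring them to be enumerated.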


How can MixMode help?

MixMode’s advanced AI is purpose-built to detect and respond to threats in real time, at scale, including AI-generated cyberattacks. The MixMode Platform helps protect organizations from AI-based threats by providing:

  • Continuous Monitoring: Continuously monitor cloud, network, and hybrid environments.
  • Real-time Detection: Detect known and unknown attacks, including ransomware.
  • Guided Response: Take immediate action on detected threats with remediation recommendations.

Legacy cybersecurity tools rely on rigid machine-learning models that sophisticated attackers can exploit. MixMode’s patented AI is uniquely born out of dynamical systems (a branch of applied mathematics) and self-learns an environment without rules, models, or training data. It continuously learns and adapts to each customer’s network, identifying the “normal” activity unique to that environment. By deeply integrating with network dynamics, MixMode’s AI evolves well beyond legacy AI’s constraints.
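As an intuition for this kind of self-learning baseline (a deliberately simplified sketch, not MixMode’s actual algorithm), an online detector can maintain a running mean and variance of a network metric using Welford’s algorithm and flag observations far outside what it has learned as normal. The metric, warm-up length, and 4-sigma threshold below are illustrative assumptions.

```python
import math

class OnlineBaseline:
    """Toy self-learning baseline: learns a metric's mean/variance online
    (Welford's algorithm) and flags values far from the learned normal.
    No rules or pre-labeled training data are required."""

    def __init__(self, threshold_sigmas=4.0, warmup=30):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0          # sum of squared deviations
        self.threshold = threshold_sigmas
        self.warmup = warmup   # observations before alerting begins

    def observe(self, x):
        """Update the baseline with x; return True if x looks anomalous."""
        anomalous = False
        if self.n >= self.warmup:
            std = math.sqrt(self.m2 / (self.n - 1))
            if std > 0 and abs(x - self.mean) > self.threshold * std:
                anomalous = True
        # Welford's incremental update of mean and variance
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

# Learn "normal" outbound bytes-per-minute, then test an exfiltration-like spike.
baseline = OnlineBaseline()
normal_traffic = [1000 + (i % 7) * 10 for i in range(100)]  # stable pattern
alerts = [baseline.observe(x) for x in normal_traffic]
print(any(alerts))            # False: routine traffic fits the learned baseline
print(baseline.observe(50000))  # True: spike far outside learned behavior
```

A real platform models many correlated signals and adapts to drift; the point of the sketch is only that “normal” can be learned from the environment itself rather than encoded as static rules.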

MixMode’s AI was designed to identify and mitigate advanced adversarial attacks, including adversarial generative AI. This enables MixMode to detect and neutralize threats from deceptive AI agents and malicious actors attempting to infiltrate systems and evade detection through intelligent adaptation. Any abnormal behavior triggers instant alerts, empowering security teams to lock down threats before damage is done.

With MixMode, security teams can increase efficiencies, consolidate tool sets, focus on their most critical threats, and improve overall defenses against today’s sophisticated attacks.

Click here to read more about AI-generated attacks, or reach out to learn how MixMode can help you defend against AI-generated attacks.

Other MixMode Articles You Might Like

City of Dallas Selects the MixMode Platform to Fortify Its Critical Infrastructure

Navigating the Uncertain Path: Why AI Adoption in Cybersecurity Remains Hesitant, and How to Move Forward

The Current State of SOC Operations Shows The Escalating Need for AI in Cybersecurity

MixMode Releases the First-Ever State of AI in Cybersecurity Report 2024

Harnessing the Power of Advanced AI to Optimize Security

Todd DeBell of MixMode Recognized as 2024 CRN® Channel Chief