AI-Written Ransomware: FunkSec’s Code

Researchers at Kaspersky Lab have uncovered a new threat actor—FunkSec—believed to be among the first to deploy large-scale, AI-assisted ransomware operations. Active since late 2024, FunkSec is notable for its aggressive tactics, high automation, and use of generative AI to develop its malware.

Distinct Characteristics

FunkSec’s operations stand out for three key reasons:

  • Use of AI tools for ransomware development
  • High adaptability to different environments
  • Mass-scale attacks across multiple sectors

Targets and Unusual Tactics

The group has primarily targeted:

  • Government agencies
  • IT, financial, and educational institutions
  • Regions across Europe and Asia

What sets FunkSec apart is its ransom strategy. Rather than demanding millions, the group often asks for as little as $10,000. In some cases, stolen data is sold at unusually low prices in criminal marketplaces.

Experts believe this low-cost model enables FunkSec to:

  • Launch a higher volume of attacks
  • Build credibility quickly in underground forums
  • Scale rapidly by automating attack infrastructure using AI

AI-Generated Ransomware in Action

FunkSec’s ransomware is Rust-based and delivered as a single, compact executable. The malware merges full-disk encryption and data exfiltration into one streamlined package, a consolidation researchers link to AI-assisted development.

Notable Features:

  • Terminates over 50 processes to disable antivirus and backup tools
  • Contains self-cleaning routines to erase forensic traces
  • Employs advanced evasion techniques to bypass detection

Bundled Tools:

  • A brute-force password generator
  • A distributed denial-of-service (DDoS) module

Evidence of LLM-Generated Code

Researchers uncovered several hallmarks of generative AI in the malware’s codebase, such as:

  • Placeholder comments like “stub for actual verification”
  • Mixed platform commands, suggesting the AI was not tuned for a single operating system
  • Unused functions, likely remnants from large language model (LLM) code generation

These findings indicate the malware was likely assembled—or at least partially written—by tools such as ChatGPT, Copilot, or similar LLM-based systems.
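The artifacts listed above have a recognizable shape. The following Rust sketch is a hypothetical illustration of those patterns only, not code from the actual sample: a dead "stub" function left over from generation, and commands emitted for both Windows and Unix instead of a single target OS.

```rust
// Hypothetical illustration of LLM-generation artifacts; not FunkSec code.

// Unused function with a placeholder body — a typical LLM leftover.
#[allow(dead_code)]
fn check_license(_key: &str) -> bool {
    // stub for actual verification
    true
}

// Mixed platform commands: the generator emits both Windows and Unix
// variants rather than committing to one operating system.
fn cleanup_command() -> &'static str {
    if cfg!(target_os = "windows") {
        "del /f /q logs\\*.log"
    } else {
        "rm -f logs/*.log"
    }
}

fn main() {
    // Prints the command selected for the current platform.
    println!("{}", cleanup_command());
}
```

To an analyst, such code reads as assembled rather than hand-written: the stub and the dead function serve no purpose in the binary, yet survive into the shipped executable.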

Expert Insight

“We increasingly see cybercriminals using generative AI to create malicious tools. It speeds up development, lets attackers adapt tactics faster, and lowers the entry barrier. But such code often contains flaws—so criminals can’t fully rely on AI yet.”
Tatiana Shishkova, Senior Expert, Kaspersky GReAT

Key Takeaways

  • FunkSec is scaling cybercrime through AI-assisted ransomware development
  • Targets include governments, financial institutions, and schools, mostly in Europe and Asia
  • Ransom demands are low, but attacks are widespread and frequent
  • Malware is Rust-based, with combined encryption and data-theft functionality
  • LLM-generated code shows both efficiency and error-prone automation

The FunkSec case illustrates a turning point: AI is no longer just a defense tool in cybersecurity—it’s now part of the attacker’s arsenal. While still imperfect, generative AI is enabling faster, cheaper, and more scalable cybercrime.
