The Ransomware Playbook Has Been Rewritten: How AI Is Automating the Attack Chain#

Abstract#

Between 2024 and 2026, artificial intelligence transformed ransomware from a skilled-labor-intensive crime into an automated industrial operation. Threat actors now leverage large language models for reconnaissance and target profiling, generative AI for flawless spear-phishing and deepfake-enabled business email compromise, AI-orchestrated lateral movement that compresses breakout times to an average of 29 minutes, and emerging Ransomware-as-a-Service platforms that advertise AI-powered negotiation as a core product feature. The attack chain that once required weeks and an experienced operator can now be executed — from initial access to extortion demand — in under an hour by a low-skill affiliate equipped with the right tools. This article walks through each stage of the rewritten kill chain from the attacker’s perspective, then examines what has materially changed about the defender’s risk posture and which controls matter most for small and midsize businesses in this environment.

Keywords: ransomware, AI-augmented attack chain, LLM, spear phishing, deepfake BEC, BYOVD, EDR evasion, lateral movement, agentic AI, SMB cybersecurity, extortion-as-a-service


🎯 Executive Summary#

The Thesis#

The ransomware threat is no longer primarily a malware problem. It is an automation problem.

AI has systematically removed human expertise as the bottleneck at every stage of the attack chain. In August 2025, Anthropic disrupted a cybercriminal using Claude Code to conduct data theft and extortion against at least 17 organizations — demanding ransoms exceeding $500,000 with no encryption, no binary payload, and no human operator touching most of the operation.1 One month later, a Chinese state-sponsored group designated GTG-1002 used Claude Code agents to carry out 80–90% of a cyber espionage campaign against approximately 30 targets autonomously.2 In December 2024, a ransomware group called FunkSec posted 85 victims in a single month — outpacing every established operator — using AI to generate malware code that its inexperienced operators almost certainly could not have written themselves.3

These are not isolated incidents. IBM’s 2026 X-Force Threat Intelligence Index identified 109 distinct active extortion groups in 2025, up from 73 in 2024 — a 49% year-over-year increase driven directly by AI collapsing the skill barrier to entry.4 CrowdStrike’s 2026 Global Threat Report found that AI-enabled adversaries increased their activity by 89%, while the average eCrime breakout time — initial access to lateral movement — fell to 29 minutes.5

Three Implications for SMBs#

  1. The perimeter no longer buys time. At a 29-minute average breakout, the “detect and respond” model requires near-real-time alerting at the initial access phase — not after lateral movement.
  2. Malware detection is necessary but insufficient. 82% of 2025 attack detections were malware-free — valid credentials, living-off-the-land binaries, and kernel-level driver abuse.5 A stack optimized for binary detection has large structural blind spots.
  3. SMBs are the primary target class. Two-thirds of ransomware attacks target organizations with fewer than 500 employees.19 The mean recovery cost is $1.53 million.7 For most SMBs, a single incident is an existential financial event.

🕰️ Part I — The Old Playbook#

Understanding what changed requires knowing what was. Between roughly 2019 and 2022, a ransomware attack followed a recognizable pattern: a threat actor (or RaaS affiliate) purchased access from an initial-access broker, ran manual reconnaissance against the target, launched a phishing campaign from a handful of reused templates, waited days or weeks for a victim to click, then slowly traversed the network — manually escalating privileges, identifying domain controllers, and staging data — before finally deploying the encryptor. Median dwell time across this era exceeded seventy days. Breakout time from initial access to lateral movement was measured in hours.

That model was already industrializing by 2022 through the Ransomware-as-a-Service ecosystem, which commoditized execution and lowered the skill floor. What AI has done is collapse the floor entirely — while simultaneously accelerating every phase of the chain. SentinelOne Labs characterized the shift precisely: LLMs are not launching ransomware; they are optimizing it — acting as an operational accelerator at each stage rather than a wholesale replacement of human attackers.8


⚡ Part II — The New Playbook: Six Stages Rewritten#

Stage 1: Recon & Target Selection — The AI OSINT Engine#

Reconnaissance has historically been the most time-intensive phase of a ransomware operation. An attacker needed to manually correlate LinkedIn profiles, scrape job postings for technology stack disclosures, cross-reference data-breach dumps, and identify exposed services — work that could take days per target.

AI has turned this into a commodity pipeline.

Modern threat actors use LLM-assisted OSINT frameworks that ingest publicly available data — LinkedIn, job boards, corporate press releases, GitHub commits, DNS records, certificate transparency logs — and generate structured target profiles automatically: which subsidiaries share VPN infrastructure, which employees hold Active Directory admin privileges by job title, which vendors have supply-chain access to the network, and what software versions are running on exposed services.

The IBM 2026 X-Force Threat Intelligence Index observed a 44% increase in attacks beginning with exploitation of public-facing applications, a trend directly linked to AI-enabled vulnerability discovery tools that continuously scan for newly exposed services and match them against known CVE databases within hours of disclosure.4 Where it previously took attackers an average of 4.76 days to weaponize a newly published vulnerability, that window has collapsed to 24–48 hours.9

The 2023 Cl0p/MOVEit campaign — where automated exploitation of a zero-day vulnerability swept approximately 2,500 exposed servers in days — was an early illustration of what mass automated scanning achieves at scale.10 In 2025–2026, AI has supercharged the same pattern, turning one-off mass-exploitation campaigns into continuous, always-on scanning-and-exploitation pipelines.10

Equally important: AI models don’t just scan. They prioritize. Victim profiling tools assess revenue (via public filings or LinkedIn employee counts), insurance coverage (inferred from job postings for insurance and risk roles), cyber maturity (from the absence of certain vendor relationships), and sector (healthcare and manufacturing pay faster and more). The result is a tiered target list, automatically ranked by expected payout probability.

Stage 2: Initial Access — Lures That Beat Humans#

Initial access has always been the ransomware chain’s most human-dependent link — and the one AI has most visibly transformed.

LLM-Generated Spear Phishing

The era of grammar-mangled phishing emails is over. Large language models produce grammatically flawless, contextually personalized lures in any language without discernible tells. An LLM fed a target’s LinkedIn profile, recent company press releases, and publicly available email threads can generate a spear-phishing email indistinguishable from internal communications — referencing real projects, real colleagues, real formatting conventions.

CrowdStrike’s 2026 Global Threat Report found that 87% of security professionals believe AI makes phishing lures meaningfully more convincing.5 That perception reflects operational reality: click rates on AI-generated spear phish are materially higher than on template campaigns because the lures no longer carry the tells trained employees have been taught to flag.

Deepfake BEC and Vishing

Business email compromise has entered a new era with the convergence of voice cloning and video synthesis. Security researchers and fraud investigators have documented that modern AI tools can produce a usable voice clone from as little as 3–5 seconds of source audio, with accuracy sufficient to deceive listeners in real-time calls — material readily available from earnings calls, conference recordings, or LinkedIn videos.11

In February 2024, a finance employee at the global engineering firm Arup was tricked into wiring $25 million to fraudster-controlled accounts after a multi-person video call featuring deepfake likenesses of the company’s CFO and other senior executives. No email thread. No document to inspect. Just a video call that looked and sounded exactly like his colleagues.11

That incident is no longer an outlier. FBI data shows BEC attacks generated $2.77 billion in losses across 21,442 incidents in 2024.6 Russian-speaking ransomware operators BlackBasta and Cactus have specifically combined vishing campaigns with phishing to accelerate privilege escalation — calling employees posing as IT support while a phishing site collects credentials simultaneously.12

Automated Credential and Vulnerability Exploitation

Compromised VPN credentials account for 48% of ransomware initial access as of Q3 2025, up from 38% in Q2, according to Beazley Security’s quarterly incident response data.13 AI-assisted credential-stuffing tools continuously test breach dump credentials against enterprise VPN portals, MFA bypass techniques, and exposed admin interfaces — operating at a scale and persistence no human team can match. Google’s Threat Intelligence Group (GTIG) observed experimental dropper malware tracked as PROMPTFLUX in June 2025, using LLMs to dynamically generate VBScript obfuscation on demand — a concrete example of AI generating evasion logic in real time.10
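The defender-facing counterpart of this activity is visible in authentication logs: credential stuffing leaves a distinctive signature of high attempt volume spread across many distinct usernames with a near-zero success rate per source. A minimal detection sketch in Python (the event shape and thresholds are illustrative assumptions, not drawn from any cited report):

```python
from collections import defaultdict

def flag_credential_stuffing(auth_events, min_attempts=20, max_success_rate=0.05):
    """Flag source IPs whose login pattern matches credential stuffing:
    high attempt volume spread across many distinct usernames with a
    near-zero success rate. auth_events: dicts with ip, user, success."""
    by_ip = defaultdict(lambda: {"attempts": 0, "successes": 0, "users": set()})
    for ev in auth_events:
        stats = by_ip[ev["ip"]]
        stats["attempts"] += 1
        stats["successes"] += int(ev["success"])
        stats["users"].add(ev["user"])
    flagged = []
    for ip, stats in by_ip.items():
        spread = len(stats["users"]) >= min_attempts // 2   # many distinct users
        quiet = stats["successes"] / stats["attempts"] <= max_success_rate
        if stats["attempts"] >= min_attempts and spread and quiet:
            flagged.append(ip)
    return flagged
```

The signal is deliberately crude; production tooling would also correlate across distributed source IPs, since stuffing campaigns routinely rotate through proxy pools to stay under per-IP thresholds.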

Stage 3: Persistence & Privilege Escalation — Invisible in Plain Sight#

Once inside, the modern threat actor’s primary objective is to disappear. The old approach — drop a remote access trojan, establish a C2 beacon — is increasingly counterproductive against behavior-based endpoint detection. AI has helped attackers solve this problem through two converging techniques.

Living Off the Land (LOLBin) Orchestration

LOLBin attacks abuse legitimate Windows and system administration tools — PowerShell, WMI, certutil, mshta, PsExec — to execute malicious operations without introducing foreign binaries. The challenge for attackers has historically been knowing which tools to chain together in a specific environment to achieve a specific objective while avoiding specific detection signatures.

AI resolves this. CrowdStrike tracks the eCrime group PUNK SPIDER using AI-generated scripts to select and chain LOLBin techniques dynamically based on what’s available in the target environment — and to erase forensic evidence after each step.5 The result is lateral movement that reads as normal system administration traffic until it’s too late.

CrowdStrike’s 2026 Global Threat Report quantified the outcome: 82% of all 2025 detections were malware-free — attackers using valid credentials, trusted identity flows, and approved SaaS integrations instead of traditional malware payloads.5
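The detection problem LOLBin chains create is relational rather than signature-based: no single binary is malicious, but the parent-child sequence is. A simplified process-tree sketch, assuming a hypothetical event shape (pid, ppid, name) and deliberately incomplete tool lists:

```python
# Illustrative (far from complete) tool and application lists.
LOLBINS = {"powershell.exe", "wmic.exe", "certutil.exe", "mshta.exe", "psexec.exe"}
USER_APPS = {"outlook.exe", "winword.exe", "excel.exe", "chrome.exe"}

def lolbin_chains(processes, min_chain=2):
    """Return suspicious chains: runs of two or more chained LOLBins whose
    ultimate parent is a user-facing application. processes: dicts with
    pid, ppid, name (a hypothetical, simplified event shape)."""
    by_pid = {p["pid"]: p for p in processes}
    chains = []
    for p in processes:
        if p["name"].lower() not in LOLBINS:
            continue
        chain = [p["name"]]
        cur = by_pid.get(p["ppid"])
        while cur and cur["name"].lower() in LOLBINS:  # walk up the tree
            chain.append(cur["name"])
            cur = by_pid.get(cur["ppid"])
        if cur and cur["name"].lower() in USER_APPS and len(chain) >= min_chain:
            chains.append(list(reversed(chain)))
    return chains
```

Real EDR telemetry adds command lines, signing state, and timing, but the core question is the same: did a mail client or document viewer become the root of an administration-tool chain?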

BYOVD: Killing the Guardrails

Bring Your Own Vulnerable Driver (BYOVD) attacks load a legitimately signed but vulnerable kernel driver onto the target system. Because the driver carries a valid Microsoft signature, Windows permits it to execute at kernel level. The attacker then exploits the driver vulnerability to terminate EDR processes, unregister kernel callbacks, and disable security controls entirely — effectively blinding the defender before proceeding.

BYOVD adoption has become standard in sophisticated ransomware operations. A single campaign targeting the TrueSight driver deployed over 2,500 distinct driver variants to evade signature-based detection between mid-2024 and early 2025.5 In February 2026, the Reynolds ransomware strain embedded a vulnerable driver directly within the ransomware payload itself, eliminating a separate EDR-killing deployment step and compressing the defender’s detection window.14

Qilin and Warlock ransomware operations have deployed BYOVD techniques capable of silencing more than 300 EDR drivers from nearly every major security vendor — via a malicious DLL (“msimg32.dll”) that terminates EDR kernel callbacks on load.14 EDR bypass tools are now commoditized on underground forums at $300–$10,000, accessible to affiliates with no kernel-level expertise of their own.14

Russia-nexus FANCY BEAR has gone further, deploying LAMEHUG — LLM-enabled malware that automates reconnaissance and document collection post-compromise, reducing the need for any manual operator interaction after the initial foothold.5

⚠️ Defender note: Standard EDR is insufficient against BYOVD attacks that kill the EDR process itself before any alert is generated. Kernel-level tamper protection and monitoring against Microsoft’s Vulnerable Driver Blocklist are required, not optional.
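The blocklist check the note describes reduces to hashing each loaded driver file and comparing it against a denylist. A minimal sketch, assuming the driver path list comes from a Windows enumeration such as `driverquery /v` and that the published block rules have been reduced to a set of SHA-256 hashes (both are assumptions, not shown here):

```python
import hashlib

def file_sha256(path):
    """SHA-256 of a driver file on disk."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def vulnerable_drivers(driver_paths, denylist_hashes):
    """Return the loaded drivers whose file hash appears on the denylist.
    driver_paths would come from a Windows driver enumeration; the denylist
    from the Vulnerable Driver Blocklist reduced to SHA-256 hashes."""
    return [p for p in driver_paths if file_sha256(p) in denylist_hashes]
```

A periodic scan like this is a compensating control, not a substitute for kernel-level tamper protection: it catches a known-bad driver already resident, whereas tamper protection blocks the load in the first place.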

Stage 4: Lateral Movement — The 29-Minute Breakout#

The CrowdStrike 2026 Global Threat Report contains a number that reframes every conversation about detection and response: the average eCrime breakout time — the interval between initial access and lateral movement to additional systems — is now 29 minutes. The fastest observed breakout in 2025 occurred in 27 seconds.5

Mandiant’s M-Trends 2026 report adds precision: the median initial-access-to-handoff interval — the time between a threat cluster gaining access and transferring control to a secondary operator for follow-on activity — has fallen to 22 seconds, down from over eight hours in 2022, reflecting increasing automation and division of labor across the criminal ecosystem.15

These numbers matter because most enterprise detection and response workflows — even mature ones — assume a detection-to-containment cycle measured in minutes to hours. At a 29-minute average breakout, that window has effectively closed.

The mechanism is automated network enumeration: AI tools rapidly map Active Directory structures, identify privileged service accounts, locate domain controllers, and determine the fastest path to domain admin — then execute that path using credential material harvested during initial access (typically via infostealer malware or LSASS dumps). IBM X-Force noted that infostealer malware exposed over 300,000 enterprise AI chatbot credentials in 2025 alone, many of which contained embedded SSO tokens providing direct network access.4
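The enumeration described above is, at bottom, graph search, and defenders can run the same analysis proactively (the premise of BloodHound-style attack-path auditing): model AD relationships as a directed graph and look for short paths from any compromisable principal to a high-value group. A toy sketch with illustrative, BloodHound-style edge names:

```python
from collections import defaultdict, deque

def shortest_attack_path(edges, start, target="Domain Admins"):
    """Breadth-first search over a directed graph of AD relationships.
    edges: triples like ("alice", "AdminTo", "SRV01"). Returns the shortest
    chain from a compromised principal to the target group, or None."""
    graph = defaultdict(list)
    for src, rel, dst in edges:
        graph[src].append((rel, dst))
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        for rel, dst in graph[node]:
            if dst not in seen:
                seen.add(dst)
                queue.append((dst, path + [f"-{rel}->", dst]))
    return None
```

Cutting any edge on every such path (removing a stale session, an admin right, a group membership) breaks the automated traversal the attacker's tooling depends on.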

Average lateral movement time fell 29% in 2025 to 34 minutes, while the time to the first Active Directory attack averaged approximately 11 hours after initial access in less-aggressive operations.5

Stage 5: Data Exfiltration — Smarter Staging, Faster Exit#

Data exfiltration has traditionally been a high-signal phase: large volumes of unusual outbound traffic are exactly what network detection tools are built to flag. AI has made this phase quieter and faster simultaneously.

AI-assisted staging tools now analyze the accessible file system and prioritize what to exfiltrate based on value signals — financial records, intellectual property, HR data containing PII, legal documents referencing regulatory compliance or litigation, and anything with “insurance,” “contract,” or “executive” in the metadata. Low-value data (marketing assets, general documentation) is skipped entirely. High-value data is staged and exfiltrated first, ensuring that even a partial operation yields maximum extortion leverage.

Exfiltration times have collapsed accordingly. Data exfiltration now takes a median of 6 minutes — down from approximately 4 hours in 2024 — enabled by AI-orchestrated parallel upload streams that blend into normal cloud synchronization traffic.5
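Even a 6-minute exfiltration must move an abnormal volume of bytes, so simple baseline statistics on per-host outbound traffic can still surface it, provided the interval granularity is fine enough. A minimal z-score sketch (window size and threshold are illustrative assumptions):

```python
import statistics

def exfil_spikes(outbound_bytes, window=24, z_threshold=4.0):
    """Flag intervals whose outbound volume is an extreme outlier versus a
    trailing baseline, e.g. per-host bytes per 5-minute bucket. A short,
    high-volume exfiltration concentrates enough bytes into one or two
    buckets to stand out against a stable baseline."""
    flagged = []
    for i in range(window, len(outbound_bytes)):
        baseline = outbound_bytes[i - window:i]
        mean = statistics.fmean(baseline)
        stdev = statistics.pstdev(baseline) or 1.0  # guard flat baselines
        if (outbound_bytes[i] - mean) / stdev >= z_threshold:
            flagged.append(i)
    return flagged
```

AI-orchestrated parallel streams that mimic cloud sync traffic are designed to defeat exactly this kind of per-channel threshold, which is why aggregating volume per host and per destination, rather than per connection, matters.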

The August 2025 extortion campaign Anthropic disrupted is instructive here: the operator used no encryption at all. The attack was pure exfiltration followed by an extortion demand — targeting at least 17 organizations including hospitals, emergency services, and government agencies. No ransomware binary means no endpoint detonation signature, no shadow copy deletion, no detectable encryption I/O. The “ransomware” threat was entirely psychological: pay or the data is published.1

This “encryption-free” model is becoming more common. It leaves fewer forensic artifacts, requires no decryption key management, and produces the same financial outcome while generating less defender telemetry.

Stage 6: Encryption, Extortion & Negotiation — The AI-Augmented Squeeze#

The final phase has been transformed less by technical automation and more by the professionalization of the extortion apparatus itself.

Negotiation as a Service

GLOBAL GROUP — a RaaS platform that emerged in June 2025 as a rebrand of the disrupted BlackLock operation — explicitly advertises an AI-powered negotiation system as a core platform feature.16 The system analyzes victim communications to calibrate pressure tactics, suggest optimal ransom anchor amounts, and identify which psychological levers (regulatory exposure, reputational damage, customer notification obligations) are most likely to accelerate payment.

Qilin has taken professionalization further still, employing dedicated legal advisory services that review stolen data, assess potential regulatory violations in the victim’s jurisdiction, and prepare documentation for submission to relevant authorities.14 The implicit threat: pay, or we file the regulatory complaint for you. The group also maintains an internal PR and media relations team dedicated to shaping public narratives and intensifying reputational pressure on non-paying victims.

Scale of the Extortion Economy

Ransomware groups posted 3,734 victims on public extortion sites in H1 2025 — a 67% jump compared to the same period in 2024, and a 20% increase over H2 2024.16 IBM X-Force identified 109 distinct extortion groups active in 2025, up from 73 in 2024, with active group count growing 49% year-over-year as AI lowered the barrier to standing up new operations.4

Mentions of malicious AI tools on dark web cybercrime forums surged 219% throughout 2024, according to threat intelligence firm KELA — with “dark AI” tools evolving into AI-as-a-Service subscription offerings that automate phishing content, deepfakes, and credential harvesting at scale.17

The lowering of the skill floor is the direct cause of ecosystem fragmentation. FunkSec — a group that emerged in late 2024 and posted 85 victims in December 2024 alone, outpacing every established operator — was operated by inexperienced actors who used AI-assisted malware development to build their toolkit.3 Check Point Research noted that FunkSec’s code contains extensive LLM-typical comments written in perfect English — markedly inconsistent with the group’s basic English in every other communication channel.3 The AI did the engineering. The humans did the targeting and the extortion calls.

This is the defining characteristic of the new ransomware economy: expertise is no longer the bottleneck.


🛡️ Part III — The Defender’s Response#

What Has Actually Changed About Your Risk Posture#

The threat landscape shift described above has three concrete implications for SMB security posture.

The perimeter no longer buys you time. If average breakout is 29 minutes and can be as fast as 27 seconds,5 the traditional “detect and respond” model — which assumes hours of attacker dwell time — is operationally broken against a prepared adversary. Detection must happen at the initial access phase, not after lateral movement has begun. Sophos’s 2025 Active Adversary Report puts median dwell time at 2 days across all cases, and 3–4 days in ransomware cases specifically — but those figures reflect cases where defenders caught attackers.18 Against an AI-assisted attacker operating on a 29-minute breakout timeline, the 2-day dwell window may never open.

Malware detection is necessary but no longer sufficient. When 82% of detections are malware-free — valid credentials, LOLBins, BYOVD-killed EDRs — a security stack optimized for binary-based detection has large structural blind spots.5 Behavioral analytics and identity-centric detection are not optional additions; they are the primary detection surface.

SMBs are the preferred target, not an afterthought. Two-thirds of ransomware attacks target organizations with fewer than 500 employees.19 88% of SMB breaches include a ransomware component, compared to 39% at large enterprises.19 The FBI’s 2024 IC3 Report recorded 3,156 ransomware complaints with $12.4 million in adjusted losses — a figure widely acknowledged to undercount actual incidents significantly.6 The mean recovery cost from a ransomware attack is $1.53 million (Sophos State of Ransomware 2025), with a median ransom demand of $1.32 million.7 For most SMBs, a single incident is an existential financial event.

Detection Signals Per Stage#

The attack chain above generates behavioral signals at each phase. The challenge is detection speed and fidelity — particularly now that the window between signal and impact has compressed to minutes.

| Stage | Key Behavioral Signals |
| --- | --- |
| Recon | Unusual enumeration queries against public-facing services; spike in failed auth attempts across external portals; certificate transparency log queries for org domains |
| Initial Access | MFA fatigue push patterns; login anomalies (unusual geography, device fingerprint, time-of-day); credential stuffing on VPN; deepfake call reports from staff |
| Persistence / PrivEsc | Unsigned or anomalous driver loads; EDR process termination events; new scheduled tasks or registry run keys; LOLBin chains (PowerShell → WMI → certutil) |
| Lateral Movement | Rapid SMB/RDP connections across multiple hosts; Kerberoasting queries; LSASS access events; unusual service account logons |
| Exfiltration | Outbound data volume spikes; staging in temp or appdata directories; connections to cloud sync services not in baseline; large archive creation |
| Extortion | Dark web exposure monitoring for org data; employee reports of unusual IT support calls or texts |
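Several of these signals reduce to rate questions over event streams. Taking the lateral-movement row as an example, the tell is one host opening SMB/RDP/WinRM sessions to many distinct destinations inside a short window. A sliding-window sketch (ports, window, and threshold are illustrative assumptions):

```python
from collections import defaultdict

LATERAL_PORTS = {445, 3389, 5985}  # SMB, RDP, WinRM

def fanout_alerts(events, window_s=120, max_targets=10):
    """Flag source hosts contacting an unusually large number of distinct
    destinations on lateral-movement ports within a sliding time window.
    events: (timestamp_s, src, dst, port) tuples."""
    by_src = defaultdict(list)
    for ts, src, dst, port in sorted(events):
        if port in LATERAL_PORTS:
            by_src[src].append((ts, dst))
    alerts = set()
    for src, conns in by_src.items():
        lo = 0
        in_window = defaultdict(int)  # dst -> connection count inside the window
        for ts, dst in conns:
            in_window[dst] += 1
            while ts - conns[lo][0] > window_s:  # expire old connections
                old = conns[lo][1]
                in_window[old] -= 1
                if in_window[old] == 0:
                    del in_window[old]
                lo += 1
            if len(in_window) > max_targets:
                alerts.add(src)
    return alerts
```

The fixed threshold is the weak point; a per-host baseline (how many destinations does this machine normally touch?) turns the same mechanism into an anomaly detector instead of a tripwire.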

Controls That Actually Matter Now#

The following six controls address the highest-leverage attack vectors in the AI-augmented kill chain, listed in priority order for organizations with constrained security budgets.

1. Phishing-resistant MFA (FIDO2 / hardware tokens) Compromised VPN credentials were the initial access vector in 48% of ransomware attacks in Q3 2025.13 SMS and authenticator-app MFA are increasingly bypassed through real-time adversary-in-the-middle phishing proxies. FIDO2 hardware tokens or device-bound passkeys are the only MFA methods currently resistant to real-time credential interception. This is the single highest-leverage control available to SMBs today.

2. Network segmentation and least-privilege access AI-assisted lateral movement exploits flat architectures where any compromised endpoint can reach domain controllers and file servers in seconds. Micro-segmentation — even partial, between IT and operational systems, or between workstations and servers — dramatically increases the cost of lateral movement and creates detection chokepoints.

3. EDR with kernel-level tamper protection and BYOVD detection Standard EDR is insufficient against BYOVD attacks that terminate the EDR process before any alert is generated. Select vendors that include kernel-level tamper protection and actively monitor for driver loads against Microsoft’s Vulnerable Driver Blocklist. The absence of this capability is a meaningful gap in 2026.

4. Identity hygiene: privileged account audit and credential exposure monitoring 82% of detections are malware-free — your attacker is logging in, not breaking in.5 Regular privileged account audits, service account password rotation, and continuous monitoring of credential exposure (breach dump correlation, infostealer market monitoring) address the primary entry point.

5. Immutable, tested, offsite backups Immutable backups stored offline or in a non-domain-joined environment remain the most reliable recovery path when encryption has already occurred. The critical qualifier is tested: an untested backup is not a reliable control. Establish a documented, practiced recovery runbook.

6. Out-of-band verification protocols for high-risk requests The Arup incident established that sophisticated employees can be deceived by high-quality deepfake video calls.11 Implementing mandatory out-of-band verification for any wire transfer, credential change, or unusual access request — regardless of how convincing the requestor appears — is the highest-leverage control against AI-generated social engineering and voice-clone vishing.
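On control 5, the “tested” qualifier can be partially automated: a hash manifest written at backup time and re-verified on a schedule catches silent corruption or tampering long before a restore drill would (it complements, not replaces, periodic full-restore tests). A minimal sketch:

```python
import hashlib
import json
from pathlib import Path

def write_manifest(backup_dir, manifest_path):
    """At backup time, record a SHA-256 per file. Store the manifest
    out-of-band, never on the system being backed up."""
    root = Path(backup_dir)
    manifest = {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))

def verify_backup(backup_dir, manifest_path):
    """Re-hash the backup and report missing or altered files.
    An empty list means the backup still matches what was written."""
    manifest = json.loads(Path(manifest_path).read_text())
    root = Path(backup_dir)
    problems = []
    for rel, expected in manifest.items():
        f = root / rel
        if not f.is_file():
            problems.append(f"missing: {rel}")
        elif hashlib.sha256(f.read_bytes()).hexdigest() != expected:
            problems.append(f"altered: {rel}")
    return problems
```

Run the verification from a host that cannot be reached from the production domain; a manifest an attacker can rewrite verifies nothing.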


🔮 Closing: The Next Eighteen Months#

The trajectory of AI-assisted ransomware points toward one destination: the fully autonomous attack chain. Agentic AI systems — AI models that plan, act, and iterate without human intervention — are already demonstrating the capability to complete the full ransomware lifecycle in approximately 25 minutes. Palo Alto Networks’ Unit 42 simulated this in a controlled environment: from initial compromise to full data exfiltration, with the agent autonomously switching exfiltration channels mid-transfer without triggering a single alert.21 The emerging Model Context Protocol (MCP), which allows AI agents to connect to external tools, databases, and services, is creating a new and largely unexamined attack surface: a misconfigured or compromised MCP server becomes a universal pivot point that exposes every AI agent connected to it.22

IBM X-Force projects that as multimodal AI models mature, adversaries will automate the entire chain — reconnaissance, exploitation, lateral movement, exfiltration, and negotiation — without any human hand on the operation between deployment and payout.4 Barracuda Networks identifies agentic AI as the defining threat multiplier of 2026, noting that tasks previously requiring an experienced operator to plan and execute over days can now be delegated to an agent that runs continuously until it achieves its objective or is shut down.23

The ransomware threat is no longer primarily a malware problem. It is an automation problem. The organizations that survive the next wave will be the ones that close the identity gaps, compress their detection windows, and accept that the human attackers who used to be the bottleneck are no longer in the loop.


🔗 References#

All sources cited in this report:



  1. Anthropic. (2025). “Detecting and Countering Misuse of Claude: August 2025.” Anthropic Threat Intelligence Report. Documents cybercriminal using Claude Code to extort 17+ organizations (healthcare, emergency services, government) with ransoms >$500K; ransomware developer selling AI-built variants for $400–$1,200. https://www.anthropic.com/news/detecting-countering-misuse-aug-2025 ↩︎ ↩︎

  2. Anthropic. (2025). “Disrupting the First Reported AI-Orchestrated Cyber Espionage Campaign.” Anthropic Threat Intelligence. September 2025. Covers GTG-1002 using Claude Code agents autonomously for 80–90% of operations against ~30 targets. https://www.anthropic.com/news/disrupting-AI-espionage ↩︎

  3. Check Point Research. (2025). “FunkSec – Alleged Top Ransomware Group Powered by AI.” January 2025. Documents FunkSec’s AI-assisted malware development, LLM-generated code comments inconsistent with operator’s broken English, Miniapps AI chatbot for operations, 85+ victims in December 2024. https://research.checkpoint.com/2025/funksec-alleged-top-ransomware-group-powered-by-ai/ ↩︎ ↩︎ ↩︎

  4. IBM Security X-Force. (2026). “2026 X-Force Threat Intelligence Index: AI-Driven Attacks Are Escalating.” Key findings: 109 active extortion groups (up from 73, +49%); 44% increase in public-facing app exploitation; 300,000+ AI chatbot credentials exposed by infostealers; multimodal AI trajectory toward full-chain automation. https://newsroom.ibm.com/2026-02-25-ibm-2026-x-force-threat-index-ai-driven-attacks-are-escalating-as-basic-security-gaps-leave-enterprises-exposed ↩︎ ↩︎ ↩︎ ↩︎ ↩︎

  5. CrowdStrike. (2026). “2026 Global Threat Report: Evasive Adversary Wields AI.” Key findings: avg eCrime breakout 29 min (fastest 27 sec); 82% malware-free detections; AI-enabled adversaries +89%; PUNK SPIDER LOLBin AI scripts; FANCY BEAR LAMEHUG; lateral movement avg 34 min (-29%); exfiltration 6 min. https://www.crowdstrike.com/en-us/blog/crowdstrike-2026-global-threat-report-findings/ ↩︎ ↩︎ ↩︎ ↩︎ ↩︎ ↩︎ ↩︎ ↩︎ ↩︎ ↩︎ ↩︎ ↩︎ ↩︎

  6. Federal Bureau of Investigation — Internet Crime Complaint Center. (2025). “2024 Internet Crime Report.” IC3 2024 data: 3,156 ransomware complaints; $12.4M in reported adjusted losses (widely understood to undercount by order of magnitude due to non-reporting); $2.77B in BEC losses across 21,442 BEC incidents. https://www.ic3.gov/AnnualReport/Reports/2024_IC3Report.pdf ↩︎ ↩︎ ↩︎

  7. Sophos. (2025). “The State of Ransomware 2025.” June 2025. Survey of 3,400 IT/cybersecurity leaders across 17 countries. Key financial metrics: median ransom demand $1.32M (down from $2M in 2024); median ransom payment $1M; mean recovery cost $1.53M (down from $2.73M); 53% of payers negotiated a lower amount. https://www.sophos.com/en-us/blog/the-state-of-ransomware-2025 ↩︎ ↩︎

  8. SentinelOne Labs. (2025). “LLMs & Ransomware: An Operational Accelerator, Not a Revolution.” Analysis of how LLMs function as force multipliers at specific stages of ransomware operations without replacing human operators wholesale. https://www.sentinelone.com/labs/llms-ransomware-an-operational-accelerator-not-a-revolution/ ↩︎

  9. Infosecurity Magazine. (2025). “AI-Enabled Adversaries Compress Time-to-Exploit.” Documents collapse of vulnerability weaponization window from 4.76 days average to 24–48 hours; AI-accelerated exploitation timelines. https://www.infosecurity-magazine.com/news/exploitation-accelerates-in-2025/ ↩︎

  10. Google Cloud GTIG. (2025). “AI Threat Tracker: Advances in Threat Actor Usage of AI Tools.” Documents PROMPTFLUX LLM-based dynamic VBScript obfuscation (June 2025), nation-state AI tool adoption patterns, and the broader context of automated scanning campaigns. Note: the Cl0p/MOVEit campaign referenced in the article (~2,500 servers, 2023) was a zero-day exploitation campaign that predates the AI-automation narrative — cited as an early large-scale automated scanning example, not an AI-driven recon case. https://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools ↩︎ ↩︎ ↩︎

  11. CybelAngel. (2025). “Voice Cloning Is the New BEC: Deepfake CEO Fraud in the US.” Covers deepfake CEO fraud landscape and the Arup February 2024 $25M deepfake video call incident. Note: the “3–5 seconds of audio” voice clone capability is a widely-cited industry claim (appearing across multiple security vendor publications) and does not originate from CybelAngel’s own primary research — it reflects industry consensus on current tool capabilities. https://cybelangel.com/blog/deepfake-ceo-fraud-how-voice-cloning-targets-us-executives/ ↩︎ ↩︎ ↩︎

  12. Help Net Security. (2025). “Ransomware’s New Playbook Is Chaos.” December 31, 2025. Documents BlackBasta and Cactus combined vishing-phishing privilege escalation; evolution of extortion tactics; encryption-free data theft models. https://www.helpnetsecurity.com/2025/12/31/ransomware-tactics-expanding/ ↩︎

  13. Beazley Security. (2025). “Quarterly Threat Report: Third Quarter, 2025.” Primary source for VPN credential compromise as 48% of ransomware initial access in Q3 2025 (up from 38% in Q2); Akira group responsible for 39% of Beazley IR cases in Q3; top three groups (Akira, Qilin, Inc Ransom) accounting for 65% of all cases. https://beazley.security/insights/quarterly-threat-report-third-quarter-2025 ↩︎

  14. The Hacker News. (2026). “Qilin and Warlock Ransomware Use Vulnerable Drivers to Disable 300+ EDR Tools.” April 2026. Documents msimg32.dll BYOVD technique, 300+ EDR driver termination, Reynolds ransomware embedded driver payload, EDR bypass commoditization ($300–$10,000). Note: researchers strongly suspect (not confirmed) that AI assisted in development of some EDR killers, including Warlock’s. https://thehackernews.com/2026/04/qilin-and-warlock-ransomware-use.html ↩︎ ↩︎ ↩︎ ↩︎

  15. Mandiant / Google Cloud. (2026). “M-Trends 2026: Initial Access Handoff Shrinks From Hours to 22 Seconds.” Based on 500,000+ hours of IR investigations in 2025. Median initial-access-to-handoff time: 22 seconds (down from 8+ hours in 2022), reflecting division-of-labor automation between access brokers and secondary operators. https://cloud.google.com/security/resources/m-trends

  16. Check Point Blog. (2025). “Ransomware in Q2 2025: AI Joins the Crew, Cartels Rise, and Payment Rates Collapse.” Documents GLOBAL GROUP (BlackLock rebrand) AI-powered negotiation system; 3,734 victims in H1 2025 (+67% YoY). https://blog.checkpoint.com/research/ransomware-in-q2-2025-ai-joins-the-crew-cartels-rise-and-payment-rates-collapse/

  17. KELA Cyber. (2025). “2025 AI Threat Report: How Cybercriminals Are Weaponizing AI Technology.” Via Infosecurity Magazine coverage. Key finding: 219% increase in dark web mentions of malicious AI tools throughout 2024; 52% increase in jailbreak discussions; “dark AI” tools evolving into an AI-as-a-Service subscription model. https://www.infosecurity-magazine.com/news/dark-web-mentions-malicious-ai/

  18. Sophos. (2025). “It Takes Two: The 2025 Sophos Active Adversary Report.” April 2, 2025. Based on 400+ MDR and IR cases from 2024. Key findings: overall median dwell time of 2 days; 3 days for ransomware MDR cases; 4 days for ransomware IR cases; in 56% of cases the adversary logged in rather than broke in. https://www.sophos.com/en-us/blog/2025-sophos-active-adversary-report

  19. ISACA. (2026). “AI-Driven Ransomware Fuels Rise in New Cyberthreat Groups.” AI-driven ransomware trends and ecosystem growth. Note: the SMB targeting statistics cited in the article (two-thirds of attacks targeting organizations with fewer than 500 employees; 88% of SMB breaches vs. 39% at large enterprises) originate from Beazley insurance incident data, widely cited across industry reporting, including via the Cybersecurity Ventures 2025 Almanac. https://www.isaca.org/resources/news-and-trends/industry-news/2026/ai-driven-ransomware-fuels-rise-in-new-cyberthreat-groups

  20. Palo Alto Networks. (2025). “The Ransomware Speed Crisis.” September 2025. Documents AI acceleration of attack chain timing; mean time-to-exfiltrate collapsing from 9 days (2021) to 2 days (2023) and under 30 minutes in AI-assisted cases; 100× speed increase since 2021. https://www.paloaltonetworks.com/blog/2025/09/ransomware-speed-crisis/

  21. Palo Alto Networks — Unit 42. (2025). “Unit 42 Develops Agentic AI Attack Framework.” May 2025. Primary source for the 25-minute full ransomware lifecycle simulation: Unit 42 demonstrated autonomous compromise-to-exfiltration with the agent self-directing channel switches mid-transfer without triggering alerts; mean time-to-exfiltrate dropped from 9 days (2021) to under 30 minutes in AI-assisted scenarios. https://www.paloaltonetworks.com/blog/2025/05/unit-42-develops-agentic-ai-attack-framework/

  22. Cybersecurity Dive. (2025). “Autonomous Attacks Ushered Cybercrime Into AI Era in 2025.” Documents MCP as an emerging attack surface; agentic ransomware trends; cybercrime AI adoption in 2025. https://www.cybersecuritydive.com/news/cybercrime-ai-ransomware-mcp-malwarebytes/811360/

  23. Barracuda Networks. (2026). “Agentic AI: The 2026 Threat Multiplier Reshaping Cyberattacks.” February 27, 2026. Analysis of agentic AI as a 2026 threat multiplier; tasks previously requiring days of operator time now delegatable to autonomous agents; MCP as emerging attack surface. https://blog.barracuda.com/2026/02/27/agentic-ai--the-2026-threat-multiplier-reshaping-cyberattacks ↩︎