While many companies are still exploring use cases for generative AI, criminal organizations have already weaponized it thoroughly. Recent security research shows that AI technology is being used for data theft on a large scale.
Infiltration Case: NCC Group experts successfully manipulated AI programming tools into leaking sensitive corporate data, including databases and source code. Security analyst Dave Brauchler warned: "We have never been so careless with security."
Hybrid Attack: In August of this year, a first-of-its-kind case combined a supply chain attack with AI manipulation. Hackers implanted malicious code into the development platform Nx, causing hundreds of thousands of users worldwide to download infected software. The malware also manipulated local AI tools (such as those from Google and Anthropic) to steal passwords, crypto wallets, and confidential files, exfiltrating data from more than a thousand devices.
Automated Crime: Ransomware campaigns are now fully AI-controlled, from vulnerability detection and data theft to ransom negotiation. Attackers use AI to uncover previously unknown vulnerabilities and even to automate negotiations with victims.
Adam Meyers of the security firm CrowdStrike predicts: "AI will become the new insider threat in 2025." When malware and AI defenses collide, cyberattacks could spiral out of control. SentinelOne expert Alex Delamotte noted that AI applications are proliferating rapidly while security protections lag behind: "The technology is being pushed into products without addressing the new risks it creates." Most dangerous of all is agentic AI, which can make decisions and carry out transactions on its own. In experiments, hackers have tricked AI browsers into completing purchases at counterfeit online stores.