22 November 2025

AI Hacks AI: Cybercriminals Unleash An AI-Powered, Self-Replicating Botnet

Thomas Brewster

Hackers have started using large language models to code up attacks on AI systems, researchers have warned. They’re then using those hacked AI systems to target other AI machines.

Marking another milestone on the road to a cyber world where AI constantly fights AI, Israel-based Oligo Security found evidence of mass exploitation of Ray, open-source software designed to help developers manage and distribute computing power for AI projects.
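
For context, Ray lets developers declare ordinary Python functions as remote tasks that a cluster schedules and runs in parallel across its available compute. The snippet below is a minimal, illustrative sketch of that workflow, not code from the attack described in this story.

    import ray

    # Start (or connect to) a local Ray runtime.
    ray.init()

    # Mark an ordinary function as a remote task that Ray can
    # schedule across the workers in the cluster.
    @ray.remote
    def square(x):
        return x * x

    # Launch ten tasks in parallel and collect their results.
    futures = [square.remote(i) for i in range(10)]
    print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]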

The Oligo researchers found more than 230,000 Ray servers exposed online despite warnings from the company that develops Ray, potentially leaving them open to cyberattacks, according to Oligo AI security researcher Avi Lumelsky. Lumelsky said he was “very certain” large language models, such as OpenAI’s ChatGPT and Anthropic’s Claude, were used to generate the code that ordered the hacked servers to mine cryptocurrency, though he couldn’t specify which models. He said there were identifiable “hallmarks” of LLM-generated malicious code, including needless repetition of certain comments and strings.
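
The exposure Oligo describes typically means a Ray dashboard, which also serves the cluster’s job-submission API, answering unauthenticated requests from the open internet. As a rough illustration only, and not Oligo’s methodology, a cluster operator could check whether their own dashboard responds without credentials; the sketch below assumes Ray’s default dashboard port of 8265 and uses the third-party requests library.

    import requests

    def ray_dashboard_exposed(host, port=8265, timeout=5):
        """Return True if a Ray dashboard answers unauthenticated HTTP at host:port."""
        try:
            resp = requests.get(f"http://{host}:{port}/", timeout=timeout)
        except requests.RequestException:
            return False
        # A 200 response with no login challenge suggests the dashboard,
        # and the job-submission API it exposes, is reachable from here.
        return resp.status_code == 200

    if __name__ == "__main__":
        # Check the local machine; in practice an operator would check
        # every host in their own cluster's address range.
        print(ray_dashboard_exposed("127.0.0.1"))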

The compromised Ray servers were also used to autonomously scout for further targets, turning the operation into a self-propagating botnet and showing that “AI infrastructure can be hijacked to attack itself,” said Gal Elbaz, CTO and cofounder of Oligo. Oligo has dubbed the campaign ShadowRay 2.0, a successor to attacks it detected last year.
