This summer, hackers linked to Russian intelligence introduced a disturbing new tactic in their cyber operations against Ukraine: phishing emails embedded with an artificial intelligence program. The malicious attachment, once opened and executed, would automatically comb through victims’ computers, extract sensitive files, and send them back to Moscow.
According to technical reports released in July by Ukraine’s cybersecurity agencies and independent firms, this is the first known instance of Russian state-backed hackers deploying large language models (LLMs) — the same underlying technology behind popular chatbots — to build malicious code.
AI Becomes the New Weapon in Cyber Offense
The Russian campaign is part of a broader trend: hackers of all stripes — state actors, cybercriminals, and even researchers — are increasingly integrating AI into their operations. While LLMs remain imperfect and prone to errors, their speed and ability to process and generate code have made skilled hackers faster and more efficient.
Scammers and social engineers have been using AI to draft more convincing phishing emails since at least 2024. Now, the technology is moving beyond text manipulation to direct exploitation of vulnerabilities. Security experts warn that the field is entering what they call “the beginning of the beginning” of AI-driven cyberwarfare.
Cyber Defenders Fight Back With AI
Cybersecurity professionals are not sitting idle. Google’s security team, led by Heather Adkins, has used its Gemini LLM to identify overlooked vulnerabilities in widely used software before criminals could exploit them. Since 2024, the project has flagged at least 20 critical bugs, which were subsequently patched by vendors.
CrowdStrike, a global cybersecurity firm, also reports using AI to assist clients during breaches, while tracking growing evidence that adversaries in China, Russia, and Iran, as well as criminal syndicates, are deploying AI-driven tools to enhance their attacks.