Hackers are now using AI to guide attacks in real time. In a statement that initially attracted little attention among Western analysts, Ukraine’s national cybersecurity agency warned on July 17 that a Russian cyber threat group, known as APT28, is using AI in a novel way as part of its cyberattacks. Once the hackers gain access to their target, the AI instructs the malware how to move through the network and disrupt, destroy, or steal information. This more adaptive methodology makes it harder for defenders to detect and thwart attacks.
AI Is Reshaping the Cyber Threat Landscape
The Computer Emergency Response Team of Ukraine (CERT-UA) warned that during an operation in mid-July, the Russian hackers configured their malware to query an AI model in real time for what it should do next once inside Ukrainian networks. Instead of following static, pre-coded instructions, the malware asked the model for new actions based on its environment, allowing it to adapt on the fly.
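To make that contrast concrete, the sketch below (Python, purely illustrative) compares a fixed, pre-coded sequence of steps with a loop that asks a model what to do next based on observed context. The names `query_model` and `collect_context` are placeholders of my own, not anything described in CERT-UA's advisory, and the "actions" are benign print statements rather than real attack logic; the point is only to show why behavior chosen at runtime is harder to fingerprint than a static playbook.

```python
# Conceptual sketch: static, pre-coded steps vs. a model-directed loop.
# All names here (query_model, collect_context) are illustrative placeholders,
# not APIs from any real malware, model provider, or vendor report.

STATIC_PLAYBOOK = ["enumerate_hosts", "collect_logs", "report_findings"]


def run_static(playbook):
    """Pre-coded behavior: the same fixed steps, in the same order, every time."""
    for step in playbook:
        print(f"[static] executing fixed step: {step}")


def collect_context():
    """Stand-in for whatever environment data an adaptive agent observes."""
    return {"os": "linux", "reachable_hosts": 3, "step_count": 0}


def query_model(context):
    """Placeholder for a real-time call to a language model.

    Here it just returns a canned, benign action so the sketch runs offline;
    in the pattern CERT-UA describes, this decision would come from the model.
    """
    if context["step_count"] == 0:
        return "summarize_environment"
    return "stop"


def run_adaptive():
    """Model-directed behavior: each next action depends on observed context."""
    context = collect_context()
    while True:
        action = query_model(context)
        if action == "stop":
            break
        print(f"[adaptive] model chose: {action} given {context}")
        context["step_count"] += 1


if __name__ == "__main__":
    run_static(STATIC_PLAYBOOK)
    run_adaptive()
```

Because the adaptive loop's sequence of actions is decided at runtime, defenders cannot rely on matching a known, hard-coded order of operations, which is what makes this methodology harder to detect.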
Cybercriminals have been increasingly leveraging AI to scale operations. At the February 2025 Munich Security Conference, Western and Ukrainian officials warned that Russian hackers are relying on AI to process large volumes of stolen data and improve attack precision. An April 2025 threat report by cloud security company Zscaler confirmed that adversaries now use generative AI to bypass security systems and craft more convincing phishing scams. One emerging cybercriminal group, FunkSec, uses generative AI to develop advanced malware for less experienced hackers, making cybercrime more accessible than ever.
Public large language model (LLM) hubs have accelerated this shift. These platforms, originally intended to promote research and innovation, now offer easy access to downloadable models that hackers can repurpose for attacks. Dark web forums are compounding this problem by promoting low-cost tools like FraudGPT and ChaosGPT, which help attackers generate malicious code and execute advanced scams.