Nadir Izrael, co-founder and CTO at the cyber exposure management company Armis, explores how AI is tipping the scales in cyber warfare and how organisations can reclaim the advantage.
The headlines scream familiar warnings: ‘geopolitical tensions escalate’ and ‘massive AI cyberattacks surge.’ These recurring stories are a stark indicator of the evolving landscape: traditional approaches to cybersecurity are no longer enough.
We must move beyond passive defence and actively prepare for this new era of cyber warfare. The statistics underscore this urgency. Research from Armis found that 88pc of UK IT decision-makers expressed concern about the impact of cyber warfare, a 32pc jump from the previous year.
This is against a backdrop of an expanding attack surface, with a projected 50 billion connected devices by the end of 2025, alongside growing global instability. Yet the biggest driver behind this change is inextricably linked to AI.
The question then becomes: “What can organisations do to prepare?” Fortunately, the key to effective preparation lies in the very tool being used against us.
Cost of falling behind
The news cycle constantly highlights how AI is rapidly supercharging the capabilities of nation-state attackers, cybercriminal groups and bad actors alike – and with good reason. Most (70pc) UK IT decision-makers now agree that AI-powered attacks pose a significant threat to their organisation’s security. Yet those threats are already slipping through the cracks.
Four in five (82pc) UK IT leaders admit that offensive techniques regularly bypass their existing security tools. Additionally, as IT and operational technology (OT) environments become more connected, the illusion of isolated ‘air-gapped’ systems offering foolproof protection is fading fast. As organisations integrate IT and OT systems for efficiency, they unintentionally expose new attack surfaces that AI-driven cyber threats are already exploiting to bypass traditional barriers.
And the attack surface is only becoming more complex. Alongside traditional vectors such as networks, endpoints and applications, bad actors are using AI to power more deceptive techniques, such as voice cloning, deepfakes and synthetic social engineering. Yet most defences remain reactive.