Caleb Withers
Russia’s invasion of Ukraine in 2022 confounded expectations around the role of cyber operations in modern conflict. Although many experts predicted a sweeping, highly coordinated cyber offensive would play a decisive role alongside conventional forces, the reality proved otherwise. In a war between a cyber-savvy great power and a digitally advanced state, cyberattacks played a relatively modest role. This limited impact underscores a key limitation of offensive cyber operations—sophisticated attacks require months of planning and thousands of hours of labor. Consequently, the need to plan and synchronize cyber operations well in advance of execution can be an obstacle to achieving strategic military objectives. Human planning timelines, in short, bottleneck the full potential of offensive cyber operations.1
Sufficiently capable artificial intelligence (AI) systems could overcome this bottleneck. While current systems show only nascent capabilities to autonomously execute the complex, multistep tasks required for sophisticated cyber operations, progress in these capabilities has been real and rapid, with no indication of slowing. Today, AI systems primarily serve as tools to automate specific tasks, such as research or code generation. In the future, AI systems might become capable of autonomously executing operations across the full cyber kill chain, from reconnaissance to impact.
This report examines how emerging AI capabilities could disrupt the cyber offense-defense balance. Historically, attackers have had significant structural advantages in cyberspace: defenders must secure vast attack surfaces, while attackers need only succeed once. AI has, on balance, helped defenders, allowing them to mitigate these challenges by scaling defensive activities and responding to attacks in real time. But policymakers should not assume this dynamic will hold indefinitely. Three challenges could lead AI to disproportionately empower attackers in the future.
First, growing inference costs at the frontier of capabilities may benefit well-resourced attackers who can selectively target high-value assets, while defenders struggle to protect their entire attack surface. Second, automating the full cyber kill chain could accelerate operations from human to machine speed, dramatically enhancing the potential of cyberattacks to support military and geopolitical objectives. Third, persistent technical challenges in model safety and reliability create asymmetric advantages for attackers with higher risk appetites who can better tolerate both system failures and collateral damage from their operations. Moreover, these technical challenges will not occur in isolation. Organizations and nations will need to navigate sociotechnical challenges as they look to integrate AI more deeply into their cyber defenses, along with commercial and geopolitical pressures to develop and deploy AI systems at the potential expense of identifying and mitigating offensive risks.