16 March 2019

STUXNET: A DIGITAL STAFF RIDE

James Long 

At the risk of stating the obvious, the era of cyber war is here. Russia’s use of cyber capabilities in Ukraine to deny communications and locate Ukrainian military units for destruction by artillery demonstrates cyber’s battlefield utility. And it has become an assumed truth that in any future conflict, the United States’ adversaries will pursue asymmetric strategies to mitigate traditional US advantages in overwhelming firepower. As US military forces around the world expand their digital footprints, cyberattacks will grow in frequency and sophistication. As a result, commanders at all levels must take defensive measures or risk being exposed to what the 2018 Department of Defense Cyber Strategy described as “urgent and unacceptable risk to the Nation.”

The creation and growth of the US Army’s cyber branch speaks to institutional acceptance of cyber operations in modern warfare, but many leaders remain unfamiliar with the nature of cyberattacks and the best practices for protecting their formations. The Army’s preferred use of historical case studies to inform current tactics, techniques, and procedures runs into a problem here, given the dearth of cyber case studies. One of the few is Stuxnet, the first recognized cyberattack to physically destroy key infrastructure: the Iranian enrichment centrifuges at Natanz. So how should an analysis of it influence the way Army leaders think about cyber?


Stuxnet, for the purposes of this analysis, is a collective term for the malware’s multiple permutations from 2007 to 2009. Lockheed Martin has created a model it calls the “Cyber Kill Chain,” which offers a useful framework within which to analyze Stuxnet. The model deconstructs cyberattacks into kill chains composed of a fluid sequence of reconnaissance, weaponization, delivery, exploitation, installation, command and control, and actions on objectives. This analysis allows US commanders to learn how hackers exploited Iranian vulnerabilities to compromise vital security infrastructure, and to guard against similar threats in multi-domain environments.
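
For reference in the analysis that follows, the seven stages can be represented as a simple ordered structure. This is a minimal sketch for orientation only, not part of Lockheed Martin’s materials:

from enum import Enum

class KillChainStage(Enum):
    """The seven Cyber Kill Chain stages, in the order an attack proceeds."""
    RECONNAISSANCE = 1         # identify targets and entry points
    WEAPONIZATION = 2          # pair an exploit with a deliverable payload
    DELIVERY = 3               # move the weapon into the target environment
    EXPLOITATION = 4           # trigger the exploit on a victim system
    INSTALLATION = 5           # establish persistence on the host
    COMMAND_AND_CONTROL = 6    # open a channel for remote direction
    ACTIONS_ON_OBJECTIVES = 7  # execute the attack's actual purpose

# Defenders read the same sequence in reverse: the earlier the stage at
# which an intrusion is detected and broken, the cheaper the defense.
for stage in KillChainStage:
    print(f"{stage.value}. {stage.name.replace('_', ' ').title()}")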

The Stuxnet Kill Chain

Reconnaissance

Successful kill chains precisely exploit vulnerable centers of gravity within target networks to inflict disproportionate impact. Open-source intelligence, from press releases to social media, is combined with social engineering to identify targets and reveal entry points.

Stuxnet’s developers likely targeted the Iranian nuclear program’s enrichment centrifuges because enrichment is capital-intensive and a prerequisite for weapons development. Enrichment centrifuges are bundled into cascades whose operations are precisely regulated by industrial control systems (ICSs) that monitor and control centrifuge operations, presenting both network-control and mechanical vulnerabilities. The omnipresence of ICSs in key infrastructure, and their role as insurance against equipment failure, underscores the potency of Stuxnet’s attack on this vulnerability.

Reconnaissance efforts were likely aided by Iranian press releases and photographs of key leader visits, which were intended to generate internal support for the program but also revealed technical equipment information. These media releases could be complemented by inspection reports from the International Atomic Energy Agency describing the program’s scale and the types of enrichment equipment employed.

These sources would have helped identify Iran’s enrichment centrifuges as designs derived from A.Q. Khan’s P-1 centrifuge (Pakistan’s first generation), comparable to Libyan and North Korean models. P-1 cascades connect centrifuges spinning at supersonic speeds with pipes that progressively separate uranium isotopes. Natanz was the heart of the enrichment program and contained over six thousand centrifuges organized into cascades of 164 machines arranged in a series of stages. The centrifuge count expanded to 7,052 by June 2009, including 4,092 enriching gas within eighteen cascades in unit A24 plus another twelve cascades in unit A26. Iranian growth in centrifuge volume was complemented by technological improvement: the output of three thousand first-generation IR-1 machines could be matched by just twelve hundred of the newer IR-2 centrifuges. By then the program had produced 839 kilograms of low-enriched uranium, enough, if enriched further, for two nuclear weapons.

Analysis of open-source intelligence could also have revealed the particular control systems, manufactured by the German company Siemens, that Iran employed to operate its centrifuges, identifying the initial target and pointing follow-on research toward exploitable vulnerabilities.

Weaponization

Once the Siemens control system was selected as the target, researching centrifuge vulnerabilities was essential to turning the systems responsible for safe plant operations into weapons. Research into industrial control system vulnerabilities had been undertaken by a Department of Homeland Security program with the Idaho National Laboratory, which ran the 2007 Aurora Project demonstrating that malware could cause physical damage to equipment like the large turbines used in the power grid by forcing valves and other components out of synchronization. In 2008 the Idaho National Laboratory conducted extensive vulnerability testing of Siemens ICSs, finding the weaknesses later exploited by Stuxnet.

These vulnerabilities could then have informed development and testing of the malware at the Israeli nuclear facility at Dimona. What emerged was a “dual warhead” virus that destroyed centrifuges by hyper-accelerating their rotation while avoiding detection through a “man-in-the-middle” attack that showed false readings to mechanical control stations. The 500-kilobyte malware had tailored engagement criteria for a three-phase attack: first it would replicate within Microsoft Windows networks, then target the specific Siemens software used in ICSs, before finally focusing on the programmable logic controllers themselves.

Delivery

Introducing the dual warhead’s malicious payload into Natanz required overcoming the air gap that intentionally isolated the facility from the internet, a design meant to defeat direct-attack strategies like spear phishing. Hackers gained indirect access through attacks on third-party contractors with weaker digital security and access to Natanz. The virus lay dormant in their systems before automatically writing itself onto USB drives inserted into their computers, drives that were then carried into Natanz.

Multiple third-party hosts could have played this role, but Behpajooh was a prime candidate, as the company worked on ICSs, was located near Iran’s Nuclear Technology Center, and had been targeted by US federal investigators for illegal procurement activities.

Exploitation

After entering Natanz on USB drives, Stuxnet circumvented the malware-detection systems protecting enrichment infrastructure through zero-day exploits (attacks on previously unknown vulnerabilities) of Microsoft systems, including the use of fraudulent digital security certificates, and spread between computers connected to networked hardware like printers. This enabled rapid exploitation of the network.

Installation

After exploiting security gaps, Stuxnet spread by worming across networks and mass-replicating through self-installation, avoiding detection by remaining dormant until it found files indicating the host was associated with centrifuge control systems.
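
That trigger logic reduces to a fingerprint check: stay quiet unless the host looks like a control-system machine. The toy sketch below illustrates the idea; the marker file names are hypothetical stand-ins, not Stuxnet’s actual fingerprints:

import os

# Hypothetical marker files suggesting the host runs centrifuge control
# software; Stuxnet's real checks fingerprinted specific Siemens Step7
# artifacts, for which these generic names are stand-ins.
TARGET_MARKERS = {"step7_project.s7p", "simatic_config.cfg"}

def host_is_target(search_root: str) -> bool:
    """Return True only if files tied to the control system are present."""
    for _dirpath, _dirnames, filenames in os.walk(search_root):
        if TARGET_MARKERS & {name.lower() for name in filenames}:
            return True
    return False

# Remain dormant (while quietly replicating) on every other machine.
if host_is_target(r"C:\Program Files"):
    print("target fingerprint found: activate payload logic")
else:
    print("no fingerprint: stay dormant")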

Command and Control

While malware often operates as an extension of an external attacker’s commands, the complex security breaches required to access Natanz forced Stuxnet to operate semi-independently, since signals reaching out from Natanz would risk detection by network security. Hackers mitigated some of these command limitations by writing the code to update itself automatically whenever it contacted another copy of Stuxnet bearing a more recent date stamp.
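
That date-stamp rule is simple to illustrate. The sketch below is a conceptual model of the update exchange, with hypothetical field names rather than actual Stuxnet internals:

from dataclasses import dataclass

@dataclass
class Implant:
    """Minimal stand-in for one installed copy of the malware."""
    version_stamp: int   # e.g., a build date encoded as YYYYMMDD
    payload: bytes

def exchange_updates(local: Implant, peer: Implant) -> None:
    """When two instances meet, both converge on the newer build.

    This lets an operator refresh every copy inside an isolated network
    by updating any single machine that eventually contacts the rest.
    """
    if peer.version_stamp > local.version_stamp:
        local.version_stamp = peer.version_stamp
        local.payload = peer.payload
    elif local.version_stamp > peer.version_stamp:
        peer.version_stamp = local.version_stamp
        peer.payload = local.payload

# Example: an older copy meets a newer one and adopts its payload.
a = Implant(version_stamp=20090301, payload=b"old build")
b = Implant(version_stamp=20090615, payload=b"new build")
exchange_updates(a, b)
assert a.payload == b"new build"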

Actions on Objectives

After accessing the Siemens computers, Stuxnet gathered operational intelligence on network baselines before attacking the centrifuges. When it did attack, the dual-purpose warhead sent convincing false reports to the scientists monitoring the centrifuges while manipulating cascade rotational frequencies until the centrifuges destroyed themselves. By 2009, Stuxnet had destroyed an estimated 984 centrifuges and decreased enrichment efficiency by 30 percent, setting the enrichment program back years and significantly delaying the establishment of the Bushehr Nuclear Power Plant.
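
The record-then-replay deception at the heart of that man-in-the-middle attack can be shown in miniature. The sketch below is purely conceptual; the nominal 1,064 Hz and attack frequencies reflect figures reported in public analyses, and everything else is invented for illustration:

import itertools
import random

class SensorTap:
    """Man-in-the-middle between a frequency sensor and the operator's display."""

    def __init__(self) -> None:
        self.recorded: list[float] = []
        self.replay = None
        self.attacking = False

    def read(self, true_frequency: float) -> float:
        """Pass real readings through while recording; replay them during the attack."""
        if not self.attacking:
            self.recorded.append(true_frequency)
            return true_frequency
        if self.replay is None:
            # Operators now see recycled "normal" values, not reality.
            self.replay = itertools.cycle(self.recorded)
        return next(self.replay)

tap = SensorTap()
# Phase 1: record roughly normal operation near the IR-1's nominal 1,064 Hz.
for _ in range(10):
    tap.read(1064.0 + random.uniform(-0.5, 0.5))
# Phase 2: attack. The true frequency is driven far out of tolerance,
# but the monitoring station still sees believable numbers.
tap.attacking = True
displayed = tap.read(true_frequency=1410.0)
print(f"true: 1410.0 Hz, displayed: {displayed:.1f} Hz")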

Discovery

The virus was discovered after Iranian scientists contacted security specialists at the Belarusian firm VirusBlokAda to diagnose Microsoft operating system errors that were causing computers to reboot continuously. Stuxnet’s sophistication led experts to conclude that its development required nation-state support, and Kaspersky Lab warned that it would “lead to the creation of a new arms race in the world.” As security experts studied the code, they found the zero-day attacks, including fraudulent digital certificates that bypassed the screening methods used by most antivirus programs, leading cybersecurity experts like Germany’s Ralph Langner to fear that the virus could be reverse-engineered to target civilian infrastructure.

Lessons Learned

1. Respect open-source intelligence

Social media provides troves of open-source intelligence, from Vice News tracking Russian troop movements to vulnerabilities in dating apps like Tinder that enable phone hacking. Hackers can use open-source intelligence to identify and target the individuals whose compromise opens entire social networks, through strategies like using a target’s phone to transmit malware. These efforts can be complemented by social-media mining programs using artificial intelligence–enabled chat bots so sophisticated that a Stanford experiment found students could not differentiate them from actual teaching assistants.

The power of information operations fed by social-media exploitation was studied by NATO’s Strategic Communications Centre of Excellence, which ran a red-team social-media campaign to interfere with an international security cooperation exercise. The team spent sixty dollars and created fake Facebook groups to investigate what it could learn about a military exercise purely from open-source data, including details about the participants, and whether that data could be used to influence the participants’ behavior.

The effort identified 150 specific soldiers, found the location of several battalions, tracked troop movements, and compelled service members to engage in “undesirable” behavior, including leaving their positions against orders. Other experiments replicated these results to varying degrees, using bot personalities like “Robin Sage” to penetrate the social networks of the Joint Chiefs of Staff, the chief information officer of the National Security Agency, a senior congressional staffer, and others.

Risks posed by open-source intelligence are especially acute for gray-zone operations. Russia passed a law forbidding military personnel from posting photos or other sources of geolocation data online after investigative journalists exposed “covert” military actions through social-media posts. The United States has experienced its own operational-security breaches, as when a twenty-year-old Australian student identified military base locations from heat maps produced by a fitness-tracking app, which showed running routes clustered in global hot spots including Iraq and Syria. These challenges will grow as more devices become connected, requiring leaders to mitigate the threats posed by poor data management or risk jeopardizing the safety of their soldiers.

2. There is no perfect security solution

While air gaps are the gold standard for protecting key systems, overreliance on perimeter defenses can lead to underinvestment in internal security, leaving core systems defenseless. Cybersecurity expert Leigh-Anne Galloway notes that the result is that it is “often only at the point of exfiltration that an organization will realize they have a compromise,” as illustrated by Stuxnet’s detection only after it caused Microsoft operating system malfunctions.

Air gaps only deliver their benefits when security protocols are aggressively enforced to prevent third-party-enabled breaches, including barring external devices from the facility and disabling USB ports. And even with effective perimeter security, passive measures must be complemented by redundant systems such as training, internal audits, and active detection measures like threat hunting and penetration testing.
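
As a concrete example of one such protocol, the Windows driver that mounts USB mass-storage devices can be disabled outright. The sketch below uses Python’s standard winreg module and assumes a Windows host with administrator rights; in practice, enterprises push the same setting through group policy:

import winreg

# Setting the USBSTOR service's Start value to 4 ("disabled") stops
# Windows from loading the USB mass-storage driver, so inserted flash
# drives are never mounted. Requires administrator privileges.
USBSTOR_KEY = r"SYSTEM\CurrentControlSet\Services\USBSTOR"
SERVICE_DISABLED = 4

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, USBSTOR_KEY,
                    0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, SERVICE_DISABLED)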

3. Change the culture dominating cyber operations

Avoiding discussion of cyber operations in the name of security risks leaving commanders exposed through ignorance. While operational sources and methods must be protected, it may be time to challenge the norm of secrecy, arguably a vestige of military cyber’s emergence from organizations like the NSA. High classification levels help avoid detection in espionage operations and computer network exploitation, but they are less relevant to overt computer network attacks. This tension is described by retired Rear Admiral William Leigher, the former director of warfare integration for information dominance:

If you [are] collecting intelligence, it’s foreign espionage. You don’t want to get caught. The measure of success is: “collect intelligence and don’t get caught.” If you’re going to war, I would argue that the measure of performance is what we do has to have the characteristics of a legal weapon.

Educating military leaders can be accomplished by importing civilian-sector case studies, creating modern staff rides of the digital battlefield. This approach has traditionally been used with historic battlefields to educate commanders on how terrain shapes decision making and battle outcomes. Indeed, the analysis in this article is a sort of “digital staff ride,” but the much larger number of private-sector case studies, like the Target hack in which vulnerable third-party vendors were exploited to access information from forty million credit cards, makes them worthy of study as well.

4. Protect your baselines

Many computer systems, like ICSs, predictably run the same applications over time, creating “baseline” operational patterns. These baselines catalogue expected software, files, and processes, and deviations from them can help detect an attack through methods like file-integrity monitoring and endpoint detection (continuous monitoring of network events for analysis, which works even when malware uses unprecedented zero-day exploits, as Stuxnet did). Monitoring can also detect malware reaching back to external command nodes for operational guidance; while this method would not have caught Stuxnet, it can uncover more common malware attacks. “Whitelisting” can further protect systems by explicitly listing all programs permitted to run on a given system and denying all others.
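
A minimal version of file-integrity monitoring hashes every file of interest and compares the results against a stored baseline. The sketch below illustrates the core idea; production endpoint tools add scheduling, tamper protection, and alerting:

import hashlib
import json
import os

def hash_file(path: str) -> str:
    """SHA-256 of a file's contents, read in chunks to bound memory use."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_baseline(root: str, baseline_path: str) -> None:
    """Record a hash for every file under a directory of interest."""
    baseline = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            baseline[path] = hash_file(path)
    with open(baseline_path, "w") as f:
        json.dump(baseline, f, indent=2)

def check_baseline(baseline_path: str) -> list[str]:
    """Return baseline files that have changed or disappeared."""
    with open(baseline_path) as f:
        baseline = json.load(f)
    deviations = []
    for path, known_hash in baseline.items():
        if not os.path.exists(path):
            deviations.append(f"MISSING: {path}")
        elif hash_file(path) != known_hash:
            deviations.append(f"MODIFIED: {path}")
    return deviations

# Usage: run build_baseline() once on a known-good host, then run
# check_baseline() on a schedule and alert on any output.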

Data analysis amplifies these strategies by integrating sources like physical-access and network-activity logs to find deviations such as “superman” reports, in which a user appears to hop between distant geographic locations within moments, often because a virtual private network is masking the user’s actual location. These tools can also reveal anomalies between physical and logical access records that could signal an intrusion or attack.
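
A simple form of the “superman” check computes the travel speed implied by consecutive logins and flags anything faster than an airliner. This is a sketch with invented example data; production tools also weigh VPN egress points and device fingerprints:

from dataclasses import dataclass
from datetime import datetime
from math import asin, cos, radians, sin, sqrt

@dataclass
class LoginEvent:
    user: str
    time: datetime
    lat: float
    lon: float

def haversine_km(a: LoginEvent, b: LoginEvent) -> float:
    """Great-circle distance between two login locations, in kilometers."""
    dlat = radians(b.lat - a.lat)
    dlon = radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def flag_superman(events: list[LoginEvent], max_kmh: float = 900.0) -> list[tuple]:
    """Flag consecutive logins implying faster-than-airliner travel."""
    flagged = []
    ordered = sorted(events, key=lambda e: e.time)
    for prev, curr in zip(ordered, ordered[1:]):
        hours = (curr.time - prev.time).total_seconds() / 3600
        if hours > 0 and haversine_km(prev, curr) / hours > max_kmh:
            flagged.append((prev, curr))
    return flagged

# Example: "logins" from Virginia and Moscow twenty minutes apart.
events = [
    LoginEvent("jdoe", datetime(2019, 3, 1, 9, 0), 38.88, -77.10),
    LoginEvent("jdoe", datetime(2019, 3, 1, 9, 20), 55.75, 37.62),
]
print(flag_superman(events))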

5. Train effectively

Current Army digital-security training focuses on basic cyber awareness. Training fundamental security practices, like not opening files from unknown senders and avoiding cloned social-media pages, is important, but it is not sufficient.

While hacker exploits constantly change, training leaders to respond to breaches and maintaining current incident-response plans are vital to guarding against cyber threats. Commanders can develop these plans using resources like the National Institute of Standards and Technology’s Guide to Test, Training, and Exercise Programs for IT Plans and Capabilities. The guide, along with industry best practices, recommends executing wargames and tabletop exercises that stress decision making and internal systems to find capability gaps, especially at the lowest tactical levels, which are often neglected in the Army’s current wargames. These exercises should include quantifiable tests that stress and validate system response capabilities, training that delineates responsibilities and ensures accountability for network users and administrators, and regular execution of well-developed emergency response plans to confirm their accuracy and feasibility.

6. Invest before it becomes necessary

Strong executive action is needed to close cyber capability gaps, and failing to confront that challenge at the national level invites strategic risk. The failure to develop a collective national cybersecurity strategy was identified as early as 2002, when former CIA Director James Woolsey wrote to President Bush calling for aggressive investment in cyber defense to “avoid a national disaster.” Cybersecurity experts revisited the issue in the aftermath of the Aurora Project, estimating that a major cyberattack against the United States would cost “$700 billion . . . the equivalent of 40 to 50 large hurricanes striking all at once.”

Despite these risks, there has been comparatively little investment in security infrastructure, primarily because, as one analysis explains, “there is no obvious incentive for any utility operator to take any of the relatively simple costs necessary to defend against it.” The problem extends past traditional utilities into core elements of the national economy, underscored by the reported planting by a Chinese company of tiny chips on server motherboards that would enable infiltration of the networks using those servers, which were reportedly found in Department of Defense and Central Intelligence Agency data centers and at over thirty major US companies. In the absence of strong executive leadership and direct regulation, these lapses in supply chains and internal communication systems will endure and will present enormous risks to national security.

While the White House’s 2018 National Cyber Strategy is progress, as the “first fully articulated cyber strategy for the United States since 2003,” implementation and execution require continued leadership. The plan acknowledges omnipresent cyber threats from rogue states and near-peer rivals and calls for offensive strength through a “Cyber Deterrence Initiative,” intended “to create the structures of deterrence that will demonstrate to adversaries that the cost of their engaging in operations against us is higher than they want to bear,” in the words of National Security Advisor John Bolton. Digitally literate and effective government leadership is required to execute the plan’s four pillars: enhancing resilience, developing a vibrant digital economy and workforce, aggressively pursuing cyber threats, and advancing America’s interests in cyberspace.

Army leaders must take ownership of cybersecurity within the Multi-Domain Operations concept, including education and strategy development, because expanding military networks increase the probability of attack. Studying Stuxnet, and understanding the digital terrain hackers exploit to attack key infrastructure, helps military leaders safeguard against these threats.

Despite the tendency to assume this challenge is solely the responsibility of cyber experts, educating the force is essential to following Vice Adm. Michael Gilday’s advice to “fully integrate cyberspace into battle plans, ensuring timing and tempo are set by the commanders for use of cyberspace effects in the field based on their operational scheme of maneuver.”

Leaders must understand kill chains within the comprehensive multi-domain operational environment so they can leverage these tools to achieve effects like degrading enemy capabilities or setting conditions for maneuver operations. Failure to understand these tools, and their vulnerabilities, will allow technologically savvy rivals to outmaneuver the US military and expose US forces to enormous risks.

Capt. James Long is an Army infantry officer, MD5 Innovation Fellow, and experienced tactical innovator. He currently serves as an operations officer with United Nations Command Security Battalion–Joint Security Area. The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
