
17 July 2020

WWIII: Cyber conflicts - war by other means

By Chris Edwards

A decade ago, Russian tanks rolled into South Ossetia, a breakaway region in the north of Georgia. Even before they started moving, a ‘third force’ had already begun hacking into government computer networks and local news agencies.

A few weeks before the Russian forces arrived on 8 August 2008, hackers performed a denial-of-service (DoS) attack on the website of Georgian President Mikheil Saakashvili. In early August, Georgian internet traffic wound up being routed through servers in Russia and Turkey where much of it was blocked. As the tanks arrived, hackers stepped up their DoS attacks on local news-agency and government websites.

The Russo-Georgian cyber conflict stayed in the virtual world. The stakes rose with the seizure of parts of eastern Ukraine by Russian-backed forces: this time, hackers went after infrastructure as well. Malware adapted from code originally developed to attack Georgia’s communication networks was used at the end of 2015 in a coordinated attack on Ukraine’s power grid. The attack hit three regional electricity distribution companies, cutting power to a quarter of a million people for up to six hours and wiping out some of the companies’ databases.


Although the move to disable infrastructure seems to indicate a crossover between cyber and traditional ‘kinetic’ warfare, the primary aim of the internet-based attacks was not to disable the country but to demoralise the targeted populations and inconvenience the government. In that respect, the attacks follow in the footsteps of broadcast propaganda and mass-leafleting campaigns from earlier wars rather than indicating a new form of war: the internet-based tactics are simply capable of being wider ranging and cheaper.

The ‘hot’ cyber war scenarios seem foremost in the minds of some military strategists. To help portray its view of how cyber warfare will change its operations, the US Army published a series of graphic novels earlier this year. One, ‘Dark Hammer’, has a storyline that will be familiar to graphic-novel fans: capture the flag. In this case, though, the plot turns on a network worm let loose in an enemy computer centre that controls automated armed drones. Taking out the centre digitally renders the drones useless, letting real-world troops move in.

A second US Army graphic novel, ‘Engineering a Traitor’, invokes a fifth column rather than a third force. It describes the descent of a loyal soldier into terrorism, nudged gradually by artificial intelligence and spies into planting a device in downtown Houston that makes it possible to hit the city with guided missiles. Such precise targeting by external forces may be unnecessary; cyber warfare may, in reality, follow a less obvious path. Why get involved in a shooting war when tactics already in play can persuade an enemy to surrender, or at least to align its policies with those of a hostile neighbour, with barely a shot being fired?

Shen Weiguang, considered by western experts to be one of the Chinese government’s key information-warfare specialists, has taken the view that such warfare is a bloodless counterpart to kinetic conflict. Shen’s information-warfare tactics are about “disrupting the enemy’s cognitive system and its trust system” because “whoever controls information society will have the opportunity to dominate the world”, he wrote almost 20 years ago in his book ‘The Third World War – Total Information Warfare’.

“If a population loses faith in its government or military, the adversary has won,” Shen says. “Because one can win an information war without fighting, it is thousands of times more efficient than armed aggression,” he adds, in an echo of the ancient strategist Sun Tzu’s famous advice to break an enemy’s resistance without fighting.

It was only recently that the West embraced the idea of such a soft war being war at all. In early 2013, Nato chiefs signed off on the ‘Tallinn Manual’: a study on how international law applies to cyber attacks and cyber warfare. That first edition focused on “high risk, low probability” events that looked more like ‘Dark Hammer’ than even the DoS attacks that accompanied the invasion of South Ossetia. A fundamental choice made by the authors was to adopt the doctrine of kinetic equivalence: if nothing was physically damaged, it was not a potential act of war. In this view, disabling a power grid would not be treated as an attack, while a hack that burned out a military server by disabling its thermal shutdown could be.

The second edition of the ‘Tallinn Manual’ took a broader view, although significant grey areas remain. As the launch of ‘Tallinn Manual 2.0’ took place shortly after Donald Trump was sworn in as US President, discussions focused on what some of the authors considered a grey area: the hack of the Democratic National Committee (DNC) computer systems. The 19 international law experts who collaborated on this edition ruled that neither the DNC hack nor North Korea’s hacking of Sony Pictures ahead of the release of the comedy film ‘The Interview’ amounted to an act of cyber warfare.

At the time, lead author Professor Michael Schmitt of the University of Exeter explained that the reason for excluding these types of events from a very early stage was primarily to avoid the situation where single acts that were not life-threatening in themselves became destabilising. “We didn’t want to create the environment where someone would ask ‘was that an act of war?’. We didn’t want people to think that because it’s very, very destabilising,” he said.

Schmitt said the authors were careful to assess whether data could be considered an ‘object’ through the lens of international law. The reason was simple: under the existing laws of armed conflict, attacks on civilian objects are, at least in principle, ruled out. If civilian data counted as an object, deleting or corrupting it in an attack would make escalation hard to avoid.

Although the latest edition of the ‘Tallinn Manual’ on cyber warfare tackles the question of legal retaliation for attacks launched across the internet, a major problem will be identifying who is responsible quickly enough to respond in a useful way. Defence looks to be the only realistic option for countering the wide range of potential attacks that could be staged by a hostile nation.

The Stuxnet worm showed that using software to damage physical hardware remotely, and not just shut it down for a few hours, is entirely plausible. A number of proof-of-concept hacks have shown that all that is needed to take out a pump or generator is to have its control software run the machinery at an unsafe level until bearings or other parts simply fail.
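To see the principle (and only the principle) at work, here is a minimal Python sketch of a toy wear model. The rated speed, commanded speed and wear constants are entirely hypothetical, chosen just to show how driving machinery beyond its limits, once software safety checks are bypassed, collapses its lifetime from weeks to hours:

```python
# Illustrative toy model only: all parameters are hypothetical, not taken from any real attack.
RATED_RPM = 3000        # assumed safe operating speed for the pump
COMMANDED_RPM = 4500    # speed a compromised controller might demand
WEAR_LIMIT = 100.0      # arbitrary wear units at which bearings are assumed to fail

def wear_per_hour(rpm: float) -> float:
    """Toy model: wear accumulates slowly at rated speed, sharply above it."""
    overspeed = max(0.0, rpm - RATED_RPM) / RATED_RPM
    return 0.1 + 25.0 * overspeed ** 2

def hours_to_failure(rpm: float) -> int:
    """Count the hours until accumulated wear reaches the failure threshold."""
    wear, hours = 0.0, 0
    while wear < WEAR_LIMIT:
        wear += wear_per_hour(rpm)
        hours += 1
    return hours

print(f"At {RATED_RPM} rpm: ~{hours_to_failure(RATED_RPM)} hours to failure")
print(f"At {COMMANDED_RPM} rpm: ~{hours_to_failure(COMMANDED_RPM)} hours to failure")
```

In this toy model the pump lasts around a thousand hours at its rated speed but fails within a day when overdriven; real plant behaves less neatly, but the asymmetry is the point.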

In late 2014, hackers struck at a steel mill in Germany, first penetrating its sales software. From there, they moved on to the production systems, altering control software so that operators could not shut down the furnace. The heat ultimately caused serious damage.

As with previous wars, a state fearing these kinds of attack may simply attempt to sever communications with the enemy. On the internet, that means filtering out packets from known hostile sources or simply disabling the router ports that carry them. In practice, such actions are likely to fail.
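What such filtering amounts to can be sketched in a few lines of Python; the blocked ranges below are documentation addresses standing in for ‘known hostile sources’, not real networks:

```python
# Minimal sketch of source-address filtering; the addresses are illustrative placeholders.
from ipaddress import ip_address, ip_network

# Hypothetical blocklist of networks deemed hostile.
BLOCKLIST = [ip_network("198.51.100.0/24"), ip_network("203.0.113.0/24")]

def allow_packet(src: str) -> bool:
    """Drop traffic whose source address falls inside a blocklisted network."""
    addr = ip_address(src)
    return not any(addr in net for net in BLOCKLIST)

print(allow_packet("203.0.113.7"))   # False: dropped as hostile
print(allow_packet("192.0.2.55"))    # True: allowed, even if it was relayed by a proxy
```

Anything relayed through an address outside the list sails straight through, which is one reason such blocking rarely holds up.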

The position in international law itself is hazy. Article 3 of Hague Convention V forbids combatants from using neutral territory to host “wireless telegraphy station[s] or other apparatus” to communicate with belligerent forces. Article 8, however, says that neutral powers need not stop foreign combatants from using their existing communications networks. Using proxies to conduct denial-of-service attacks on infrastructure, or to deliver viruses and Trojans to servers, is already commonplace and unlikely to change in a cyber-enhanced war.

In this environment, there is little alternative but to build much better security into electronic systems and to isolate critical infrastructure from more vulnerable, customer-facing servers.

Similarly, international law gives the nod to espionage, if not to the often illegal tactics used to liberate information from targets. In the view of the manual’s authors, however, directly hacking elections is a more clear-cut case of cyber warfare.

Although far from every state has agreed to adopt Nato’s rules, nation states and their proxies are mostly working in the Tallinn Manual’s grey area: choosing to target data and people’s attitudes rather than physical systems, with only a few rare exceptions such as the Stuxnet attack on the centrifuges Iran used to enrich uranium.

Nato secretary general Jens Stoltenberg said at the Cyber Defence Pledge Conference in May this year: “The very nature of these attacks is a challenge. It is often difficult to know who has attacked you or even if you have been attacked at all. There are many different actors: governments, criminal gangs, terrorist groups and lone individuals. Nowhere is the ‘fog of war’ thicker than it is in cyberspace. Low-cost and high-impact cyber attacks are now a part of our lives. Some seek to damage or destroy. If these were hard attacks, using bombs or missiles instead of computer code, they could be considered an act of war. But instead, some are using software to wage a soft war: a soft war with very real, and potentially deadly, consequences.”

Although attention has mostly focused on the actions of the old Cold War adversaries, low-level information warfare has gathered pace around the world. At a Nato conference in July, Laura Rosenberger, director of the Alliance for Securing Democracy, said: “We see that Russia has been the biggest purveyor of these tactics. But we see actors within other countries who are beginning to understand the power of these operations.”

Rosenberger pointed to the conflict between Qatar and other Middle Eastern states, during which the parties obtained each other’s secret documents, inserted their own propaganda and then sent the results to news reporters. “China appears to have been hacking some opposition leaders in Cambodia,” she added.

Such operations may have a corrosive effect not just on the line between war and peace but also on the distinction between international and civil wars, with far-reaching consequences should they move beyond propaganda to inflicting damage. If the strategies of Shen Weiguang and others work, the attacks could easily come from inside the nation under attack.

However, to win a war without firing a shot, propaganda need only convince members of the public or the military that they have more in common with those their own government calls enemies. In this scenario, a hot war accompanied by attacks designed to cripple civil infrastructure would be a tacit admission that the attacker’s information-operations salvoes had failed.

Spies in the wires

When it comes to attacks on computers, the natural assumption is that hackers will want to deploy viruses and Trojan horses written purely in software. However, the hardware on which those programs run could play an equal part in an attack.

Darpa and other defence-research groups have sponsored work on methods for detecting hardware Trojans. But publicly disclosed real-world examples have proved elusive, with only a handful of candidates exposed. Some of those may not be deliberate Trojans but back doors left in the silicon in the mistaken belief that no-one but the designers would be able to use them.

One example, found in 2012, was a programmable-logic IC used by military contractors. Although exploiting it required physical access to the device, the back door made it easy to extract encryption keys that were thought to be unreadable by anything other than the chip’s own cryptographic controller.

For defence agencies, the hardware Trojan remains a plausible threat partly because of the way commercial ICs are now made. Few manufacturers have their own fabs, preferring to have advanced parts made at foundries, which are mostly based in the Far East.

The design tools and much of the intellectual property that goes into ICs also come from third parties. Criminals and state militaries have plenty of avenues through which they could insert Trojans, although it would take some effort to identify and subvert specific targets.

They would also face the problem of opening the back door once it was in place: the attacker would need to be sure of gaining access remotely. IC-level hardware Trojans may be rare simply because they are more costly and difficult to deploy than conventional software attacks, which can be just as effective – as Stuxnet demonstrated.

A simpler attack that may prove far harder to detect is to weaken the device permanently. In 2013, Georg Becker and colleagues from the University of Massachusetts Amherst found that a tiny change to the low-level layout could render a transistor useless, crippling the logic gate that contains it. If the sabotaged gate were embedded in a pseudorandom number generator, the fault would be very hard to track down.

The team argued that such a fault would cause the device to generate very weak cryptographic keys, which would then be easy to break.
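A short Python sketch makes the danger concrete. It assumes, purely for illustration, a sabotaged generator whose supposedly 128-bit keys can only ever take 65,536 distinct values, letting an attacker enumerate the entire key space in moments:

```python
# Illustration only: a hypothetical sabotaged key generator with far less entropy than advertised.
import hashlib

def sabotaged_key(counter: int) -> bytes:
    """Looks like a 128-bit key, but only 2**16 distinct outputs are possible."""
    return hashlib.sha256(counter.to_bytes(2, "big")).digest()[:16]

victim_key = sabotaged_key(12345)   # a key produced by the weakened device

# The attacker simply tries every possible internal state.
recovered = next(c for c in range(2**16) if sabotaged_key(c) == victim_key)
print(recovered)                    # 12345: the 'secret' key falls out almost instantly
```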

A ‘THREATCASTING’ STUDY
A nudge too far

Last year, the Army Cyber Institute at West Point and researchers from Arizona State University held a workshop to work out the ways in which artificial intelligence (AI) could be used in future wars. The ‘threatcasting’ study was one of the sources for the US Army graphic novel ‘Engineering a Traitor’, which imagines how AI might be deployed to turn soldiers against their own country.

A common theme in the work lay in the combination of surveillance and coercion that the researchers thought AI might soon be able to deliver. In several scenarios, algorithms track people and subtly adjust what they see or hear, pushing them away from friends and colleagues. A series of poor performance reports compiled by machines and artificially induced mishaps might slowly encourage disgruntled employees into acts of sabotage.

The ability of AI to carry out such sophisticated operations remains questionable. Today’s systems are capable of some level of surveillance in that they can recognise different people from their voices or movements. And social-media companies use comparatively simple systems to tune what they show to users. But bringing them together in a way that will steer a victim’s behaviour remains highly speculative.

In the near term, there are greater concerns over the propaganda value of AI. At a Nato conference in the summer, White House senior technology adviser Alvin Salehi pointed to the concept of deepfakes: “The use of deep-learning algorithms to fake someone’s image. And create voices that sound identical to any human being”.

Darpa has called for forensic tools to try to spot their use in the wild, although researchers have warned that, as with anti-malware software, there may be no detection technique that works in the long term. With the ability to synthesise or adjust audio and video, hackers could create highly compromising material to destroy the reputations of politicians, or to promote the idea among citizens that their leaders have turned against them.

Even though western countries may choose to pass legislation to prevent the unethical exploitation of AI and its use in information warfare, Salehi pointed out that, without international agreement, such legislation may be meaningless as hostile states embrace the technology.
