
2 September 2014

Manipulating the Web Surfer’s Opinion

26/6/2014

After Stuxnet, DDoS attacks, viruses and ransomware, the time of social malware has arrived. The objective: manipulating public opinion in order to accomplish political and economic goals.

One of the questions being asked about cyber attack tools is: where are these tools heading? Mass-distribution viruses have evolved into Advanced Persistent Threats (APTs), and attack tools originally developed for sabotage have crossed from the virtual dimension into the physical one with the exposure of Stuxnet. Now a new trend is signaling yet another direction: a shift from direct attacks on web surfers to attacks intended to persuade. In other words, attack tools intended to achieve social change by manipulating public opinion, perceptions, concepts and emotions.

From Direct Attacks to Persuasion

Why steal or blackmail if you can cause someone to do something legitimately? Well, this question has driven organized crime, espionage agencies and activist groups to seek more elegant solutions than software used to attack the victim directly. One should bear in mind that today, all Internet defensive mechanisms are based on the assumption that the attacker will actually attack.

There are no mechanisms capable of coping with legitimate activities intended to motivate audiences or individuals to perform specific acts. The fact that the world of information security has shifted from rule-based defenses to defenses based on machine learning (identifying irregular activities and anomalies) has not solved the problem. The solutions are built around a philosophy of searching for attacks through various vectors, not around tools or methods capable of detecting attempts to influence users psychologically or cognitively.

So what if the attacker does not actually attack and still gets what he wants? In such cases, the world of information security offers no solutions.

Let us assume that a criminal organization wants to prevent web surfers from accessing a certain website, the effect normally achieved by a Distributed Denial of Service (DDoS) attack. At present, that organization has to maintain botnets of hundreds or thousands of compromised machines it can control remotely and activate according to its needs, or employ amplification attacks that abuse protocols such as NTP. Whichever method it chooses, it still has to attack a user or a system directly in order to accomplish its objective, and a direct attack may trigger defensive mechanisms.

What happens, in contrast, if the same organization stages an indirect attack by legitimately manipulating a specific audience? For example, it may lead a large audience to believe that at a certain time on a certain date a special sale will be held on the website it wishes to shut down.

An actual incident that demonstrates this concept occurred last December, when the ticketing system of Delta Air Lines began selling New York-Los Angeles return tickets for US$ 47.00. A single tweet would have been enough to bring the website down under the load of eager buyers. In another incident, attackers convinced a large audience that the White House was under attack and that President Obama had been injured.

That incident took place in April 2013, when the Twitter account of the Associated Press was hijacked; the false report briefly wiped some US$ 136 billion off the value of the US stock market. Once again, the attack worked by changing web surfers’ perception of a specific event, causing them to act in a certain way in the real world.

But these are the simpler cases. What would happen if someone wanted to topple a sitting government in a certain country and install a new one? The events of the Arab Spring are the most recent example of how social media can move the masses in the physical dimension without a single server being attacked.

Admittedly, there is no proof (beyond, perhaps, some conspiracy theories) that a specific guiding party stood behind the events of the Arab Spring, but recent evidence concerning several similar cases shows that the US government (allegedly through the CIA) has stood behind comparable projects. In early April of this year, the Associated Press unveiled a covert US program that came to be known as “Cuban Twitter”. The project ran for two years on US government funding and was intended to accomplish two objectives: draw thousands of young Cubans into the project’s network and lead them to resist the Communist regime openly.

In another case, the US State Department attempted to influence the mood on the street in Afghanistan through an SMS-based social network known as Paywast. This US-funded cellular social network for Afghans began operating in January 2011.

The formal objective of the network was to provide a convenient environment that would lead to positive social and economic change for the local population. The US investment in the project is estimated at US$ 5 million. In addition to Cuba and Afghanistan, The New York Times revealed in April of this year the establishment of an independent communication network in Tunisia. That network runs independently of the Tunisian communication infrastructure, so it cannot be monitored by the government. The objective in this case is to enable dissidents to communicate freely among themselves and to access information the government would otherwise keep out of reach.

The project is based on a network of rooftop-mounted antennas managed by dedicated software (a mesh network). In this case, too, the US$ 2.8 million required to erect the network came from the US State Department.

Such concepts for exploiting the Internet, and social media in particular, to covertly disseminate points of view friendly to Western interests, along with false or damaging information about the targets of the desired change, appear repeatedly in the material made public by Edward Snowden.

Documents prepared by the US NSA and its British counterpart, GCHQ, previously made public by The Intercept and by NBC, detail these plans, which include a specialist unit dedicated (in part) to “undermining the legitimacy” of the agencies’ adversaries by spreading false information over the Internet.

An article published in 2003 at Dartmouth College defines this type of attack as “cognitive hacking”: a category of attacks on computer and information systems that works by changing the perceptions of human users and steering their behavior in order to accomplish specific objectives.

The article cites several real cases, including a false report, designed to look like a CNN news page, about the alleged death of singer Britney Spears, and the manipulation of Emulex’s stock price through a false press release in 2000.

Supporting Technologies

To implement the manipulative tactics outlined above, some supporting technologies are required. Whether the attacker is an espionage agency, an organized crime group or an activist organization, the actual manipulation of the target audience is carried out using stimulus-response methods until the required change of perception is achieved. In other words, technologies are needed both to stimulate the users (for example, by planting a false report on a news website) and to collect their responses (for example, web surfers’ Twitter posts reacting to that false report).

If the stimulus has not triggered the intended response, whether emotional (anger) or practical (visiting a specific website or physically showing up at a certain location), it is refined and adjusted until the desired result is achieved. Throughout the offensive campaign, the target audience must not become aware that its opinion is being manipulated.
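Conceptually, this is a closed feedback loop: publish a stimulus, measure the reaction, refine, repeat. The sketch below is a minimal, hypothetical illustration of that loop in Python; every function in it is a stub that merely simulates an audience, since the point is the structure of the loop rather than any real platform API.

```python
import random

# All functions below are hypothetical stubs that simulate an audience;
# they stand in for the stimulus and measurement steps described above.

def publish_stimulus(message: str) -> None:
    """Plant the stimulus, e.g. a fabricated news item or a status update."""
    print(f"[stimulus] {message}")

def collect_responses(round_no: int, sample_size: int = 500) -> list[float]:
    """Return simulated per-response engagement scores in [0, 1]; the bias
    grows each round to mimic an audience warming up to a refined message."""
    return [min(1.0, random.random() * 0.6 + 0.1 * round_no)
            for _ in range(sample_size)]

def refine(message: str, round_no: int) -> str:
    """Sharpen the wording or targeting of the stimulus for the next round."""
    return f"{message} (refined in round {round_no})"

def run_campaign(message: str, target: float = 0.7, max_rounds: int = 10) -> None:
    """Loop: publish, measure, refine, until mean engagement crosses the target."""
    for round_no in range(1, max_rounds + 1):
        publish_stimulus(message)
        responses = collect_responses(round_no)
        engagement = sum(responses) / len(responses)
        print(f"round {round_no}: mean engagement = {engagement:.2f}")
        if engagement >= target:
            print("desired audience reaction reached")
            return
        message = refine(message, round_no)
    print("campaign ended without reaching the target")

run_campaign("Huge one-hour sale tonight at 21:00 on the target site")
```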

Among the technologies that support this attack category are big data, sentiment analysis, the semantic web and social media management systems; smart software-based tools for the automatic generation of content should be noted as well.

Big data technology enables the storage of massive amounts of data. A large audience active on social media produces a tremendous volume of material, including text, video footage and audio segments, and collecting and storing this material is the first stage, one that is critical to the success of the attack.

Once the raw data have been collected, they must be analyzed in order to predict the audience's emotional response to the stimulus. Whether the initiator wishes to topple a government or to drive the masses to a special sale at a specific store or chain, the audience must be understood. This is where sentiment analysis methods come into the picture, along with semantic web tools that can automatically extract the meaning and significance of texts without a human in the loop.
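At its simplest, such analysis can be approximated by scoring collected posts against a sentiment lexicon. The sketch below is a rough, hypothetical illustration of that idea; the lexicon and sample posts are invented, and real systems rely on trained models rather than word lists.

```python
import re

# Invented lexicon and posts, for illustration only.
POSITIVE = {"great", "love", "cheap", "amazing", "deal"}
NEGATIVE = {"angry", "scam", "broken", "hate", "outrage"}

def sentiment(text: str) -> int:
    """Crude score: +1 for each positive word, -1 for each negative word."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

posts = [
    "Amazing deal on flights tonight, I love it",
    "This looks like a scam, people are angry",
    "Cheap tickets? Great, sharing with everyone",
]

scores = [sentiment(p) for p in posts]
print("per-post scores:", scores)
print("mean audience sentiment:", sum(scores) / len(scores))
```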

In this way, the attacker can float a “trial balloon” in the form of a news story, an opinion article or a Facebook status message and see how it affects the audience receiving it. Needless to say, none of the existing information security mechanisms can cope with this type of attack.

In addition to the tools for storing, analyzing and understanding the data, there are tools that help the attacker improve the efficiency of the attack by focusing on social leaders. The concept is fairly simple: in any given society there are individual opinion leaders who set an example for everyone else. This can happen within the household (the person who dictates fashion trends, for example), within a close social circle (the friend who always decides the route of the trip), or in wider circles such as the workplace, a student group at a university, a social protest movement, a political party, and even at the state or international level.

To spot the opinion leaders within the social realm to be manipulated, the attacker can employ social media management systems.
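Under the hood, spotting opinion leaders is essentially a network-centrality computation over the audience's interaction graph. The sketch below is a simplified, hypothetical illustration of that idea, not a description of how any of the commercial products mentioned here works: it ranks invented accounts by PageRank over a “who retweets whom” graph using the networkx library.

```python
# Requires the networkx library (pip install networkx); the accounts and
# interactions below are invented for illustration.
import networkx as nx

# A directed edge u -> v means "u retweeted or quoted v": attention flows to v.
interactions = [
    ("alice", "dana"), ("bob", "dana"), ("carol", "dana"),
    ("dana", "eve"), ("alice", "eve"), ("frank", "alice"),
    ("bob", "alice"), ("carol", "bob"),
]

graph = nx.DiGraph(interactions)
scores = nx.pagerank(graph)  # higher score = more attention flowing in

# The top-ranked accounts are the candidate opinion leaders.
for account, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:3]:
    print(f"{account}: {score:.3f}")
```

In practice such systems would fold in richer signals, such as follower counts, reach, engagement rates and keyword filters, but the ranking principle is the same.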

One example is Tracx (whose founders hail from Israel), a state-of-the-art system for managing activity on social media and social networks that lets its users improve the efficiency of their marketing efforts and consolidate their reputation within a given medium. One of its capabilities is spotting opinion leaders according to keywords (areas of interest) within a predetermined social realm, filtered by characteristics such as country, gender and age range. Other systems in the same category include Salesforce Marketing Cloud, Sprinklr, Spredfast and Visible Technologies.

Another technology worth noting in this context is software capable of automatically generating high-quality content. A study conducted recently at Karlstad University in Sweden examined how human web surfers respond to software-generated content.

The results showed that there was hardly any difference in how people responded to an article, whether it had been written by a professional journalist or generated by software. Narrative Science is one of the companies already active in this field, alongside Quakebot, an algorithm developed by a journalist at the Los Angeles Times.
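Much of this kind of automated reporting boils down to filling a template from structured data. The sketch below is a generic, simplified illustration in that spirit; it is not Quakebot's or Narrative Science's actual code, and the earthquake record it formats is invented.

```python
from datetime import datetime

# Template-filling approach to automated news writing; the template wording
# and the event record are invented for illustration.
TEMPLATE = (
    "A magnitude {magnitude} earthquake was reported {distance_km} km from "
    "{place} at {time}, according to preliminary data. No reports of damage "
    "have been received so far."
)

def write_report(event: dict) -> str:
    """Fill the template from a structured event record."""
    return TEMPLATE.format(
        magnitude=event["magnitude"],
        distance_km=event["distance_km"],
        place=event["place"],
        time=event["time"].strftime("%H:%M"),
    )

event = {
    "magnitude": 4.7,
    "distance_km": 12,
    "place": "Westwood, California",
    "time": datetime(2014, 3, 17, 6, 25),
}
print(write_report(event))
```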

Playing with People’s Feelings

Although only very few attacks of this type have been spotted to date, and those were mostly attributed to state espionage agencies, the phenomenon will undoubtedly spread, with momentum provided by criminal organizations and activists. The mere thought of an activist organization covertly inciting one state against another, simply to achieve the ideological change it desires by manipulating public opinion within a specific social realm, should appall the international community.

What would happen, for example, if such an organization wanted to stir up a third Intifada in Israel, under the auspices of Egypt and Jordan? All it would need is a “critical mass” of people, in those countries as well as among the Arab inhabitants of Israel, who believe that Israel is destroying the al-Aqsa Mosque. One should bear in mind that this is an emotional manipulation of public opinion, regardless of the facts on the ground.

What would happen if, in a different scenario, an ideological organization managed to incite India and Pakistan against one another by convincing the masses in both countries that the other side intends to seize the Kashmir region? That scenario carries the possibility of a nuclear war breaking out. With the tools outlined above, it is a plausible option, and no effective countermeasures are currently available.

In yet another scenario, a political candidate in a certain country could enlist a criminal organization to help him seize power by winning the next elections. Here, too, the tools outlined above can be used to delegitimize the other candidates in the voters’ minds, thereby effecting a political change in that country.

In this case, the voters would not even know that they voted for that particular candidate because of a cyber attack designed to manipulate public opinion.

These are only a few possible examples. The range of possible scenarios covers almost any social or economic change the attacker may fancy, and he will be able to accomplish his goals without having to worry about the information security measures currently available.

These measures will go on searching for suspect IP addresses, for behavioral changes in identity verification mechanisms, for irregular traffic patterns and for other attack vectors that are simply irrelevant to social and psychological attacks.

Another factor likely to accelerate the development of such attacks is the transition from manual management of campaigns to automated malware. Unlike state espionage agencies, whose attack objectives are supervised by their governments, for criminal organizations it is purely a matter of profit and loss. In other words, a criminal organization may want to build the capabilities outlined above into malware that “lives” in the virtual world and is able to manipulate human users on its own.

In this way, each offensive campaign will generate greater profits than it would if it were managed manually.

In conclusion, the question is not “if” but “when”. The cases that did become public knowledge, such as Cuba, Tunisia and Afghanistan, are just the tip of the iceberg. It is reasonable to assume that such campaigns are already being conducted covertly by states seeking geopolitical objectives and by international corporations seeking favorable public opinion for their products. The involvement of criminal organizations and activist groups such as Anonymous will only accelerate the use of these attack methods.

After all, this is the ideal weapon: it is employed legitimately, it does not trip the existing defenses, its effect takes longer to achieve but cuts deeper, and with the ability to generate content automatically it is also cheap to operate.
