
3 April 2019

The Impact of Cyber Security Theory in the World

By Sajad Abedi

The correct control of cyber security often depends on decisions under uncertainty. Using quantified information about risk, one may hope to achieve more precise control by making better decisions.

Information technology (IT) is critical and valuable to our society. IT systems support business processes by storing, processing, and communicating critical and sensitive business data. In addition, IT systems are often used to control and monitor physical industrial processes. For example, our electrical power supply, water supply, and railroads are controlled by IT systems. These "controlling" systems have many names. In these notes they are referred to as SCADA (Supervisory Control and Data Acquisition) systems, or occasionally as industrial control systems. They are complex real-time systems that include components like databases, application servers, web interfaces, human-machine interfaces, dedicated communication equipment, process control logic, and numerous sensors and actuators that measure and control the state of the industrial process. In many industrial processes (e.g., electrical power transmission) these components are also distributed over a large geographical area. SCADA systems can be seen as the nervous system of industrial processes, and since our society is heavily dependent on the industrial processes that SCADA systems manage, we are also dependent on the behavior of our SCADA systems.

Over the last two decades our SCADA systems and their environments have changed. They used to be built on proprietary and specialized protocols and platforms. Today, however, SCADA systems operate on top of common and widely used operating systems (e.g., Windows XP) and use protocols that are standardized and publicly available. These changes have altered the threat environment for SCADA systems.


The move to more well-known and open solutions lowers the threshold for attackers who seek to exploit vulnerabilities in these SCADA systems. Vulnerabilities are regularly found in the software components used in SCADA systems (e.g., the operating systems), and instructions for exploiting these vulnerabilities are often made available in the public domain. The increased openness also lowers the threshold for attacks targeting special-purpose SCADA components such as programmable logic controllers (PLCs). Today there is active interest in their vulnerabilities, and information about their design and internal components is available in the public domain. In fact, it is even possible to buy a subscription to exploit code specifically targeting components of SCADA systems. In other words, a successful cyber attack against a SCADA system today does not require the SCADA expertise that was needed before the move to more open, standardized, and common components.

In parallel with the move to more common and widely known solutions, SCADA systems have moved from being isolated and standalone to being interwoven in the larger IT environment of enterprises. Process data collected by SCADA systems, production plans, and facility drawings are often exchanged over enterprises' computer networks. It is also common to allow users to connect remotely to operator interfaces, for instance so that process operators can connect when they are on standby duty and so that suppliers can perform maintenance remotely.

The increased integration with more administrative enterprise systems has also contributed to a changed threat environment. Administrative systems are, with few exceptions, connected (directly or indirectly) to the internet. Hence, any channel that lets administrative systems exchange data with SCADA systems is also a channel through which attackers or malware can reach these systems and exploit their vulnerabilities, without physical proximity.

The lowered threshold for finding and using SCADA-related vulnerabilities and the tighter integration with enterprise systems are two cyber security problems that add to the volume of issues related to the architecture and configuration of the SCADA systems themselves. Historically, SCADA systems were built to be reliable and available, not to be secure against attacks with malicious intent.

SCADA systems are thus critical assets, have exploitable vulnerabilities, and are interwoven into enterprise architectures. Decision makers who wish to manage their cyber security need to be able to assess the vulnerabilities associated with different solution architectures. However, assessing the cyber security of an enterprise environment is difficult. The budget allocated for cyber security assessments is usually limited, which prohibits assessments from covering and investigating all factors that could be of importance. The set of variables that should be investigated, and how important they are, is also hazy and partly unknown. For instance, generic guidelines typically do not prioritize their cyber security recommendations. Such prioritizations are also difficult to make in a generic guideline, since the importance of many variables is contingent on the system's architecture and environment, and guidelines are limited to one or a few typical architectures. Variables are also dependent on each other. An attack against a SCADA system may be performed in a number of ways and can involve a series of steps where different vulnerabilities are exploited. Thus, some combinations of vulnerabilities can make an attack easy, but a slightly different combination may make attacks extremely difficult. Informed decisions therefore require an analysis of the vulnerabilities associated with different architectural scenarios and, at the same time, an analysis of how these vulnerabilities relate to each other, as the sketch below illustrates.
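
To make the dependency between vulnerabilities concrete, one common way to model it is as an attack graph, where each edge is an exploit step with some difficulty and the attacker takes the easiest path to the target. The sketch below is a minimal, purely hypothetical illustration: the node names and difficulty weights are invented for the example and do not describe any real installation.

```python
import heapq

# Toy attack graph: nodes are attacker footholds, edge weights are the
# assumed difficulty of the exploit step connecting them (invented numbers).
attack_graph = {
    "internet":     {"office_pc": 2, "vpn_gateway": 5},
    "office_pc":    {"historian": 3, "vpn_gateway": 4},
    "vpn_gateway":  {"scada_server": 6},
    "historian":    {"scada_server": 1},  # a weak link: an easy pivot
    "scada_server": {"plc": 2},
    "plc":          {},
}

def easiest_attack(graph, start, target):
    """Dijkstra's algorithm: least total difficulty over any attack path."""
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == target:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, step_difficulty in graph[node].items():
            nd = d + step_difficulty
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(queue, (nd, nxt))
    return float("inf")

print(easiest_attack(attack_graph, "internet", "plc"))  # 8, via the historian
```

In this toy model, removing the single historian-to-server vulnerability raises the easiest path from 8 to 13, while hardening the VPN gateway changes nothing; this is the sense in which the importance of a vulnerability is contingent on the architecture around it.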

These problems are not unique to SCADA systems. Many administrative IT systems also have complex environments; administrative IT systems often need to be analyzed on a high level of abstraction; and the importance of different variables is hazy for administrative IT systems as well. Like the administrative environment, the SCADA environment consists of software, hardware, humans, and management processes. And as described above, there is a substantial overlap between the components used in both environments today. However, there is a difference in what needs to be protected in these environments. Security is often thought of as a triad of confidentiality, integrity, and availability. For SCADA systems, integrity and availability of functionality are crucial, but confidentiality of business data is not. Because of this, cyber security assessments of SCADA systems have a different focus than for many other systems. The importance of availability and integrity also has other implications. For instance, because of the consequences of a potential malfunction, it is recommended that SCADA systems not be updated before extensive testing, and network-based vulnerability scanners should be used with care in SCADA environments.

Information security is increasingly seen not only as the fulfillment of confidentiality, integrity, and availability, but as protection against a number of threats achieved by making correct economic trade-offs. A growing body of research into the economics of information security over the last decade aims to understand security problems in terms of economic factors and incentives among the agents making decisions about security, who are typically assumed to maximize their utility. Such analysis treats economic factors as just as important in explaining security problems as the properties inherent in the systems to be protected. It is thus natural to view the control of security as a sequence of decisions that have to be made as new information appears about an uncertain threat environment. Seen in this light, and given that obtaining security information usually comes at a cost, I think that any use of security metrics must be tied to enabling more rational decisions about security. It is in this way that I consider security metrics and decisions in what follows. The basic way to understand any decision-making situation is to consider what kind of information the decision-maker will have available to form the basis of judgments. For people, both the available information and the way in which it is framed (presented) may affect how well decisions serve their goals.
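
To make the economic view concrete, here is a minimal sketch of a risk-neutral trade-off with purely illustrative numbers (the probabilities, loss, and countermeasure cost are invented for the example): the decision-maker simply picks the option with the lowest expected cost.

```python
# Hypothetical figures: a breach costs 1,000,000 and occurs with probability
# 0.05 per year; a countermeasure costs 30,000 per year and cuts the breach
# probability to 0.01.
LOSS = 1_000_000

def expected_cost(p_breach, countermeasure_cost=0.0):
    """Expected yearly cost as seen by a risk-neutral decision-maker."""
    return countermeasure_cost + p_breach * LOSS

options = {
    "do nothing": expected_cost(0.05),          # 50,000
    "mitigate":   expected_cost(0.01, 30_000),  # 40,000
}
print(min(options, key=options.get))  # "mitigate"
```

This is the normative baseline: with correct quantified risk and a risk-neutral utility, the arithmetic alone dictates the choice. The question raised below is what happens when people act on such numbers.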

One of the common requirements on security metrics is that they should be able to guide decisions and actions toward security goals. However, it is an open question how to make a security metric usable, and ensuring that such usage is correct (with respect to achieving goals) comes with challenges. The idea of using quantified risk as a metric for decisions can be split into two steps. First, perform an objective risk analysis, using both an assessment of system vulnerabilities and of available threats, in order to measure security risk. Second, present these results in a usable way so that the decision-maker can make correct and rational decisions. While both steps present considerable challenges to good security metrics, I consider why decisions using quantified security risk as a metric may go wrong in the second step. Lacking information about the security properties of a system clearly limits security decisions, but I fear that introducing metrics does not necessarily improve them; this may be because 1) the information is incorrect or imprecise, or 2) its usage is incorrect. This work takes the second view: I argue that even with perfect risk assessment, it is not obvious that security decisions will always improve. I am thus seeking properties of risky decision problems that predict whether the overall goal, maximizing utility, will or will not be fulfilled. More specifically, I need to find properties in quantifications that may put decision-making at risk of going wrong.

The way to understand where security decisions go wrong is to model how people are predicted to act on perceived rather than actual risk. I thus need both normative and descriptive models of decision-making under risk. For the normative side, I use the well-established economic principle of maximizing expected utility. For the descriptive side, I note that faults in risky decisions not only occur in various situations, but have remarkably been shown to occur systematically, as described by models from behavioral economics.
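
A standard descriptive model of this systematic deviation is the probability weighting function from Tversky and Kahneman's cumulative prospect theory (1992). The sketch below uses their functional form with the commonly cited parameter estimate of about 0.61 (an assumption for illustration): small probabilities are overweighted and moderate-to-large ones underweighted, so perceived risk drifts away from the stated, quantified risk.

```python
def perceived_weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in (0.01, 0.05, 0.50, 0.95, 0.99):
    print(f"stated probability {p:4.2f} -> perceived weight {perceived_weight(p):.3f}")
# A stated 1% risk is felt as roughly 5.5%; a stated 99% risk as roughly 91%.
```

Applied to the expected-cost example above, a rare but severe incident looms larger than its expected value on both sides of the comparison; this is one mechanism by which a decision based on perceived rather than stated risk can drift away from the utility-maximizing choice.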

I have considered the case where quantified risk is used by people making security decisions. An exploration of the parameter space in two simple problems showed that results from behavioral economics may have an impact on the usability of quantitative risk methods. The visualized results do not lend themselves to easy and intuitive explanations, but I view them as a first systematic step towards understanding security problems with quantitative information.

There have been many proposals to quantify risk for information security, mostly in order to enable better security decisions. But blind belief in quantification itself seems unwise, even if it is done correctly. Behavioral economics shows systematic deviations in how people weight probabilities when acting on explicit risk. This is likely to threaten security and its goals, as security is increasingly seen as the management of economic trade-offs. I think these findings can be used, at least partially, to predict or understand wrong security decisions that depend on risk information. Furthermore, this motivates studying how strategic agents may manipulate, or attack, the perception of a risky decision.

Even though any descriptive model of human decision-making is approximate at best, I still believe this work gives a well-articulated argument regarding the threats of using explicit risk as a security metric. My approach may also be understood in terms of standard system specification and threat models: economic rationality is in this case the specification, and the threat stems from biases in how risk information is perceived. I also studied reframing as a way of correcting the problem in two simple security decision scenarios, but obtained only partial predictive support for fixing problems this way. Furthermore, I have not found such numerical examinations in behavioral economics to date.

Further work on this topic needs to empirically confirm or reject these predictions and study the degree to which they occur in a security context (even though previous work makes the hypothesis plausible at least to some degree). Furthermore, I think similar issues may also arise with several other forms of quantified information for security decisions.

These questions may also be extended to consider several self-interested parties in game-theoretical situations. Another topic is the use of different utility functions, and when it may be normative to be economically risk-averse rather than risk-neutral, as sketched below. With respect to the problems outlined, rational decision-making is a natural way to understand and motivate the control of security and the requirements on security metrics. But selecting the format of information is also partly a usability problem. Usability faults often turn into security problems, which is likely for quantified risk as well. In the end, the challenge is to provide users with usable security information and, more broadly, to investigate what kind of support decisions require. This is clearly a topic for further research, since introducing quantified risk is not without problems. Using knowledge from economics and psychology seems necessary for understanding the correct control of security.
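
As a closing illustration of that last point, here is a minimal sketch of what risk-averse, rather than risk-neutral, decision-making means, using a constant-absolute-risk-aversion (CARA) utility with an invented coefficient and simplified hypothetical figures; mitigation is treated as a certain cost for the sake of the example.

```python
import math

def cara_utility(wealth_change, a=1e-6):
    """Constant absolute risk aversion: u(x) = -exp(-a * x)."""
    return -math.exp(-a * wealth_change)

# A certain cost of 40,000 (guaranteed mitigation) vs. a gamble: 5% chance of
# losing 1,000,000, otherwise nothing. Expected losses are 40,000 vs. 50,000,
# so a risk-neutral agent already mitigates; risk aversion widens that
# preference, since the gamble's rare large loss is penalized extra.
certain = cara_utility(-40_000)
gamble = 0.05 * cara_utility(-1_000_000) + 0.95 * cara_utility(0)
print("mitigate" if certain > gamble else "do nothing")  # "mitigate"
```

Whether such risk aversion should be treated as normative for security decisions is exactly the kind of open question raised above.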
