
A Diagrammatical Framework for Information Systems Attacks

Terry Roebuck
Department of Computer Science, University of Saskatchewan,
57 Campus Drive, Saskatoon SK, S7N 5A9, Canada
[email protected]

Abstract. All information systems can be represented by a conceptual and abstracted systems diagram; therefore, all attacks against information systems can also be represented in this way. By studying the security of systems at this level of abstraction, it is possible to identify significant areas of potential security concern and to provide a framework for developing and implementing holistic defense strategies that system designers and operators might not otherwise consider. The simple taxonomy presented is easily understood and can be used to explain security concepts.

Introduction

A comprehensive approach to system security requires a holistic, systemic view of information technology attacks and defenses. While attackers may normally favor certain elements of a system over others, effective information system defense should address all of the potential points of attack on the target system. Furthermore, the active cooperation of the user community is often required to secure systems from physical attack, and one ongoing challenge is maintaining user focus and awareness on environmental system security concerns.

Systemic Representation

Since all information systems can be represented by a generic systems diagram, all types of attacks against information systems can be represented in this way as well. Consider the abstracted system model in Figure 1, which demonstrates the scope and inter-relationship of the various components of an information system delimited within a system environment.

[Diagram: input flows into a process that produces output; feedback and control loops close the cycle, all within the system environment.]

Figure 1: Abstracted System Model


Notice that the system exists within an environment of undetermined size; understanding the size and scope of that environment is an important part of understanding the system. The boundaries of the system (shown as the edge of a process or a flow) are the places where the system interacts with the environment. Since the environment touches the system at all points, a change in the environment has the potential to affect the system at all points. Of course, not every environmental change will cause a change in the system: some changes will affect it, others will not, and still others will affect it only if the right combination of other events is taking place.

Systemic Attack Representation

Consider how an unauthorized change to such a generic system might be effected.

[Diagram: the abstracted system model of Figure 1, with the main process marked as modified.]
Figure 2: Process Attack

Most obviously, an attacker could modify the main system process by directly changing the way that data is processed (see Figure 2). A typical example of such an event is the alteration of program code by an insider. Another is forcing an overflow of some internal component to cause a race condition or to exploit a code weakness. However, making unauthorized and covert changes to the main process is usually complicated and requires considerable technical sophistication. Further, as the diagram shows, such a change requires not only deep system knowledge but also direct system access at an internal level (in other words, the attacker must already have bypassed any security controls in the system environment). In most cases, the nature and design of the main system process should act to prevent such an attack. Even with access to the central process, it is usually quite difficult to make modifications that have only small effects (such as changing a single selected record), since the attacker would need even deeper and more specific knowledge of the process and the input stream. Such attacks usually require insider knowledge of the computer process and considerable advance planning [1]. Lacking such knowledge, it is more reasonable to expect that covert changes to the main process would simply cause the system to fail, or else create wholesale damage, which is easily detected (though perhaps not easily repaired).

[Diagram: the abstracted system model, with the control component marked as modified.]
Figure 3: Control Attack

In Figure 3, the main system process is neither affected nor changed; the attack is directed instead at the business rules that control the main process. In general, this approach makes it easier for an attacker to effect smaller and more discreet changes. For example, Ghosh describes a control penetration of an e-commerce server in which CGI scripts are used to implement flawed logic in an online investment application, and points out that "one of the key problems … is the poor input sanity checking; that developers fail to impose limits on acceptable input" [2]. The business rules that control calculations often need to change at different points in the system life cycle, so the control process may well be designed to be flexible and to support and control change. It is likely easier to make an illegitimate change in an area designed to accept and process legitimate change than in an area designed to process in a fixed, structured fashion. Since the control process is designed to accept change, there are also accepted roles and procedures defined for the operators and other users who make changes, and these 'change agents' will themselves have a process for effecting an approved change. This means there are many more avenues of entry for attackers into the control process than into the main (working) process itself. Still, the designers of the control process will, one hopes, have considered these elements and designed checks and balances into the process to ensure that only 'approved' changes are made, in a timely and appropriate manner. Indeed, Cohen points out that "programmers are typically watched more closely by management and administrators than others. It is easier to trace an intentional program alteration to a programmer than to trace the exploitation of a design fault in the overall system to a data entry clerk or executive" [3]. For such an attack to succeed, the attacker must circumvent a series of controls designed to stop exactly what they are attempting to do: make unauthorized changes.
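Ghosh's point about "poor input sanity checking" can be made concrete with a small sketch. The field names, limits, and order scenario below are illustrative assumptions of mine, not from the paper; the point is simply that every value crossing into the control or main process is checked against explicit limits before it is accepted.

```python
# Illustrative sketch of input sanity checking: every field entering the
# process boundary is validated against explicit limits before acceptance.
# Field names and ranges are hypothetical examples, not a real application.

def validate_order(fields: dict) -> dict:
    """Accept an order only if every field is within explicit limits."""
    errors = []

    qty = fields.get("quantity")
    if not isinstance(qty, int) or not (1 <= qty <= 1000):
        errors.append("quantity must be an integer between 1 and 1000")

    price = fields.get("unit_price")
    if not isinstance(price, (int, float)) or not (0.01 <= price <= 100000):
        errors.append("unit_price out of accepted range")

    account = fields.get("account", "")
    if not (isinstance(account, str) and account.isalnum() and len(account) <= 16):
        errors.append("account must be a short alphanumeric token")

    if errors:
        raise ValueError("; ".join(errors))
    return fields

# A negative quantity -- the kind of flawed-logic input Ghosh describes in
# the online investment example -- is rejected at the boundary:
try:
    validate_order({"quantity": -50, "unit_price": 10.0, "account": "A1"})
except ValueError as e:
    print("rejected:", e)
```

The design point is that the limits live at the boundary, not inside the business rules, so a control-level logic flaw cannot be reached by out-of-range input in the first place.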

[Diagram: the abstracted system model, with a change induced in the feedback channel producing a normal response to the induced change.]
Figure 4: Inter-process Communication Attack

In the view of a target system illustrated in Figure 4, the attacker concentrates on inter-process communications, either to insert false data into the control stream or to replace the existing control process with one of his or her own (a 'person-in-the-middle' approach). This attack is much more subtle. The system designers may not have considered that their communication could be interfered with, and may therefore have inadvertently designed a trust relationship between the two communicating entities. Since the system design would usually not provide natural access to inter-process communication, security controls on a process, or on the system environment, must already have been bypassed. A typical example of such an attack is the bypassing of Kerberos security by taking control of the network time communication between a process and a network time device [4]. A simpler example is a replay attack against a system [7], where access is granted when the feedback control of some authentication process receives crafted feedback captured from a previous valid access. The physical communication media carrying such signals can be attacked as well, and there may be little security or control over how the wires, cables, or wireless links are physically connected.
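The replay attack above, and the standard defense against it, can be sketched in a few lines. The protocol, key, and names below are illustrative assumptions, not from any cited system: a replay succeeds when the same captured response is accepted twice, so the verifier issues a fresh random challenge for every attempt, which makes a captured response worthless.

```python
import hashlib
import hmac
import secrets

SHARED_KEY = b"demo-shared-secret"   # hypothetical pre-shared key

def respond(challenge: bytes) -> bytes:
    """Client side: prove knowledge of the key for THIS challenge only."""
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

class Authenticator:
    """Server side: a fresh random challenge is issued for every attempt,
    so a response captured from an earlier exchange cannot be replayed."""

    def issue_challenge(self) -> bytes:
        self._challenge = secrets.token_bytes(16)
        return self._challenge

    def verify(self, response: bytes) -> bool:
        expected = hmac.new(SHARED_KEY, self._challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

auth = Authenticator()

# Legitimate exchange: fresh challenge, matching response -> accepted.
c1 = auth.issue_challenge()
r1 = respond(c1)
print(auth.verify(r1))   # True

# Replay: the attacker resends the captured r1, but the server has issued
# a new random challenge, so the old response no longer matches.
auth.issue_challenge()
print(auth.verify(r1))   # False (except with negligible probability)
```

Note that this sketch addresses only replay; a full person-in-the-middle on the channel would additionally require integrity protection of the channel itself.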

[Diagram: the abstracted system model, with falsified or appended forms or keystrokes on the input flow and falsified or appended forms or screens on the output flow.]
Figure 5: Input or Output Communication Attack

In the attack shown in Figure 5, the target is the input and output flows of the system. This style of system abuse (sometimes referred to as 'data diddling') is much more common, and can be seen in a variety of examples. One example is identity theft via the process involved in an application for a credit card: the attacker submits a false dataset that uses a correct name and banking information but a deliberately incorrect address. The system works as designed and, given appropriate clearances (and a reasonable credit history), issues a credit card, which is mailed to the incorrect address. The deliberate insertion of false or misleading data into the input flow should be a major concern not only to operators but also to designers of systems.

It is also possible to attack systems using output (versus input) data diddling. By simply routing the desired information to the right printer, in the appropriate format, it may be possible to produce what is readily accepted as valid system output. The same effect can be obtained with other output media as well, such as faked email, spoofed websites, news or mailing list postings, or IRC communication. One simple example of output data diddling is spam e-mail. Virus and Trojan programs such as W32/BabyBear-A [5] and other variations of BugBear use output diddling to convince users to click on an attachment. System designers have developed a variety of input and output defense controls (such as digital signatures, sophisticated check digits, and hashes), but as the number of potential examples demonstrates, there are many instances where input or output data diddling can lead to a successful system breach, and this remains a significant attack point.
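The hash and signature defenses mentioned above can be illustrated with a minimal keyed-hash (HMAC) tag on system output. The key and record format here are illustrative assumptions of mine: the producing system appends a tag derived from a shared key, and any consumer holding the key can detect output diddling in transit.

```python
import hashlib
import hmac

KEY = b"output-integrity-key"   # hypothetical key shared by producer/consumer

def tag(record: bytes) -> bytes:
    """Keyed hash (HMAC-SHA256) over the output record."""
    return hmac.new(KEY, record, hashlib.sha256).hexdigest().encode()

def emit(record: bytes) -> bytes:
    """Producer: ship the record together with its integrity tag."""
    return record + b"|" + tag(record)

def accept(message: bytes) -> bytes:
    """Consumer: reject any record whose tag does not verify."""
    record, _, received = message.rpartition(b"|")
    if not hmac.compare_digest(received, tag(record)):
        raise ValueError("output integrity check failed")
    return record

msg = emit(b"PAY;to=ACME;amount=100")
assert accept(msg) == b"PAY;to=ACME;amount=100"

# Output diddled in transit is detected, because the attacker cannot
# recompute the tag without the key:
forged = msg.replace(b"amount=100", b"amount=999")
try:
    accept(forged)
except ValueError as e:
    print(e)
```

This protects integrity, not secrecy: an attacker can still read the output, but cannot alter or fabricate it undetected without the key.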

[Diagram: the abstracted system model, with the system environment itself marked as modified.]
Figure 6: Environment Attack

In Figure 6, the attacker ignores the system itself and concentrates instead on the system environment. Many of the attacks that can cause significant problems are not directly aimed at the system at all and are therefore usually not well considered at design time. Indeed, since the operating environment of many systems is often considered out of the scope of control of the system designer, significant and erroneous assumptions about the environment may have been made. Consider, for example, an attacker who wants access to a database located on a password-protected, standalone computer in a private and locked office. To achieve that goal, the attacker not only needs physical access to the system but also needs to bypass the system security enabled by design (for example, a password and username that will allow the desired changes).


There are many potential attacks such an attacker might consider. Access might be possible through a technical attack based on a known design flaw in the software, after first exploiting a weakness in the physical security. One approach might be to bypass the physical security of the private office by stealing a key and then attack the computer's authentication process, perhaps through a control process attack or a control process communication attack. This method would probably require a reasonably deep understanding of the system, the physical environment, and any potential flaws, as well as a sophisticated technical attack method, and may well carry significant risk. On the other hand, the attacker could simply set off the fire alarm in the office area during a normal working day. Will the staff using the database stay working, or will they get up and leave? Will they shut down their current work, or will the attacker be able to walk into the office and find a device logged in and ready for use? A fairly simple environmental disruption may thus well be successful. Obviously, this is an easy attack to thwart: staff merely need to be told (and of course must comply with) a directive to log out when they leave their desks. Defenders could also install a password-based screen saver that locks an unattended system after a few minutes of inactivity, or even bind processing capability to the building alarm system in more sensitive environments. The attack is simple to defeat, as long as one has considered the potential implications for system security of the directed environmental change of a fire alarm. Clearly, attacks aimed at a system environment, unlike attacks at other possible points, can be very simple to initiate, inexpensive to conduct, fairly low risk, and still very effective.
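The inactivity lock described above amounts to a simple watchdog: the session records the time of the last user action and locks itself once an idle threshold passes. The class, threshold, and injected clock below are illustrative choices of mine, not a description of any real screen-saver implementation.

```python
# Sketch of an inactivity lock: the session tracks the last user action
# and locks once idle longer than a policy threshold. The clock is
# injected as a callable so the timeline can be simulated.

class Session:
    IDLE_LIMIT = 300.0   # seconds of idle time before locking; site policy

    def __init__(self, clock):
        self._clock = clock
        self._last_activity = clock()
        self.locked = False

    def touch(self):
        """Called on every keystroke or mouse event."""
        if not self.locked:
            self._last_activity = self._clock()

    def poll(self):
        """Called periodically; locks the session if idle too long."""
        if self._clock() - self._last_activity >= self.IDLE_LIMIT:
            self.locked = True
        return self.locked

# Simulated timeline: the user's last action is at t=10 (say, when the
# fire alarm rings and they leave); by t=400 the session has locked.
now = [0.0]
s = Session(clock=lambda: now[0])
now[0] = 10.0
s.touch()
now[0] = 400.0
print(s.poll())   # True: 390 s idle exceeds the 300 s limit
```

The essential property is that the control fires without any user cooperation, which is exactly what the fire-alarm scenario exploits in its absence.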
Since the system environment can be considered much broader than the immediate system vicinity, and since the environment usually predates the system and therefore predates the system security and design assumptions, security in the environment may be significantly weaker than security in other parts of the system. Further, since the environment of a system is also very much shaped by the actions of people, appropriate system security needs to fully consider the potential adverse effect of human action and reaction within a system environment. Effective controls in this area seem very difficult to implement and maintain, as reflected in practice by the success of attacks such as phishing and social engineering, or by cases where controls are defeated through the misuse or display of user passwords. Several authors, including Cobb et al. [6], point out that environment security depends on human beings to understand and carry out security procedures, and so information systems security must be integral to the corporate culture. Another implication of the broad scope and reactive capacity of a system environment is that, regardless of where one determines the system boundary to be, any system, since it must interact with an external environment, has the potential to be responsive to remote manipulation. As systems designers come under increasing pressure to build more secure systems through better design and programming practice, it is important to consider that clear environmental controls will also need to be developed and implemented to ensure overall system security.
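The attack classes surveyed above form a small taxonomy in which every attack is classified by the system component it targets. The sketch below makes that explicit; the enum names and the example-to-target mapping are my own illustrative labels, while the five categories themselves follow Figures 2 through 6.

```python
from enum import Enum

# The five target points of the framework, as a small taxonomy.
class Target(Enum):
    PROCESS = "main process (Fig. 2)"
    CONTROL = "control / business rules (Fig. 3)"
    INTERPROCESS = "inter-process communication (Fig. 4)"
    INPUT_OUTPUT = "input or output flows (Fig. 5)"
    ENVIRONMENT = "system environment (Fig. 6)"

# Example attacks from the text, mapped to the component they target.
EXAMPLES = {
    "insider alters program code":          Target.PROCESS,
    "flawed logic via CGI scripts":         Target.CONTROL,
    "person-in-the-middle on control link": Target.INTERPROCESS,
    "replay of captured authentication":    Target.INTERPROCESS,
    "credit-card input data diddling":      Target.INPUT_OUTPUT,
    "spoofed output / spam e-mail":         Target.INPUT_OUTPUT,
    "fire-alarm distraction":               Target.ENVIRONMENT,
}

def classify(attack: str) -> Target:
    """Classify a named attack by the system component it targets."""
    return EXAMPLES[attack]

print(classify("fire-alarm distraction").value)
```

Classifying by target rather than by technique is the point of the framework: new attack techniques keep appearing, but the set of components they can aim at stays fixed.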

[Diagram: the abstracted system model annotated with all attack points: modified process, modified control, changes induced in the feedback channel, falsified or appended input forms or keystrokes, falsified or appended output forms or screens, and a modified environment.]
Figure 7: Possible Attacks Against a System

Putting all of these target points together into one diagram (Figure 7), we can see that there are a limited number of areas at which to direct an attack, and any attack can be classified according to its target. Information systems are subject not only to attacks against the hardware and software itself, but also against the entire system (and its environment). The nature of a system environment suggests that controls in this area must be exceptionally robust, since defeating them provides access to any part of the system boundary. This further suggests that, to be effective, the set of system controls must encompass the entire system: information technology security must be systemic and holistic.

Conclusion and Further Examination

Through this analysis, we classify and categorize potential attacks and security weaknesses according to general system characteristics, providing a framework for classifying attacks and therefore for developing defenses. Using this simple structure, system designers can plan defenses in a systematic way and can provide users with strategies for minimizing possible security weaknesses in the environment in which the system is deployed. The model lends itself to teaching the importance of the role of the environment in security and can be easily understood by non-computer professionals. Most importantly, a systemic approach to security can help designers, developers, and users to forsake the naïve view that technology alone will provide a solution, and to adopt a more sophisticated and holistic view of system security through proper identification of the areas of potential risk. An evaluation of current threat models from the perspective of systemic representation may prove valuable. Presentation of this model for user-awareness purposes, using the metaphor of a home heating system, has been undertaken during user training sessions and has met with anecdotal success, but this has not been further verified.

Acknowledgements

I wish to acknowledge the incentive and assistance provided for this paper by Jim Greer in the Department of Computer Science at the University of Saskatchewan.

References

[1] Cohen, Frederick B. "Protection and Security on the Information Superhighway", 1995-97, Chapter 3, p. 18. Online version at all.net/books/superhighway/disrupt.html
[2] Ghosh, Anup K. "E-Commerce Vulnerabilities", in Computer Security Handbook, 4th Edition, ed. Bosworth and Kabay, Wiley, 2002, Chapter 13, p. 1.
[3] Cohen, Frederick B. "Protection and Security on the Information Superhighway", 1995-97, Chapter 3, p. 18. Online version at all.net/books/superhighway/disrupt.html
[4] Kashin, K., and Tikkanen, A. "Attacks on Kerberos V in a Windows 2000 Environment", 2003. http://www.hut.fi/~autikkan/kerberos/docs/phase1/pdf/LATEST_final_report.pdf
[5] Arkansas Computer Information Center Information Security Briefing, August 2003. http://www.acic.org/law/InfoSec/080403.pdf
[6] Cobb, C., Cobb, S., and Kabay, M.E. "Penetrating Computer Systems and Networks", in Computer Security Handbook, 4th Edition, ed. Bosworth and Kabay, Wiley, 2002, Chapter 8.
[7] Cohen, Frederick B. "Information System Attacks: A Preliminary Classification Scheme", Computers and Security, 16(1):29-46, 1997.