Table of Contents
1. Introduction by Computer & Communications Industry Association (CCIA)
2. Author Listing
3. CyberInsecurity Report
4. Biographies of Authors
Authors of the report:
Daniel Geer, Sc.D – Chief Technical Officer, @Stake
Charles P. Pfleeger, Ph.D – Master Security Architect, Exodus Communications, Inc.
Bruce Schneier – Founder, Chief Technical Officer, Counterpane Internet Security
John S. Quarterman – Founder, InternetPerils, Matrix NetSystems, Inc.
Perry Metzger – Independent Consultant
Rebecca Bace – CEO, Infidel
Peter Gutmann – Researcher, Department of Computer Science, University of Auckland
CyberInsecurity Report Introduction by
Computer & Communications Industry Association
No software is perfect. This much is known from academia and everyday experience. Yet our industry knows how to design and deploy software so as to minimize security risks. However, when other goals are deemed more important than security, the consequences can be dangerous for software users and society at large.
Microsoft’s efforts to design its software in ever more complex ways so as to illegally shut out efforts by others to interoperate or compete with its products have succeeded. The monopoly product we all now rely on is thus both used by nearly everyone and riddled with flaws. A special burden rests upon Microsoft because of the ubiquity of its product, and we all need to be aware of the dangers that result from reliance upon such a widely used and essential product.
CCIA warned of the security dangers posed by software monopolies during the US antitrust proceeding against Microsoft in the mid- and late 1990s. We later urged the European Union to take measures to avoid a software “monoculture” that each day becomes more susceptible to computer viruses and other digital pathogens.
Our conclusions have now been confirmed and amplified by the appearance of this important report by leading authorities in the field of cybersecurity: Daniel Geer, Rebecca Bace, Peter Gutmann, Perry Metzger, Charles Pfleeger, John S. Quarterman, and Bruce Schneier.
CCIA and the report’s authors have arrived at their conclusions independently. Indeed, the views of the authors are their views and theirs alone. However, the growing consensus within the computer security community and industry at large is striking, and has become obvious: The presence of this single, dominant operating system in the hands of nearly all end users is inherently dangerous. The increased migration of that same operating system into the server world increases the danger even more. CCIA is pleased to have served as a catalyst and a publisher of the ideas of these distinguished authorities.
Over the years, Microsoft has deliberately added more and more features into its operating system in such a way that no end user could easily remove them. Yet, in so doing, the world’s PC operating system monopolist has created unacceptable levels of complexity in its software, in direct contradiction of the most basic tenets of computer security.
Microsoft, as the US trial record and experience have shown, has added these complex chunks of code to its operating system not because such programming complexity is necessary, but because it all but guarantees that computer makers, users and consumers will use Microsoft products rather than a competitor’s.
These competition-related security problems have been with us, and getting worse, for years. The recent spate of virus attacks on the Internet is one more sign that we must realize the danger we are in. The report CyberInsecurity: The Cost of Monopoly is a wake-up call that government and industry need to hear.
September 24, 2003
CYBERINSECURITY: THE COST OF MONOPOLY
HOW THE DOMINANCE OF MICROSOFT’S PRODUCTS POSES A RISK TO SECURITY
Executive Summary
Computing is crucial to the infrastructure of advanced countries. Yet, as fast as the world’s computing infrastructure is growing, security vulnerabilities within it are growing faster still. The security situation is deteriorating, and that deterioration compounds when nearly all computers in the hands of end users rely on a single operating system subject to the same vulnerabilities the world over.
Most of the world’s computers run Microsoft’s operating systems, thus most of the world’s computers are vulnerable to the same viruses and worms at the same time. The only way to stop this is to avoid monoculture in computer operating systems, and for reasons just as reasonable and obvious as avoiding monoculture in farming. Microsoft exacerbates this problem via a wide range of practices that lock users to its platform. The impact on security of this lock-in is real and endangers society.
Because Microsoft’s near-monopoly status itself magnifies security risk, it is essential that society become less dependent on a single operating system from a single vendor if our critical infrastructure is not to be disrupted in a single blow. The goal must be to break the monoculture. Efforts by Microsoft to improve security will fail if their side effect is to increase user-level lock-in. Microsoft must not be allowed to impose new restrictions on its customers – imposed in the way only a monopoly can do – and then claim that such exercise of monopoly power is somehow a solution to the security problems inherent in its products. The prevalence of security flaws in Microsoft’s products is an effect of monopoly power; it must not be allowed to become a reinforcer.
Governments must set an example with their own internal policies and with the regulations they impose on industries critical to their societies. They must confront the security effects of monopoly and acknowledge that competition policy is entangled with security policy from this point forward.
======================
The threats to international security posed by Windows are significant, and must be addressed quickly. We discuss here in turn the problem in principle, Microsoft and its actions in relation to those principles, and the social and economic implications for risk management and policy. The points to be made are enumerated at the outset of each section, and then discussed.
1. THE PROBLEM IN PRINCIPLE
To sum up this section:
• Our society’s infrastructure can no longer function without computers and networks.
• The sum of the world’s networked computers is a rapidly increasing force multiplier.
• A monoculture of networked computers is a convenient and susceptible reservoir of platforms from which to launch attacks; these attacks can and do cascade.
• This susceptibility cannot be mitigated without addressing the issue of that monoculture.
• Risk diversification is a primary defense against aggregated risk when that risk cannot otherwise be addressed; monocultures create aggregated risk like nothing else.
• The growth in risk is chiefly amongst unsophisticated users and is accelerating.
• Uncorrected market failures can create and perpetuate societal threat; the existence of societal threat may indicate the need for corrective intervention.

Discussion
Computing is essential to industrialized societies. As time passes, all societal functions become more deeply dependent on it: power infrastructure, food distribution, air traffic control, emergency services, banking, telecommunications, and virtually every other large-scale endeavor is today coordinated and controlled by networked computers. Attacking national infrastructures is also done with computers – often hijacked computers. Thus, threats to computing infrastructures explicitly and inherently risk harm to those very societies in proportion to those societies’ dependence on them. A prior history of catastrophe is not required to make such a finding. You should not have to wait until people die to address risks of the scale and scope discussed here.
Regardless of where or how it is used, computing increases the capabilities and the power of those who use it. Using strategic or military terminology that means what it
sounds like, computing is a “force multiplier” to those who use it – it magnifies their power, for good or ill. The best estimates of the number of network-connected computers show an increase of 50% per year on a worldwide basis. By most general measures, what you can buy for the same amount of money doubles every eighteen months (“Moore’s Law”). With a conservative estimate of a four-year lifetime for a computer – in other words, consumers replace computers every four years on average – the total computing power on the Internet therefore increases by a factor of 2.7 per annum (or doubles every 10 months). If a constant fraction of computers is under threat of misuse, then the force available to misusers will thus double every 10 months. In other words, the power available to misusers – computer hackers, in popular parlance – is rising both because what they can buy grows in power per dollar spent and because the total number of networked computers grows, too. Note also that this analysis does not even include attacks enabled by storage capacity, which doubles in price-performance twice as fast as CPU (doubles every nine months rather than eighteen).
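To make the arithmetic concrete, the short Python sketch below (illustrative only, not a calculation from the report) compounds the two growth rates just cited: a 50% annual increase in the number of networked computers, and per-machine capability that doubles every eighteen months. It deliberately ignores replacement cycles, so its annual factor comes out somewhat below the 2.7 cited above, while the roughly ten-month doubling time matches.

```python
import math

NODE_GROWTH_PER_YEAR = 1.5       # assumed: 50% more networked computers each year
MOORE_DOUBLING_MONTHS = 18       # assumed: per-machine capability doubles every 18 months

def aggregate_growth_per_year() -> float:
    """Annual multiplier for the total computing power reachable on the network."""
    per_machine_growth = 2 ** (12 / MOORE_DOUBLING_MONTHS)   # about 1.59x per year
    return NODE_GROWTH_PER_YEAR * per_machine_growth          # about 2.4x per year

def doubling_time_months(annual_factor: float) -> float:
    """Months needed for total power to double at the given annual growth factor."""
    return 12 * math.log(2) / math.log(annual_factor)

factor = aggregate_growth_per_year()
print(f"aggregate growth: {factor:.2f}x per year, "
      f"doubling roughly every {doubling_time_months(factor):.1f} months")
```

Under these simplified assumptions the sketch prints an aggregate growth of about 2.4x per year and a doubling time of just under ten months; the 2.7 figure in the text presumably reflects additional modeling choices such as the four-year replacement cycle, but the order of magnitude is the same.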
Internetworked computing power makes communication feasible. Communication is of such high value that it has been the focus of much study and much conjecture and not just recently. For one-way broadcast communication, the value of the network itself rises proportionally to N, the potential number of listeners (“Sarnoff’s Law”). By way of example, advertisers pay for television time in rough proportion to the number of people viewing a given program.
For two-way interactive communications – such as between fax machines or personal e-mail – the value of the network rises proportionally to N^2, the square of the potential number of users (“Metcalfe’s Law”). Thus, if the number of people on email doubles in a given year, the number of possible communications rises by a factor of four.
Growth in communications rises even more when people can organize in groups, so that any random group of people can communicate with another. Web pages, electronic mailing lists and online newsgroups are good examples of such communications. In these cases, the value of the network rises proportionally to 2^N, the potential number of groups being an exponential growth in N (“Reed’s Law”).
Assume for now that the Internet is somewhere between the Metcalfe model, where communications vary according to the square of the number of participants (N^2), and the Reed model, where communications vary according to two raised to the Nth power (2^N).
If we make this assumption, then the potential value of communications that the Internet enables will rise somewhere between 1.5^2 ≈ 2.3 and 2^1.5 ≈ 2.8 times per annum. These laws are likely not precisely accurate. Nonetheless, their wide acceptance and historic record show that they are good indicators of the importance of communication technology.
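The arithmetic behind these figures can be checked in a few lines of Python; the sketch below is illustrative only and simply applies each law to the assumed 50% annual growth in users.

```python
# Illustrative only: the growth factors the text derives from the three
# network-value laws, assuming the connected population N grows 50% in a year.

GROWTH = 1.5                     # assumed annual multiplier for N

sarnoff_factor = GROWTH          # value ~ N, so value grows in step with N: 1.5x
metcalfe_factor = GROWTH ** 2    # value ~ N^2: 1.5^2 = 2.25 (the text rounds to 2.3)
reed_factor = 2 ** GROWTH        # value ~ 2^N; the text uses 2^1.5 ~ 2.8 as a
                                 # simplified per-annum figure for group-forming value

print(f"Sarnoff: {sarnoff_factor:.2f}x  Metcalfe: {metcalfe_factor:.2f}x  "
      f"Reed (simplified): {reed_factor:.2f}x")
```

Metcalfe’s quadratic law yields 1.5^2 = 2.25, which the text rounds to 2.3; the Reed figure of 2^1.5 ≈ 2.8 is the text’s simplified per-annum reading of group-forming growth, since the true ratio of 2^N values depends on N itself.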
To extend this simple mathematical model one final step, we have assumed so far that all communications are good, and assigned to the value of the network a positive number. Nonetheless, it is obvious that not all communications (over computer networks, at least) are positive. Hackers, crackers, terrorists and garden-variety criminals use the network to defraud, spy and generally wreak havoc on a continual basis. To these communications we assign a negative value.
The fraction of communications that has positive value is one crucial measure, and the absolute number of negative communications is another. Both are dependent on the number of networked devices in total. This growth in the number of networked devices, however, is almost entirely at the “edges” of networked computing – the desktop, the workstation, the home, the embedded system, the automated apparatus. In other words, the growth in “N” is not in the core infrastructure of the Internet where highly trained specialists watch over costly equipment with an eye towards preventing and responding to attacks. Growth, rather, is occurring mostly among ordinary consumers and non-technical personnel who are the most vulnerable to illegal intrusions, viruses, Trojan horse programs and the like. This growth at the periphery, furthermore, is accelerating as mobile, wireless devices come into their own and bring with them still more vulnerabilities.
Viruses, worms, Trojan horses and the like permit malicious attackers to seize control of large numbers of computers at the edge of the network. Malicious attackers do not, in other words, have to invest in these computers themselves – they have only to exploit the vulnerabilities in other people’s investments.
Barring such physical events as 9/11, an attack on computing is a set of communications that take advantage of latent flaws already then present in those computers’ software. Given enough knowledge of how a piece of software works, an attacker can force it to do things for which it was never designed. Such abuse can take many forms; a naturalist would say that attacks are a broad genus with many species. Within this genus of attacks, species include everything from denial of service, to escalation of authority, to diversion of funds or data, and on. As in nature, some species are more common than others.
Similarly, not all attacks are created equal. An annoying message that pops up once a year on screen to tell a computer user that he has been infected by Virus XYZ is no more than that: an annoyance. Other exploitations cost society many, many dollars in lost data, lost productivity and projects destroyed from data crashes. Examples are many and familiar, including the well-known ILOVEYOU, NIMDA, and Slammer attacks, not to mention taking over users’ machines for spamming, porn distribution, and so forth. Still other vulnerabilities, though exploited every day and costing society substantial sums of time and money, seldom appear in the popular press. According to the London-based computer security firm mi2g Ltd., malicious software inflicted as much as $107 billion in global economic damage this year. It estimates that the SoBig worm, which helped make August the costliest month in terms of economic damage, was responsible for nearly $30 billion in damage alone.1
For an attack to be a genuine societal-scale threat, either the target must be unique and indispensable – a military or government computer, authoritative time lookup, the computer handling emergency response (911) calls, airport flight control, say – or the attack must be one which once triggered uncontrollably cascades from one machine to the next. The NIMDA and Slammer worms that attacked millions of Windows-based computers were examples of such “cascade failure” – they spread from one to another computer at high rates. Why? Because these worms did not have to guess much about the target computers because nearly all computers have the same vulnerabilities.
Unique, valuable targets are identifiable so we, as a society, can concentrate force around them. Given enough people and training (a tall order to be sure), it is possible to protect the unique and core assets. Advanced societies have largely made these investments, and unmitigated failures do not generally occur in these systems.
Not so outside this core: As a practical and perhaps obvious fact, the risk of cascade failure rises at the edges of the network where end users are far more likely to be deceived by a clever virus writer or a random intruder. To put the problem in military terms, we are the most vulnerable when the ratio of available operational skill to available force multiplication is minimized and thus effective control is weakest. Low available skill coupled to high potential force multiplication is a fair description of what is today accumulating on the periphery of the computing infrastructures of every advanced nation. In plainer terms, the power on the average desktop goes up very fast while the spread of computers to new places ensures the average skill of the user goes down. The average user is not, does not want to be, and should not need to be a computer security expert any more than an airplane passenger wants to or should need
to be an expert in aerodynamics or piloting. This very lack of sophisticated end users renders our society at risk to a threat that is becoming more prevalent and more sophisticated.

1 “Government Issue,” The Baltimore Sun/SunSpot.net, September 18, 2003.
Regardless of the topic – computing versus electric power generation versus air defense – survivability is all about preparing for failure so as to survive it. Survivability, whether as a concept or as a measure, is built on two pillars: replicated provisioning and diversified risk. Replicated (“redundant”) provisioning ensures that any entity’s activities can be duplicated by some other activity; high availability database systems are such an example in computing just as backup generators are in electric power. The ability of redundant systems to protect against random faults is cost effective and well documented.
By contrast, redundancy has little ability to protect against cascade failure; having more computers with the same vulnerabilities cannot help if an attack can reach them all. Protection from cascade failure is instead the province of risk diversification – that is, using more than one kind of computer or device, more than one brand of operating system, which in turn assures that attacks will be limited in their effectiveness. This fundamental principle assures that, like farmers who grow more than one crop, those of us who depend on computers will not see them all fail when the next blight hits. This sort of diversification is widely accepted in almost every sector of society from finance to agriculture to telecommunications. In the broadest sense, economic diversification is as much the hallmark of free societies as monopoly is the hallmark of central planning.
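A minimal sketch of the diversification argument, under a simplifying assumption that is ours rather than the report’s: each exploit works against exactly one platform and reaches every machine running it, and attackers choose targets in proportion to market share. The market-share figures below are hypothetical and for illustration only.

```python
# Illustrative only: expected reach of a single platform-specific exploit
# under different operating-system mixes. Assumption (ours, not the report's):
# an exploit targets exactly one platform and can reach every machine running
# it, and attackers pick targets in proportion to market share.

def expected_reach(market_shares: list[float]) -> float:
    """Expected fraction of all machines reachable by one randomly targeted
    exploit; this is the sum of squared shares (a Herfindahl-style index)."""
    return sum(share ** 2 for share in market_shares)

monoculture = [0.95, 0.05]                  # hypothetical near-monopoly mix
diversified = [0.25, 0.25, 0.25, 0.25]      # hypothetical four-platform mix

print(f"monoculture: {expected_reach(monoculture):.2f} of machines exposed")
print(f"diversified: {expected_reach(diversified):.2f} of machines exposed")
```

With these numbers, a near-monoculture leaves roughly 90% of machines exposed to the average exploit, while an even four-way split leaves 25%; the exact values matter far less than the direction of the effect.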
Governments in free market societies have intervened in market failures – preemptively where failure would be intolerable and responsively when failure had become self-evident. In free market economies as in life, some failure is essential; the “creative destruction” of markets builds more than it breaks. Wise governments are those able to distinguish that which must be tolerated as it cannot be changed from that which must be changed as it cannot be tolerated. The reapportionment of risk and responsibility through regulatory intervention embodies that wisdom in action. If governments are going to be responsible for the survivability of our technological infrastructure, then whatever governments do will have to take Microsoft’s dominance into consideration.
2. MICROSOFT
To sum up this section:
• Microsoft is a near-monopoly controlling the overwhelming majority of systems.
• Microsoft has a high level of user-level lock-in; there are strong disincentives to switching operating systems.
• This inability of consumers to find alternatives to Microsoft products is exacerbated by tight integration between applications and operating systems, and that integration is a long-standing practice.
• Microsoft’s operating systems are notable for their incredible complexity, and complexity is the first enemy of security.
• The near universal deployment of Microsoft operating systems is highly conducive to cascade failure; these cascades have already been shown to disable critical infrastructure.
• After a threshold of complexity is exceeded, fixing one flaw will tend to create new flaws; Microsoft has crossed that threshold.
• Even non-Microsoft systems can and do suffer when Microsoft systems are infected.
• Security has become a strategic concern at Microsoft but security must not be permitted to become a tool of further monopolization.
Discussion:
Near-monopoly dominance of computing by Microsoft is obvious beyond the f