Chapter 1 — Introduction

Introduction ♦ The Good, the Bad, and the In Between ♦ Making Tradeoffs ♦ Values ♦ Politics ♦ Today and Tomorrow: Web 1.0, 2.0, 3.0 ♦ A Look Ahead ♦ Further Reading and References ♦ Endnotes

Introduction

Many excellent books offer illuminating descriptions of the current crises in online privacy and security. We take the next step and offer solutions. Our solutions are public policy recommendations. Society needs innovative policies to reap the proper benefits from rapid technological change. We hope our recommendations will be adopted and be successful, but we also have another important goal: a shared understanding of the problems and a common language in which to discuss and analyze solutions. Finding adequate solutions to today’s online privacy and security problems requires combining a computer scientist’s expertise with a lawyer’s understanding of how to forge sound public policy and laws. You don’t need to be a legal scholar or to know any computer science to read this book, even though it contains sophisticated and accurate computer science and law. We have written as much as possible in plain English, but our plain English descriptions should be of interest even to experts. Solving the privacy and security problems means experts in one field have to find ways to communicate with experts in another.

We limit our discussion of privacy and security to the private sector. For security, this is not a significant limitation. Everything we say also applies to governmental computers and networks. Privacy is a different matter. Governmental intrusions into privacy raise legal and political issues that do not arise when private businesses encroach on privacy. The governmental threat is serious and increasingly worrisome. However, the chorus of concern over government intrusion into our private lives is already large and strong. Moreover, the threat from private business merits consideration on its own. Indeed, as the New York Times said in a 2012 article, the database marketing firm Acxiom “knows who you are. It knows where you live. It knows what you do. It peers deeper into American life than the F.B.I. or the I.R.S.” [1] Businesses routinely watch and record massive amounts of information about people’s Internet activities. Businesses now have the technological means to merge your online and offline footprints into profiles of surprising intrusiveness and accuracy. They can know where you work, how you spend your time, with whom, and even “with 87% certainty . . . where you’ll be next Thursday at 5:35 p.m.” [2]

The Good, the Bad, and the In Between

We divide the new twenty-first century world of rich online lives and vast computing power into the good, the bad, and the in between. We begin with the good, and then turn quickly to our primary concern—the bad and the in between. We need a short way to refer to today’s combination of online society, the entire World Wide Web, huge databases, data mining, and vast computing power, so, for this section only, we will call it “the Internet,” even though the Internet is only part of the world that concerns us.

The Good

Before the Internet, media-rich, worldwide communication was the prerogative of governments and well-funded publishing companies. The Internet makes it possible for anyone with Internet access to communicate with anyone in the world, and to access almost all the world’s information in an instant. People can communicate and coordinate no matter how geographically diverse, no matter how few or how many. Freedom of association and expression flourish, as does innovation. Novel achievements include, among many others, marvels of decentralized coordination like Wikipedia and communication platforms like Facebook. While such marvels of the Internet age have at times been double-edged, society has benefitted greatly from them.

The Bad

Hackers exploit the very information-sharing features of the Internet that have brought so much good. To gain access to people’s computers, they reuse old techniques in the new media, such as exploiting human gullibility. They also avail themselves of new tools, such as viruses, worms, Trojans, rootkits, cross-site scripting, and SQL injection. Unauthorized access to computers and computer networks is rampant. Once hackers get into a computer, they deploy numerous threats, including denial of service attacks, packet sniffing, and session hijacking, and they commit crimes such as identity theft, extortion, theft of money or data, and industrial espionage. How can individuals, companies, and government together control crime while preserving everyone’s freedom to communicate and coordinate? Our suggestions focus on both improving software quality and eliminating malicious software, malware for short.

Current software development practices make it too easy for hackers to turn our own beneficial software against us and use it to infiltrate computers and networks. Richard A. Clarke, cyber security advisor to Presidents Clinton, Bush, and Obama, relates an anecdote that captures the severity of the problem. “When I asked the head of network security for AT&T what he would do if someone made him Cyber Czar for a day, he didn’t hesitate. ‘Software.’ ” [3] But serious as it is, software is only part of the problem. Once hackers get inside a computer, they install their tool of choice—malware. Malware includes viruses, worms, and Trojans, but there are also other inhabitants of the “malware zoo.” We describe them in Chapter 9. Current defenses against malware are woefully inadequate.

Making things better is no simple task. The practices that make us all vulnerable also give us benefits. They give software developers and Internet service providers the freedom and flexibility they need to respond and innovate in a rapidly changing technological environment, and they ensure that both software and Internet access are inexpensive and easy to use. Securing users against Internet crime will decrease flexibility and innovation, and make software and Internet access more expensive and more difficult to use. Throughout this book, we focus on finding the optimal tradeoff between these costs and more security.

The In Between

A lot of activity on the Internet falls into our “in between” category: activity that makes many people uncomfortable, but about which, in the spirit of the Facebook age, we say, “It’s complicated,” rather than “It’s bad.” For example, let’s look at business and government surveillance. If the surveillance is illegal, it belongs in our “bad” category, but a vast amount of surveillance is legal, or at least not clearly illegal. There are significant benefits: new services, conveniences, and efficiencies. But the price is deep inroads into privacy, inroads that threaten to destroy the freedom of association and expression the Internet has offered so far. The result of the legal surveillance is an ambiguous mix of positives and negatives that lies in between the clearly good and clearly bad. As we note in Chapter 5, twenty-plus years of studies and surveys show that people want to have their cake and eat it too. People want the benefits of personalization that increased computing power offers, but people also want their private information to stay private. Having both means finding tradeoffs that give everyone enough of each.

Making Tradeoffs

A common feature of the tradeoffs we examine is that they are unavoidable. For example, you must choose, if only by default, either to spend money on insurance against fraudulent impersonation—generally called identity theft—or not. To illustrate tradeoffs, think about what is involved in explicitly making the decision. If you are a victim of identity theft, according to one recent report, it will take you about 12 hours and $354 to resolve the problem. [4] Let’s suppose you value your time at $200 an hour. You then face a total quantifiable financial loss of $2754. However, the tradeoffs that concern us involve both quantifiable and non-quantifiable costs and benefits, and this considerably complicates matters. Suppose you can buy insurance against identity theft for about $275 a year. Should you do so? The answer is easy if we just consider quantifiable factors. We just need to know how likely it is that you will be a victim of identity theft. The probability turns out to be low—about 5 percent. This makes your expected loss (the $2754 discounted by the 95 percent probability of not being a victim) just under $138. So, as long as we consider only quantitative factors, it would be wasteful to buy the insurance because you would spend $275 to save $138.
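The expected-loss arithmetic in this example can be written out in a few lines. This is a minimal sketch using the chapter’s illustrative figures; the dollar amounts and the 5 percent probability are the example’s assumptions, not market data.

```python
# Expected-loss calculation for the identity-theft insurance example.
# All figures are the chapter's illustrative assumptions.
hours_to_resolve = 12     # time to resolve an incident
hourly_value = 200        # assumed value of your time, $/hour
out_of_pocket = 354       # direct cost to resolve, $

total_loss = hours_to_resolve * hourly_value + out_of_pocket

p_victim = 0.05           # assumed annual probability of identity theft
expected_loss = p_victim * total_loss

insurance_premium = 275   # assumed annual cost of insurance, $

# On quantifiable factors alone, buying makes sense only when the
# premium is less than the expected loss.
worth_buying = insurance_premium < expected_loss
print(total_loss, round(expected_loss, 2), worth_buying)
```

The point of the sketch is that the purely quantitative comparison is mechanical; the hard part, as the next paragraph argues, is the non-quantifiable side of the ledger.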

Quantifiable concerns are not all that matter, however. Suppose, like many, you dislike taking risks. Imagine that if you were a victim, you would suffer from a very disturbing and long lasting sense of personal invasion resulting in significantly increased anxiety about online financial transactions. Is it worth spending $275 to save $138 and to get protection against these losses? It is extremely difficult, if not impossible, to quantify your sense of personal invasion in any meaningful way, so there is no quantitative calculation that can answer the question. Deciding whether to trade $275 for the protection offered requires finding a way to balance spending $275 against both quantitative and non-quantitative gains. The tradeoffs that will concern us when we consider software, malware, and privacy require balancing quantitative and non-quantitative interests. Society as a whole, together with the software industry, must choose some tradeoff between making mass-market software more resistant to malware attacks versus making software less expensive. Choosing a tradeoff requires balancing economic efficiency, improved security, innovation, competition, freedom of expression, and a vast array of other concerns, including several implicated in privacy. Today, in most cases, consumers and government make very poor tradeoffs in the areas of online security and privacy, and often make the tradeoffs implicitly, with very little thought. There is currently no consensus about how these tradeoffs should be made.

The lack of consensus would not matter much if we were still living in the mid-twentieth century, but, as is illustrated in Figure 1.1 and discussed at some length in Chapter 2, our world has changed.

Figure 1.1 Timeline of growth of the Internet and computer processing speed.

Before the Internet and modern digital technologies, security was largely a matter of protecting places and persons, and within the limits of the law, everyone could decide for himself or herself how much security was required. Sally’s decisions about security might still have affected Jim’s security, since if Sally’s house was extremely well protected, burglars might have preferred to target Jim’s, but Jim’s degree of security was still typically under Jim’s control because he could choose to protect his house better. As we illustrate in Chapters 6–10, in the Internet world everyone is interconnected in ways that make each person’s degree of online security highly dependent on the degree of security of others. If anyone individually is to have adequate security, then weak links in the chain must be avoided through a consensus about how much security everyone requires.

Similar remarks apply to privacy. In the mid-twentieth century, everybody had considerable power to ensure that what he or she thought should be private would in fact be so. Online data collection did not exist, although a precursor had appeared: credit card companies kept reports on customers in manila file folders. Surveillance techniques were primitive by today’s standards. Today, as we discuss in Chapters 4–5 and 11–12, the Internet and an astonishing increase in information processing power have deprived people of the control over their privacy that they once had. To adequately protect privacy, society needs consensus about how to use the twenty-first century’s new information processing power. In Chapter 12 we present our solution, which is to design decision processes that will lead to more or less general agreement about which uses are legitimate. The following discussion of values provides background essential to developing our decision procedures.

Values

Values always guide tradeoffs. The decision whether to buy insurance against identity theft is a matter of making tradeoffs among what you value—freedom from risk and protection against a sense of personal invasion against the other things for which you could use the money. A good non-technical explanation of the ideal tradeoff for you is that it is the one that is the best for you in terms of all the different things you care about: the tradeoff that is the best justified in light of all your values. We discuss this in detail in Chapter 3. But people cannot always live up to that ideal. To begin with, there may be no single tradeoff that is best justified. Your values might argue equally strongly in favor of incompatible options: for example, you may value exercising after work and coming home earlier to your children equally. In such cases, our theory aims at choosing any of the equally good alternatives. There are other hurdles in the way of finding best justified tradeoffs. You might lack the necessary time, energy, or insight to identify a tradeoff, and, in some cases, there may be no tradeoff to find. Values and information about the likely outcome of your choices may be too incomplete, or too inconsistent, to pick out even one best justified tradeoff. In many cases, especially cases involving the sorts of novel situations that we are concerned with in this book, you will need to develop new, additional values. After all, unless you are under 25 or so, you didn’t develop values about how to behave on Facebook in your formative years. In other cases, you will need to resolve inconsistencies in your values to work toward a best justified tradeoff. Finding best justified tradeoffs is an ideal that can be only approximated in practice.

Some may think that the goal of developing values that even approximate best justified tradeoffs is a mistake. If anything is clear about values, it is that people disagree about them. As the philosopher John Rawls emphasizes, the appropriate view of social organization “takes deep and unresolvable differences on matters of fundamental significance as a permanent condition of human life.” [5] Rawls’s view, which we heartily agree with, seems to show our project is ill-conceived. We will aim at defining tradeoffs that are best justified more or less society-wide. How can we hope to do that in the face of “unresolvable differences on matters of fundamental significance”? The processes we have designed to identify best justified tradeoffs address that problem. They lead to general agreement that those tradeoffs are best justified. For convenience, we will usually describe things as if all consumers agreed, but, in fact, the processes we propose could lead to different tradeoffs for different groups. Each group would regard its tradeoff as best justified, but there might not be any tradeoff that everyone in every group regarded as best justified.

Some may still think we are wrong to resort to values. Philosophers have argued over the nature of values for over two thousand years. We are not going to answer any of the questions that concern the philosophers. Our foundation is the uncontroversial, everyday fact that we value all sorts of things: pictures by Goya, celebrating children’s birthdays, freedom of expression, pure mathematics, innovative kitchen appliances, and so on. Philosophers debate how to describe and explain values. Nothing we say depends on the outcome of that debate. Our point is the entirely uncontroversial one that our actions are the result of a complex interplay among a diverse array of values.

Profit-Motive-Driven Businesses

We take a much simpler view of businesses. We assume businesses are dominated by the profit motive, so they will always act in ways they believe will maximize their profits over some given time period. This is a simplification. Business decision makers have values like everyone else, and those values manifest themselves in, among other ways, ideological commitments, moral beliefs, and personal loyalties. Our view of businesses as exclusively profit-motive driven is a fiction—a convenient fiction that is close enough to the truth that we can ignore the more complicated reality. Media companies are a good example. Spurred in part by the rise of the Internet, they have sought to reduce competition through mergers and acquisitions, and they have aggressively—and largely successfully—lobbied Washington for relief from the legal regulations that limit their ability to consolidate. In their lobbying efforts, the media companies claimed that relief from regulation would lead to an explosion of activity that would greatly increase the flow of more diverse content, create non-commercial, public interest programming, and promote ethnic minority ownership of media concerns. These promises have not been fulfilled. In radio, TV, and cable, media companies have increased profits by cutting costs through consolidation and standardization that has led to less diversity of content, very little non-commercial public interest programming, and minimal minority ownership. Critics contend that the dominant goal of maximizing profits defeated any attempt to realize other values. [6]

This is the kind of problem that concerns us in regard to software development, malware defense, and privacy. We argue that profit-motive-driven mass-market sellers often offer consumers products and services inconsistent with consumers’ values. Our proposed solution does not try to change sellers’ motives. We focus instead on buyers. Our view is that, for a variety of reasons, consumers tolerate, and sometimes even demand, products and services that are in fact inconsistent with what they value. Our solutions bring buyers’ demands in line with their values with the result that buyers will demand greater security and more privacy. We argue that the profit-maximizing strategy for a mass-market seller is to meet the changed demand and offer products and services consistent with it.

With the notable exception of behavioral advertising, discussed in Chapter 12, we need to rely on legal regulation to achieve our results. However, effective legal regulation is difficult and expensive, and for this reason, we rely on legal regulation only temporarily, in the initial stages of our solution. Our processes ultimately lead to social norms with which business will voluntarily comply because compliance is profit maximizing. We pursue this strategy until midway through Chapter 12. There are, however, some key results about privacy that we cannot achieve unless businesses internalize values that constrain their pursuit of profit. This would be a major cultural shift. [*] It should not be surprising that we cannot adequately address all privacy concerns without developing a more privacy-respecting culture.

Politics

We believe that most of the solutions we propose are broadly applicable in democracies worldwide. However, both authors are American, and when relevant, we use the details of the American legal and political system. We are writing these words on the eve of the 2012 US presidential election.

Some may object that we have ignored political reality. We propose new legislation at various points, and, in some cases, the new regulations will trade privacy off against other gains. Aren’t our proposals just naïve? As the presidential cyber security advisor Richard A. Clarke quipped, “In Washington, one might as well advocate random forced abortions as suggest new regulation or create any greater privacy risks.” [7] Indeed, we do not just propose new regulation; we assume a reasonably well-informed and educated citizenry represented in political processes that yield workable, unbiased, reasonably well justified decisions in a timely fashion. The reality is that solving the problems we raise requires well-reasoned legislation at various points. There is no alternative. Such legislation may be unlikely in the current political climate, but that only means the United States has a prior problem to solve. Solving the problems raised by software, malware, and privacy requires achieving a societal consensus on tradeoffs, and that requires viable political processes.

Today and Tomorrow: Web 1.0, 2.0, 3.0

Internet technology changes with astonishing rapidity. You may be wondering, “How can you two be sure that the policy recommendations you make today will apply to the Internet of tomorrow?” This is a good question to which we owe an answer. We begin by adopting the fairly standard division of the Web into three stages: 1.0, 2.0, and 3.0. The classifications are imprecise, and as we will emphasize in the next chapter, the Web is just one aspect of the Internet, but here we can put both of these concerns aside. Web 1.0 was the Web in its early stages—before the blogs, wikis, social networking sites, and explosion of Web-based applications, which are characteristic of Web 2.0.

It is common now to see the Web as entering a new phase, and to call that new phase Web 3.0. The Web 3.0 future may well be the “always on everywhere” web. [**] Mobile devices, cars, energy meters, and even refrigerators will all connect to the Internet; most data will be stored “in the cloud,” on remote servers maintained by others, and typical consumers will run an enormous number of relatively small, customizable apps. Our “always connected” devices will become our personal assistants, answering questions and taking care of tasks. [8] Most of our discussion in the following chapters draws on “Web 2.0” examples—thus the worry about today’s recommendations being valid tomorrow. We have two reasons for keeping our feet firmly on the Web 2.0 ground.

First, predicting technological changes is a dangerous business: predictions typically turn out to be so wrong that they seem laughable later. When lending libraries first appeared, people predicted the death of print publishing; the actual effect was to greatly increase book buying because people wanted to own what they read. With the advent of VCRs came predictions of the end of the movie industry because people wouldn’t pay to watch in a theater what they could watch at home on a free copy. In fact, the revenue from video rentals was the movie companies’ salvation. The 1960s vision of videophones was the vision of The Jetsons cartoon: every call would be a video call. Today, all smartphones can function as videophones, but people most certainly do not consistently use that technology. Indeed, calling in any format has declined as texting has risen. We could go on and on with examples. We think it is better to ground our public policy recommendations in the present that we do know instead of the future that we do not.

Second, we don’t lose anything by sticking with Web 2.0 examples. Our discussions of privacy, software, and malware are applications of a general theory of the role of norms in governing market transactions. The model applies in a wide variety of cases in which rapid change has outstripped the evolution of norms, and it will extend readily to Web 3.0 issues as they arise.

A Look Ahead

We develop a general theory of norms and markets in Chapter 3. This approach reveals important similarities and differences among our three topics. The book begins and ends with privacy, with software and malware in the middle. Our general theory is most intuitively presented in the context of privacy, which occupies Chapters 4 and 5. However, working out the solutions to the privacy problems posed in Chapters 4 and 5 involves complexities that are absent from software and malware. So we address those “easy” cases in Chapters 6–10. We build on those results when we return to privacy in Chapters 11 and 12. A crucial contrast emerges in Chapter 12 that requires a further elaboration.

First, however, we begin in Chapter 2 with a crash course in the history of computing, where we explain how we got to where we are today.


Further Reading and References

Statistics on the Internet in Recent Times

International Telecommunication Union. “International Telecommunication Union Data and Statistics.” Accessed September 9, 2012. http://www.itu.int/ITU-D/ict/statistics/.

The International Telecommunication Union (ITU) is the UN agency for information and communication technologies. The ITU maintains a rich and interesting collection of statistics, which are the source for many of the entries on Internet penetration in this chapter’s timeline.

Future of the Web

Anderson, Janna, and Lee Rainie. The Future of Apps and Web. Pew Internet, March 23, 2012. http://www.pewinternet.org/Reports/2012/Future-of-Apps-and-Web.aspx.

Considers the impact of apps and mobile devices on the web and argues “that mobile revolution, the popularity of targeted apps, the monetization of online products and services, and innovations in cloud computing will drive Web evolution.”

———. The Future of the Internet. Pew Internet, June 29, 2012. http://www.pewinternet.org/~/media//Files/Reports/2012/PIP_Future_of_Internet_2012_Big_Data.pdf.

Notes that “Cisco predicts that there will be 25 billion connected devices in 2015 and 50 billion by 2020, each generating data and insights that might prove helpful to those who monitor and collect such things,” and suggests that the “profusion of connectivity and data should facilitate a new understanding of how living environments can be improved.” But they worry about “humanity’s dashboard” being in government and corporate hands and they are anxious about people’s ability to analyze it wisely.


Endnotes

[*] We originally wrote, “major and unlikely,” but, as we discuss in Chapter 12, that may not be quite as clear as it seems.

[**] This is one of the meanings given to “Web 3.0.” Different commentators use the expression in quite different ways, which illustrates our argument that, as Niels Bohr is supposed to have said, “Prediction is very difficult—especially about the future.”


[1] Natasha Singer, “You for Sale: Mapping, and Sharing, the Consumer Genome,” New York Times, June 16, 2012, http://www.nytimes.com/2012/06/17/technology/acxiom-the-quiet-giant-of-consumer-database-marketing.html.

[2] Lucas Mearian, “Big Data to Drive a Surveillance Society,” Computerworld, March 24, 2011, http://www.computerworld.com/s/article/9215033/Big_data_to_drive_a_surveillance_society.

[3] Richard A. Clarke and Robert Knake, Cyber War: The Next Threat to National Security and What to Do About It (Harper Collins, 2010), 272.

[4] Javelin Strategy & Research, 2012 Identity Fraud Report: Consumers Taking Control to Reduce Their Risk of Fraud, February 2012, https://www.javelinstrategy.com/brochure/240.

[5] J. Rawls, “Kantian Constructivism in Moral Theory,” The Journal of Philosophy 77, no. 9 (1980): 542.

[6] Jeff Chester, Digital Destiny (The New Press, 2007).

[7] Clarke and Knake, Cyber War, 133.

[8] See, for example, Jonathan Strickland, “How Web 3.0 Will Work,” How Stuff Works, last accessed Sept. 14, 2012, http://computer.howstuffworks.com/web-30.htm.

This chapter is excerpted with the publisher’s permission.

 
