Antitrust Dystopia and Antitrust Nostalgia: Alarmist Theories of Harm in Digital Markets and Their Origins

Geoffrey Manne & Dirk Auer
Volume 28, Issue 4

Introduction

The dystopian novel is a powerful literary genre. It has given us such masterpieces as Nineteen Eighty-Four,1George Orwell, Nineteen Eighty-Four: A Novel (1949). Brave New World,2Aldous Huxley, Brave New World (1932). Fahrenheit 451,3Ray Bradbury, Fahrenheit 451 (1953). and Animal Farm.4George Orwell, Animal Farm: A Fairy Story (1945). Though these novels often shed light on some of the risks that contemporary society faces and the zeitgeist of the time when they were written, they systematically overshoot the mark (whether intentionally or not), severely underestimating the radical improvements brought about by the very technologies (or other forces) that they fear. Nineteen Eighty-Four, for example, presciently saw in 1949 the coming ravages of communism, but it did not guess that markets would prevail, allowing us all to live freer and more comfortable lives than any preceding generation.5See, e.g., Matt Ridley, The Rational Optimist: How Prosperity Evolves 12 (2010) (noting that the present generation has access to more resources and opportunities than any previous generation). Fahrenheit 451 accurately feared that books would lose their monopoly as the foremost medium of communication, but it completely missed the unparalleled access to knowledge that today’s generations enjoy.6Obviously this has been enabled by the internet and the emergence of online knowledge sources, such as Google Search and Wikipedia, which have both expanded the extent of the world’s information and brought it—in usable form—into the palm of every consumer. And while Animal Farm portrayed a metaphorical world where increasing inequality is inexorably linked to totalitarianism and immiseration, global poverty has reached historic lows in the twenty-first century,7World Bank, Summary of Chapter 1: Ending Global Poverty, World Bank (Sept. 19, 2018), https://perma.cc/A94M-VDGP (“In 2015, an estimated 736 million people were living below the international poverty line of $1.90 in 2011 purchasing power parity. This is down from 1.9 billion people in 1990. Over the course of a quarter-century, 1.1 billion people (on net) have escaped poverty and improved their standard of living.”). and this is likely also true of global inequality.8See, e.g., Max Roser, Global Economic Inequality, Our World in Data, https://perma.cc/9U92-VMWD (showing that the Gini coefficient of global inequality declined from 68.7 to 64.9 between 2003 and 2013). This data is sourced from Tomáš Hellebrandt & Paolo Mauro, The Future of Worldwide Income Distribution 13 (Peterson Inst. for Int’l Econ., Working Paper No. 15-7, 2015) (“Taking the global distribution of income as a whole, the Gini coefficient was 64.9 in 2013, down from 68.7 in 2003.”). In short, for all their literary merit, dystopian novels appear to be terrible predictors of the quality of future human existence. The fact that popular depictions of the future often take the shape of dystopias is more likely reflective of the genre’s entertainment value than of society’s impending demise.9See, e.g., Devon Maloney, Why Is Science Fiction So Afraid of the Future?, The Verge (Nov. 6, 2017, 10:41 AM), https://perma.cc/T7FE-YDSP (“When tangible signs of humanity’s collapse are omnipresent, it can feel impossible to imagine humans surviving the next hundred years, let alone emerging into a utopic technological wonderland in the 26th century.
This goes for consumers just as much as creators; truly imaginative futures like that of Valerian [and the City of a Thousand Planets], for example, bomb with audiences for being too far-flung without real critical purpose. They’re untethered and tone-deaf to the existential issues we’re facing in this very instant.” (emphasis omitted)); see also Joe Queenan, From Insurgent to Blade Runner: Why Is the Future on Film Always So Grim?, The Guardian (Mar. 19, 2015, 2:51 PM), https://perma.cc/A764-JEQU (“Why do films such as The Hunger Games and Elysium and Dredd always depict a world where everyone is miserable? . . . [M]aybe it’s because mature adults are envious of their grandchildren, and figure that if they themselves are not going to be around to enjoy the future, nobody else should enjoy it either. This isn’t very nice, but it’s exactly the way the middle-aged mind operates: après moi, le déluge, as Louis XVI’s granddad once put it.”).

But dystopias are not just a literary phenomenon; they are also a powerful force in policy circles. For example, in the early 1970s, the so-called Club of Rome published an influential report titled The Limits to Growth.10Donella H. Meadows, Dennis L. Meadows, Jørgen Randers & William W. Behrens III, The Limits to Growth (1972) (a report for The Club of Rome’s Project on the Predicament of Mankind). The report argued that absent rapid and far-reaching policy shifts, the planet was on a clear path to self-destruction:

If the present growth trends in world population, industrialization, pollution, food production, and resource depletion continue unchanged, the limits to growth on this planet will be reached sometime within the next one hundred years. The most probable result will be a rather sudden and uncontrollable decline in both population and industrial capacity.11Id. at 23.

Halfway through the authors’ 100-year timeline, however, available data suggests that their predictions were way off the mark. While the world’s economic growth has continued at a breakneck pace,12See GDP Per Capita, 1820 to 2018, Our World in Data, https://perma.cc/YP24-YNQM. extreme poverty,13See Max Roser & Esteban Ortiz-Ospina, Global Extreme Poverty, Our World in Data, https://perma.cc/5CDA-K75Z. famine,14See Joe Hasell & Max Roser, Famines, Our World in Data, https://perma.cc/8529-5AGD. and the depletion of natural resources15See Andrew McAfee, More from Less: The Surprising Story of How We Learned to Prosper Using Fewer Resources—and What Happens Next 1 (2019) (“For just about all human history our prosperity has been tightly coupled to our ability to take resources from the earth. So as we became more numerous and prosperous, we inevitably took more: more minerals, more fossil fuels, more land for crops, more trees, more water, and so on. But not anymore. In recent years, we’ve seen a different pattern emerge: the pattern of more from less.”). have all decreased tremendously.

For all their inaccurate and misguided predictions, dire tracts such as The Limits to Growth perhaps deserve some of the credit for the environmental movements that followed. But taken at face value, the dystopian predictions of works like The Limits to Growth—and the policy demands that attended them—would have had cataclysmic consequences for, apparently, extremely limited gain. The policy incentive is to claim impending doom in the strongest terms. There is no incentive to suggest that “all is well,” and little incentive even to offer realistic, caveated predictions.

As we argue in this Article, antitrust scholarship and commentary is also afflicted by dystopian thinking. Today, antitrust pessimists have set their sights predominantly on the digital economy—“big tech” and “big data”—alleging a vast array of potential harms. Scholars have argued that the data created and employed by the digital economy produces network effects that inevitably lead to tipping and more concentrated markets.16See, e.g., Maurice E. Stucke & Allen P. Grunes, Debunking the Myths over Big Data and Antitrust, CPI Antitrust Chronicle, May 2015, at 2 (attempting to dispel the “myth[]” that competition can prosper in data-driven markets without far-reaching government intervention); see also Nathan Newman, Search, Antitrust, and the Economics of the Control of User Data, 31 Yale J. on Reg. 401, 453 (2014) (“The complex challenge of displacing a dominant incumbent such as Google in information-related markets should serve as a lesson that problems of network effects, technology lock-in, and the speed with which a dominant player can take control of a sector, all call for earlier intervention in technology markets. It would be better for regulators to maintain an open environment for innovation early, rather than depend on a post-facto, drawn-out court fight to displace a monopolist.”). In other words, firms will allegedly accumulate insurmountable data advantages and thus thwart competitors for extended periods of time. Some have gone so far as to argue that this threatens the very fabric of western democracy.17See, e.g., Tim Wu, The Curse of Bigness: Antitrust in the New Gilded Age 14 (2018) (“We have managed to recreate both the economics and politics of a century ago—the first Gilded Age—and remain in grave danger of repeating more of the signature errors of the twentieth century. As that era has taught us, extreme economic concentration yields gross inequality and material suffering, feeding an appetite for nationalistic and extremist leadership. Yet, as if blind to the greatest lessons of the last century, we are going down the same path. If we learned one thing from the Gilded Age, it should have been this: The road to fascism and dictatorship is paved with failures of economic policy to serve the needs of the general public.”); id. at 21 (“The most visible manifestations of the consolidation trend sit right in front of our faces: the centralization of the once open and competitive tech industries into just a handful of giants: Facebook, Amazon, Google, and Apple. The power that these companies wield seems to capture the sense of concern we have that the problems we face transcend the narrowly economic. Big tech is ubiquitous, seems to know too much about us, and seems to have too much power over what we see, hear, do, and even feel. It has reignited debates over who really rules, when the decisions of just a few people have great influence over everyone. Their power feels like ‘a kingly prerogative, inconsistent with our form of government’ in the words of Senator John Sherman, for whom the Sherman Act is named.”). Other commentators have voiced fears that companies may implement abusive privacy policies to shortchange consumers.18See, e.g., Tommaso Valletti, Chief Economist, Directorate-General for Competition, Keynote Address at CRA Annual Brussels Conference: Economic Developments in Competition Policy (Dec. 5, 2018). 
It has also been said that the widespread adoption of pricing algorithms will almost inevitably lead to rampant price discrimination and algorithmic collusion.19See Ariel Ezrachi & Maurice E. Stucke, Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy 36–37 (2016) (“We note how Big Data and Big Analytics—in increasing the speed of communicating price changes, detecting any cheating or deviations, and punishing such deviations—can provide new and enhanced means to foster collusion.”). Indeed, “pollution” from data has even been likened to the environmental pollution that spawned The Limits to Growth: “If indeed ‘data are to this century what oil was to the last one,’ then—[it’s] argue[d]—data pollution is to our century what industrial pollution was to the last one.”20Omri Ben-Shahar, Data Pollution, 11 J. Legal Analysis 104, 106 (2019).

Some scholars have drawn explicit parallels between the emergence of the tech industry and famous dystopian novels. Professor Shoshana Zuboff, for instance, refers to today’s tech giants as “Big Other.”21See Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power 376 (2019) (“I now name the apparatus Big Other: it is the sensate, computational, connected puppet that renders, monitors, computes, and modifies human behavior. Big Other combines these functions of knowing and doing to achieve a pervasive and unprecedented means of behavioral modification. Surveillance capitalism’s economic logic is directed through Big Other’s vast capabilities to produce instrumentarian power, replacing the engineering of souls with the engineering of behavior.” (emphasis omitted)). In an article called “Only You Can Prevent Dystopia,” one New York Times columnist surmised:

The new year is here, and online, the forecast calls for several seasons of hell. Tech giants and the media have scarcely figured out all that went wrong during the last presidential election—viral misinformation, state-sponsored propaganda, bots aplenty, all of us cleaved into our own tribal reality bubbles—yet here we go again, headlong into another experiment in digitally mediated democracy.

I’ll be honest with you: I’m terrified . . . There’s a good chance the internet will help break the world this year, and I’m not confident we have the tools to stop it.22Farhad Manjoo, Only You Can Prevent Dystopia, N.Y. Times (Jan. 1, 2020), https://perma.cc/KN7U-XYHC.

Parallels between the novel Nineteen Eighty-Four and the power of large digital platforms were also plain to see when Epic Games launched an antitrust suit against Apple and its App Store in August 2020.23Epic Games’ antitrust complaint starts with the following ominous sentences: “In 1984, the fledgling Apple computer company released the Macintosh—the first mass-market, consumer-friendly home computer. The product launch was announced with a breathtaking advertisement evoking George Orwell’s 1984 that cast Apple as a beneficial, revolutionary force breaking IBM’s monopoly over the computing technology market. Apple’s founder Steve Jobs introduced the first showing of the 1984 advertisement by explaining, ‘it appears IBM wants it all. Apple is perceived to be the only hope to offer IBM a run for its money . . . . Will Big Blue dominate the entire computer industry? The entire information age? Was George Orwell right about 1984?’ [ ] Fast forward to 2020, and Apple has become what it once railed against: the behemoth seeking to control markets, block competition, and stifle innovation. Apple is bigger, more powerful, more entrenched, and more pernicious than the monopolists of yesteryear. At a market cap of nearly $2 trillion, Apple’s size and reach far exceeds that of any technology monopolist in history.” Complaint at 1, Epic Games, Inc. v. Apple, Inc., No. 4:20-cv-05640 (N.D. Cal. Aug. 13, 2020). Indeed, Epic Games released a short video clip parodying Apple’s famous “1984” ad (which upon its release was itself widely seen as a critique of the tech incumbents of the time).24Epic Games, Nineteen Eighty-Fortnite – #FreeFortnite, YouTube (Aug. 13, 2020), https://perma.cc/78SF-KY5X.

Similarly, a piece in the New Statesman, titled “Slouching Towards Dystopia: The Rise of Surveillance Capitalism and the Death of Privacy,” concluded: “Our lives and behaviour have been turned into profit for the Big Tech giants—and we meekly click ‘Accept.’ How did we sleepwalk into a world without privacy?”25John Naughton, Slouching Towards Dystopia: The Rise of Surveillance Capitalism and the Death of Privacy, New Statesman (Feb. 26, 2020), https://perma.cc/S67W-HXTC.

Finally, a piece published in the online magazine Gizmodo asked a number of experts whether we are “already living in a tech dystopia.”26Daniel Kolitz, Are We Already Living in a Tech Dystopia?, Gizmodo (Aug. 24, 2020, 8:00 AM), https://perma.cc/V3YZ-5C4B. Some of the responses were alarming, to say the least:

I’ve started thinking of some of our most promising tech, including machine learning, as like asbestos: . . . it’s really hard to account for, much less remove, once it’s in place; and it carries with it the possibility of deep injury both now and down the line.

. . . .

We live in a world saturated with technological surveillance, democracy-negating media, and technology companies that put themselves above the law while helping to spread hate and abuse all over the world.

Yet the most dystopian aspect of the current technology world may be that so many people actively promote these technologies as utopian.27Id. (quoting Professor Jonathan Zittrain and Professor David Golumbia).

Antitrust pessimism is not a new phenomenon, and antitrust enforcers and scholars have long been fascinated with—and skeptical of—high tech markets. From early interventions against the champions of the Second Industrial Revolution (oil, railways, steel, etc.)28See United States v. U.S. Steel Corp., 251 U.S. 417 (1920); Standard Oil Co. v. United States, 221 U.S. 1 (1911); United States v. Trans-Missouri Freight Ass’n, 166 U.S. 290 (1897). through mid-twentieth-century innovations such as telecommunications and early computing (most notably the RCA, IBM, and Bell Labs consent decrees in the US)29See, e.g., In re Int’l Bus. Mach. Corp., 618 F.2d 923 (2d Cir. 1980); see also Martin Watzinger, Thomas A. Fackler, Markus Nagler & Monika Schnitzer, How Antitrust Enforcement Can Spur Innovation: Bell Labs and the 1956 Consent Decree, 12 Am. Econ. J. Econ. Pol’y 328, 328–29 (2020). to today’s technology giants, each wave of innovation has been met with a rapid response from antitrust authorities, copious intervention-minded scholarship, and waves of pessimistic press coverage.30See Dirk Auer & Nicolas Petit, Two Systems of Belief About Monopoly: The Press vs. Antitrust, 39 Cato J. 99, 99 (2019). This is hardly surprising given that the adoption of antitrust statutes was in part a response to the emergence of those large corporations that came to dominate the Second Industrial Revolution (despite the numerous radical innovations that these firms introduced in the process).31See Thomas J. DiLorenzo, The Origins of Antitrust: Rhetoric vs. Reality, Regulation, Fall 1990, at 26, 29–30. Especially for unilateral conduct issues, it has long been innovative firms that have drawn the lion’s share of cases, scholarly writings, and press coverage.

Underlying this pessimism is a pervasive assumption that new technologies will somehow undermine the competitiveness of markets, imperil innovation, and entrench dominant technology firms for decades to come. This is a form of antitrust dystopia. For its proponents, the future ushered in by digital platforms will be a bleak one—despite abundant evidence that information technology and competition in technology markets have played significant roles in the positive transformation of society.32See, e.g., Ridley, supra note 5, at 12; see generally Steven Pinker, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress (2018). This tendency was highlighted by economist Ronald Coase:

[I]f an economist finds something—a business practice of one sort or another—that he does not understand, he looks for a monopoly explanation. And as in this field we are very ignorant, the number of ununderstandable practices tends to be rather large, and the reliance on a monopoly explanation, frequent.33R. H. Coase, Industrial Organization: A Proposal for Research, in 3 Policy Issues and Research Opportunities in Industrial Organization 59, 67 (Victor R. Fuchs ed., 1972).

“The fear of the new—and the assumption that ‘ununderstandable practices’ emerge from anticompetitive impulses and generate anticompetitive effects—permeates not only much antitrust scholarship, but antitrust doctrine as well.”34Geoffrey A. Manne, Error Costs in Digital Markets, in The Global Antitrust Institute Report on the Digital Economy 33, 83 (Joshua D. Wright & Douglas H. Ginsburg eds., 2020) (footnote omitted). While much antitrust doctrine is capable of accommodating novel conduct and innovative business practices, antitrust law—like all common law-based legal regimes—is inherently backward looking: it primarily evaluates novel arrangements with reference to existing or prior structures, contracts, and practices, often responding to any deviations with “inhospitality.”35See Alan J. Meese, Price Theory, Competition, and the Rule of Reason, 2003 U. Ill. L. Rev. 77, 124 (2003) (describing the “inhospitality tradition of antitrust” as “extreme hostility toward any contractual restraint on the freedom of individuals or firms to engage in head-to-head rivalry”). For a discussion of the “inhospitality tradition” and its problematic consequences generally, see Oliver E. Williamson, The Economic Institutions of Capitalism: Firms, Markets, Relational Contracting 370–73 (1985), and Meese, supra. As a result, there is a built-in “nostalgia bias” throughout much of antitrust that casts a deeply skeptical eye upon novel conduct.

“The upshot is that antitrust scholarship often emphasizes the risks that new market realities create for competition, while idealizing the extent to which previous market realities led to procompetitive outcomes.”36Manne, supra note 34, at 83. Against this backdrop, our Article argues that the current wave of antitrust pessimism is premised on particularly questionable assumptions about competition in data-intensive markets.

Part I lays out the theory and identifies the sources and likely magnitude of both the dystopia and nostalgia biases. Having examined various expressions of these two biases, the Article argues that their exponents ultimately seek to apply a precautionary principle within the field of antitrust enforcement, made most evident in critics’ calls for authorities to shift the burden of proof in a subset of proceedings.

Part II discusses how these arguments play out in the context of digital markets. It argues that economic forces may counteract many of the ills that allegedly plague these markets—and thus undermine the case for implementing a form of precautionary antitrust enforcement. For instance, because data is ultimately just information, it will prove exceedingly difficult for firms to hoard data for extended periods of time. Instead, a more plausible risk is that firms will underinvest in the generation of data. Likewise, the main challenge for digital economy firms is not so much to obtain data as to create valuable goods and hire talented engineers to draw insights from the data these goods generate. Recent empirical findings suggest, for example, that data economy firms do not benefit from data network effects or increasing returns to scale as much as is often claimed.

Part III reconsiders the United States v. Microsoft Corp.3784 F. Supp. 2d 9 (D.D.C. 1999), aff’d in part and rev’d in part, 253 F.3d 34 (D.C. Cir. 2001). antitrust litigation—the most important precursor to today’s “big tech” antitrust enforcement efforts—and shows how it undermines, rather than supports, pessimistic antitrust thinking. Many of the fears raised at the time simply did not materialize (for reasons unrelated to antitrust intervention). Rather, pessimists missed the emergence of key developments that greatly undermined Microsoft’s market position, and greatly overestimated Microsoft’s ability to thwart its competitors. Those circumstances—particularly those revolving around the alleged “applications barrier to entry”—have uncanny analogues in the data markets of today. We thus explain how and why the Microsoft case should serve as a cautionary tale for current enforcers confronted with dystopian antitrust theories.

In short, the Article exposes a form of bias within the antitrust community. Unlike entrepreneurs, antitrust scholars and policy makers often lack the imagination to see how competition will emerge and enable entrants to overthrow seemingly untouchable incumbents. New technologies are particularly prone to this bias because there is a shorter history of competition to draw upon, and thus less tangible evidence of incumbent attrition, in these specific markets. The digital future is almost certainly far less bleak than many antitrust critics have suggested, and than the current swath of interventions aimed at reining in “big tech” presumes. This does not mean that antitrust authorities should throw caution to the wind. Instead, policy makers should strive to maintain existing enforcement thresholds, which exclude interventions based solely on highly speculative theories of harm.

I.     The Precautionary Principle and Antitrust

Much of the momentum to reform antitrust enforcement for the twenty-first century is firmly rooted in the precautionary principle. The precautionary principle can be thought of as a form of cost-benefit analysis that gives significantly more weight to potential harms than potential benefits. In its most extreme form, the precautionary principle might even give an infinite weight to potential harms, implying that no action should be taken unless it is certain that it will cause no harm at all, however large its countervailing benefits may be.38See Julian Morris, Rethinking Risk and the Precautionary Principle 1 (2000) (“Whilst there are many definitions of the precautionary principle (hereinafter, PP), it is worth distinguishing two broad classes: first, the Strong PP, which says basically, take no action unless you are certain that it will do no harm; and second, the Weak PP, which says that lack of full certainty is not a justification for preventing an action that might be harmful.”).
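
To make this weighting explicit, consider one illustrative formalization (the notation is ours, offered purely for exposition and not drawn from the sources cited): a decision-maker chooses an action $a$ to maximize

$$V(a) \;=\; \mathbb{E}[B(a)] \;-\; \lambda\,\mathbb{E}[H(a)], \qquad \lambda \geq 1,$$

where $B(a)$ and $H(a)$ denote the action’s potential benefits and harms. Ordinary cost-benefit analysis corresponds to $\lambda = 1$, stronger versions of the precautionary principle to ever-larger values of $\lambda$, and the extreme version described above to the limit $\lambda \to \infty$, under which any action carrying a positive probability of harm is rejected no matter how large its expected benefits.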

Precautionary reasoning is often, but not always, reserved for situations that involve an element of uncertainty rather than mere risk.39See Nassim Nicholas Taleb, Rupert Read, Raphael Douady, Joseph Norman & Yaneer Bar-Yam, The Precautionary Principle (with Application to the Genetic Modification of Organisms) 4 (NYU Sch. of Eng’g Working Paper Series, 2014) (“In some classes of complex systems, controlled experiments cannot evaluate all of the possible systemic consequences under real-world conditions. In these circumstances, efforts to provide assurance of the ‘lack of harm’ are insufficiently reliable. This runs counter to both the use of empirical approaches (including controlled experiments) to evaluate risks, and to the expectation that uncertainty can be eliminated by any means.”). As economist Frank Knight observed, risk describes situations where the outcome of an action is unknown, but where the probabilities and harms associated with that action are known.40Frank H. Knight, Risk, Uncertainty, and Profit 19–20 (1921) (“Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk, from which it has never been properly separated. The term ‘risk,’ as loosely used in everyday speech and in economic discussion, really covers two things which, functionally at least, in their causal relations to the phenomena of economic organization, are categorically different . . . . The essential fact is that ‘risk’ means in some cases a quantity susceptible of measurement, while at other times it is something distinctly not of this character; and there are far-reaching and crucial differences in the bearings of the phenomenon depending on which of the two is really present and operating . . . . It will appear that a measurable uncertainty, or ‘risk’ proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all. We shall accordingly restrict the term ‘uncertainty’ to cases of the non-quantitative type.”). Risk thus lends itself to precise expected gain computations. Conversely, uncertainty is present when the payoffs or probabilities of an action are unknown.41Id. Precise expected gain calculations are thus impossible. The precautionary principle has often been cited as a method of dealing with uncertainty.42See, e.g., Cass R. Sunstein, The Paralyzing Principle, 25 Regulation, Winter 2002, at 32, 36 (“In a situation of uncertainty, when existing knowledge does not permit regulators to assign probabilities to outcomes, some argue that people should follow the ‘Maximin’ Principle: Choose the policy with the best worst-case outcome.”). Proponents of mild versions of the precautionary principle thus tend to circumscribe its application to situations of uncertainty, whereas proponents of stronger versions of the principle also urge authorities to apply it to situations of risk.43See, e.g., Taleb et al., supra note 39, at 1 (“Traditional decision-making strategies focus on the case where harm is localized and risk is easy to calculate from past data. Under these circumstances, cost-benefit analyses and mitigation techniques are appropriate. The potential harm from miscalculation is bounded. On the other hand, the possibility of irreversible and widespread damage raises different questions about the nature of decision making and what risks can be reasonably taken. This is the domain of the [precautionary principle].”).

The distinction between risk and uncertainty is critical. It is widely accepted that embracing localized risk (properly defined) is not merely tolerable, but eminently desirable.44See id. Driving a car may increase a person’s risk of mortality compared to sitting at home, but the benefits probably outweigh the costs. More importantly, when only risk is involved, good and bad outcomes can and should be weighed against each other, accounting for both sides of the coin. There is thus no obvious reason to reject cost-benefit analysis as a basis for risky decision-making.
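
A stylized numerical example may help fix ideas (the figures are ours, chosen purely for illustration). Suppose a car trip delivers a certain benefit valued at 100 and carries a 1% chance of an accident costing 1,000. Because both the payoffs and their probabilities are known, this is a situation of risk, and the expected net gain is readily computed:

$$\mathbb{E}[\text{net gain}] \;=\; 100 \;-\; (0.01 \times 1{,}000) \;=\; 90 \;>\; 0,$$

so cost-benefit analysis recommends taking the trip. By contrast, the maximin rule described by Sunstein (see note 42 above) compares only worst-case outcomes—a net loss of 900 from driving against 0 from staying home—and thus forbids the trip, however small the probability of harm.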

Real questions—and ensuing policy debates—arise in situations of uncertainty.45Although the line between risk and uncertainty is blurry. Indeed, there is no such thing as absolute scientific certainty, because scientific statements must, by definition, be falsifiable. For a discussion of science and falsification, see Karl Popper, The Logic of Scientific Discovery 312–16 (1959). See also Hilary Putnam, The ‘Corroboration’ of Theories, in The Philosophy of Science 126–27 (Richard Boyd, Philip Gasper & J.D. Trout eds., 1991) (arguing that theories should not automatically be rejected because they have not been falsified in particular instances). Uncertainty gives rise to two interrelated policy questions. First, how should decisionmakers account for hypothetical costs and benefits (although the latter side of the equation is often ignored by critics)? Second, who should bear the burden of proving which outcomes are plausible or most likely?

Undergirding these two questions is a fear that uncertainty may conceal harmful, fat-tailed outcomes (i.e., low probability/high impact events, sometimes referred to as “Black Swans”).46See Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable xviii (2007). For instance, the seemingly never-ending policy debates surrounding the use of genetically modified organisms for human consumption have this characteristic. On one side of the debate, opponents cite significant possible risks, while proponents point towards a veritable cornucopia of potential upsides.47See, e.g., Taleb et al., supra note 39; Mark Spitznagel & Nassim Nicholas Taleb, Another ‘Too Big to Fail’ System in G.M.O.s, N.Y. Times (July 13, 2015), https://perma.cc/AJ88-NWJP. But see Ronald Bailey, GMO Alarmist Nassim Taleb Backs Out of Debate. I Refute Him Anyway, Reason (Feb. 19, 2016, 1:30 PM), https://perma.cc/39VB-FFR4. Whatever one’s views on this matter, the key challenge for policy makers clearly lies in weighing harms (and benefits) that are often hypothetical against tangible upsides (and drawbacks).

In order to address these questions, policy makers and scholars have put forward various iterations of the precautionary principle.48See, e.g., Cass R. Sunstein, Beyond the Precautionary Principle, 151 U. Pa. L. Rev. 1003, 1014 (2003). At one end of the spectrum, proponents argue that the precautionary principle should preclude any activity that is not proven to be safe (though, taken literally, this command is impossible), and that parties undertaking an activity should have the burden of proving its safety.49Id. (describing the “Prohibitory Precautionary Principle”). At the other end, proponents argue that the absence of scientific certainty is not sufficient to preclude regulation and that the burden of proving potential risks should fall upon those who seek to impose precautionary measures.50Id. (describing the “Nonpreclusion Precautionary Principle”).

The goal of this Article is certainly not to provide an exhaustive survey of these various embodiments of the precautionary principle, much less to argue in favor of any one of them (or none at all). Instead, we wish to point out that precautionary principle-type reasoning has increasingly permeated antitrust policy discourse.51See, e.g., Aurelien Portuese, The Rise of Precautionary Antitrust: An Illustration with the EU Google Android Decision, Competition Pol’y Int’l (Nov. 17, 2019), https://perma.cc/TE97-4NJ5 (“To a non-negligible extent, the Google Android decision illustrates the coming to the fore of a form of precautionary antitrust whereby, even without proven consumer harm, competition authorities are not barred from ex ante intervention to protect what can be seen as irreversible damage—an ‘effective competitive structure’—enabling competitors to emerge and compete.” (footnote omitted)).

This is best evidenced by two phenomena, which we refer to as “Antitrust Dystopia” and “Antitrust Nostalgia.” Antitrust Dystopia is the pessimistic tendency for competition scholars and enforcers to assert that novel business conduct will cause technological advances to have unprecedented, anticompetitive consequences. This is almost always grounded in the belief that “this time is different”—that, despite the benign or positive consequences of previous, similar technological advances, this time those advances will have dire, adverse consequences absent enforcement to stave off abuse.

Complementary to Antitrust Dystopia is Antitrust Nostalgia: the biased assumption—often built into antitrust doctrine itself—that change is bad. Antitrust Nostalgia holds that because a business practice has seemingly benefited competition before, changing it will harm competition going forward. Thus, antitrust enforcement is often skeptical of, and triggered by, various deviations from status quo conduct and relationships (i.e., “non-standard” business arrangements) when, to a first approximation (and at the very least in digital marketplaces), change is the hallmark of competition itself.52See Thomas M. Jorde & David J. Teece, Antitrust Policy and Innovation: Taking Account of Performance Competition and Competitor Cooperation, 147 J. Inst’l & Theoretical Econ. 118, 122 (1991) (“At minimum, we would propose that when the promotion of static consumer welfare and innovation are in conflict, the courts and administrative agencies should favor innovation. Adopting dynamic competition and innovation as the goal of antitrust would, in our view, serve consumer welfare over time more assuredly than would the current focus on short-run consumer welfare.”).

The following Sections illustrate these two tendencies. Section A discusses Antitrust Dystopia. Section B focuses on Antitrust Nostalgia. Both Sections draw from evidence within scholarship that calls for heightened antitrust enforcement in the digital sphere, particularly against firms that have access to large datasets of personal information concerning their users. We show that dystopia and nostalgia biases cause proponents to resort to precautionary reasoning; however, there is currently no evidence to warrant this precautionary approach. Indeed, while there is undoubtedly some level of uncertainty at play in digital markets, there is no evidence to suggest that the uncertainty could give rise to the type of fat-tailed situations where precautionary reasoning is arguably appropriate.

A.     Dystopia

Recent antitrust scholarship and commentary have routinely voiced dystopian concerns, which coalesce around several recurring tropes. One such trope is the oft-repeated mantra that the digital future will be bleak if governments do not act now. Similarly, it is often said that the emergence of large digital platforms poses threats to democracy that must, at least in part, be addressed through government intervention. Finally, critics often claim that the consolidation of digital markets will significantly slow innovation. Undergirding all of these is the claimed condition of fundamental uncertainty: reasoning from past experience is unavailing because this time will be different.

1.     This Time Is Different

As both the dystopia and nostalgia tendencies show, antitrust scholarship relating to digital competition often seems fearful of change. Scholars readily assume that new business realities complicate the task of antitrust authorities. Novel market features are often seen as a threat rather than an opportunity for authorities, consumers, and innovative rivals. The report on digital competition ordered by the European Commission neatly illustrates this view:

Despite the many benefits that digital innovation has brought, much of the enthusiasm and idealism that were so characteristic of the early years of the Internet has given way to concerns and scepticism.53Jacques Crémer, Yves-Alexandre de Montjoye & Heike Schweitzer, Competition Policy for the Digital Era 12 (2019) (emphasis added).

. . . .

Digitisation is profoundly changing our economies, societies, access to information, and ways of life. It has brought welcome innovation, new products and new services, and has become an integral part of our daily lives. However, there is increasing anxiety about its ubiquity, political and societal impact and, more relevant to our focus, about the concentration of power by a few very large digital firms.54Id. at 125 (emphasis added).

From a more technical and economic standpoint, commentators routinely conclude that digital markets display several features that arguably increase the likelihood of anticompetitive outcomes. They often ignore or minimize how these same features also benefit consumers, however, instead asserting that this time negative effects will predominate.

In a report prepared for the European Commission by a panel of experts tasked with analyzing the competitive functioning of the digital economy, several prominent scholars concluded:

The cost of production of digital services is much less than proportional to the number of customers served. While this aspect is not novel as such (bigger factories or retailers are often more efficient than smaller ones), the digital world pushes it to the extreme and this can result in a significant competitive advantage for incumbents.55Id. at 2 (emphasis added).

The economic phenomenon just described is commonly referred to as “increasing returns to scale.”56See, e.g., Hal R. Varian, Microeconomic Analysis 16 (3d ed. 1992) (“[W]hen output increases by more than the scale of the inputs, we say the technology exhibits increasing returns to scale.”). The European Commission’s report on digital competition further noted the following about these increasing returns:

[T]he specificities of many digital markets have arguably changed the balance of error cost and implementation costs, such that some modifications of the established tests, including the allocation of the burden of proof and the definition of the standard of proof, may be called for. In particular, in the context of highly concentrated markets characterised by strong network effects and subsequently high barriers to entry (a setting where impediments to entry which will not be easily corrected by markets), one may want to err on the side of disallowing types of conduct that are potentially anticompetitive, and to impose the burden of proof for showing pro-competitiveness on the incumbent. This may be even more true where platforms display a tendency to expand their dominant positions in ever more neighbouring markets, growing into digital ecosystems which become ever more difficult for users to leave. In such cases, there may, for example, be a presumption in favour of a duty to ensure interoperability.57Crémer et al., supra note 53, at 51 (emphasis added).

And Margrethe Vestager, the current head of DG Competition (the European Union’s main antitrust authority), made similar claims in a recent speech:

It’s not just that digitisation has made economies of scale more important than before. It’s also that the huge amount of data that some platforms have, and the huge networks behind them, can give them an edge that smaller rivals can’t match.58Margrethe Vestager, European Comm’n, Speech on Competition and the Digital Economy at OECD/G7 Conference in Paris (June 3, 2019) (emphasis added).

But while these increasing returns can cause markets to become more concentrated, they also imply that it is often more efficient to have a single firm serve the entire market.59See, e.g., W. Brian Arthur, Increasing Returns and the New World of Business, 74 Harv. Bus. Rev. 100, 106 (1996) (“In Marshall’s world, antitrust regulation is well understood. Allowing a single player to control, say, more than 35% of the silver market is tantamount to allowing monopoly pricing, and the government rightly steps in. In the increasing-returns world, things are more complicated. There are arguments in favor of allowing a product or company in the web of technology to dominate a market, as well as arguments against.”). For instance, to a first approximation, network effects, which are one potential source of increasing returns, imply that it is more valuable—not just to the platform, but to the users themselves—for all users to be present on the same network or platform.60Id. In other words, fragmentation—de-concentration—may be more of a problem than monopoly in markets that exhibit network effects and increasing returns to scale.61See Volker Nocke, Martin Peitz & Konrad Stahl, Platform Ownership, 5 J. Eur. Econ. Ass’n 1130, 1133 (2007). Given this, it is far from clear that antitrust authorities should try to prevent consolidation in markets that exhibit such characteristics, nor is it self-evident that these markets somehow produce less consumer surplus than markets that do not exhibit such increasing returns.62Whether or not they feature increasing returns to scale, recent research suggests that digital markets produce significant value for consumers. See, e.g., Erik Brynjolfsson, Avinash Collis & Felix Eggers, Using Massive Online Choice Experiments to Measure Changes in Well-Being, 116 Proc. Nat’l Acad. Scis. 7250, 7250 (2019) (“Our overall analyses reveal that digital goods have created large gains in well-being that are not reflected in conventional measures of GDP and productivity.”). Unfortunately, however, would-be antitrust reformers routinely overlook these important counterarguments or assume that they are meritless.
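
The efficiency point in the preceding paragraph can be captured in a standard textbook sketch (the notation is ours, offered purely for illustration). Suppose production requires a fixed cost $F$ and a constant marginal cost $c$, so that serving output $q$ costs $C(q) = F + cq$ and average cost $AC(q) = F/q + c$ falls as output grows—one simple source of increasing returns to scale. Splitting industry output $Q$ between two firms then needlessly duplicates the fixed cost:

$$C(Q) \;=\; F + cQ \;<\; 2F + cQ \;=\; C(q_1) + C(q_2), \qquad q_1 + q_2 = Q, \quad q_1, q_2 > 0.$$

Network effects push in the same direction on the demand side: if each user’s value $u(n)$ rises with the number of fellow users $n$, aggregate user value is greatest when all users share a single network or platform.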

The idea that “this time is different” has also led scholars to argue that some industries deserve special protection against competitive disruption. A report published by the University of Chicago’s Stigler Center, for instance, argues that:

Digital Platforms are devastating the newspaper industry: Newspapers are a collateral damage of the digital platform revolution. Craigslist destroyed the lucrative newspaper classified ads, and Google and Facebook dramatically reduced the revenues newspapers could get from traditional advertising. Local newspapers have been hit particularly hard: At least 1800 newspapers closed in the United States since 2004, leaving more than 50% of US counties without a daily local paper. Every technological revolution destroys pre-existing business models. Creative destruction is the essence of a vibrant economy. In this respect, there is nothing new and nothing worrisome about this process. Yet, a vibrant, free, and plural media industry is necessary for a true democracy. The newspapers of yesteryear played an essential function in a democratic system.63Luigi Zingales & Filippo Maria Lancieri, Policy Brief, in George J. Stigler Center for the Study of the Economy and the State, Stigler Committee on Digital Platforms: Final Report 6, 10 (2019) [hereinafter Stigler Center Report, Policy Brief] (emphasis added).

In this case, it is not the market dynamics that are presumed to play out differently, but the consequences for this market or industry that will be uniquely dire.

The upshot is that antitrust scholarship often emphasizes the unique risks that new market realities create for competition, while idealizing the extent to which previous market realities led to procompetitive outcomes. The critics who mobilize these sentiments generally call for the introduction of precautionary measures to keep perceived anticompetitive harms at bay. As argued below, however, there is little reason to believe that such measures are necessary, or that they would even be beneficial.

2.     The Future Is Bleak

Building on the “this time is different” sentiment is the first big dystopian trope: that our digital future will be miserable if governments do not act now. This is well illustrated by a quote from Tim Wu’s The Curse of Bigness, in which he discusses the rise of “big tech”:

We have managed to recreate both the economics and politics of a century ago—the first Gilded Age—and remain in grave danger of repeating more of the signature errors of the twentieth century. As that era has taught us, extreme economic concentration yields gross inequality and material suffering, feeding an appetite for nationalistic and extremist leadership. Yet, as if blind to the greatest lessons of the last century, we are going down the same path. If we learned one thing from the Gilded Age, it should have been this: The road to fascism and dictatorship is paved with failures of economic policy to serve the needs of the general public.64Wu, supra note 17, at 14 (emphasis added).

The Stigler Center Report offers another example. The report cautions governments against the severe economic and social consequences that digital industries will purportedly give rise to, offering its suggested brand of state intervention as a necessary corrective: “[T]his report is offered in the spirit of ensuring a future of continued technological and economic progress and social well-being as we move further forward into the Digital Age.”65Market Structure and Antitrust Subcommittee, Stigler Center for the Study of the Economy and the State, Report, in George J. Stigler Center for the Study of the Economy and the State, Stigler Committee on Digital Platforms: Final Report 23, 28 (2019) [hereinafter Stigler Center Report, Antitrust Subcommittee].

Along similar lines, some scholars have suggested the digital world is not living up to its full potential. This is perhaps best illustrated by a passage written by economist Jason Furman and his co-authors, in a report commissioned by the UK Treasury:

Digital technology is providing substantial benefits to consumers and the economy. But digital markets are still not living up to their potential. A set of powerful economic factors have acted both to limit competition in the market at any point in time and also to limit sequential competition for the market in which new companies would overthrow the currently dominant ones. This means that consumers are missing out on the full benefits and innovations competition can bring.66Digital Competition Expert Panel, Unlocking Digital Competition: Report of the Digital Competition Expert Panel 17 (2019) (emphasis added). Digital Competition Expert Panel Chair Jason Furman also chaired the Council of Economic Advisers under the Obama Administration.

The view that governments must act now was also echoed by the Australian Competition and Consumer Commission:

Important for Governments to act now, responding to current problems and anticipating future issues

We are at a critical time in the development of digital platforms and their impact on society. Digital platforms have fundamentally changed the way we interact with news, with each other, and with governments and business. It is also clear that the markets in which digital platforms and news media businesses operate will continue to evolve. It is very important that governments recognise the role digital platforms perform in our individual and collective lives, be responsive to emerging issues, and be proactive in anticipating challenges and problems.67Australian Competition & Consumer Commission, Digital Platforms Inquiry: Final Report 27 (2019).

In short, there is a long strand of antitrust scholarship that concludes governments must act immediately in the digital sphere or face the prospect of severe economic and social consequences. But while it is manifestly true that the future brings new risks and uncertainties and that the present is far from perfect, these critics often ignore the flipside of the same coin: things could also be worse today and, more importantly, there is no guarantee that the contemplated government interventions will lead to welfare improvements going forward.68In doing so, these commentators thus fall prey to the “Nirvana fallacy.” See Harold Demsetz, Information and Efficiency: Another Viewpoint, 12 J.L. & Econ. 1 (1969) (“[T]hose who adopt the nirvana viewpoint seek to discover discrepancies between the ideal and the real and if discrepancies are found, they deduce that the real is inefficient.”).

3.     Our Democracy Is at Stake

Tightly linked to these first two tropes is the oft-repeated claim that, if left unchecked, the rise of large digital platforms may threaten the very fabric of western democracies. For instance, Columbia Professor Tim Wu—now a member of President Biden’s National Economic Council—has argued that big tech firms currently exert too much control over our daily lives:

Big tech is ubiquitous, seems to know too much about us, and seems to have too much power over what we see, hear, do, and even feel. It has reignited debates over who really rules, when the decisions of just a few people have great influence over everyone. Their power feels like “a kingly prerogative, inconsistent with our form of government” in the words of Senator John Sherman, for whom the Sherman Act is named.69Wu, supra note 17, at 21 (emphasis added).

The Stigler Center Report reached a similar conclusion:

This concentration of economic, media, data, and political power is potentially dangerous for our democracies . . . . To make matters worse, as more of our lives move online, the more commanding these companies will become. We are currently placing the ability to shape our democracies into the hands of a couple of unaccountable individuals. It is clear that something has to be done.70Stigler Center Report, Policy Brief, supra note 63, at 11 (emphasis added).

The Stigler Center Report also claims that Google and Facebook are in a unique position to thwart democratic forces:

Digital platforms are uniquely powerful political actors: Google and Facebook may be the most powerful political agents of our time. They congregate five key characteristics that normally enable the capture of politicians and that hinder effective democratic oversight[.]

. . . .

In sum, Google and Facebook have the power of ExxonMobil, the New York Times, JPMorgan Chase, the NRA, and Boeing combined. Furthermore, all this combined power rests in the hands of just three people.71Id. at 9–10 (emphasis added).

Finally, Federal Trade Commission (“FTC”) Chair Lina Khan famously argued that the way Amazon operates its retail platform excludes rivals, thus increasing economic concentration and threatening media freedom:

The political risks associated with Amazon’s market dominance also implicate some of the major concerns that animate antitrust laws. For instance, the risk that Amazon may retaliate against books that it disfavors—either to impose greater pressure on publishers or for other political reasons—raises concerns about media freedom.72Lina M. Khan, Amazon’s Antitrust Paradox, 126 Yale L.J. 710, 767 (2017).

Even specific elements of these platforms’ behaviors are sometimes said to imperil our polity: “Digital platform self-preferencing threatens the American Dream. When digital platforms pick the winners and losers of our economy, we lose the American promise of upward mobility based on merit.”73Competition in Digital Technology Markets: Examining Self-Preferencing by Digital Platforms, Hearing Before the S. Judiciary Comm. Subcomm. on Antitrust, Competition Pol’y and Consumer Rts., 116th Cong. (Mar. 10, 2020) (testimony of Sally Hubbard, Dir. of Enforcement Strategy, Open Mkts Inst.) (emphasis added).

To summarize, critics routinely conclude that the advent of “big tech” today jeopardizes individual freedoms because the high levels of economic concentration that these markets exhibit allegedly translate inexorably into greater political power.74See, e.g., Luigi Zingales, Towards a Political Theory of the Firm, 31 J. Econ. Persps. 113, 124 (2017) (“Thus, in a fragmented and competitive economy, firms find it difficult to exert this power. In contrast, firms that achieve some market power can lobby (in the broader sense of the term) in a way that ordinary market participants cannot. Their market power gives them a comparative advantage at the influence game: the greater their market power, the more effective they are at obtaining what they want from the political system. Moreover, the more effective they are at obtaining what they want from the political system, the greater their market power will be, because they can block competitors and entrench themselves.”). Contra Kevin B. Grier, Michael C. Munger & Brian E. Roberts, The Industrial Organization of Corporate Political Participation, 53 So. Econ. J. 727, 737 (1991) (“Our results indicate that both sides are right, over some range of concentration. The relation between political activity and concentration is a polynomial of degree 2, rising and then falling, achieving a peak at a four-firm concentration ratio slightly below 0.5.”); Geoffrey Manne & Alec Stapp, Does Political Power Follow Economic Power?, Truth on the Market (Dec. 30, 2019), https://perma.cc/74JY-KPHN (“If we look at the lobbying expenditures of the top 50 companies in the US by market capitalization, we see an extremely weak (at best) relationship between firm size and political power (as proxied by lobbying expenditures)[.]”). What these critics miss (aside from the tenuousness of the asserted causal relationship between economic concentration and political power) is that the type of heavy-handed intervention against big tech firms that many of them prescribe may pose equal, if not greater, threats to individual freedoms.75For instance, regulating speech on social media may ultimately lead to dangerous government censorship. See, e.g., Niam Yaraghi, Regulating Free Speech on Social Media Is Dangerous And Futile, Brookings Inst. (Sept. 21, 2018), https://perma.cc/3DWN-FFBC; see also Geoffrey A. Manne, Dirk Auer & Samuel Bowman, Why ASEAN Competition Laws Should Not Emulate European Competition Policy, Sing. Econ. Rev. (forthcoming 2021) (“Endorsing the European approach to antitrust, in a naïve attempt to bring high-profile cases against large internet platforms, would prioritize political expediency over the rule of law. It would open the floodgates of antitrust litigation and facilitate deleterious tendencies, such as non-economic decision-making, rent-seeking, regulatory capture, and politically motivated enforcement.”). In other words, even if protecting individuals from the influence of powerful entities were a valid goal of antitrust policy, it is not clear that increased government intervention would achieve this end rather than the opposite.

4.     Innovation Will Dwindle

Another recurring theme in Antitrust Dystopia is the idea that big tech firms will cause innovation to slow down if governments do not intervene. The intuition is that, because of their powerful market positions, big tech firms can either capture the profits of their rivals (by using their bottleneck positions to squeeze them) or snuff out budding competitors through so-called killer acquisitions. This, in turn, is said to reduce both rivals’ and incumbents’ incentives to innovate: the former because their expected profits from innovation are reduced, and the latter because they no longer need to innovate in order to best their competition.

These fears are perhaps best encapsulated by economist Hal Singer’s comments during the 2018 FTC hearings, as well as the paper on which these claims are based76See Kevin Caves & Hal Singer, When the Econometrician Shrugged: Identifying and Plugging Gaps in the Consumer-Welfare Standard, 26 Geo. Mason L. Rev. 395, 416 (2018) (“[I]nnovation harms could be addressed outside antitrust pursuant to a nondiscrimination standard . . . . Like a rule-of-reason case under antitrust, the complainant would bear the burden to show that the differential treatment violated the nondiscrimination standard, assuming it could meet certain evidentiary criteria.”).:

Dominant tech platforms can also exploit the vast amounts of user data made available only to them by monitoring what their users do both on and off their platforms and then appropriating the best performing ideas, functionality, and nonpatentable products pioneered by independent providers. If these practices are left unchecked, the resulting competitive landscape could become so inhospitable that independents might throw in the towel, leading to less innovation at the platform’s edges.77Competition and Consumer Protection in the 21st Century, FTC Hearing #3 Day 3: Multi-Sided Platforms, Labor Markets, and Potential Competition Before the FTC (2018) (statement of Hal Singer, Managing Dir., Econ One) (emphasis added).

On a more dramatic note, in a piece promoting the Stigler Center Report, Professor Luigi Zingales referred to big tech firms as “robber barons” that levy a tax on innovation:

[B]y positioning themselves as a mandatory bottleneck between new entrants and customers, digital platforms play the role of the traditional robber barons, who exploited their position of gatekeepers to extract a fee from all travelers. Not only does this fee represent a tax on innovation, it also reduces the value new entrants can fetch alone and thus the price at which they would be acquired.78Luigi Zingales, “The Digital Robber Barons Kill Innovation”: The Stigler Center’s Report Enters the Senate, ProMarket (Sept. 25, 2019), https://perma.cc/KHF4-8KVU (emphasis added).

Merger enforcement in particular has become a key area of focus for these critics, who accuse large firms of buying startups to kill off future competition.79See, e.g., Colleen Cunningham, Florian Ederer & Song Ma, Killer Acquisitions, 129 J. Pol. Econ. 649, 649 (2021) (“This paper argues incumbent firms may acquire innovative targets solely to discontinue the target’s innovation projects and preempt future competition.”). With this in mind, some scholars have called on authorities to implement far-reaching measures that would allegedly protect innovation from the depredations of “big tech.” For instance, a report commissioned by the European Commission concluded that digital mergers should be much more heavily scrutinized:

Where network effects and strong economies of scale and scope lead to a growing degree of concentration, competition law must be careful to ensure that strong and entrenched positions remain exposed to competitive challenges. The test proposed here would imply a heightened degree of control of acquisitions of small start-ups by dominant platforms and/or ecosystems, as they would be analysed as a possible defensive strategy against partial user defection from the ecosystem as a whole. Where an acquisition plausibly is part of such a strategy, the burden of proof is on the notifying parties to show that the adverse effects on competition are offset by merger-specific efficiencies.80Crémer et al., supra note 53, at 124 (emphasis added).

Along similar lines, the Stigler Center Report cautions that so-called “kill zone” mergers may entrench dominant incumbents and harm innovation:

While investment in innovation will continue, the type of innovation that will be funded will be broadly determined by the incumbent and its strategies. Disruptive innovation in markets that are characterized by high concentration levels and network effects is likely to be reduced compared to a competitive market. One of the few sources of entry in digital platforms comes from rival platforms that enter each other’s markets, as these large firms are more able to overcome entry barriers of all kinds.81Stigler Center Report, Antitrust Subcommittee, supra note 65, at 76 (emphasis added).

To be clear, this Article’s claim is not that digital platforms can never slow down innovation. The effect that market concentration and firm size have on innovation is ambiguous in theory and a hotly contested empirical question.82For an overview of this question, see Dirk Auer, Structuralist Innovation: A Shaky Legal Presumption in Need of an Overhaul, CPI Antitrust Chron., Dec. 2018, https://perma.cc/8VY8-69TJ. See also Geoffrey A. Manne & Joshua D. Wright, Introduction, in Competition Policy and Patent Law Under Uncertainty: Regulating Innovation 1 (Geoffrey A. Manne & Joshua D. Wright eds., 2011) (“[T]he ratio of what is known to unknown with respect to the relationship between innovation, competition, and regulatory policy is staggeringly low. In addition to this uncertainty concerning the relationships between regulation, innovation, and economic growth, the process of innovation itself is not well understood.”); Richard J. Gilbert, Competition and Innovation, UC Berkeley Competition Pol’y Ctr. 8 (Jan. 17, 2007), https://perma.cc/HDX6-Q7JQ (“Economic theory supports neither the view that market power generally threatens innovation by lowering the return to innovative efforts nor the Schumpeterian view that concentrated markets generally promote innovation by providing a stable platform to fund R&D and by making it easier for the firm to capture its benefits.”). The above critics do not merely argue that large digital platforms sometimes slow down innovation, however. Instead, they posit that this effect is so systematic and severe that it is necessary to impose precautionary measures—for instance, shifting the burden of proof in merger cases—whatever the cost.

To make matters worse, the evidence these critics cite to support their claims is highly conjectural. Economists Kevin Caves and Hal Singer, for example, put forward only weak anecdotal evidence to support their call for tougher antitrust enforcement against digital platforms, as they themselves acknowledge: “The empirical evidence that edge innovation has been diminished by dominant tech platforms is partially anecdotal and not dispositive . . . .”83Caves & Singer, supra note 76, at 402.

Likewise, Professors Giulio Federico, Fiona Scott Morton, and Carl Shapiro dismiss a large strand of economic literature concerning the link between competition and innovation by claiming that its authors did not specifically address the effect of mergers.84Giulio Federico, Fiona Scott Morton & Carl Shapiro, Antitrust and Innovation: Welcoming and Protecting Disruption, 20 Innovation Pol’y and the Econ. 125, 136 (2020) (“[T]he models used in this literature generally do not analyze the effects of mergers, but instead look at exogenous variations in the intensity of product market competition. The authors of the cited papers often do not assert that their analysis applies to the antitrust analysis of mergers.”). While this may be a useful caveat, it does nothing to support their own view that increased product market competition systematically boosts innovation. Indeed, the approach taken by Federico and his coauthors (like most of the relevant economic literature) is myopically committed to a model that ties innovation to market structure, and (unlike the literature it criticizes) even assumes that the relationship is unidirectional: changes in market structure affect incentives to innovate, but not the other way around. But the reality is far more complicated:

To summarize, the basic framework employed in discussions about innovation, technology policy, and competition policy is often remarkably naïve, highly incomplete, and burdened by a myopic focus on market structure as the key determinant of innovation. Indeed, it is common to find a debate about innovation policy among economists collapsing into a rather narrow discussion of the relative virtues of competition and monopoly, as if they were the main determinants of innovation. Clearly, much more is at work.85J. Gregory Sidak & David J. Teece, Dynamic Competition in Antitrust Law, 5 J. Competition L. & Econ. 581, 589 (2009).

Again, our argument is not that these authors’ claims about competition and innovation are necessarily wrong—only that they put forward little to no theoretical or empirical evidence to support their dire claims of curtailed innovation absent far-reaching precautionary measures.

B.     Nostalgia

The antitrust literature surrounding digital competition is also beset by a strong and often-problematic sense of nostalgia. Scholars (and certain aspects of antitrust doctrine) are skeptical or fearful of change, even though change is a hallmark of digital industries where disruption has been the norm for decades.86See, e.g., Clayton M. Christensen, The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail xvi (1997). This nostalgia can take several forms.

For a start, although it has no direct bearing on the development or interpretation of the law itself, nostalgic antitrust scholars have devoted extraordinary attention to dissecting and reinterpreting the historical origins of early US antitrust legislation (largely the Sherman Act). This sort of “meta-nostalgia” is consistent with a resurgent effort on the part of some activists and scholars to reinvigorate the populist sentiments of the late nineteenth century and the progressive sentiments of the early twentieth century more broadly—including what some hold to be the apotheosis of those movements: the 1890 Sherman Act and the 1914 Clayton and FTC Acts. Although born of this broader political movement, such writing is more pointedly employed in an attempt to discredit subsequent judicial interpretations of antitrust law and the general common-law-like approach to antitrust jurisprudence.

Bearing more directly on the antitrust enterprise, nostalgic scholars tend to assume that markets were less problematic in the past, and that new business realities fundamentally alter the optimal balance of antitrust enforcement. This is nothing new, for as early as 1942 Joseph Schumpeter spoke dismissively of scholars advocating a view that “involves the creation of an entirely imaginary golden age of perfect competition that at some time somehow metamorphosed itself into the monopolistic age[.]”87Joseph A. Schumpeter, Capitalism, Socialism & Democracy 81 (1942).

But the problem is exacerbated in digital markets where change often takes the form of technological or business process innovation, both of which policy should generally encourage. “The critical point here is that innovation is closely related to antitrust error. The argument is simple. Because innovation involves new products and business practices, courts and economists’ initial understanding of these practices will skew initial likelihoods that innovation is anticompetitive and the proper subject of antitrust scrutiny.”88See Geoffrey A. Manne & Joshua D. Wright, Innovation and The Limits of Antitrust, 6 J. Competition L. & Econ. 153, 167 (2010). We simply do not know enough—especially about the relationship between firm structure, market structure, and innovation—to view change with the skepticism that some scholars do.89See, e.g., Caves & Singer, supra note 76.

Nevertheless, a host of scholars, regulators, politicians, and, of course, competitors tend to confront novel business structures and practices as if they undermine presumptively efficient markets, thus counseling enforcement to thwart these practices—precisely the tendency to condemn “ununderstandable practices” decried by Ronald Coase.90Coase, supra note 33, at 67.

Many further argue in favor of more aggressive interventions in digital markets in order to “restore” markets to the presumptively preferable state that existed before allegedly anticompetitive conduct occurred. In the past, antitrust enforcers mostly relied on cease-and-desist orders and deterrent sanctions (such as fines, treble damages for victims, and criminal prosecution) to police anticompetitive conduct.91See, e.g., A. Douglas Melamed, Afterword: The Purposes of Antitrust Remedies, 76 Antitrust L.J. 359, 364 (2009) (drawing a typology of antitrust remedies and suggesting that restorative remedies lie at the outer bounds of antitrust enforcement, or at the very least that authorities should impose such remedies with great caution). Such sanctions are mostly forward-looking (with the partial exception of damages awarded to victims of anticompetitive conduct). Authorities intended only to prevent anticompetitive harm from occurring in the future, whether by putting an end to a given practice or by deterring other firms from adopting a similar course of conduct. Today, however, many scholars consider these intervention methods to be insufficient in the realm of digital competition. They urge antitrust authorities to go one step further and impose remedies that “restore” markets to the presumptively efficient state that existed before the challenged conduct occurred. The underlying assumption is that the state of competition before an infringement took place is necessarily better than that which exists in the aftermath of a firm’s conduct, and that the differences between the two states are caused by that conduct.

These nostalgic inclinations are based on premises that are far from self-evident, however. In fact, as explained below, the error-cost framework that heavily influenced antitrust enforcement in the US (and to a lesser extent in the EU) is designed, among other things, to avoid the social costs that might stem from overly “nostalgic” enforcement. This makes it particularly ironic that nostalgic advocates for more aggressive intervention base their arguments in large measure on a rejection of the error-cost framework.

1.     Progressive and Populist Nostalgia Aimed at Furthering a Broader “Democratization” Movement

The current resurgence of activist antitrust—variously labeled “populist antitrust,” “neo-Brandeisian antitrust,” or “hipster antitrust”—rests in considerable part (as its monikers suggest) on a deeply nostalgic sentiment. The hearkening back to activists’ preferred political and legal historical eras is part of a broader movement in both the academy92See Jedediah Britton-Purdy, David Singh Grewal, Amy Kapczynski & K. Sabeel Rahman, Building a Law-and-Political-Economy Framework: Beyond the Twentieth-Century Synthesis, 129 Yale L.J. 1784, 1826 (2020) (“Finally, regrounding law and policy analysis in a broad conception of equality will require scholars to articulate substantive notions of what a commitment to equality should mean in different domains of law . . . . This means reengaging with lines of argument . . . [aimed at destabilizing] welfarism and its questionable metaethical underpinnings . . . from liberal theorists concerned with the autonomy-degrading aspects of welfarism to Marxian-inspired accounts of need.”). and in political discourse93See, e.g., Gerald Berk, Monopoly and Its Discontents, Am. Prospect (Oct. 9, 2019), https://perma.cc/B8FE-E4F4 (“Democrats of all stripes speak the language of anti-monopoly now, whether they acknowledge it or not. Some, like Warren, speak it in a pure form. Others combine anti-monopoly inquiry with rival forms of analysis, which look incompatible at first glance. Sanders’s plan for rural America combines anti-monopoly and class analysis to explain how corporate monopolies turned independent farmers into an impoverished proletariat through an oppressive subcontracting system. His solution is antitrust and price supports, not public ownership. Neoliberal corporatists Hillary Clinton, Mark Warner, and Amy Klobuchar, who once saw government regulation as the primary obstacle to technological innovation and entrepreneurship, now place tech monopolies atop their list.”). to (re-)build a “more genuine democracy that also takes seriously questions of economic power and racial subordination; . . . the displacement of concentrated corporate power and rooting of new forms of worker power; [and] . . . the challenges posed by emerging forms of power and control arising from new technologies . . . .”94Britton-Purdy et al., supra note 92, at 1834–35. “This [antitrust] moment is part of a larger one in which settled orthodoxies in many other areas of law and policy, particularly those that shape economic life, have been ruptured and new constructive projects have begun.”95Sanjukta Paul, Reconsidering Judicial Supremacy in Antitrust, 131 Yale L.J. (forthcoming) (manuscript at 2). “A necessary, though not sufficient, condition of [democratizing the economy] is to right the balance of law-making power, in antitrust law, in favor of the democratic branches of government.”96Id. at 56.

The antitrust prong of this “democratization” movement is focused on reviving what proponents see as the “true” antitrust tradition, which was illegitimately subverted by the neoliberalism of the late twentieth century. “Antitrust laws historically sought to protect consumers and small suppliers from noncompetitive pricing, preserve open markets to all comers, and disperse economic and political power. The Reagan administration—with no input from Congress—rewrote antitrust to focus on the concept of neoclassical economic efficiency.”97Lina Khan & Sandeep Vaheesan, Market Power and Inequality: The Antitrust Counterrevolution and Its Discontents, 11 Harv. L. & Pol’y Rev. 235, 236 (2017).

A significant part of this effort is a continual hearkening back to the origins of early US antitrust legislation, particularly the legislative history surrounding the enactment of the Sherman Act.98See, e.g., Peter C. Carstensen, Antitrust Law and the Paradigm of Industrial Organization, 16 U.C. Davis L. Rev. 487, 488 (1983); John J. Flynn, The Reagan Administration’s Antitrust Policy, “Original Intent” and the Legislative History of the Sherman Act, 33 Antitrust Bull. 259, 263–64 (1988); Robert H. Lande, Wealth Transfers as the Original and Primary Concern of Antitrust: The Efficiency Interpretation Challenged, 34 Hastings L.J. 65, 82–83 (1982); Paul, supra note 95, at 27–39; see generally David Millon, The Sherman Act and the Balance of Power, 61 S. Cal. L. Rev. 1219 (1988). Recent scholarship has sought to justify what amounts to a sea change in antitrust law by suggesting that such a shift would merely reinstate the original intent behind federal antitrust legislation. In particular, this approach would supplant, and rescind, the judicially directed contours of antitrust law over the past 100 years in favor of what is claimed to be a more direct and comprehensive congressional mandate:

The legislative record shows that Congress . . . . made the affirmative purpose of the legislation quite clear. That purpose was to respond to the recent rise of concentrated corporate power by means of an overall decision rule that would disperse economic coordination rights rather than further concentrate them. Thus, Congress already set specific normative criteria for allocating economic coordination rights under the Sherman Act; it did not leave it to the courts to do so.99Paul, supra note 95, at 4. See also Sandeep Vaheesan, The Evolving Populisms of Antitrust, 93 Neb. L. Rev. 370, 374 (2014) (“[C]onsumer protection would be true to the legislative intent of Congress in enacting the antitrust laws—preventing unjustified wealth transfers from consumers to producers.”).

Interestingly, a significant number of scholars who advocate substantial precautionary antitrust reform are nevertheless deeply critical of these nostalgia-infused arguments. Professor Herbert Hovenkamp, for example, offers a strong critique of progressive antitrust, concluding that:

Not only have progressives been expansionist in antitrust policy, they also pursued policies that did not fit well into any coherent vision of the economy, often in ways that hindered rather than furthered competitiveness and economic growth—all while injuring the very interest groups the policies were designed to protect.

. . . .

[A]lthough the progressive state’s expanded ideas about the role of regulation may be justified, these views should not spill into antitrust policy. Rather, the country is best served by a more-or-less neoclassical antitrust policy with consumer welfare, or output maximization, as its guiding principle. Not only is such a policy consistent with overall economic growth, it is also more likely to provide resistance against special-interest capture, which is a particular vulnerability of the progressive state.100Herbert Hovenkamp, Progressive Antitrust, 2018 U. Ill. L. Rev. 71, 76.

Indeed, although these progressive movement activists are surely the most nostalgic of all, their revivalist approach does not factor in any significant way into the central discussion of this Article. While some of these writers do also engage with the substantive debate over the proper antitrust treatment of data and digital platforms (and thus some of this work is discussed below), the effort to justify their preferred outcomes with an originalist appeal to nineteenth century legislative intent is simply irrelevant to the contemporary discussion. The broader political movement of which it is a part may be worth taking seriously, but as a matter of political science, not of antitrust law and economics. It is worth mentioning, but not worthy of further consideration here.

2.     Much Antitrust Doctrine Is Inherently Backward-Looking

The application of longstanding antitrust doctrine to digital platform technologies is often difficult:

[C]ompetition law instruments, such as market shares or concentration ratios, used for traditional markets in order to assess dominance cannot be easily used when it comes to platforms because of the multiplicity of services they offer simultaneously to different groups of consumers. It also means that market power indicators based on a comparison of price and cost . . . cannot be used on each side of the platform to assess its market power. It finally means that the characterization of an abusive practice of a platform may be either different or more complex than in the case of traditional markets.101Frederic Jenny, Competition Law and Digital Ecosystems: Learning to Walk Before We Run 4 (Jan. 20, 2021) (unpublished manuscript), https://perma.cc/SKY4-26W4.

But the disconnect is made even more stark because much antitrust doctrine is inherently backward-looking. Antitrust Nostalgia is not simply a function of overly precautionary scholars advocating against change; it is also embedded within much antitrust process and doctrine.

This is not to say that the nostalgic elements of antitrust doctrine are inherently bad; indeed, as noted above, every common-law-based legal system is inherently conservative in this respect.102See supra notes 34–35 and accompanying text. But it does mean that antitrust is often particularly skeptical of novel business conduct, absent the doctrinal correctives to this tendency103See Manne, supra note 34, at 84.—precisely the correctives dystopian antitrust scholars advocate rescinding.104See infra Section I.B.4.

Market definition offers a fitting example. It is a key part of modern antitrust proceedings, not least in digital markets.105The recent Supreme Court cases of Ohio v. American Express Co. and Apple, Inc. v. Pepper each offer a case in point. See generally Apple, Inc. v. Pepper, 139 S. Ct. 1514 (2019); Ohio v. Am. Express Co., 138 S. Ct. 2274 (2018). For a discussion of the role of market definition in both of these cases, see Geoffrey A. Manne & Kristian Stout, The Evolution of Antitrust Doctrine After Ohio v. Amex and the Apple v. Pepper Decision That Should Have Been, 98 Neb. L. Rev. 425, 430 (2019) (“As we discuss further below, we believe that the Amex Court correctly decided that effects falling on the other side of a tightly integrated, two-sided market from challenged conduct must be addressed by the plaintiff in making its prima facie case. But, as the Amex Court made clear, that outcome entails a market definition that places both sides of such a market in the same relevant market for antitrust analysis. As a result, the Amex Court’s holding should also have required a finding in Apple [v. Pepper] that an app user on one side of the platform who transacts with an app developer on the other side of the market, in a transaction made possible and directly intermediated by Apple’s App Store, should similarly be deemed in the same market for standing purposes.”). And yet, it is primarily backward-looking and nostalgic. Innovation often enables firms to disrupt rivals in new markets. Retrospective market definition exercises that focus on firms’ past and present competitors are thus likely to discount where competition is going, locking even fast-evolving digital competitors out of the “relevant market.” As Judge Douglas Ginsburg and Professor Joshua Wright put it, “[E]conomics provides no reason to believe innovation ordinarily will come from within a ‘market’ as defined for the purpose of static antitrust analysis; hence, there is little reason to believe proxies for dynamic competition will be positively correlated with innovative activity observed in such a market.”106Douglas H. Ginsburg & Joshua D. Wright, Dynamic Analysis and the Limits of Antitrust Institutions, 78 Antitrust L.J. 1, 4 (2012).

Indeed, traditional market definition analysis that infers future substitution possibilities from existing or past market conditions will systematically lead to overly narrow markets and an increased likelihood of erroneous market power determinations. This is the problem with viewing Google as a “search engine,” Amazon as an “online retailer,” Facebook and Instagram as “social networks,” and excluding each from the others’ markets. Because these products and services are constantly evolving, such firms can likely be viewed as “dominant” (if at all) only with backward-looking market definitions, superficially defined by legacy products that do not reflect the actual behavior of users or businesses.

Google and Amazon are direct competitors in search, for example—it makes no difference that one is nominally a “general search engine” and the other an “online retailer.” Indeed, today, over two-thirds of all product searches occur on Amazon, not Google.107See Greg Magana, Amazon Rules the Product Search Process, Bus. Insider (Mar. 20, 2019, 9:13 AM), https://perma.cc/WA9W-TC36. Here, as in other nominal product markets, the reality is that the large platforms—as well as many other companies—are increasingly and significantly in direct competition with each other across those nominal market boundaries for users and advertisers.

In reality, digital platforms compete vigorously in a different and more competitive market—the “market for eyeballs,” perhaps—and arguably none is truly dominant, at least not for long. At the same time, the competition for users’ attention is constantly evolving. Today, the battle rages over digital assistants and smart-home devices—a sector characterized by intense competition among Google’s Assistant, Amazon’s Alexa, and Apple’s Siri, as well as failed or flailing combatants like Facebook’s virtual assistant M, Microsoft’s Cortana, and Samsung’s Bixby. Surely this is not the last stage of competition among these firms, nor the last time that the relevant market(s) in which they compete will shift.

Assessing competition among these firms by reference to superficial markets, defined by the specific functionality through which the firms happen to compete at any given moment, is circular. Rather, an important component of getting market definition right, especially in high-tech markets, may be an expanded role for supply-side substitution—especially from potential entrants—in market definition and market power calculations.

The US Horizontal Merger Guidelines significantly downplay the role of supply-side substitution.108U.S. Dep’t of Just. & Fed. Trade Comm’n, Horizontal Merger Guidelines 7 (2010), https://perma.cc/5JDD-SZXQ. But demand-side substitution imposes on the analysis a static, backward-looking conception of competition driven significantly by consumer prices. Supply-side substitution, on the other hand (or some other, even less-traditional approach, perhaps), is better able to capture the dynamics of markets driven by product innovation and in which competitive constraints are importantly imposed by potential entrants from outside demand-defined “markets.”109Atilano Jorge Padilla, Nat’l Econ. Rsch. Assocs., The Role of Supply-Side Substitution in the Definition of the Relevant Market in Merger Control 19 (2001) (“But even if there are no alternative products to which consumers would consider switching, a firm may still be subject to other rather immediate competitive constraints. Indeed, even if consumers were unable to react immediately to an increase in price, producers might be able to do so rather quickly. How? First, some of them may be endowed with assets (physical and human) that can be easily adjusted to produce substitute goods. If these producers were able to respond to a price increase by switching their production facilities to produce the goods or services subject to such price increase, then consumers would be able to avoid abuse. Second, some other firms might consider entering the market by investing on those assets needed to produce goods that are regarded as substitutes by consumers. This de novo entry, however, may help to constrain the behaviour of the established firms as effectively as demand substitution only if entry occurs (or it is likely to occur) promptly.”). This is an implication of the importance of competition via product innovation, rather than price, in these markets.110See Thomas M. Jorde & David J. Teece, Competing Through Innovation: Implications for Market Definition, 64 Chi.-Kent L. Rev. 741, 742 (1988). This also means that seemingly unrelated suppliers and seemingly unrelated markets should often properly be counted in the same market.

Because of these relatively static, backward-looking market definitions, innovation or other procompetitive conduct may be systematically misidentified as anticompetitive. And the benefits of innovation aimed at competing with rivals outside an improperly narrow market, or procompetitive effects conferred on users elsewhere on the platform or in another market, will be relatively, if not completely, neglected. But at the same time, such innovation may seem to impose outsized constraints on firms or consumers within the improperly defined market, leading to more complaints and to more readily identifiable, apparent harm. These problems are likely to be particularly acute in rapidly changing digital markets.

It must be recognized that some products excluded from the market because they differ in superficial ways may actually be at least as similar—and at least as likely to operate as substitutes—as any number of the products included in the market. This is most obviously true of digital platforms. If we think of them as competing for user attention, it is apparent, for example, that Google and Facebook are direct and significant competitors. But defining the market as “search” or “social media” disregards these relevant competitive effects:

However, market definition is an entirely artificial construct that has been called an incoherent process as a matter of basic economic principles. Real markets do not come defined. Market definition is an exercise that serves to establish the group of products that are sufficiently substitutable with one another.111Pinar Akman, The Theory of Abuse in Google Search: A Positive and Normative Assessment Under EU Competition Law, 2017 U. Ill. J. L. Tech. & Pol’y 301, 369 (citing Louis Kaplow, Why (Ever) Define Markets, 124 Harv. L. Rev. 437 (2010)) (emphasis omitted).

The bigger problem, perhaps, is that such market definitions are, as noted, inherently backward-looking. Yet, as Professors Thomas Jorde and David Teece note, true competition in high tech markets tends to come from the future:

It is especially in assessing potential competition that a departure must be made from orthodox approaches when new technologies and new products are at issue. The reason is that potential competition from new technologies can destroy a firm’s position in a particular market and its underlying competences. Price competition, on the other hand, may erode profit margins but is less likely to completely destroy the value of a firm’s underlying technological, physical, and human assets. Accordingly, potential competition from new products and processes is the more powerful form of competition.112Thomas M. Jorde & David J. Teece, Innovation, Dynamic Competition, and Antitrust Policy, Regulation, Fall 1990, at 35, 37–38.

Google is a paradigmatic example. When Google evolved from offering “10 blue links” directing users only to other sites to offering direct answers or developing its own content and directing users there, it was probably responding primarily to the threat from Amazon (where two-thirds of product searches now originate113See Magana, supra note 107.) and Facebook (which now accounts for approximately twenty-five percent of external page referrals on the Web114See Parse.ly’s Network Referrer Dashboard, Parse.ly, https://perma.cc/3KCJ-X3TL (showing Facebook as referring 24.9 percent of web traffic).), rather than the “threat” from, say, Foundem—a UK-based comparison shopping search engine that built its business on the assumption that it would always be able to get all the product search traffic it wanted from Google’s links.115See Geoffrey A. Manne, Int’l Ctr. for L. & Econ., The Real Reason Foundem Foundered 18 (2018) (ICLE Antitrust & Consumer Prot. Rsch. Program White Paper No. 2018-02) [hereinafter Manne, Foundem Foundered]; Geoffrey A. Manne & Joshua D. Wright, Google and the Limits of Antitrust: The Case Against the Case Against Google, 34 Harv. J. L. & Pub. Pol’y 171, 202 (2011). But based on a narrow market definition that includes Foundem but not Amazon or any of the myriad other channels of distribution available (like direct navigation, mobile apps, links from other sites, etc.), Google’s innovation gets characterized (by Foundem, at least) as foreclosure, not healthy competition. Ultimately, this is exactly what the European Commission held in its Google Shopping decision.116Eur. Comm’n DG Competition, Summary of Commission Decision in Case AT.39740–Google Search (Shopping), 2018 O.J. (C 9/08) ¶¶ 3–7.

In the few instances where enforcers have taken a more forward-looking approach to market definition—the FTC’s Nielsen-Arbitron merger challenge, for example117In re Nielsen Holdings N.V., 157 F.T.C. 348, 387–88 (2013).—they have often resorted to even more speculative nostalgia given the inherent absence of data upon which to assess future markets. Thus, in Nielsen, the FTC asserted the risk of future anticompetitive harm based on rank speculation and “a general presumption that economic theory teaches that an increase in market concentration implies a reduced incentive to invest in innovation.”118Id. at 391 (dissenting statement of Comm’r Joshua D. Wright). As then-Commissioner Wright further points out in a related footnote: The link between market structure and incentives to innovate remains inconclusive. See, e.g., [Douglas H. Ginsburg & Joshua D. Wright, Dynamic Analysis and The Limits of Antitrust Institutions, 78 Antitrust L.J. 1, 4–5 (2012)] (“To this day, the complex relationship between static product market competition and the incentive to innovate is not well understood.”); Richard J. Gilbert, Competition and Innovation, in 1 ABA Section of Antitrust Law, Issues in Competition Law and Policy 577, 583 (W. Dale Collins ed., 2008) (“[E]conomic theory does not provide unambiguous support either for the view that market power generally threatens innovation by lowering the return to innovative efforts nor the Schumpeterian view that concentrated markets generally promote innovation.”). Id. at 391 n.7.

The essential facilities doctrine—at least as it is interpreted under US antitrust law—has the same nostalgic bent. Take Aspen Skiing Co. v. Aspen Highlands Skiing Corp.119472 U.S. 585 (1985).—the “outer boundary of [section] 2 liability” in the US—and the most prominent “essential facilities” case.120Verizon Commc’ns, Inc. v. L. Offs. of Curtis V. Trinko, LLP, 540 U.S. 398, 409, 411 (2004). There, liability was predicated on the defendant’s abandonment of a previously profitable, voluntary arrangement.121Aspen Skiing, 472 U.S. at 610–11. In other words, a change in behavior—moving on to new modes of business or marketing—can actually create liability. In that case, as it happened, it may well have been an effort to innovate that caused the defendant to abandon the prior arrangement and led to the case in the first place.122See John E. Lopatka & William H. Page, Bargaining and Monopolization: In Search of the “Boundary of Section 2 Liability” Between Aspen and Trinko, 73 Antitrust L.J. 115, 148 (2005); Alan J. Meese, Property, Aspen, and Refusals to Deal, 73 Antitrust L.J. 81, 112–13 (2005).

The Google-Foundem dispute again provides an example. Such a nostalgic approach—which bases a challenge to Google on the change in its practice away from one that favored Foundem—would end up protecting Foundem’s decidedly non-innovative business model—one that depended on Google never changing—at the expense of Google’s efforts to evolve with technology, competition, and consumer demand. This turns essential facilities on its head. Under this theory, Google is “essential” only because Foundem decided to put all its eggs in one basket. It is not that Google is the only way for consumers to reach Foundem and vice versa; it is that Foundem chose to acquire customers this way. If it is Foundem’s bad business model that turns Google into an essential facility, prevented from developing new products or new modes of business, then competition law will be doing the opposite of what it is supposed to do.

3.     Static Nostalgia Meets Dynamic, Innovative Markets

The problem of nostalgia-driven false positives is particularly acute in dynamic markets.123See, e.g., Manne & Wright, supra note 88, at 170, 172 (“The false positives problem is magnified in the context of technological innovation, both because of the immense value of the innovations and because of the increased likelihood of error . . . . A proper application of error-cost principles would deter intervention in such cases until empirical evidence could be amassed and assessed. Nevertheless, it is precisely in these situations that intervention may be more likely. On the one hand, this may be because in the absence of information disproving a presumption of anticompetitive effect, there is an easier case to be made against the conduct—this despite putative burden-shifting rules that would place the onus on the complainant. On the other hand, successful innovations are also more likely to arouse the ire of competitors and/or customers, and thus both their existence and their negative characterization are more likely brought to the attention of courts or enforcers—abetted in private litigation by the lure of treble damages.” (footnote omitted)). Particularly in oligopoly markets (like those predominated by platforms), “[a] stable outcome will require restrictions on the freedom of market participants; that is, stability will require some sort of coordination. These restrictions look like the bread and butter of antitrust lawsuits—cartels, tacit collusion, vertical restrictions, and merger.”124George Bittlingmayer, The Economic Problem of Fixed Costs and What Legal Research Can Contribute, 14 L. & Soc. Inquiry 739, 740 (1989). “Clearly, when no competitive equilibrium is possible, something else has to take its place. Since the problems arise from too much competition and too little cooperation, the institutions that solve these problems necessarily imply a variety of arrangements that look ‘anticompetitive.’”125Id. at 751.

Large, complex systems like digital platforms are incongruous with perfect competition-informed antitrust doctrine and the traditional, linear supply-chain relationship upon which it is built. The Supreme Court’s recent Ohio v. American Express Co.126138 S. Ct. 2274 (2018). decision—much reviled by dystopian antitrust thinkers—is a nod in this direction: a realization that something more than the “local” efficiency notions endemic to traditional antitrust is required if adjudication of competition disputes relating to these businesses is to avoid systematic error. What the literature and the Amex Court make clear is that the systemic efficiency associated with platform ecosystems entails elements of coordination and control in various places in order to optimize the whole.127See, e.g., Jonathan M. Barnett, The Host’s Dilemma: Strategic Forfeiture in Platform Markets for Informational Goods, 124 Harv. L. Rev. 1861, 1890 (2011) (“The [platform] therefore faces a basic trade-off. On the one hand, it must forfeit control over a portion of the platform in order to elicit user adoption. On the other hand, it must exert control over some other portion of the platform, or some set of complementary goods or services, in order to accrue revenues to cover development and maintenance costs (and, in the case of a for-profit entity, in order to capture any remaining profits).”); see also Kevin Boudreau, Platform Boundary Choices & Governance: “Opening-Up” While Still Coordinating and Orchestrating, in 37 Advances in Strategic Management: Entrepreneurship, Innovation, and Platforms 227, 323 (Jeffrey Furman, Annabelle Gawer, Brian S. Silverman & Scott Stern eds., 2017). As the Supreme Court notes (with specific reference to the payment card platform at issue in the Amex case):

Focusing on merchant fees alone misses the mark because the product that credit-card companies sell is transactions, not services to merchants, and the competitive effects of a restraint on transactions cannot be judged by looking at merchants alone. Evidence of a price increase on one side of a two-sided transaction platform cannot by itself demonstrate an anticompetitive exercise of market power.128Am. Express Co., 138 S. Ct. at 2287; see also Geoffrey A. Manne, In Defence of the Supreme Court’s ‘Single Market’ Definition in Ohio v. American Express, 7 J. Antitrust Enforcement 104, 109 (2019) (“As the multi-sided markets literature makes clear, multi-sided platforms are defined by the interrelatedness between their various sides, and market definition (and competitive effects) analysis must entail an assessment of all sides of a platform. For platforms, the structure and interrelatedness of the relative prices (and other terms—like the ‘anti-steering’ terms at issue in Amex—that, in effect, determine the quality of the service) is what matters, not the specific prices charged to users on a given side of the market.” (footnote omitted)).
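A stylized numerical example—with hypothetical figures of our own devising, not drawn from the Amex record—illustrates the Court’s point. Let the platform’s total per-transaction price be

$$p \;=\; p_M + p_C,$$

where $p_M$ is the per-transaction fee charged to merchants and $p_C$ is the net per-transaction price paid by cardholders (which may be negative once rewards are taken into account). Suppose a challenged restraint coincides with $p_M$ rising from \$2.00 to \$2.30 while richer rewards push $p_C$ from $-\$0.50$ to $-\$0.90$. The merchant side, viewed in isolation, shows a 15 percent price increase; the platform’s total price, $p$, has actually fallen from \$1.50 to \$1.40. Inferring an anticompetitive exercise of market power from the merchant-side increase alone would thus get the competitive effect exactly backwards.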

Properly assessing the competitive effects of business conduct in platform ecosystems entails a recognition that the systemic benefits from an efficiently operating platform can greatly exceed the costs of any localized restraint:

Systemic efficiencies involve and affect multiple, dispersed parts of a large complex system whose components are interconnected intricately in such a way that changes in one part might trigger readjustments in other parts (for example, as seen in electronic communications networks and operating system ecosystems). Because systemic efficiencies are drawn from multiple parts, understanding them requires a holistic overview of the system in which they are present, which makes them more difficult to identify and appreciate. However, because systemic efficiencies are so integrative and extensive, that also means that they can bring about dramatic innovations in the industry that would otherwise not occur, especially in smaller-scale, insular environments. Systemic efficiencies and innovations therefore generate unique value for both the introducing firm and the industry as a whole, and deserve to be identified as a distinct type of efficiency.129Konstantinos Stylianou, Systemic Efficiencies in Competition Law: Evidence from the ICT Industry, 12 J. Competition L. & Econ. 557, 558–59 (2016).

Realizing systemic efficiencies across platform users’ disparate incentives and characteristics, the satisfaction of which is interrelated with those of every other group of users, entails “pervasive control,” which may demand “potentially exclusionary practices, such as refusing to supply, tying, and discrimination, among others. These practices aim at creating the necessary conditions for the efficiency to materialize, as they arguably ensure the involvement and proper interaction of only suitable parts, actors, and components.”130Id.; see also S. M. Elaluf-Calderwood, B. D. Eaton, C. Sørensen & Y. Yoo, Control as a Strategy for the Development of Generativity in Business Models for Mobile Platforms, in 15 International Conference on Intelligence in Next Generation Networks 271 (2011) (“The issue of managing digital ecosystem innovation can be seen as the continuous process of developers as protagonists seeking to engage in generative acts, further expanding the platform functionality, and an opposing platform owner as antagonist serving the role of moderator and regulator [] accepting or rejecting generative attempts through the application of control points. The core challenge of innovation in a digital ecosystem is continuously to engage in balancing control and generativity.”). In certain contexts, of course, these mechanisms of control can be anticompetitive. But especially in complex systems, “authorities and courts should not underestimate the indispensable role control plays in achieving coordination and coherence in the context of systemic efficiencies. Without it, the attempted novelties and strategies might collapse under their own complexity.”131Stylianou, supra note 129, at 559.

In and of itself, this dynamic is likely to lead to mistaken over-enforcement against some procompetitive conduct. But the nostalgia bias amplifies this error by increasing the skepticism directed against the evolution of such practices out of arrangements where they may previously have been absent or directed elsewhere.

The Court of Appeals for the Second Circuit’s United States v. Apple, Inc.132791 F.3d 290 (2d Cir. 2015). case regarding Apple’s e-books suggests how nostalgia-infused antitrust enforcement against platforms can hinder the adoption of novel business practices, potentially stifling beneficial business model competition. In a nutshell, the case centered on most-favored-nation (“MFN”) clauses agreed upon by Apple and several book publishers. The record suggests these clauses were part of a concerted strategy to move the e-book industry from a retail model, where Amazon purchased e-books from publishers and sold them at its chosen price, to an agency model, where publishers set the price of e-books and platforms (Apple and Amazon) took a percentage fee. Plaintiffs and courts framed these agreements as naked price fixing, thereby bringing them within the Sherman Act’s rule of per se liability. Whether this harsh treatment was appropriate is debatable. However, the case carried important implications for the evolution of the e-book industry (and for competition among digital platforms more generally) that mostly seem to have eluded enforcers and the court’s majority at the time.

Consider the following sentence from the majority opinion:

More importantly, even if there were such evidence, the fact that a competitor’s entry into the market is contingent on a horizontal conspiracy to raise prices only means (absent monopolistic conduct by the market’s dominant firm, which cannot lawfully be challenged by collusion) that the competitor is inefficient, i.e., that its entry will not enhance consumer welfare.133Id. at 334 (emphases omitted).

While the court’s analysis might arguably be correct from a static point of view, it misses key questions pertaining to dynamic competition. Though Apple’s agency model may have led to higher short-run e-book prices, and though bringing it about may have entailed control over, and effective collusion among, otherwise independent competitors, it also had the potential to shake up the e-book industry and enable firms with differentiated business models to gain traction (notably Apple and its then-brand-new iPad).134See, e.g., Babur De los Santos & Matthijs R. Wildenbeest, E-book Pricing and Vertical Restraints, 15 Quantitative Mktg. & Econ. 85, 111 (2017) (recognizing that a switch from Apple’s agency model and most-favored-nation clauses toward a retail model led to significant price decreases). However, this analysis does not reveal what prices would have been if Apple had not entered the e-book market. See, e.g., Wan Cha, A New Post-Leegin Dilemma: Reconciliation of the Third Circuit’s Toledo Mack Case and the Second Circuit’s Apple E-books Case, 67 Rutgers U. L. Rev. 1547, 1548 (2015) (“Apple wanted to break into the e-book market, but could not compete with Amazon’s low prices. Amazon was basically an e-book monopoly that set such low prices that no one else could afford to compete. So Apple went to each major publisher and convinced them to collectively force Amazon to raise its prices by refusing to sell to Amazon if it did not comply.” (footnotes omitted)).

Indeed, by essentially shifting competition from price to non-price parameters, the clauses arguably enabled firms to experiment with business models that, in an “unfettered” environment, would have been thwarted by market imperfections. As the dissent appropriately noted:

As to the pro-competitive effects, the rule of reason must take account primarily of the deconcentrating of the e-book retail market . . . . As the district court found, Apple was weighing its entry into the retail e-book market, and the agency structure was the only way Apple would enter the market. Nobody has proposed—before or since Apple’s entry—any “less restrictive means” by which Apple could have achieved the same competitive benefits. Apple’s challenged conduct broke Amazon’s monopoly, immediately deconcentrated the e-book retail market, added a platform for reading e-books, and removed barriers to entry by others. And removal of a barrier to entry reduces for the long term a market’s vulnerability to monopolization. These effects sound in the basic goals of antitrust law . . . . (Judge Livingston’s opinion discounts this pro-competitive effect by noting the open question whether “below-cost pricing is unlawfully anti-competitive,” thereby suggesting that Apple’s dismantling of the entry barrier could be pro-competitive only if the barrier was itself a Sherman Act violation. But it is no matter whether the insuperable barrier that Apple tore down had been raised lawfully or not.).135Apple E-books, 791 F.3d at 350–51 (Jacobs, J., dissenting) (citations omitted).

On its face, the dissent’s recognition that less “perfectly” competitive market structures (“non-standard contracting”)136See generally Alan J. Meese, Market Failure and Non-Standard Contracting: How the Ghost of Perfect Competition Still Haunts Antitrust, 1 J. Competition L. & Econ. 21 (2005). may be optimal devices for overcoming market failures endemic to imperfect markets already highlights the nostalgia bias. But the fact that the case was brought in the context of a change in conduct is even more telling.

In this case, the court found it damning that Apple’s entry resulted in Amazon charging higher prices for certain e-books. While the court acknowledged that “[n]o court can presume to know the proper price of an ebook,”137Apple E-books, 791 F.3d at 328–29. its analysis rested on the presumption that Amazon’s prices before Apple’s entry were competitive.138Id. at 299–301. The record, however, offered no support for that presumption, and thus no support for the inference that post-entry price increases were anticompetitive. In fact, a restraint might increase prices precisely because it overcomes a market failure.139See Alan J. Meese, Price Theory, Competition, and the Rule of Reason, 2003 U. Ill. L. Rev. 77, 146–51. Here, the change in Amazon’s pricing scheme may simply have reflected the fact that Amazon’s business model resulted in artificially low prices akin to market failure—not that Apple sought or obtained supra-competitive prices.

The Court of Appeals also focused erroneously on the effect of Apple’s entry on the e-book prices of a single competitor, Amazon, instead of on the e-book marketplace as a whole. The court found it problematic that Apple’s entry “stiffened the spines” of the publishers140Apple E-books, 791 F.3d at 317 (internal quotation marks omitted). and enabled them to “demand new terms from Amazon,” including the use of the agency model.141Id. at 305. But that is exactly what competition from new entrants does: it empowers parties to obtain better products and more favorable terms from their suppliers. The fact that an incumbent firm—particularly a market leader such as Amazon—had to respond to the rigors of competition is hardly grounds for condemning new entry.

In other words, antitrust law may sometimes prevent firms from collectively moving towards a superior long-term equilibrium (on the ground that, in the short run, this marks an apparent competitive decline relative to the status quo), even though such arrangements might have been deemed unproblematic had they been introduced ab initio (i.e., antitrust authorities would likely not have intervened in the e-book industry if the firms had immediately adopted the agency model with MFN clauses when the market emerged).

By the same token, there is a strong tendency among scholars to challenge platform evolution that may harm some complementors of these platforms to the benefit of others or enable platform appropriability where it was previously unavailable. Scholars have notably alleged this to be the case when Amazon enters business segments previously occupied by third-party retailers that use its platform.142See Caves & Singer, supra note 76, at 395 (“Amazon has been accused of leveraging its platform power into retail via predation against independent retailers such as Diapers.com, and more recently by steering voice searches on Alexa to its private-label products.” (footnotes omitted)). This relatively static, nostalgic analysis that infers harm from a change that imposes costs on complementors essentially assumes that any given complementor that succeeded in the past “should” succeed in the future (especially against competition from a platform’s own, integrated product). Doing so is mistaken.143See Sidak & Teece, supra note 85, at 611 (“Simple rules based on static analysis may well produce policy actions and judicial decisions that impede competition. In particular, policymakers should de-emphasize concentration analysis.”).

The Google Shopping144Commission Decision, Case AT.39740, Google Search (Shopping) (June 26, 2017). decision essentially turned this sort of shift in fortunes into an antitrust problem, inferring harm from the mere fact of a change, bolstered by the observation that the platform—Google—appeared to be appropriating more of the platform’s value than it had previously, when it permitted comparison shopping services to operate without competition from Google itself.145Id. at ¶ 379. Yet doing so is particularly likely to give rise to false positives:

The relatively static, “nostalgic” analysis that essentially assumes that any given complementor that succeeded in the past “should” succeed in the future (especially against competition from a platform’s own, integrated product) is deeply flawed. Past success under a particular set of platform constraints is no reason to assume that a complementor would provide any measure of innovation in the future under different constraints, nor is it an argument for insisting that the platform’s constraints cannot change. Indeed, if platform discrimination is rampant, the fact that a complementor previously succeeded under different, discriminatory conditions offers no reason to think that there was an “effective competition structure” in the first place and thus that its previous success was in any way “merited.”146Manne, supra note 34, at 89.

The upshot is that platforms are rarely static; there is an expansive literature on the evolution of platforms both toward and away from greater “openness” depending on their place in the lifecycle, changing demand, evolving technology, experimentation, and the like.147See, e.g., David J. Teece, Dynamic Capabilities & Strategic Management: Organizing for Innovation and Growth 48 (2009). And yet, for better or worse, antitrust enforcers routinely view such changes with circumspection. The pressing policy question is thus whether the idiosyncrasies of digital markets (discussed in Part II) warrant a reinforcement of these nostalgic inclinations (for instance, by shifting the burden of proof in antitrust proceedings or by imposing restorative remedies).

4.     Restorative Remedies

Another form of nostalgia can be seen in the repeated calls for authorities and courts to impose “restorative” measures upon firms that have infringed the antitrust laws. Unlike other antitrust remedies, which are mostly forward-looking (seeking to prevent further infringements from occurring), restorative measures, as their name suggests, attempt to restore the market to the state in which it was—or purportedly would have been—absent a firm’s anticompetitive behavior.

These calls for restorative remedies are driven by the idea that digital markets move fast, and that authorities are often late to the scene. Scholars thus argue that it is necessary to restore the competition that might have been lost in the interim. As the Stigler Center Report puts it:

Effective antitrust enforcement requires effective remedies. Treble damages and financial penalties can compensate for past harms and deter future bad conduct, but they do not restore competition to markets in which competition has been harmed. Even an injunction to forbear from the same or similar anticompetitive conduct going forward will not restore the lost competition if entry barriers are high. For example, if the market has tipped and network externalities are very strong, the firm that became a monopolist through violations of the antitrust laws could stop the conduct at issue and yet retain its monopoly position and the associated stream of profits.148Stigler Center Report, Antitrust Subcommittee, supra note 65, at 99–100 (emphasis added).

The Stigler Report goes on to list several potential restorative remedies: “Data sharing, full protocol interoperability, non-discrimination requirements, and the unbundling of content from a platform are all tools that the regulator, in conjunction with the antitrust authority, could apply and monitor over time in order to restore competitive markets.”149Id. at 33.

The European Commission’s digital markets report makes a similar argument:

[W]here self-preferencing has significantly benefitted a platform’s subsidiary in improving its market position vis-à-vis competitors, such remedies might include a restitutive element (“restorative” remedies). In order to enable formerly disadvantaged competitors to regain strength, it may, for example, be necessary to give them access to the dominant platform’s competitively relevant data resources or otherwise compensate for their reduced visibility or lack of data access in the past.150Crémer et al., supra note 53, at 68 (emphasis added).

Finally, a commonly cited solution would be for authorities to break up firms, thus reversing the supposedly harmful effects of industry consolidation. Tim Wu, for example, has argued that this should be the case for Facebook and Instagram:

As this analysis suggests, the case for the breakup should be relatively clear. Today, we can measure the effects of the lack of competitors to Facebook, in terms of higher prices and lower quality.

It is true that sometimes a breakup can undo benefits and efficiencies achieved by a merger. But when it comes to breaking off Instagram, it is hard to see what those might be. What seems more obvious is that an independent Instagram would be in a position to fashion itself into the full-fledged competitor to Facebook it was on track to becoming six years ago.151Tim Wu, The Case for Breaking Up Facebook and Instagram, Wash. Post (Sept. 28, 2018, 1:11 PM), https://perma.cc/5QE9-RDRS.

While these proposals might sound good on paper, they overlook the very real challenges that antitrust authorities and courts would face in designing and enforcing such remedies. In doing so, these commentators exhibit significant nostalgia bias.

Take the most obvious question: what competition would authorities restore? Would their goal be to recreate the market conditions that prevailed before an infringement took place, or those that would currently prevail absent the challenged conduct? Because markets are constantly evolving, these two settings will often be very different.

Restoring the competitive conditions that prevailed before an infringement arbitrarily assumes that all, or most, of the changes that took place in the interim were ultimately to the detriment of consumers. But not everything that occurs after an infringement is caused by it. In seeking to restore markets to their prior state, policy makers might also lose any positive effects stemming from industry consolidation and vertical integration that occurred between the infringement and authorities’ decision.

Similarly, this approach implicitly assumes that less concentrated, less controlled markets (relative to subsequent concentration or reinforcement of monopoly) are more efficient and preferable to markets with structures that deviate from this norm. But this is a mistaken assumption. Deviations from perfect model assumptions are not necessarily expressions of market power; rather, they are often corrections of underlying market failures:

Reliance on the perfect competition model, I submit, accounts for the failure of modern scholars to offer any account of the formation and enforcement of non-standard contracts that does not depend on the possession or exercise of market power. By focusing solely on the propensity of non-standard contracts to reduce “transaction costs,” these scholars ignore the fact that such agreements also reverse market failures by internalizing externalities and thus altering the costs faced by parties to such agreements. Thus, such restraints naturally produce prices or output different from what would obtain in an unbridled market.152Meese, supra note 136, at 85 (emphasis omitted).

Attempting to recreate a counterfactual world in which the infringement never took place would not be any easier, and for similar reasons: how could authorities distinguish the market developments that were caused by the infringement, and which presumably harmed consumers, from those that emerged organically and were potentially beneficial?

More concretely, imagine that a vertically integrated platform excluded its downstream rivals by self-preferencing its own services. Imagine further that, months or even years down the line, an antitrust authority or court decided to challenge this practice and impose a restorative remedy. During that time span, the market may have tipped in favor of the monopolist. Authorities may thus be tempted to break up the firm, or to force it to share data with its rivals. But doing so opens a can of worms.

For a start, breaking up the dominant platform would potentially destroy valuable network externalities,153Nocke et al., supra note 61, at 1130. while data-sharing remedies may raise important data security issues.154Daniel R. Stoller & Sara Merken, Zuckerberg’s Call for Data Portability Highlights Security Risks, Bloomberg (Apr. 1, 2019, 4:46 PM), https://perma.cc/HCF2-C5DR. More fundamentally, there is simply no telling whether the market would, or would not, have consolidated towards a single player in any case. In trying to reverse market consolidation, nostalgia-driven policy makers might ultimately be trying to reconstruct a state of competition that would have disappeared anyway. The breakup of the Bell Telephone Company offers a fitting case in point. After the 1984 breakup, most of the so-called “baby Bells” either exited the market or merged with one another.155Nilay Patel, Look at This Goddamn Chart, The Verge (Oct. 24, 2016, 3:33 PM), https://perma.cc/NNR5-CLDB.

Finally, while it is easy to cite hypothetical reasons why breaking up companies would improve competition, as Tim Wu does when talking about Facebook and Instagram,156Wu, supra note 151. it is much harder to identify the benefits that were achieved thanks to market consolidation (be it by merger or competition). Such benefits might include synergies,157See, e.g., Joseph Farrell & Carl Shapiro, Scale Economies and Synergies in Horizontal Merger Analysis, 68 Antitrust L.J. 685, 685–86 (2001). economies of scale,158See, e.g., Dennis C. Mueller, A Theory of Conglomerate Mergers, 83 Q.J. Econ. 643, 643 (1969) (“If firms maximize profit, mergers will take place only when they produce some increase in market power, when they produce a technological or managerial economy of scale, or when the managers of the acquiring firm possess some special insight into the opportunities for profit in the acquired firm which neither its managers nor its stockholders possess.”). and superior management.159See Henry G. Manne, Mergers and the Market for Corporate Control, 73 J. Pol. Econ. 110, 119 (1965) (“Among the advantages of the former, as we have seen, are a lessening of wasteful bankruptcy proceedings, more efficient management of corporations, the protection afforded non-controlling corporate investors, increased mobility of capital, and generally a more efficient allocation of resources.”). These benefits may be lost when a breakup remedy is imposed. Critics who dismiss this difficulty are ultimately just guessing that the market conditions were better for consumers before—the epitome of Antitrust Nostalgia.

In short, restorative remedies, though superficially tempting, would give rise to tremendous practical difficulties. Professor Richard Epstein summarizes these well:

There is something to be said for modest “behavioral” remedies limiting certain kinds of contractual provisions. The best evaluation of these is that they do little good, but, by the same token, do little harm as well. The same cannot be said of the more ambitious effort to impose “structural” remedies intended to break up the unitary corporate structure of firms.160Richard A. Epstein, Monopolization Follies: The Dangers of Structural Remedies Under Section 2 of the Sherman Act, 76 Antitrust L.J. 205, 206 (2009).

5.     Nostalgia and the Error-Cost Framework

The irony of the “this time is different” mindset161See supra Section I.A.1. is that, in neglected ways that do not fit the dystopian narrative, this time really is different. Digital platforms present a challenge for modern antitrust doctrine—not because the doctrine is too permissive (as some contend),162See, e.g., Herbert Hovenkamp & Fiona Scott Morton, Framing the Chicago School of Antitrust Analysis, 168 U. Pa. L. Rev. 1843, 1847 (2020). but because it is overly nostalgic, resting on presumptions and tools that, on the margin, point back to “blackboard models” of perfect competition as their touchstone. These models do not describe platform competition well, and many aspects of antitrust doctrine are accordingly incongruous with the markets they now govern.

Ironically, critics of the more permissive “Chicago School” of antitrust contend that this same defect underlies their opponents’ views.163See id. at 1878 (“When economic policy takes the model of perfect competition as its starting point, it has nowhere to go but downhill. If we did have a perfectly competitive economy, then of course antitrust intervention would be unnecessary. Faced with the choice of moving to models that provided greater verisimilitude and predictability, but that required more intervention, or clinging to the past, the Chicago School chose the latter.”). And it is, of course, correct that economic science has evolved since the early days of the Chicago School and provided lessons that have, to a limited extent, been incorporated into antitrust law and economics.164Most notably, “raising rivals’ costs” (“RRC”) theories (which, as it happens, actually have their origins in work by early Chicago School scholars) have been important in shifting understandings of foreclosure and exclusionary conduct. See Manne, supra note 34, at 74 (“RRC offers a theoretically rigorous, alternative, anticompetitive theory for much ambiguous conduct, including conduct identified by early Chicago School scholars as having plausible procompetitive bases . . . .”). But the proper corrective to nostalgia is not wild speculation. While the economic literature in this vein is both important and influential, it offers very little to substantiate more interventionist approaches to transforming antitrust doctrine.

While additional theoretical sophistication and complexity is useful, reliance on untested and in some cases untestable models can create indeterminacy, which can retard rather than advance knowledge.

. . . .

As with almost all monopolization strategies, one cannot distinguish an anticompetitive use of RRC from competition on the merits, absent a detailed factual inquiry. . . . [T]here is very little empirical evidence based on in-depth industry studies that RRC is a significant antitrust problem.

. . . .

[Thus, b]ecause of this literature’s focus on theoretical possibility theorems, little evidence exists regarding the empirical relevance of these theories. Absent specific evidence regarding the plausibility of these theories, the courts . . . properly ignore such theories.165Bruce H. Kobayashi & Timothy J. Muris, Chicago, Post-Chicago, and Beyond: Time to Let Go of the 20th Century, 78 Antitrust L.J. 147, 148, 162, 166 (2012).

As explained in the introductory section of this Article, the fears that underpin both the dystopia and nostalgia biases are as old as antitrust enforcement itself. These biases have thus had ample time to permeate antitrust enforcement. This raises two possibilities. On the one hand, policy makers may have bought into these concerns and thus adapted the antitrust toolkit so as to address them (for instance, by making antitrust law more readily applicable to novel conduct). On the other hand, decisionmakers may have perceived these fears as undesirable biases and thus attempted to make antitrust law impervious to them. There is at least some reason to believe that the error-cost framework166See Frank H. Easterbrook, The Limits of Antitrust, 63 Tex. L. Rev. 1, 16 (1984) (“The legal system should be designed to minimize the total costs of (1) anticompetitive practices that escape condemnation; (2) competitive practices that are condemned or deterred; and (3) the system itself.”). that guides US antitrust enforcement was, in part, designed so as to minimize the occurrence of what we have referred to as nostalgia bias.

Let us take a step back. The possibility that, despite its age, antitrust law may be fairly well calibrated to address the novel characteristics of digital markets is routinely overlooked by commentators and scholars alike. A piece recently published in Bloomberg, for instance, argued that “100-year-old antitrust laws are no match for big tech.”167Tara Lachapelle, 100-Year-Old Antitrust Laws Are No Match for Big Tech, Bloomberg (Aug. 4, 2020, 7:00 AM), https://perma.cc/5FNC-DGZA. The implication is clear: antitrust law is outdated and it needs to adapt. Scholars and advocates have voiced similar concerns; FTC Chair Khan has argued that the lack of antitrust enforcement against Amazon is evidence that the law is out of date.168See Khan, supra note 72, at 784 (“[A]spects of Amazon’s conduct and structure may threaten competition yet fail to trigger scrutiny under the analytical framework presently used in antitrust. In part this reflects the ‘consumer welfare’ orientation of current antitrust laws . . . . But it also reflects a failure to update antitrust for the internet age.”).

While it is plausible that antitrust law is out of touch with the realities of digital markets, critics often overlook an important alternative explanation: the antitrust framers, and courts since then, may have consciously designed antitrust law so as to preclude the type of enforcement that nostalgic scholars are contemplating. In other words, antitrust may be powerless against big tech’s behavior not because it is outdated, but because policy makers decided to limit enforcement to what they perceived to be clear-cut infringements, deliberately making it relatively hard for plaintiffs to bring cases against novel forms of business conduct.

There is some reason to believe that the second of these two explanations is the more likely. This is, arguably, apparent from US antitrust law’s adoption of the error-cost framework as a guide to policy making.169See, e.g., Leegin Creative Leather Prods., Inc. v. PSKS, Inc., 551 U.S. 877, 895 (2007) (“[R]ules can be counterproductive. They can increase the total cost of the antitrust system by prohibiting procompetitive conduct the antitrust laws should encourage.” (citation omitted)); Bell Atl. Corp. v. Twombly, 550 U.S. 544, 559 (2007) (adjusting pleading standards in order to avoid Type I errors, noting that “it is self-evident that the problem of discovery abuse cannot be solved by ‘careful scrutiny of evidence at the summary judgment stage,’ much less ‘lucid instructions to juries’ . . . [;] the threat of discovery expense will push cost-conscious defendants to settle even anemic cases before reaching those proceedings” (citation omitted)); Verizon Commc’ns Inc. v. L. Offs. of Curtis V. Trinko, LLP, 540 U.S. 398, 414 (2004) (“The cost of false positives counsels against an undue expansion of [section] 2 liability.”). At its core, the error-cost framework attempts to structure antitrust enforcement to “minimize the total costs of (1) anticompetitive practices that escape condemnation; (2) competitive practices that are condemned or deterred; and (3) the system itself.”170Easterbrook, supra note 166, at 16.
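Easterbrook’s three cost categories can be restated compactly. The following is a minimal formalization of our own (the notation is illustrative and is not drawn from Easterbrook): a candidate legal rule is evaluated by the expected costs of the two types of error it induces, plus the cost of administering the system itself.

```latex
% A minimal formalization (ours) of the error-cost framework; requires amsmath.
% R is a candidate legal rule; p_I(R) and p_II(R) are the probabilities of
% false convictions and false acquittals under that rule; C_I and C_II are
% their social costs; A(R) is the cost of the system itself.
\[
  R^{*} = \arg\min_{R}\;
    \underbrace{p_{\mathrm{II}}(R)\, C_{\mathrm{II}}}_{\text{harmful conduct escapes condemnation}}
  + \underbrace{p_{\mathrm{I}}(R)\, C_{\mathrm{I}}}_{\text{benign conduct condemned or deterred}}
  + \underbrace{A(R)}_{\text{administrative cost}}
\]
```

Framed this way, the disagreement between nostalgic enforcers and error-cost proponents is largely a disagreement over the relative magnitudes of the two error-cost terms and over how a given rule shifts the two error probabilities, a point to which we now turn.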

Readers might question how this relates to antitrust nostalgia. The answer is that proponents of the error-cost framework, be they courts or scholars, tend to assume that the costs stemming from false convictions are the most significant—because unlike most market failures, they are not self-correcting—and thus warrant particular caution from policy makers.171Id. at 3 (“But this should not obscure the point: judicial errors that tolerate baleful practices are self-correcting, while erroneous condemnations are not.”). In turn, this translates into a requirement that anticompetitive presumptions are appropriate only when enforcers have acquired enough familiarity with a given practice172Broadcast Music, Inc. v. CBS, Inc., 441 U.S. 1, 10 (1979) (“We have never examined a practice like this one before; indeed, the Court of Appeals recognized that, ‘[i]n dealing with performing rights in the music industry we confront conditions both in copyright law and in antitrust law which are sui generis.’ And though there has been rather intensive antitrust scrutiny of ASCAP and its blanket licenses, that experience hardly counsels that we should outlaw the blanket license as a per se restraint of trade.” (citation omitted)); United States v. Topco Assocs., Inc., 405 U.S. 596, 607-08 (1972) (“It is only after considerable experience with certain business relationships that courts classify them as per se violations of the Sherman Act.”).—something that is rarely the case for the novel business conduct that lies at the heart of the current antitrust upheaval. The error-cost framework thus tends to be relatively favorable to novel forms of conduct—the very opposite of Antitrust Nostalgia, which tends to view such conduct in a negative light.

Underpinning this policy orientation is a belief that policy makers are often quick to find a monopoly explanation for behavior that they fail to understand.173See Coase, supra note 33, at 67 (“[I]f an economist finds something . . . that he does not understand, he looks for a monopoly explanation. And as in this field we are rather ignorant, the number of ununderstandable practices tends to be rather large, and the reliance on a monopoly explanation, frequent.”). This is particularly common in innovative markets where business practices evolve rapidly.174See Manne & Wright, supra note 88, at 164 (“Innovation creates a special opportunity for antitrust error in two important ways. The first is that innovation by definition generally involves new business practices or products. Novel business practices or innovative products have historically not been treated kindly by antitrust authorities. From an error-cost perspective, the fundamental problem is that economists have had a longstanding tendency to ascribe anticompetitive explanations to new forms of conduct that are not well understood.”). This tendency is compounded by a sense that economics always lags behind business practice.175Frank H. Easterbrook, On Identifying Exclusionary Conduct, 61 Notre Dame L. Rev. 972, 975 (1986) (“It takes economists years, sometimes decades, to understand why certain business practices work, to determine whether they work because of increased efficiency or exclusion.”). As a result, entrepreneurs will often struggle to articulate their reasons for adopting a given course of conduct, and thus fail to convey the practice’s redeeming virtues to courts and juries.176Id. (“[E]ntrepreneurs often flounder from one practice to another trying to find one that works. When they do, they may not know why it works, whether because of efficiency or exclusion. They know only that it works. If they know why it works, they may be unable to articulate the reason to their lawyers . . . .”); see also Geoffrey A. Manne & E. Marcellus Williamson, Hot Docs vs. Cold Economics: The Use and Misuse of Business Documents in Antitrust Enforcement and Adjudication, 47 Ariz. L. Rev. 609, 619–24 (2005) (discussing the disconnect between business knowledge and economic reality). The error-cost framework accounts for these biases by making antitrust cases marginally harder to bring in such instances, most notably by precluding the application of per se prohibitions when courts are unfamiliar with the underlying conduct.177Broadcast Music, 441 U.S. at 10; Topco, 405 U.S. at 607–08.

The upshot is that nostalgic scholars are wrong to assume that antitrust enforcement in the digital sphere has been sparse because the law is too dated to tackle the novel realities of these markets. To the contrary, it is precisely because innovative markets often feature novel forms of conduct that antitrust law proceeds with caution. By setting up the antitrust apparatus in this way, legislators and courts effectively recognized that emerging markets are, generally, no more prone to anticompetitive conduct than their traditional counterparts, but that it is harder to reliably identify anticompetitive behavior in these contexts.

C.     Shifting the Burden of Proof

Antitrust dystopia and nostalgia would not be so problematic were it not for one of the key policy proposals that almost invariably accompanies them: shifting the burden of proof in antitrust proceedings, or enacting legislation that would achieve a similar result. Going down either of these paths would effectively transpose the precautionary principle into the world of antitrust enforcement—especially as it relates to digital markets. However, as the rest of this Article explains, there is no evidence to suggest that this precautionary approach is at all justified in antitrust doctrine.

Calls for antitrust enforcers to shift the burden of proof when dealing with digital markets have become increasingly common on both sides of the Atlantic. The European Commission’s digital competition report, for instance, concluded that shifting the burden of proof would be appropriate in both unilateral conduct and merger proceedings concerning digital markets:

The test proposed here would imply a heightened degree of control of acquisitions of small start-ups by dominant platforms and/or ecosystems, as they would be analysed as a possible defensive strategy against partial user defection from the ecosystem as a whole. Where an acquisition plausibly is part of such a strategy, the burden of proof is on the notifying parties to show that the adverse effects on competition are offset by merger-specific efficiencies.178Crémer et al., supra note 53, at 124 (emphasis added).

. . . .

[O]ne may want to err on the side of disallowing potentially anticompetitive conducts, and impose on the incumbent the burden of proof for showing the pro-competitiveness of its conduct. This may be true especially where dominant platforms try to expand into neighbouring markets, thereby growing into digital ecosystems, which become ever more difficult for users to leave. In such cases, there may be, for example, a presumption in favour of a duty to ensure interoperability.179Id. at 4 (emphasis added).

This conclusion was echoed by other influential thinkers. The Stigler Center Report surmised that:

[W]hen an acquisition involves a dominant platform, authorities should shift the burden of proof, requiring the company to prove that the acquisition will not harm competition.180Stigler Center Report, Policy Brief, supra note 63, at 17 (emphasis added).

. . . .

Antitrust law might be revised to relax the proof requirements imposed upon antitrust plaintiffs in appropriate cases or to reverse burdens of proof. Burdens of proof might be switched by adopting rules that will presume anticompetitive harm on the basis of preliminary showings by antitrust plaintiffs and shift a burden of exculpation to the defendant or by ensuring that plaintiffs are not required to prove matters to which the defendants have greater knowledge and better access to relevant information.181Id. at 98.

And while it did not wholeheartedly endorse this approach, the UK’s “Furman Report” considered that shifting the burden of proof in merger cases should at least be on the table:

The principal alternative considered by the Panel has been the introduction of a legal presumption against acquisitions by large digital companies, with the burden placed on parties involved to provide proof that the merger will not be anti-competitive.182Digital Competition Expert Panel, supra note 66, at 101 (emphasis added).

Finally, Tim Wu and a group of antitrust scholars have put forward various proposals that would effectively amount to shifting the burden of proof in antitrust proceedings:

1. Vertical coercion, vertical restraints, and vertical mergers should enjoy no presumption of benefit to the public;

. . . .

5. The Berkey Photo standard for establishing monopoly leveraging should be restored [thereby substantially lowering the evidentiary threshold required to bring Section 2 leveraging cases];

. . . .

6. The broad structural concerns expressed by Congress in its enactment of the 1950 Anti-Merger Act, including due concern for the economic and political dangers of excessive industrial concentration, should drive enforcement of Section 7 of the Clayton Act;

7. Anticompetitive conduct harming one party or class should never be justifiable by offsetting benefits to another party or class. Netting harms and benefits across markets, parties, or classes should not be a method for assessing anticompetitive effects[.]183Tim Wu, The Utah Statement: Reviving Antimonopoly Traditions for the Era of Big Tech, OneZero (Nov. 18, 2019), https://perma.cc/T4AX-RAMS.

These proposals are starting to gain traction with policy makers on both sides of the Atlantic. A draft bill introduced by US Senator Klobuchar proposes to shift the burden of proof in merger proceedings.184See Competition and Antitrust Law Enforcement Reform Act of 2021, S. 225, 117th Cong. (2021). The draft European Digital Markets Act goes one step further. It lays out a series of practices that would be per se prohibited when they are implemented by so-called “gatekeeper” platforms.185See Commission Proposal for a Regulation of the European Parliament and of the Council on Contestable and Fair Markets in the Digital Sector (Digital Markets Act), COM (2020) 842 final (Dec. 15, 2020).

This approach is highly problematic. For a start, arguing that certain practices can harm competitors in the digital economy (as the reports cited above ostensibly do) is not the same as demonstrating that such conduct is, on balance, harmful to consumer welfare. To the best of our knowledge, there is no evidence to support this second claim. If that is so, the problem becomes one of proof: can defendants and plaintiffs realistically be expected to show the innocuousness or harmfulness of practices on a case-by-case basis? And, if not, what is the appropriate default presumption? As we have discussed, there are strong reasons to believe that plaintiffs are prone to mischaracterize efficient behavior as exclusionary, and that defendants will struggle to show that conduct is innocuous. And given the unlikelihood of fat-tailed antitrust harms in digital markets, plans to shift the burden of proof in digital antitrust proceedings appear ill-advised.

In short, the above proposals would effectively establish what amounts to a precautionary principle within the antitrust laws. From the moment authorities or plaintiffs meet some very limited evidentiary thresholds, a whole series of practices would be presumed harmful or banned outright and the onus of proving the lack of harm would fall upon defendants—who may be uniquely unsuited to meeting this burden.186Manne, supra note 34, at 77 (“[D]efendants engaged in innovative business practices that have evolved over time through trial and error regularly have a difficult time articulating a justification that fits either an economist’s limited model or a court’s expectations. . . . Imposing a burden of proof on entrepreneurs—often to prove a negative in the face of enforcers’ pessimistic assumptions—when that burden can’t plausibly be met can serve only to impede innovation.”).

D.     Partial Conclusion: When is Precaution Appropriate?

The previous sections have shown that numerous policy makers and scholars—guided by dystopian fears and a sense of nostalgia—have called for the introduction of precautionary measures in antitrust proceedings. This raises two interrelated questions. First, as a matter of general principle, when is the resort to precautionary measures appropriate? Second, are precautionary measures required to deal with antitrust enforcement in digital markets?

Conclusively answering the first question is ultimately beyond the scope of this Article. Instead, we proceed under the assumption that precautionary measures are called for only when a given course of conduct brings about a credible (if hypothetical) risk of total ruin.187Taleb et al., supra note 39, at 1. This is not to say that we endorse this version of the precautionary principle (or any version, for that matter)—as some of its implications are far from uncontroversial.188This cautious approach might rule out innovations which could arguably save the lives of millions of human beings. See Bailey, supra note 47. However, it is a solid benchmark against which to assess whether proponents of heightened antitrust enforcement have produced sufficient evidence to warrant the imposition of precautionary measures within antitrust enforcement.
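The “total ruin” benchmark is, at bottom, a claim about the tails of the distribution of possible harms. The sketch below is a purely illustrative comparison of our own (the distributions and parameters are invented for exposition): both distributions have the same mean, yet “ruinous” outcomes are orders of magnitude more likely under the fat-tailed one, which is the regime in which Taleb-style precaution is even arguably warranted.

```python
import math

def normal_tail(k: float, mu: float = 1.0, sigma: float = 1.0) -> float:
    """P(X > k) for a normal distribution: a thin-tailed benchmark."""
    return 0.5 * math.erfc((k - mu) / (sigma * math.sqrt(2)))

def pareto_tail(k: float, alpha: float = 2.5) -> float:
    """P(X > k) for a Pareto distribution scaled to have mean 1: fat-tailed."""
    x_m = (alpha - 1) / alpha  # scale parameter chosen so that E[X] = 1
    return (x_m / k) ** alpha if k > x_m else 1.0

# Same mean, radically different tail risk: extreme outcomes remain
# non-negligible under the Pareto distribution long after they have
# become vanishingly unlikely under the normal.
for k in (5, 10, 50):
    print(f"P(X > {k:>2}): normal = {normal_tail(k):.1e}, "
          f"Pareto = {pareto_tail(k):.1e}")
```

As the rest of this Article explains, the available evidence places harms in digital markets in the thin-tailed regime, which is precisely why the precautionary benchmark is not met.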

As the rest of this Article will explain, current calls for heightened antitrust intervention in digital markets do not pass even the low precautionary principle benchmark set out above. More precisely, critics’ claims that digital markets are particularly prone to market failure appear unfounded. Proponents of heightened antitrust intervention cite digital markets’ strong reliance on user data and the presence of network effects and returns to scale in order to support their assertions. However, as argued below, theoretical and empirical evidence fails to support such claims. There is thus nothing to suggest that digital markets are inherently more problematic than the rest of the economy, or that they should, as a result, be subject to a higher level of scrutiny from antitrust and/or regulatory authorities.189See infra Part II. Finally, we show that extremely similar claims have been raised before, most notably during the Microsoft antitrust proceedings in the late 1990s and early 2000s. Although the evidence is only anecdotal, the history of this case suggests that, if anything, severe market failures are less likely in digital markets than in the rest of the economy.190See infra Part III.

II.     The Case of Big Data Competition

The idea that digital markets are inherently more problematic than their traditional counterparts—if there even is a meaningful distinction—is perhaps best encapsulated by the catchphrase that “data is the new oil.” This trope is routinely repeated by policy makers, business investors, and reporters alike.191For instance, several State Attorneys General drew this parallel in an antitrust complaint lodged against Google. See Complaint at 6, Colorado v. Google, 1:20-cv-03715 (D.D.C. 2020) (“Cash is no longer the only form of currency, and rather than mining and monetizing a scarce resource such as oil, the attention economy is based on mining and monetizing knowledge about what is inside the minds of individual users. Google uses its gargantuan collection of data to strengthen barriers of expansion and entry, which blunts and burdens firms that threaten its search-related monopolies (including general search services, general search text advertising, and general search advertising.”)); see also Meglena Kuneva, Eur. Consumer Comm’r, Keynote Speech at the Roundtable on Online Data Collection, Targeting and Profiling (Mar. 31, 2009) (“Personal data is the new oil of the internet and the new currency of the digital world.”); Martin Pelletier, Why Data is the New Oil and What it Means for Your Investment Portfolio, Fin. Post (Aug. 29, 2017), https://perma.cc/SFH7-CL92 (arguing that data is the new global resource); The World’s Most Valuable Resource is No Longer Oil, But Data, The Economist (May 6, 2017) [hereinafter The World’s Most Valuable Resource], https://perma.cc/M8C4-TXMT (arguing that data is the world’s most valuable resource and antitrust regulation must step into the breach). Behind the slogan lies the fear that left to their own devices, today’s dominant digital platforms will become all powerful—protected by an impregnable “data barrier to entry”—akin to the industrial giants of the Gilded Age and specifically the Standard Oil Company. These comparisons are not just implicit. The Economist and other press outlets have routinely used Standard Oil Company-related imagery to depict the rise of digital platforms (see Figure 1).

Against this alarmist backdrop, nostalgic antitrust scholars have argued for aggressive antitrust intervention against the inherently non-standard business models and contractual arrangements that characterize these markets. Yet, as this Part demonstrates, a proper assessment of the attributes of data-intensive digital markets does not support either the dire claims or proposed interventions.

Section A shows that data is not exclusive, making it hard for incumbents to appropriate the benefits that might stem from superior access to data. Section B argues that firm-level capabilities, notably those which stem from talented research and development (“R&D”) teams, are likely far more relevant from a competitive standpoint than is access to large datasets. This assertion is borne out by empirical evidence suggesting that, contrary to critics’ claims, the marginal value of data decreases rapidly. Finally, Section C shows that most successful platforms emerge and overthrow incumbents despite—or maybe even thanks to—their initially inferior access to data (“necessity is the mother of invention”). In other words, even if data ultimately plays a large role in the monetization of digital platforms, it does not appear to be necessary for their creation, and thus for the emergence of new competitors.

The upshot is that “data is the new oil” is a highly misleading trope that perfectly illustrates the many misapprehensions that exist about competition in data-intensive markets.192See, e.g., Alec Stapp, Why Data is Not the New Oil, Truth on the Market (Oct. 8, 2019), https://perma.cc/78H8-93ZR. Instead, a closer look at these markets suggests that competition may indeed flourish. Andrei Hagiu and Julian Wright summarize this well: “These developments make data-enabled learning much more powerful than the customer insights companies produced in the past. They do not, however, guarantee defensible barriers.”193See Andrei Hagiu & Julian Wright, When Data Creates Competitive Advantage, Harv. Bus. Rev. Mag. (Jan.–Feb. 2020).

Figure 1: Top left, top right, and bottom left: depictions of Google, Amazon, and Facebook (contemporary). Bottom right: depiction of the Standard Oil Company (1904).194These images are cited in a clockwise direction. Survival of the Biggest: Battle of the Internet Giants, The Economist (Dec. 1, 2012), https://perma.cc/M9YZ-4MUL; see also Scott Galloway, Silicon Valley’s Tax-Avoiding, Job-Killing, Soul Sucking Machine, Esquire (Feb. 8, 2018), https://perma.cc/CF29-253D; The World’s Most Valuable Resource, supra note 191. For a more detailed discussion, see generally Dirk Auer & Nicolas Petit, Antitrust Versus the Press: Two Systems of Belief About Monopoly, 39 Cato J. 1, 6 (2018).

A.     Data is Information

One of the most salient features of the data that online firms create and consume is that, jargon aside, it is just information. As with other types of information, it thus tends to have at least some traits that are usually associated with public goods (i.e., goods that are non-rivalrous in consumption and not readily excludable).195See Carl Shapiro & Hal R. Varian, Information Rules: A Strategic Guide to the Network Economy 3 (1998). “[D]ata has near-zero marginal cost of production and distribution even over long distances[,]”196Catherine Tucker, Digital Data, Platforms and the Usual [Antitrust] Suspects: Network Effects, Switching Costs, Essential Facility, 54 Rev. Indus. Org. 683, 691 (2019). making it very difficult to exclude others from accessing it. Meanwhile, multiple economic agents can simultaneously use the same data, making it non-rivalrous in consumption.

This is not to say that data requires some special protection in order to be provided by the market: far from it. As Ronald Coase famously showed, public goods are theoretical constructs—like perfect competition or monopoly—that rarely exist outside of economic textbooks.197See R. H. Coase, The Lighthouse in Economics, 17 J. L. & Econ. 357, 375–76 (1974). Instead, the public good analogy shows that data bears some traits which make it almost irreconcilable with the alleged hoarding and dominance that came to be associated with the oil industry of the late nineteenth and early twentieth centuries.

Moreover, data, broadly speaking, is useful to all industries. Collecting data on consumers is not a new phenomenon restricted to online companies. The market for data, even if narrowly described as data for targeted advertising, is much broader than the online world. Offline retailers have long used data about consumers to better serve them. Through devices like coupons and loyalty cards (to say nothing of targeted mailing lists and the age-old practice of data mining check-out receipts), brick-and-mortar retailers have long tracked purchase data and used it to better serve consumers.198See, e.g., Dianne Heath, How Panera Uses Rewards Card to Increase Customer Loyalty & Attract Customers, Analyst Dist. (Nov. 4, 2011), https://perma.cc/MZ5J-XXT8; Nancy Kross, Big Data Analytics Revolutionizing The Way Retailers Think, Bidness Etc (June 26, 2014), https://perma.cc/Y2GE-UQQ3. Not only do consumers receive better deals as a result, but retailers know better what products to stock and advertise and when and on what products to run sales.

1.     Access to Data Is Not Exclusive

Data tends to be non-rivalrous, or at least, the cost of producing an additional copy of some piece of data is usually close to zero.199See Shapiro & Varian, supra note 195, at 3. For this reason, one agent’s use of a given piece of information does not automatically preclude its rivals from using the same information.

The non-rivalrous nature of information seriously undermines the views of critics who have compared digital platforms to Standard Oil and argued that government authorities need to step in to limit the platforms’ control over data.200See Nathan Newman, Taking on Google’s Monopoly Means Regulating Its Control of User Data, Huffington Post (Sept. 24, 2013, 7:37 AM), https://perma.cc/3UJQ-XHNG. To say that data is like oil betrays a serious misunderstanding. Google knowing a person’s birthday doesn’t limit the ability of Facebook to know a person’s birthday as well. While databases may be proprietary, the underlying data usually is not.

In other words, most data is non-exclusive. Not only can the same data be used by many different economic agents, but there are also numerous ways in which it can be obtained through different platforms. As discussed in more detail below, antitrust authorities should thus be highly skeptical about claims that rivals will be unable to independently generate equivalent data to that which is held by dominant platforms.

2.     Data is Hard to Appropriate

[W]e expect a free enterprise economy to underinvest in invention and research (as compared with an ideal) because it is risky, because the product can be appropriated only to a limited extent, and because of increasing returns in use.201Kenneth J. Arrow, Economic Welfare and the Allocation of Resources for Invention, in The Rate and Direction of Inventive Activity: Economic and Social Factors 619 (1962). But see Harold Demsetz, Information and Efficiency: Another Viewpoint, 12 J. L. & Econ. 1, 14 (1969); Jack Hirshleifer, The Private and Social Value of Information and the Reward to Inventive Activity, 61 Am. Econ. Rev. 561, 561 (1971).

The second key feature of information is that it is hard to appropriate. In practice, this means that companies that have acquired a valuable piece of data will struggle both to prevent their rivals from obtaining the same data and to derive a competitive advantage from it. For these reasons, it also means that firms may be more reluctant to invest in data generation than is socially optimal.202See Arrow, supra note 201, at 617. In fact, to the extent this is true, there is arguably more risk of companies under-investing in data generation than of firms over-investing in order to create data troves with which to monopolize a market. This again contrasts with oil, where complete excludability is the norm. The fact that appropriating data is a complicated task can be seen in a number of instances.

First, specific pieces of data can usually be obtained through a variety of channels. This undermines oft-repeated claims that large online platforms such as Google and Facebook have acquired an insurmountable data advantage over their competitors.203See, e.g., The World’s Most Valuable Resource, supra note 191. In other words, it is almost impossible to build an insurmountable data advantage because there will generally be an alternative way (or, more likely, a multitude of ways) to amass the same data. To take just one example, mobile internet service providers (“ISPs”) like Verizon have access to considerable data about their users, likely at least comparable to what Google and Facebook have. What’s more, mobile ISPs have uniquely good access to location data, increasingly the coin of the realm in a world where the most important and valuable consumer interactions are shifting to mobile. This may not be identical information, and even where it overlaps it is certainly a somewhat different dataset. Yet there can be no doubt that advertisers, among others, can use Verizon’s data for the same purposes as they use data from Google and Facebook.

Another important example concerns the ubiquity of data scraping on the internet. Contrary to popular belief, numerous firms in data-heavy industries do not rely solely on proprietary data to improve and market their products. Instead, these firms routinely “scrape” the internet in order to obtain the data they require.204See Klint Finley, ‘Scraper’ Bots and the Secret Internet Arms Race, Wired (July 23, 2018, 7:00 AM), https://perma.cc/8QFZ-4MJ8. This practice has led to a blossoming industry. Critically, this is one space where dominant firms arguably have little advantage over more nimble rivals. Indeed, stories abound of startups going head-to-head with large incumbents and generating more useful insights from the same publicly accessible data.205See, e.g., Julia Angwin & Steve Stecklow, ‘Scrapers’ Dig Deep for Data on Web, Wall St. J. (Oct. 12, 2010, 12:01 AM), https://perma.cc/3USB-UFD2; Miranda Katz, A Lone Data Whiz Is Fighting Airbnb—And Winning, Wired (Feb. 10, 2017, 12:00 AM), https://perma.cc/TSQ5-RPXT.
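To make the point concrete, the technical barrier here is remarkably low. The sketch below is a deliberately minimal illustration of our own (the URL and CSS selectors are placeholders, and a production scraper would also need to respect robots.txt, rate limits, and terms of service):

```python
import requests
from bs4 import BeautifulSoup

# Placeholder target; any publicly accessible listings page works the same way.
URL = "https://example.com/listings"

def scrape_listings(url: str) -> list[dict]:
    """Fetch a page and extract title/price pairs from its HTML."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    results = []
    # The selectors below are hypothetical; a real scraper adapts them
    # to the markup of the site it targets.
    for item in soup.select(".listing"):
        title = item.select_one(".title")
        price = item.select_one(".price")
        if title and price:
            results.append({"title": title.get_text(strip=True),
                            "price": price.get_text(strip=True)})
    return results

if __name__ == "__main__":
    for row in scrape_listings(URL):
        print(row)
```

That a lone developer can assemble such a pipeline in an afternoon is, in essence, the story behind the Airbnb example cited above.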

The upshot is that the ease with which data can be obtained—especially by identifying or creating new sources of information or by using publicly accessible information—suggests that it is an unlikely tool for firms to perpetuate monopoly power over lasting periods of time. A monopoly that relies on data to cement its position is thus built on sand, because any data-related advantage can be eroded the moment rivals come up with an alternative way of attaining comparable information.

It is important not to overstate the fungibility of data. While fungibility is the norm for many types and uses of data, it certainly will not always be. But, properly understood, the uniqueness of data is not a strong argument for antitrust enforcement against firms successfully using big data. First, unique agglomerations of data for which comparable substitutes do not (yet) exist inevitably reflect unique entrepreneurial foresight into the value of certain data, superior data processing abilities, or a particularly innovative mechanism for generating unique data.206See, e.g., Ufuk Akcigit & Qingmin Liu, The Role of Information in Innovation and Competition, 14 J. Eur. Econ. Ass’n 828, 828 (2016) (examining how firms pursue different innovation strategies owed to imperfect information distribution across firms). In all of these cases, there potentially are considerable consumer advantages from the underlying conduct that enables the unique appropriation of data, and penalizing the successful use of data means also penalizing broader innovative activities. Indeed, the inseparability of data from the products or services that generate or use it is one of the key problems with calls for antitrust intervention against big data: we do not use our antitrust laws (in the US, at least) against effective competition, but only against abuse of market power.

Second, data use by multi-sided platforms may often appear competitively unique when looking at only one side of the platform. But any anticompetitive significance may also be mitigated or undermined by the fungibility of the data on the other side of the platform. To take one obvious example, the data used and generated by Google Search is significantly different from that used and generated by Facebook. And, not coincidentally, on the user side of the platform Google and Facebook offer substantially different products, used primarily for divergent purposes. But on the advertising side, of course, the distinctions are substantially less relevant. Both Google and Facebook collect, generate, and process data to help advertisers identify and reach likely customers. The mechanisms by which they do this are quite different, but the purpose and aggregate content of the data is likely not very different at all.207It must be mentioned, as well, that the difference between the sets of specific users advertisers might access on each platform approaches zero as each platform approaches ubiquity. For advertisers, the substitutability of Facebook for Google (and vice-versa) increases as each increases in size. Whether this increase in competition offsets any (alleged) competitive problems resulting from their size is an empirical question (but one that advocates of antitrust action against these firms because of their size rarely address). The lack of advertising-side differentiation is no doubt bolstered by user multi-homing and the increasing ability of users to transfer data between platforms.208See The Data Transfer Project, https://perma.cc/UY7Z-S87N. The DTP is an initiative, begun in 2017, of Google, Facebook, Microsoft, Twitter, and a number of other data platforms to make data portability between platforms more efficient and user-friendly.

3.     Data as a Simulacrum and Information Asymmetry

A third important feature of data is that, in general, it is extremely difficult for parties to determine its value. For instance, economists have long theorized that it is hard for would-be purchasers to know how much a given piece of data is worth, ex ante. As economist Kenneth Arrow argued: “[T]here is a fundamental paradox in the determination of demand for information; its value for the purchaser is not known until he has the information, but then he has in effect acquired it without cost.”209See Arrow, supra note 201, at 615.

But the problem cuts even deeper. It can be hard for platforms or other intermediaries to know the value of underlying information at all—even once they have acquired access to a data set. Indeed, data’s true value typically is revealed only when firms apply high-quality processing to a data set.

Data is a simulacrum. Platforms are locked in an ever-evolving battle to identify, collect, process, interpret, and use data in order to figure out user preferences or conduct. There is no silver bullet with respect to the amount or kind of data to accomplish this. Every data set represents some collection of pieces of information that are an effort to guess at the user’s mind, as is every aggregate set of data about a large group of people (for which errors are more likely to cancel out, but for which the representational value of the data is less likely to be very accurate or useful because it encompasses significant noise relative to signal). Big data sets do, however, allow for pattern recognition. For example, in order to plot out likely traffic issues, mapping apps don’t really need to know where a given user is per se, but only whether a large mass of drivers are likely to be in the same place at the same time.
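The mapping example can be made concrete. In the stylized sketch below (ours; the coordinates and threshold are invented), individual identity is discarded at the outset: pings are snapped to coarse grid cells, and congestion is inferred from per-cell counts alone.

```python
from collections import Counter

# Anonymous GPS pings as (latitude, longitude) pairs; who sent each
# ping is irrelevant to the congestion question.
pings = [
    (40.7581, -73.9851), (40.7583, -73.9852), (40.7579, -73.9853),
    (40.7582, -73.9854), (40.7306, -73.9866),
]

def grid_cell(lat: float, lon: float, precision: int = 3) -> tuple:
    """Snap a coordinate to a coarse grid cell (~100 m at 3 decimals)."""
    return (round(lat, precision), round(lon, precision))

# Congestion is a property of the aggregate, not of any individual:
# a cell is flagged when enough pings land in it at the same time.
counts = Counter(grid_cell(lat, lon) for lat, lon in pings)
CONGESTION_THRESHOLD = 3
congested = [cell for cell, n in counts.items() if n >= CONGESTION_THRESHOLD]
print(congested)  # four of the five pings cluster in one cell
```

The signal of interest survives aggregation even though everything identifying about any single user has been thrown away, which is exactly the sense in which such data approximates patterns rather than minds.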

This information asymmetry point is important. It is commonly said or assumed that platforms have much more information than users and can use it to their advantage. The same is said for incumbents against new entrants. But is this really true? Users know far more about themselves than any platform ever will. If a new entrant were to ask the right questions, or buy the right data, it could easily know more than whatever Google knows about a user.

This is a key reason why Amazon is such a threat to Google: it knows what users shop for, what they buy, at what price, etc. That data is manifestly of enormous value to advertisers. Whatever Google knows about how often users search for terms like “Stigler entry barriers” pales in importance to what Amazon knows about what products users buy, or what Facebook knows about who their friends are and how they interact with them. And who knows what will be most relevant tomorrow? Even today, if big data were so good at predicting users’ behavior, then tech firms would be very good at, for example, predicting what future products and R&D projects will be most profitable. They, of course, are not.

The notion that data’s value is inherently tied to subsequent processing efforts also suggests that incumbent platforms may actually be important facilitators of new entry. Without generalizing, there are some obvious examples, like Amazon Web Services, which reduces the cost to smaller entrants of obtaining scale in backbone technology, or Google Search, which makes it easier for users to find new entrants that would otherwise have to overcome the problem of anonymity. In fact, to the extent that lack of information is a real entry barrier, the role of incumbent intermediaries in reducing search and other information costs (like providing reputation markets, etc.) can actually operate to overcome entry barriers. In assessing the extent to which data might operate as a barrier, it is therefore crucial also to assess the mechanisms data enables for reducing barriers, even for a company’s direct competitors.210For further discussion of this point, with particular reference to the Microsoft case, see infra Section III.D.

As suggested by the US Microsoft court, however, the relevant question concerns not the “initial acquisition of monopoly power”; it concerns a company’s “efforts to maintain this position through means other than competition on the merits.”211United States v. Microsoft Corp., 253 F.3d 34, 56 (D.C. Cir. 2001). It is, presumably, possible for a company to deploy, use, or limit access to data in order to impede competition at the platform level, rather than to compete—but this does not convert data into an entry barrier per se.212It should also be noted that examples of conduct that might amount to the erection of unjustified data barriers to competition are few and far between and may not even be identifiable in actual markets. See, e.g., Daniel L. Rubinfeld & Michal S. Gal, Access Barriers to Big Data, 59 Ariz. L. Rev. 339, 362–63 (2017) (attempting to canvass possible “behavioral” data barriers, but essentially identifying only a limitation imposed on a national census form as a constraint employed without business justification).

B.     Data is Not Scarce; Expertise Is

Another important feature of data is that it is ubiquitous. In contrast with the case of oil, the predominant challenge for firms is not so much obtaining data as drawing useful insights from it. This has two important implications as far as antitrust policy is concerned. First, although data does not have the same self-reinforcing characteristics as network effects, there is a sense that acquiring a certain amount of data and expertise is necessary to compete in data-heavy industries. It is (or should be) equally apparent, however, that this “learning by doing” advantage rapidly reaches a point of diminishing returns. Second, it is firms’ capabilities, rather than the data they own, that lead to success in the marketplace. Critics who argue that firms such as Amazon, Google, and Facebook are successful because of their superior access to data might thus have causality in reverse: arguably, it is because these firms have come up with successful industry-defining paradigms that they have amassed so much data, and not the other way around.
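The diminishing-returns claim is easy to illustrate. In the stylized simulation below (ours; the square-root learning curve is a common statistical baseline, not an empirical estimate for any real platform), each additional order of magnitude of data buys progressively less predictive improvement:

```python
import math

# Stylized learning curve: prediction error shrinks like 1/sqrt(n),
# the rate at which the standard error of a sample mean declines.
def accuracy(n_observations: int, base_error: float = 0.5) -> float:
    """Model accuracy as one minus an error term decaying in sqrt(n)."""
    return 1.0 - base_error / math.sqrt(n_observations)

previous = None
for n in (1_000, 10_000, 100_000, 1_000_000, 10_000_000):
    acc = accuracy(n)
    gain = "" if previous is None else f"  (gain: +{acc - previous:.4f})"
    print(f"n = {n:>10,}: accuracy = {acc:.4f}{gain}")
    previous = acc
```

On this assumption, the ten-millionth observation is worth a small fraction of what the thousandth was worth, which is why capabilities for extracting insight, rather than raw stocks of data, tend to determine competitive outcomes.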

1.     Increasing Returns to Scale, Data Network Effects, and Learning by Doing

Much of the impetus for antitrust enforcement in digital markets is premised on the assumption that the firms operating in these industries necessarily present some combination of increasing returns to scale, network effects, and data-related incumbency advantages. However, critics fail to provide anything more than anecdotal evidence to support such claims.213Crémer et al., supra note 53, at 20.

A first question concerns increasing returns to scale in the digital economy, in particular those relating to the use of data.214For an introduction to the concept of increasing returns to scale, see generally Varian, supra note 56. Indeed, critics often argue that digital platforms benefit from increasing returns to scale on all ranges of output. In other words, because of their scale, they require fewer economic inputs per unit of output, which allegedly gives them an unassailable advantage over rivals. The Stigler Center Report, for instance, concludes that:

Typically, information goods involve increasing returns to scale because their production requires a fixed cost and no or little variable cost

. . . .

The increasing returns to scale create barriers to entry: New firms cannot offer the quality of the incumbent without the same large-scale operation to pay for the fixed costs. But the firm can only achieve a large scale if quality is high. Thus, a potential entrant, foreseeing that it will not be profitable at the smaller scale, will not enter the market to challenge the incumbent.215Stigler Center Report, Antitrust Subcommittee, supra note 65, at 36–37.

In practice, however, the evidence for these increasing returns is particularly thin. A look at the annual reports of many big tech firms is revealing in this regard.

Google’s most recent 10-K, for example, shows that many of the company’s costs are unlikely to become smaller (in relative terms) as its output increases. This cuts against the existence of extreme returns to scale. For a start, more than fifty percent of Google’s total expenditures concern so-called “cost of revenues.”216Alphabet Inc., Annual Report (Form 10-K) 36 (Feb. 4, 2019). Of these, roughly half involve traffic acquisition costs (“TAC”), whereby Google pays other firms for the placement of its content (be it advertisements or access points).217Id. at 32. In Google’s own words, “TAC which are paid to Google Network Members primarily for ads displayed on their properties and amounts paid to our distribution partners who make available our search access points and services. Our distribution partners include browser providers, mobile carriers, original equipment manufacturers, and software developers.”218Id. At first glance, there is nothing to suggest that these expenditures become relatively less burdensome as a company increases in scale. In fact, the opposite may well be true. It is probably more costly to gain access to marginal users than inframarginal ones. This might explain why, for years, Google’s TAC-related expenditures have been steadily increasing.219Id. Google’s single largest expenditure thus fails to fit the increasing returns to scale pattern. And much the same can be said about many of the company’s other large outlays. These include the acquisition of bandwidth, the operation of data centers (which rivals can outsource220The five largest players in this space are currently Amazon, Microsoft, IBM, Google, and Alibaba. See, e.g., Synergy Rsch. Grp., The Leading Cloud Providers Increase Their Market Share Again in the Third Quarter, Globe Newswire (Oct. 26, 2018), https://perma.cc/V3A8-CMD5. Together, these firms provide data services to some of the world’s largest companies. For instance, Netflix is widely reported to use Amazon’s cloud services. See, e.g., Andria Cheng, Amazon’s Retail Rivals Are Happy To Work With It—As AWS Cloud Clients, Forbes (July 14, 2019, 8:04 PM), https://perma.cc/6FXN-AEUB.), general administration, and sales and marketing costs.221See Alphabet, Inc., Annual Report (Form 10-K) 37, 60-61 (Dec. 31, 2020). This leaves R&D, which represents roughly a fifth of Google’s expenditures, as a potential source of increasing returns to scale. But, here too, the case for increasing returns to scale is far from clear-cut. Much of the economic literature on the topic considers that R&D leads to decreasing—and not increasing—returns to scale.222See, e.g., Claude d’Aspremont & Alexis Jacquemin, Cooperative and Noncooperative R&D in Duopoly with Spillovers, 78 Am. Econ. Rev. 1133, 1133–34 (1988); see also Partha Dasgupta, The Theory of Technological Competition, in New Developments in the Analysis of Market Structure 523 (Joseph E. Stiglitz & G. Frank Mathewson eds., 1986). Contra Rabah Amir, Jim Y. Jin & Michael Troege, On Additive Spillovers and Returns to Scale in R&D, 26 Int’l J. Indus. Org. 695, 696 (2008) (arguing broadly that there should be no presumption either for or against decreasing returns to scale in R&D). At the very least, there can be no presumption that Google’s heavy reliance on R&D is necessarily a source of increasing returns. In short, for most of Google’s expenditures, there is no obvious reason to believe that greater economic output would necessarily require a less than proportionate increase in economic inputs.

The story is much the same for other big tech firms, such as Amazon and Facebook. Roughly three-quarters of Amazon’s expenditures involve “cost of sales” and “fulfillment” costs.223See Amazon.com, Inc., Annual Report (Form 10-K) 35 (Jan. 31, 2019). These primarily include costs associated with the purchase and shipping of goods. There is little to suggest that either of these is a source of “extreme” returns to scale. They mostly involve the same capacity utilization challenges that brick-and-mortar retailers must contend with.224Note that capacity utilization is sometimes cited as a source of potential (if moderate) increasing returns to scale. See, e.g., Randy A. Nelson, On the Measurement of Capacity Utilization, 37 J. Indus. Econ. 273, 282 (1989) (arguing that the capacity utilization of electric utilities may be a source of moderate increasing returns to scale). If these potential returns to scale were indeed insurmountable, Amazon would arguably have been unable to effectively compete against incumbent brick-and-mortar retailers in the first place. Likewise, roughly two-thirds of Facebook’s expenditures fall under the “cost of revenue,” “marketing and sales,” and “general and administrative” categories.225See Facebook, Inc., Annual Report (Form 10-K) 42 (Dec. 31, 2018). Again, there is little to suggest that increased output would be associated with a less-than-proportional increase in inputs for these expenses.

So, if they exist, where might big tech’s increasing returns to scale originate? One common suggestion is that they stem from the use of data. Critics sometimes point to the existence of so-called “data network effects” (although economies of scale and scope might be more appropriate terminology226See Tucker, supra note 196, at 685 (“The middle two categories of network effects described by Grunes and Stucke (2016), are simply known as economies of scale and scope in economics terminology and are considered to be distinct from network effects.”).).227See, e.g., Maurice E. Stucke & Allen P. Grunes, Big Data and Competition Policy 6–7 (2016) (arguing that data-driven industries are “subject to several network effects,” including: “traditional network effects, including social networks such as Facebook; network effects involving the scale of data; network effects involving the scope of data; and network effects where the scale and scope of data on one side of the market affect the other side of the market (such as advertising).”); see also Jason Furman, Address at the FTC Hearings on Competition and Consumer Protection in the 21st Century (Sept. 13, 2018) (“I think the big empirical question that I do not know the answer to . . . is if you think there is diminishing returns to data then you are a lot less worried about it then [sic] if you think there is some region of increasing returns. There is [sic] some people that deal with computer science that say, with machine learning, when you get past a certain point you get to this place where you can, you know, do the AI in a certain way that you could not do before you get to that scale.”). The argument goes that superior access to data allows firms to improve their products and gain more users. This then leads to even more data, thereby creating a self-reinforcing cycle that eventually causes one firm to dominate the market. Take, for example, Google, which has become the poster child for unsophisticated “data network effects” arguments. In the words of Nathan Newman:

While there are a number of network effects that come into play with Google, [“its intimate knowledge of its users contained in its vast databases of user personal data”] is likely the most important one in terms of entrenching the company’s monopoly in search advertising.

. . . .

Google’s overwhelming control of user data . . . . might make its dominance nearly unchallengeable.228Nathan Newman, Search, Antitrust, and the Economics of the Control of User Data, 31 Yale J. on Reg. 401, 420, 423 (2014).

Although the intuition is appealing, it has, to the best of our knowledge, neither been translated into a rigorous economic model nor been established empirically. In fact, the anecdotal evidence that has often been used to support this naïve assertion merely shows that learning-by-doing plays an important role in the tech industry, just as it does in the rest of the economy.

For a start, the existence of data-driven increasing returns to scale (or other data-related incumbency advantages) is not borne out by the burgeoning empirical literature on the topic. Summarizing these empirical findings, Professor Catherine Tucker concludes that “empirically there is little evidence of economies of scale and scope in digital data in the instances where one would expect to find them.”229See Tucker, supra note 196, at 686.

There are numerous pieces of evidence to support this claim.230But see Maximilian Schaefer, Geza Sapi, & Szabolcs Lorincz, The Effect of Big Data on Recommendation Quality. The Example of Internet Search 5 (Düsseldorf Inst. for Competition Econ., Discussion Paper No. 284, 2018) (showing that cookies that track user activity for longer periods of time improved the accuracy of results on the Yahoo! search engine). One potential objection to this study is that, though it broadly argues that obtaining more data about each user improves results, it says very little about the cumulative effect of obtaining data about multiple users. Ultimately, it is this second potential effect that is central to critics’ claims. See, e.g., Newman, supra note 228, at 421. For instance, economist Patrick Bajari and his co-authors use data from Amazon to show that (1) data on a wider range of products does not improve demand forecasting, and (2) increasing the timescale of data improves forecasting, but with diminishing returns.231See Patrick Bajari, Victor Chernozhukov, Ali Hortaçsu, & Junichi Suzuki, The Impact of Big Data on Firm Performance: An Empirical Investigation 5–6 (Nat’l Bureau of Econ. Rsch., Working Paper No. 24334, 2019). Likewise, in a paper co-authored with economist Lesley Chiou, Catherine Tucker finds that storing search engine results for shorter periods does not affect the accuracy of subsequent search results.232See Lesley Chiou & Catherine Tucker, Search Engines and Data Retention: Implications for Privacy and Antitrust 3 (Nat’l Bureau of Econ. Rsch., Working Paper No. 23815, 2017). Again, this cuts against the existence of increasing returns to scale. In another paper, Catherine Tucker and her co-authors cast doubt on the overall accuracy of digital profiling, and thus the competitive edge that firms might obtain by acquiring larger amounts of data.233See Nico Neumann, Catherine E. Tucker & Timothy Whitfield, How Effective is Third-Party Consumer Profiling and Audience Delivery?: Evidence from Field Studies, 38 Mktg. Sci. 918 (2019). Finally, a recent study argues that additional data improves algorithmic prediction with decreasing returns to scale.234See Jörg Claussen, Christian Peukert & Ananya Sen, The Editor vs. the Algorithm: Targeting, Data and Externalities in Online News (CESifo, Working Paper No. 8012, 2019). Using data from a large German news outlet, the authors show that additional user visits improve the site’s prediction algorithm with decreasing returns (the algorithm optimizes the news articles that are presented to each individual user).235Id. at 11.
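
The statistical intuition behind these findings can be illustrated with a back-of-the-envelope calculation. In the stylized sketch below—our own illustration, not a replication of any of the studies cited—a platform estimates a user preference, say a click-through rate, from n observations; the precision of the estimate improves with more data, but at a sharply decreasing rate:

```python
# A stylized statistics exercise (not a replication of any cited study):
# estimating a true click-through rate p from n impressions. The standard
# error of the estimate is sqrt(p * (1 - p) / n).
import math

P = 0.05  # hypothetical true click-through rate

def standard_error(n: int) -> float:
    """Precision of the estimated rate after n observations."""
    return math.sqrt(P * (1 - P) / n)

prev = None
for n in (1_000, 10_000, 100_000, 1_000_000):
    se = standard_error(n)
    note = "" if prev is None else f" (10x the data cuts error by {1 - se / prev:.0%})"
    print(f"n = {n:>9,}: std. error = {se:.5f}{note}")
    prev = se

# Each tenfold increase in data buys the same ~68% proportional reduction,
# so the absolute gains shrink rapidly -- diminishing returns to data.
```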

As our survey of the empirical literature in the following table shows, additional pieces of data are usually beneficial, but these benefits systematically entail diminishing marginal returns.

Table 1: Survey of Empirical Papers That Analyze the Marginal Benefits of Data

Author | Year | Method | Source of Data | Effect of “More Data”

de Fortuny, Martens & Provost | 2013 | Multivariate event model | Data drawn from nine different predictive modeling applications, from book reviews to banking transactions | Increasing with diminishing returns

“One should note, however, that the curves do seem to show some diminishing returns to scale.”236See Enric Junqué de Fortuny, David Martens & Foster Provost, Predictive Modeling with Big Data: Is Bigger Really Better?, 1 Big Data 219 (2013).

“The marginal increase in generalization accuracy decreases with more data for several reasons.”237Id.

Chiou & Tucker | 2017 | Difference-in-differences | Data from Experian Hitwise (outgoing traffic from Google, Yahoo! and Bing search engines) | Flat

Longer periods of data storage (6 to 18 months; and 3 to 13 months) “do not confer advantages in search quality.”238Chiou & Tucker, supra note 232, at 14–17.

Schaefer, Sapi & Lorincz | 2018 | Local polynomial regression | Yahoo! search logs (i.e., users’ click behavior) | Increasing with diminishing returns

“[Q]uality of search results improve with more data on previous searches” (cookie length), with decreasing returns. Personalized data is the most valuable.239Schaefer et al., supra note 230, at 1, 11.

Bajari, Chernozhukov, Hortaçsu & Suzuki | 2019 | Linear regression | Amazon’s retail forecasting system | Flat, or increasing with diminishing returns

Cookie length is robustly helpful in improving forecast quality. The effect of increasing the number of products in the same category is robustly flat (with a few exceptions). When the estimated effects are not flat, “they exhibit diminishing returns to scale, with the exception of T effects in the model without time controls.”240Bajari et al., supra note 231, at 39.

Neumann, Tucker & Whitfield | 2019 | Field experiment | Nielsen ad ratings data | Sometimes negative from a cost/benefit point of view

“In comparison with random audience selection, the use of black box data profiles, on average, increased identification of a user with a desired single attribute by 0%–77%. Audience identification can be improved, on average, by 123% when combined with optimization software. However, given the high extra costs of targeting solutions and the relative inaccuracy, we find that third-party audiences are often economically unattractive.”241Neumann et al., supra note 233, at 919–20.

Claussen, Peukert & Sen | 2019 | Randomized experiment | Website of large German news outlet | Increasing with diminishing returns

“Additional data helps algorithmic performance with rapidly decreasing returns.”242Claussen et al., supra note 234, at 10.

In short, available evidence suggests that claims of “extreme” returns to scale in the tech sector are greatly overblown. Not only are the largest expenditures of digital platforms unlikely to become proportionally less important as output increases, but empirical research strongly suggests that even data does not give rise to increasing returns to scale, despite routinely being cited as the source of this effect.

2.     Misapplying the Theory of Network Effects

At a more theoretical level, the “data network effects” framing also misapplies the theory of network effects. Network effects occur when a consumer’s utility for a good is, at least in part, a function of the expected number (and quality) of other agents using the same product.243See Michael L. Katz & Carl Shapiro, Systems Competition and Network Effects, 8 J. Econ. Persps. 93, 96 (1994). These valuable users may be located on the same side (direct network effects) or on the opposite side (indirect network effects) of a platform.244See Jean-Charles Rochet & Jean Tirole, Platform Competition in Two-Sided Markets, 1 J. Eur. Econ. Ass’n 990, 993 (2003). In both cases, the bottom line is that consumers place a premium on utilizing a product whose network contains a large number of users (or higher quality ones). To a first approximation, however, this means that network effects are a benefit to users, not a cost.245See Manne & Wright, supra note 115, at 224–25.
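
To make the definition concrete, the following toy example—with invented parameters, not drawn from the cited literature—renders a user’s utility as the sum of a standalone value and a network term that grows with the number of fellow users:

```python
# A stylized utility function for a network good (invented numbers).
# "standalone" is the product's value absent any other users; the second
# term is the premium from having more fellow users on the network.
def utility(standalone: float, other_users: int, weight: float) -> float:
    """Consumer utility rising with the number of other agents."""
    return standalone + weight * other_users

for n in (0, 1_000, 100_000):
    print(f"{n:>7,} other users -> utility = {utility(2.0, n, 0.0001):.1f}")

# Utility climbs from 2.0 (no network) to 12.0 (large network): the
# network term is a benefit delivered to users, not a harm.
```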

In the case of “data network effects,” however, the theoretical model is weak at best. Because users likely do not attach any standalone value to platforms with more data,246Data collection and use is merely a tool that a platform uses to customize user experience, not the experience itself. Firms can offer the same end-user experience (which is, logically, what consumers actually value) using different data in different amounts. there are innumerable ways in which firms may offer a superior product without having the same or as much data as their rivals. Firms can differentiate themselves on a variety of features, ranging from price, to quantity and invasiveness of ads to which users are exposed, to the degree of privacy protection afforded to users. This notably has been the case for search engines.247DuckDuckGo, for instance, has experienced significantly increased traffic in recent years (though it still lags very far behind Google in terms of users). See DuckDuckGo Traffic, DuckDuckGo, https://perma.cc/PEQ6-TW5P. Crucially, competition between Google Search and DuckDuckGo does not seem to be primarily dependent on the data these firms hold. Google offers much lower default levels of privacy protection but proposes a full suite of online applications free of charge. DuckDuckGo, in contrast, differentiates itself by offering a search engine with higher levels of privacy protection. It is not clear how much the data owned by these companies influences consumer choices.

Given this theoretical distinction, telling a story of problematic “data network effects” for a company like Google is difficult. For instance, some critics have argued that Bing lost out to Google because of inferior data. The claim is perhaps best encapsulated by Nathan Newman, who writes: “That Microsoft, with nearly half of Google’s user base, still generated $2.6 billion in losses compared to its costs shows the height of the competitive barrier.”248Newman, supra note 228, at 419.

Although it is possible that data-related incumbency advantages stymied Microsoft’s success relative to Google, it seems far more likely that Microsoft simply offered an inferior product. That proponents of a data network effects story ignore relative product quality along multiple dimensions and assume that the quantity of data alone is outcome determinative highlights the paucity of the argument. Data matters to the extent that it is used to provide value to users within a product or service that is also attractive, functional, and usable. The quality of the underlying algorithm, informed in part by data derived from users, certainly contributes to that, but it is far from the only factor.

And even if one were to assume that data has a significant impact on the quality of a digital firm’s products, critics would still need to prove that smaller rivals cannot obtain comparable datasets. Returning to the criticism that has been voiced against Google, proponents of the data network effects theory often contend that Google’s access to data is unparalleled and insurmountable: “[T]he gain for Google from its network of users is not just data on each individual user, but the cumulative data that can reveal how similar users behave.”249Id. at 421.

But the relevant information is as available to Bing as it is to Google: observable patterns of users’ interactions with readily indexable, web-connected content. Google certainly makes observations about its greater number of users’ behaviors that it uses to improve its product. But Bing also has that capability, as well as the support structure of one of the most valuable companies in the world (Microsoft) and teams of talented programmers. Roughly a quarter of all US searches were performed on Bing in 2018.250See Joseph Johnson, Share of Search Queries Handled by Leading U.S. Search Engine Providers as of April 2021, Statista (2021), https://perma.cc/Y4HX-BPZJ. Surely, given its resources and teams of programmers, Bing (or another well-funded and technologically savvy competitor) is capable of competing away Google’s gains. There is no indication that training a search algorithm requires anything more than a significantly smaller volume of data.251Manne & Wright, supra note 115, at 212. After all, the power of machine learning is that it can make useful inferences about user behavior based on (relatively) small sample sizes—one doesn’t need “all the data” to make useful machine learning algorithms. Bing has “enough” data—it indexes the same public web and has access to a very large share of user activity. It just so happens that users prefer Google’s results and other features of its product.

Moreover, synthetic datasets can replicate real-world data cheaply and easily enough to enable even a data-poor competitor (at least one with comparable data-processing capabilities) to compete with a data-rich incumbent. As Kelvin Yu has observed:

One of the most significant advantages [AI researcher] Kai-Fu [Lee] claims that top companies have—the ability to continuously collect proprietary data—may not be such a skewed advantage in the near future. Synthetic data generation is the creation of artificial data for the purposes of testing and improving AI models. . . . [S]tartups are already offering Data-Generation-as-a-Service. The technique is already being used by companies like Waymo and Tesla to simulate autonomous driving. As of July 2019, Waymo had 10 billion simulated miles and only 10 million physical miles driven, demonstrating the scalability and speed of simulating data.252Kelvin Yu, No, AI Does Not Lead to Monopoly Markets, Profiles in Entrepreneurship (July 25, 2019), https://perma.cc/J3MT-DW8W.

In fact, there is already a company, Usearch, offering search engine technology built on synthetic data.253See The World’s First Search Engine Based on Synthetic AI-Generated Data, Usearch, https://perma.cc/VDW7-M8AG (“At Usearch, we . . . took on the challenge of inventing new technology that builds large quantities of synthetic AI-generated quality pairs <query,webpage> using a new independent paradigm. Being independent means that we are not reliant in any way on collecting Query Logs from Google (like Cliqz did) or using Bing API (like DuckDuckGo, StartPage, Dogpile and so on did). Moreover, we don’t need any users, browser or query pool to get started.”).
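
While we make no claims about how Usearch’s technology actually works, the basic idea of synthesizing training pairs is easy to sketch. The toy generator below—every entity, template, and URL is hypothetical—fabricates <query, page> pairs from structured templates without collecting a single real user query:

```python
# A purely hypothetical sketch of synthetic training-data generation for
# search: fabricate <query, page> pairs from templates. Every entity,
# template, and URL below is invented for illustration.
import random

random.seed(0)

ENTITIES = ["thai restaurant", "bike repair", "tax lawyer", "yoga studio"]
CITIES = ["austin", "denver", "boston"]
TEMPLATES = ["best {e} in {c}", "{e} near {c}", "cheap {e} {c}", "{e} {c} reviews"]

def synthesize_pairs(n: int) -> list[tuple[str, str]]:
    """Generate n artificial (query, relevant page) training pairs."""
    pairs = []
    for _ in range(n):
        e, c = random.choice(ENTITIES), random.choice(CITIES)
        query = random.choice(TEMPLATES).format(e=e, c=c)
        page = f"https://example.com/{c}/{e.replace(' ', '-')}"
        pairs.append((query, page))
    return pairs

for query, page in synthesize_pairs(5):
    print(f"{query!r:35} -> {page}")

# A ranking model bootstrapped on such pairs needs no real user logs at
# all -- which is why synthetic data cuts against data-barrier claims.
```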

A final point concerns the difference between network effects and “learning-by-doing.” When it comes to data, it is more appropriate to consider the growth of firms and the size of their networks as a function of the latter.254See Hal R. Varian, The Economics of Internet Search, Rivista di politica economica, Nov.–Dec. 2006, at 177, 179.

Recall that network effects occur when a consumer’s utility for a good depends, at least in part, on the expected number (and quality) of other agents using the same product.255See, e.g., Katz & Shapiro, supra note 243, at 96. These valuable users may be located in the same market or on the opposite side of a platform.256See, e.g., Rochet & Tirole, supra note 244, at 993. From a policy standpoint, some scholars have voiced fears that these network effects may lead to highly concentrated markets in which incumbents are impossible to overthrow.257See Joseph Farrell & Garth Saloner, Installed Base and Compatibility: Innovation, Product Preannouncements, and Predation, 76 Am. Econ. Rev. 940, 940 (1986); see also Ariel Ezrachi & Maurice E. Stucke, Virtual Competition, 7 J. Eur. Competition L. & Prac. 585, 586 (2016). And yet, in practice, this intuition often turns out to be false. For instance, economists Stan Liebowitz and Stephen Margolis show that one of the most commonly cited examples of “excess inertia”—the failure of Dvorak keyboards to displace the allegedly less-efficient QWERTY layout—does not withstand empirical scrutiny.258See, e.g., S. J. Liebowitz & Stephen E. Margolis, The Fable of the Keys, 33 J.L. & Econ. 1, 21 (1990). The authors conclude:

The trap constituted by an obsolete standard may be quite fragile. Because real-world situations present opportunities for agents to profit from changing to a superior standard, we cannot simply rely on an abstract model to conclude that an inferior standard has persisted. Such a claim demands empirical examination.259Id.

The upshot is that there is a theoretical, though empirically debatable, case to be made for network effects leading to potential market failures.260For a discussion, see Daniel F. Spulber, Consumer Coordination in the Small and in the Large: Implications for Antitrust in Markets with Network Effects, 4 J. Competition L. & Econ. 207 (2007); see also Dirk Auer, What Zoom Can Tell Us About Network Effects and Competition Policy in Digital Markets, Truth on the Market (Apr. 24, 2019), https://perma.cc/4FH3-8YKP.

In contrast, learning-by-doing is the idea that a firm’s productivity improves with experience.261See Kenneth J. Arrow, The Economic Implications of Learning by Doing, 29 Rev. Econ. Stud. 155, 172 (1962); see also Armen Alchian, Reliability of Progress Curves in Airframe Production, 31 Econometrica 679, 679–80 (1963). The advantage of learning-by-doing is usually found to be much less pronounced than that of network effects. For instance, in his seminal paper about learning-by-doing, Kenneth Arrow cites empirical literature indicating that “to produce the Nth airframe of a given type, counting from the inception of production, the amount of labor required is proportional to N^(-1/3).”262See Arrow, supra note 261, at 156. Unlike network effects, learning-by-doing is thus generally assumed to involve decreasing marginal benefits, and to become almost irrelevant beyond a point.263See Alchian, supra note 261, at 680. In other words, learning-by-doing generates significant advantages in the early stages of improving a process, but these incremental advantages drop off sharply after a certain point because firms have picked all the low-hanging fruit and because knowledge spills over to rival firms that can imitate the learned process improvements. For this reason, for data-driven platforms, growth more commonly follows a “learning curve” and is not subject to the winner-takes-all effect implied by the conventional network effect assumption.264See D’Arcy Coolican & Li Jin, The Dynamics of Network Effects, Andreessen Horowitz (2018), https://perma.cc/3YH8-VSGY.
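
The arithmetic implied by this relationship shows just how quickly the learning advantage flattens. The short calculation below simply evaluates the functional form Arrow cites—labor for the Nth unit proportional to N^(-1/3)—with arbitrary units:

```python
# Back-of-the-envelope arithmetic on the functional form Arrow cites:
# labor needed for the Nth airframe is proportional to N ** (-1/3).
# Units are arbitrary; this evaluates the cited relationship, nothing more.
for n in (1, 10, 100, 1_000, 10_000):
    unit_labor = n ** (-1 / 3)  # labor per unit, relative to the first unit
    print(f"unit {n:>6,}: labor = {unit_labor:.3f} of the first unit")

# unit 1 -> 1.000; unit 10 -> 0.464; unit 10,000 -> 0.046. The first
# tenfold expansion cuts unit labor by more than half; later expansions
# shave off ever-smaller absolute amounts. The learning advantage
# flattens, rather than compounding like a network effect.
```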

Another important difference is that, in the case of learning-by-doing, success is, by definition, a function of superior capabilities (and/or efficiency because of increased productivity). Large returns can (and do) exist in industries in which learning-by-doing is important (arguably in proportion to the technological complexity of the industry265See Philip E. Auerswald, Entry and Schumpeterian Profits: How Technological Complexity Affects Industry Evolution, 20 J. Evolutionary Econ. 553, 578 (2009) (“In industries where production processes are simple, I find that profits rapidly converge on the norm, particularly when imitation is possible. In industries where production processes are more complex, persistent profits accrue to surviving firms. Such profits are greatest in the early stages of industries where technology is of intermediate complexity—that is, where learning is rapid enough to confer a competitive advantage, but imitation is sufficiently uncertain to deter later entry.”).). But it makes no sense to attack such firms, even where they may enjoy large profits and market power as a result of their superior skill; this is precisely the type of benefit that the antitrust laws were designed to promote.266See Robert H. Bork, The Antitrust Paradox: A Policy at War with Itself 105 (1993). And there is even less of an argument that learning-by-doing constitutes a barrier to entry than do network effects because incumbents and entrants must bear roughly the same costs to move down their respective learning curves. Moreover, initial advantages are typically dissipated over time as information spills over. This contrasts with the widely accepted definition of barriers to entry, which holds that a barrier to entry is any cost that must be borne by entrants but not incumbents.267See Stigler Center Report, Antitrust Subcommittee, supra note 65, at 71.

Although these may seem like abstract distinctions, they have very real consequences. Take a presentation given by the Chief Economist of the European Commission in late 2018. In a nutshell, the Commission official held that a positive feedback loop allowed dominant platforms to extract even more data from their users.268See Valletti, supra note 18, at 5. The intuition is that a platform with more users generates more data, and this allegedly leads to superior targeted advertisements. These, in turn, allegedly lead to more users because the platform can reinvest the added revenue they generate, etc.269Id. at 7. But this is precisely the conceptual trap that competition authorities should avoid.

This flawed reasoning implies that there is a linear, or even super-linear (e.g., quadratic) relationship between the data owned by a firm and the money it can extract from targeted advertisements. Putting aside the fact that the revenue required to fund platform growth can come from any source, not just advertising itself,270See Manne & Wright, supra note 115, at 210–11 (“[T]hough Google perhaps generates the funds for its continued product development through its successful business, the same business model need not be adopted by competitors. In fact, Microsoft, one of Google’s primary competitors, has a market capitalization substantially larger than Google’s, and higher profits generated by its other businesses to invest in search engine functionality improvements. There is no reason why it matters if this investment comes from advertising revenue, the sale of operating systems, or outside capital sources.”). this leaves out consideration of two crucial questions: (1) when does additional data cease to markedly improve ad targeting, and (2) at what point does superior ad targeting no longer significantly increase revenues? As argued above, early empirical evidence suggests that data used for ad targeting exhibits diminishing returns to scale, and that it does so at a fairly modest threshold.271See, e.g., William Terdoslavich, Big Data and the Law of Diminishing Returns, InformationWeek (2015), https://perma.cc/VZ3N-T7HA; Hal Varian, Is There a Data Barrier to Entry? (2015), https://perma.cc/85SL-ST6R. Moreover, although this area of research is still in its infancy, there is at least some evidence that highly targeted advertisements might not always be effective because consumers perceive them to be overly intrusive.272See Avi Goldfarb & Catherine Tucker, Online Display Advertising: Targeting and Obtrusiveness, 30 Mktg. Sci. 389, 398 (2011). This suggests that there may indeed be a point at which more data used to improve ad targeting no longer provides any meaningful benefits.
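
A toy simulation helps show why these two questions matter. In the sketch below—built on our own illustrative concavity assumptions, not an estimate of any real platform—targeting quality is concave in data and revenue is concave in targeting quality, in line with the empirical findings discussed above. Under those assumptions, the feedback loop damps out rather than compounding:

```python
# A toy feedback loop under our own concavity assumptions (not an estimate
# of any real platform): targeting quality is concave in data, revenue is
# concave in quality, and reinvested revenue brings in new data.
import math

data = 1.0
for step in range(1, 11):
    quality = math.log(1 + data)       # diminishing returns to data
    revenue = quality / (1 + quality)  # targeting gains saturate
    data += 0.5 * revenue              # reinvestment attracts new data
    print(f"step {step:>2}: data = {data:6.2f}, revenue = {revenue:.3f}")

# With two layers of diminishing returns, revenue inches toward a ceiling
# and data grows only linearly: the "more data -> more money -> more data"
# loop damps out instead of tipping the market.
```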

The bottom line is that so-called “data network effects” are better framed as a form of learning-by-doing. They thus raise relatively little antitrust concern and should be embraced by policy makers because they ultimately lead to superior efficiency—the very goal of antitrust law.

3.     Dynamic Capabilities

This leads us to a third important point. The challenge for firms in data-reliant industries is multidimensional. Not only must they acquire data (and this is not merely a matter of “data network effects”), but just as importantly they must also develop the expertise to analyze this data, draw useful insights from it, and turn these insights into successful products. In doing so, acquiring the right data and getting the best out of a firm’s engineers are at least as important as controlling a large amount of data or engineering expertise. In other words, there is no single ingredient that mechanically leads to success. Instead, it is up to firms to identify and seize upon emerging business opportunities. Seen in this light, the resounding success of certain technology platforms appears to be attributable to their respective “dynamic capabilities” rather than the operation of positive feedback loops.

Dynamic capabilities can be defined as the “capacity business enterprises have to shape, reshape, configure, and reconfigure those assets so as to create and respond to changing technologies, competition, and market developments.”273See Teece, supra note 147, at 156. Critically, David Teece adds that “[t]he dynamic capabilities framework recognizes that the business enterprise is shaped but not necessarily trapped by its past.”274Id. at 50.

This is of great importance for antitrust authorities. Though it may seem obvious, not all firms will possess the requisite capabilities to compete and flourish in these dynamic marketplaces. And evolving market realities imply that some prosperous firms will fall out of favor with consumers for no other reason than their failure to adapt to these new conditions (often, by the time such firms recognize the problem, it is too late to change course and adopt another business strategy). Antitrust enforcers may often be tempted to try to prop up these failing firms—under the faulty premise that their demise is due to anticompetitive behavior rather than a mix of poor decisions and bad luck. But forcing successful firms to share their assets will often only delay the inevitable.

These factors can be seen at play in the early days of the search engine market. In 2013, The Atlantic ran a piece titled “What the Web Looked Like Before Google.”275See Rebecca Greenfield, What the Web Looked Like Before Google, The Atlantic (Sept. 27, 2013), https://perma.cc/XSQ4-V6CT. By comparing the websites of Google and its rivals in 1998 (when Google Search was launched), the article shows how the current champion of search marked a radical departure from the status quo (see Figure 2):

Figure 2: Above, Homepages of Yahoo!, AltaVista, and AOL, 1998. Below, Homepage and First Result Page of Google, 1998.

These images reveal critical differences between Google and its rivals. Even if Google stumbled upon it by chance (and although it was not necessarily apparent at the time), the company had immediately identified a winning formula for the search engine market. It ditched the complicated classification schemes favored by its rivals and opted, instead, for a clean page with a single search box. This ensured that users could access the information they desired in the shortest possible amount of time (thanks in part to Google’s PageRank algorithm).276See Steven Levy, Exclusive: How Google’s Algorithm Rules the Web, Wired (Feb. 22, 2010, 12:00 PM), https://perma.cc/N4NT-H5GZ.

It is hardly surprising that Google’s rivals struggled to keep up with this shift in the search engine industry. The theory of dynamic capabilities tells us that firms that have achieved success by indexing the web will struggle when the market rapidly moves toward a new paradigm (in this case, Google’s single search box and ten blue links). During the time it took these rivals to identify their weaknesses and repurpose their assets, Google kept on making successful decisions (notably, the introduction of Gmail,277See Annys Shin, When Google Introduced Gmail, Wash. Post (Mar. 30, 2017), https://perma.cc/9B29-M7Q6. its acquisitions of YouTube and Android,278See Matt Reynolds, If You Can’t Build It, Buy It: Google’s Biggest Acquisitions Mapped, Wired (Nov. 25, 2017, 11:00 AM), https://perma.cc/T787-YLDH. and the introduction of Google Maps,279See Bret Taylor, Mapping Your Way, Google Official Blog (Feb. 8, 2005), https://perma.cc/UBV6-4DP9. among others). All these products tied in with one of Google’s key capabilities: to provide users with information through whatever platform they are using (desktop or mobile) and regardless of the medium in which it is stored (be it web pages, online videos, maps, emails, etc.). Seen from this evolutionary perspective, Google thrived because its capabilities were perfect for the market at that time, while rivals were ill-adapted.

According to this interpretation, Google’s meteoric rise had nothing to do with “data network effects” and everything to do with its specific capabilities and the strategy it deployed, over many years, to capture latent consumer demand in the search engine market. In fact, it overcame a tremendous data disadvantage to catch up with—and overtake—firms such as Yahoo! and AltaVista (which had entered the search engine market long before Google).280Whereas early search engine giants like Lycos scrambled to index the quickly growing set of web pages (one billion by the year 2000), Google took a different tack, eschewing the collation of large data sets—as was the habit of its competitors—and focusing on the relevance of pages to given queries: Google embraced the philosophy of quality over quantity. It did not try to index every page in existence. Instead, Google focused on trying to retrieve the best possible results to meet the user’s query. Google tried to display the few highly relevant results before the thousands of slightly relevant results that plagued the older search engines. Google also introduced the concept of page ranking to help move toward that goal. Google’s quick popularity forced other major search engines to redesign their own algorithms to keep pace. See Stephen Goehler, Masud Cader & Harold Szu, Smart Internet Search Engine Through 6W, in Proceedings of SPIE—The International Society for Optical Engineering, 2–3 (2006).

This capabilities approach—distinctly absent from traditional antitrust economics and the decisions it has produced—should at the very least give pause to proponents of the “data network effects” theory. Indeed, it is hard to take such claims seriously when they are so deeply incongruous with one of the most significant competitive events in the history of the search engine industry. If the data network effects fable were true, and search engines with more data inevitably prospered relative to data-poor rivals, then Yahoo! and AltaVista should have obliterated Google. The reality is that competition in the search engine industry (and other digital markets) is about far more than data.

The theory of dynamic capabilities also sheds light on the European Union’s recent Google Search and Google Android decisions.281Commission Decision, Case AT.39740, Google Search (Shopping) (June 26, 2017); Commission Decision, Case AT.40099, Google Android (July 18, 2018). In these cases, the European Commission concluded that Google had excluded its rivals from the search engine market. On closer inspection, however, it seems at least plausible that these rivals simply failed because of poor business judgment.

In the Google Search case, Foundem (one of the complainants) based its entire business on comparison shopping services.282Google Search (Shopping), at ¶¶ 191–250. In so doing, it took no steps to protect itself from potential changes to the Google Search engine on which it depended, despite a clear industry trend toward a single search box as the gateway to all results. As Geoffrey Manne put it:

Google’s purpose is not to send traffic away from its site; it’s “to bring all the world’s information to users seeking answers.” It just happens that sending users away from its site was the best and quickest way to provide answers on the Web in, say, 1999. But as Google’s technological abilities and resources grew, and as users sought even quicker answers—especially ones provided by voice or on mobile devices—its mechanisms for serving its users evolved.283See Manne, Foundem Foundered, supra note 115, at 17–18 (internal quotation marks omitted).

Much the same can be said about Yandex, a Russian search engine that was a complainant in the Android case.284See Russia’s Yandex Says Complained to EU Over Google’s Android, Reuters (Nov. 13, 2015, 9:07 AM), https://perma.cc/K2G5-9JNH. Yandex argued that it was being excluded from the search market as a result of Google’s dominance of the Android mobile platform. Regardless of the merits of the underlying case, two facts are particularly relevant: (1) Yandex never attempted to launch its own mobile operating system, and (2) with the rise of virtual assistants, the market for search will likely become less and less distinct from the mobile operating system market. Though Yandex has not been excluded from the Russian market (proof that the theory of data network effects is greatly exaggerated), its market share has slowly declined (it nevertheless remains roughly on par with Google in Russia).285See Katerina Rubinova, Battle of the Titans: Yandex vs Google, The Future Media Blog (Mar. 11, 2016), https://perma.cc/E5BD-BXRV; see also Search Engine Market Share Russian Federation: Apr. 2020–May 2021, Statcounter GlobalStats, https://perma.cc/VCD5-PPPT.

The upshot is that, in both of these cases, enforcers struggled to distinguish exclusion resulting from anticompetitive conduct from shifts in the marketplace that incidentally caused some firms to fall by the wayside due to their failure to adapt to new circumstances. When deciding on such matters, it is crucial that authorities not ignore the important role that dynamic capabilities may play during these industry transition periods. In contrast to the claims made by those who allege that “data network effects” account for the success of some firms and failure of others, these capabilities actually do appear to be a key predictor of a firm’s success or failure. In short, business model competition necessarily implies that some firms will be left out, not because they don’t have data, but because they have chosen a strategy that either left them with too few users to generate relevant data or led them to collect the wrong type(s) of data. It is thus particularly inapposite for proponents of more aggressive intervention to adhere to relatively nostalgic antitrust economics in defense of their claims.

C.     Data as a Byproduct and Path to Monetization, Not Creation, of Platforms

Policy makers should also bear in mind that platforms must often go to great lengths to create data about their users—data that these same users often do not know about themselves. Under this framing, data is a byproduct of firms’ activity rather than an input that is necessary for rivals to launch a business. This is especially clear when one looks at the formative years of numerous online platforms. Most of the time, these businesses were started by entrepreneurs who did not own much data but, instead, had a brilliant idea for a service that consumers would value. Even if data ultimately plays a role in the monetization of these platforms, it does not appear to be necessary for their creation. According to economist Andres Lerner:

While data collected from users can be important to online providers in improving the services offered and their ability to monetize, user data is only one of many inputs into providing online services. The quality of services offered by online providers, and the ability to monetize effectively, is driven by much more than user data. There are many other sources of data, inputs into providing high quality services, dimensions of quality, and means of attracting users (such as distribution arrangements). Online providers can make investments in quality and distribution that are independent of its scale of users. And, through these investments, a provider can attain scale. Thus, it is incorrect to assert that an online platform lacking scale today can never attain scale. The fact that online providers can gain user scale in ways that do not involve user data weakens the claimed user data-service quality feedback-loop.286Andres V. Lerner, The Role of “Big Data” In Online Platform Competition 28 (Aug. 26, 2014) (unpublished manuscript), https://perma.cc/5WBJ-KWAU.

1.     Platforms Create Data

Possessors of information are assumed to benefit from the private use of information. But, while this is undoubtedly true for some data, it is often the case that information has no realizable value unless and until a mechanism is created for using it. At the extreme, for example, there is no intrinsic value to a consumer in the knowledge that she likes music by the Grateful Dead. There is value to her, however, in others knowing and using this information—most obviously, music recommendation services and music sellers (but also, in the noncommercial sense, friends and social communities). There is thus also value to the consumer from making sure that others know this information about her.

That applies to information that is known to the consumer. But there is also information that does not even “exist” in any real sense (or at least is not known) until the mechanism is created to elicit it. Indeed, “[i]t is questionable whether wants, as conscious motives to conduct, ever exist unless we are in a position of having to choose, to adopt one line of conduct and renounce another.”287Knight, supra note 40, at 60. Whether or not someone likes her brother’s latest photo of his dog isn’t “information” in any meaningful sense until the photo exists, it is shared with her, and she considers her reaction to it. In this sense, the vast majority of (actionable) information exists only because of some activity that creates the mechanism for the information to be created (or coalesced).288This sort of information must be distinguished from statistical knowledge, which consists of making inferences based on past experience. This is related to the distinction between “information” and “news” in John M. Marshall, Private Incentives and Public Information, 64 Am. Econ. Rev. 373, 373–74 (1974) (“In common usage the word information is ambiguous. It means either information that is known, as it is after being delivered, or unknown, as it is when it is purchased. The word ‘information’ will be used here only in the latter sense, while the former meaning will be conveyed by the term ‘news.’ Thus, the purchase of information eventually results in news.”).

This type of information is extremely important—but routinely overlooked—in discussions of big data. It is, in fact, arguably the most important sort of data employed by these platforms, and it does not exist absent the platforms on which it is created. Crucially, data of this sort is most obviously the manifestation of users’ preferences. A user’s preferences may be, in some philosophical sense, pre-existing. But the user may not even know what they are until asked, and certainly external users of that information cannot know it without it being communicated either directly (e.g., “I like the Grateful Dead”) or indirectly (e.g., through a user’s music purchase history). Thus, information about users’ preferences can perhaps be known, but, more to the point, it must typically be elicited. “[The consumer] does not know what he will want, and how much, and how badly; consequently, he leaves it to producers to create goods and hold them ready for his decision when the time comes.”289Knight, supra note 40, at 241.

And users have an interest in that information being elicited and shared. Moreover, users have no particular comparative advantage in the eliciting or interpreting of that information: as noted, it may not even be known ex ante, and, even if it is, in many cases it is virtually useless. As a result, the mechanisms that elicit and share that information with others who do have a comparative advantage in using it are of great value to users—not only because the information, once processed, may be used by others in ways that ultimately impart value to the user, but also because the very act of eliciting and sharing the information imparts knowledge to the user directly.

This is a crucial and overlooked aspect of policy discussions surrounding data-intensive platforms. The value of a user’s interactions with Facebook and Google, for example, is not, as commonly assumed, only in the platform’s aggregation and use of the data generated through those interactions, but also in the user’s own immediate access to information that either didn’t exist or wasn’t known to her beforehand. An enormous quantity of the data at issue in these policy discussions is of this sort: it is non-existent, unknown, or useless, even for personal use by the user, until it is made manifest through some activity by which the user interacts with the platform. Thus, the value of those activities is not just in the sharing of information with others, but in the creation of information in the first place.

Why is this so important? Because, as discussed above,290See supra Section II.A.3. it turns the generally assumed “platform information asymmetry” on its head. To begin with, there is information that a user does often know about herself that the platform does not: her preferences. Any information that the platform gleans about her preferences is necessarily incomplete and indeterminate, and the platform can make only inferences—inferences that, even when accurate, can quickly become obsolete. Information asymmetry in this regard runs in favor of the individual user, not the platform.

In addition, there is information that even the user does not know about herself and that becomes known only because of the platform. Even though the user (unlike the platform) does not know the aggregate information from many users of which her data is only a minuscule part, the private use value of that information is better known to the user than to the platform. While the user knows whether the information is accurate and valuable, and while there is no limitation on what the user can do with that information, the platform is able to use it only to make inferences about its relevance and importance to the user, with limited accuracy.

To be sure, a platform can also combine this information with other data to create yet more information and to derive value inaccessible to the individual user. But the relative magnitudes of these different types of information and their value to different users and to the platform itself are uncertain. It cannot simply be assumed that there is “asymmetry” or that it flows in only one direction. Indeed, this dispels any notion that “if you’re not paying for it, you are the product” or “the price for ‘free’ services is your data.” In truth, much of the information we share is shared because it is only by doing so that its value can be realized. More importantly, much of the data we share with platforms does not even exist (or is not known) separately from our interactions with these platforms. In this sense, it is not data that is the “price” users pay for platform services; it is platform services that are the “price” platforms pay for data.

Of course, none of this is especially new; it is simply overlooked. The great UCLA economist Jack Hirshleifer noted many of these dynamics as long ago as the 1970s:

The possessor can in general benefit simply by private use of the information for his own productive or consumptive decisions. But in a market context it might also be possible for him to profit from sale of the information to others. The information-seeker might correspondingly find it advantageous to produce socially “new” information by direct inquiry of Nature (research) or to purchase “secondhand” information in the market. Viewed as a tradeable commodity, information has (as we shall see) a number of special features. . . . In the market process information can be regarded as “pulled” from the possessor by purchase, i.e., by payment of an explicit price. But what is surprising, the possessor may find it preferable to give away this valuable commodity, to disseminate it without pull of compensation. Indeed it may be highly profitable for him to incur costs so as to gratuitously “push” information to potential recipients! As for the information-seeker, his knowing that the possessors are so motivated may lead to adoption of a monitoring or listening mode of learning behavior.291J. Hirshleifer, Where Are We in the Theory of Information?, 63 Am. Econ. Rev. 31, 32 (1973) (emphasis added) (citation omitted).

2.     Most Platform Businesses Started Without Any Data

Another important point is that data often becomes significant only at a relatively late stage in these businesses’ development. A quick glance at the digital economy is particularly revealing in this regard. Google and Facebook, in particular, both launched their platforms under the assumption that building a successful product would eventually lead to significant revenues. It took five years from its launch (and 300 million users) for Facebook to start making a profit. But even then, it was not entirely clear whether the social network would generate most of its income from app sales or online advertisements.292See Derek Thompson, Facebook Turns a Profit, Hits 300 Million Users, The Atlantic (Sept. 17, 2009), https://perma.cc/F5RQ-MPZJ. It was another three years before Facebook started to cement its position as one of the world’s leading providers of online ads.293See Rebecca Greenfield, 2012: The Year Facebook Finally Tried to Make Some Money, The Atlantic (Dec. 14, 2012), https://perma.cc/5BN2-WJF6. During this eight-year timespan, it seems that Facebook’s first concern was not so much the monetization of its platform as user growth.

Facebook thus appears to have concluded (correctly, it turns out) that once its platform attracted enough users, it would surely find a way to make it highly profitable. This suggests that data might not have been of critical importance during the formative years of the Facebook platform (or at least not for its monetization). This might explain how Facebook managed to build a highly successful platform, despite a large data disadvantage over rivals like MySpace.294See Harrison Jacobs, Former MySpace CEO Explains Why Facebook Was Able to Dominate Social Media Despite Coming Second, Bus. Insider (May 9, 2015, 6:13 AM), https://perma.cc/BBR3-EXCN. The upshot is that, in the case of Facebook, data does not seem to have been a prerequisite for building a successful platform.

And Facebook is no outlier. Other successful technology firms have similar origins. For instance, Snapchat managed to build a successful platform that has 280 million daily active users.295Jill Goldsmith, Snap Q1 Daily Active Users Top Forecasts at 280 Million, Up 51 Million from Year Ago; Revenue Pops 66% to $770 Million, Deadline (Apr. 22, 2021, 1:25 PM), https://perma.cc/LF46-5DH2. Snapchat achieved this feat without much, if any, user data, and despite entering the market later than numerous high-profile rivals,296See J.J. Colao, Snapchat: The Biggest No-Revenue Mobile App Since Instagram, Forbes (Nov. 27, 2018, 1:36 PM), https://perma.cc/SF9R-V5SC. including Facebook,297See A Faster Way to Message on Mobile, Facebook (Oct. 19, 2011), https://perma.cc/QL3N-2A2F; see also Facebook Launches, History (Oct. 24, 2019), https://perma.cc/3GYP-JKBS (explaining Facebook launched on February 4, 2004). Instagram,298See MG Siegler, Instagram Launches with the Hope of Igniting Communication Through Images, TechCrunch (Oct. 6, 2010, 9:00 AM), https://perma.cc/5ARJ-HT23. and WhatsApp.299See Parmy Olson, Exclusive: The Rags-To-Riches Tale of How Jan Koum Built WhatsApp Into Facebook’s New $19 Billion Baby, Forbes (Feb. 19, 2014, 7:58 PM), https://perma.cc/MS69-7LZL. Like Facebook, Snapchat chose to build its network without a clear monetization strategy, deferring this question to a later stage, when it would have an established user base.300See Colao, supra note 296. Granted, Snapchat may yet succumb to larger rivals (at the time of writing, Instagram seems to be winning the battle and may ultimately drive Snapchat out of the market). But these rivals’ success does not appear to have anything to do with superior access to data.301See Kurt Wagner & Rani Molla, Why Snapchat Is Shrinking, Vox: Recode (Aug. 7, 2018, 7:48 PM), https://perma.cc/L7JE-BB2J. Instead, Snapchat’s possible decline appears to be due to Instagram having introduced more attractive features to its app.302See Sara Salinas, Instagram Stories Has Twice As Many Daily Users As Snapchat’s Service—And It Now Has Background Music, CNBC: Tech Drivers (June 28, 2018, 3:50 PM), https://perma.cc/N5GE-323J. Far from being suggestive of data-related market failures, Snapchat’s decline at the hands of Instagram appears to be a sign of healthy competition. It thus shows that competition between digital platforms is about much more than data, and that it is perfectly feasible for innovative companies to enter these markets despite significant data disadvantages.

And consider companies like Uber, Lyft, and Sidecar that have taken over the personal transport sector. They too had no customer data when they began to challenge established cab companies that did possess such data. If data were really so significant, they could never have competed successfully. But Uber, Lyft, and Sidecar have been able to effectively compete because they built products that users wanted to use303See Karen Matthews & Verena Dobnick, Yellow Cabs Now Outnumbered By Uber Cars on NYC Streets, AP News (Mar. 19, 2015), https://perma.cc/Q5AF-549Z.—they came up with an idea for a better mousetrap. The data they have accrued came after they innovated, entered the market, and mounted their successful challenges—not before.

The list of companies that prevailed despite starting with little to no data, and before they implemented (or even identified) a data-dependent monetization strategy, is vast. Other examples include Airbnb, Amazon, Twitter, and PayPal. These abundant illustrations severely undermine the ideas that data constitutes a barrier to entry, that “data network effects” inevitably lead to tech platform tipping, or that data constitutes an essential facility.

A more apt economic parallel can be made with regard to the economic literature on two-sided markets. In these markets, it is well established that firms face a “chicken and egg problem.” Because the success of their business hinges on attracting two complementary groups of users, these platforms must often decide which group of users to favor early on in the hope that this will then kickstart any positive feedback loops that may exist between users on both sides of the platform.304See Bernard Caillaud & Bruno Jullien, Chicken & Egg: Competition Among Intermediation Service Providers, 34 RAND J. Econ. 309, 310 (2003); see also Geoffrey G. Parker & Marshall W. Van Alstyne, Two-Sided Network Effects: A Theory of Information Product Design, 51 Mgmt. Sci. 1494, 1496 (2005); Rochet & Tirole, supra note 244, at 990. One particularly relevant strategy for ad-supported businesses is what economist David Evans refers to as “sequential entry”:305See David S. Evans, How Catalysts Ignite: The Economics of Platform-Based Start-Ups, in Platforms, Markets and Innovation 99, 109 (Annabelle Gawer ed., 2011).

In some cases it is possible to get one group of agents on board over time and then make these agents available to the other group of agents later in time. That is the situation with advertising-supported media. One can use content to attract viewers and then bring advertisers on board later. This dynamic works because there are non-positive indirect network effects between the two sides: viewers do not care about advertisers (and may dislike advertising) but come to the platform for the content.306Id.

The ubiquity of the sequential entry strategy, which is relevant for many internet firms (including Google, Facebook, and their rivals), contradicts arguments that access to data is necessary to compete in these industries. Granted, at some point firms may need data to earn profits and reinvest in their platforms. But saying that this dynamic somehow interferes with competition is merely a repeat of the “deep pocket” fallacy that plagued early predatory pricing theory.307The intuition is that firms with significant financial resources can sustain losses for longer periods of time and thus evict smaller rivals through predatory pricing. See Corwin D. Edwards, Conglomerate Bigness As a Source of Power, in Business Concentration and Price Policy 331, 334–35 (1955). This notion was thoroughly debunked by Chicago School scholars, notably because it assumes capital market imperfections. See L. G. Telser, Cutthroat Competition and the Long Purse, 9 J. L. & Econ. 259, 270 (1966). In the case of online platforms, there is no reason to believe that firms that earn lower profits will invest less in their products. Instead, all these firms need to do is convince investors they will ultimately have the best product and the most users. If capital markets work properly—and there are literally billions of dollars flowing to Silicon Valley tech startups every year308See Kate Clark, Venture Capital Investment in US Companies to Hit $100B in 2018, TechCrunch (Oct. 9, 2018, 2:05 PM), https://perma.cc/GDS4-J2D7.—then being able to immediately monetize data offers little to no advantage over rivals that must call upon capital markets.
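
The logic of sequential entry is easy to simulate. In the toy model below—all parameters are invented for illustration—a platform spends its early life growing an audience with monetization switched off, and begins selling ads only once it reaches scale:

```python
# A toy model of sequential entry with invented parameters: grow the
# audience first with ads off, then monetize the installed audience.
audience, ads_on = 1_000, False
for month in range(1, 25):
    growth = 0.15 if ads_on else 0.25       # ads slightly dampen growth
    audience = int(audience * (1 + growth))
    if audience > 100_000:                  # monetize once scale is reached
        ads_on = True
    revenue = audience * 0.01 if ads_on else 0.0
    if month % 6 == 0:
        print(f"month {month:>2}: audience = {audience:>8,}, "
              f"revenue = ${revenue:>9,.2f}")

# The platform books zero revenue for well over a year by design: the
# audience (and the data it generates) precedes monetization, not the
# other way around.
```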

The inevitable conclusion is that, in reality, those who complain about data facilitating unassailable competitive advantages have it exactly backwards. Companies need to innovate to attract consumer data; otherwise, consumers will switch to competitors, including both new entrants and established incumbents. As a result, the desire to make use of more and better data drives competitive innovation, with manifestly impressive results: the continued explosion of new products, services, and apps is evidence that data is not a bottleneck to competition but a spur to it.

D.     Partial Conclusion: Data-Intensive Markets Should Not Alter the Balance of Antitrust Enforcement

We started this Part by referring to the popular metaphor that “data is the new oil.” As we have shown, while this metaphor may be rhetorically appealing, the comparison could hardly be any less apt. Indeed, it overlooks the key economic features of data. In doing so, it is symptomatic of wider misapprehensions about competition in data-intensive markets and the proper application of antitrust law and economics to them.

As we have shown, unlike oil, data is ultimately a form of information and as such is non-rivalrous and, in many cases, non-exclusive. Moreover, the value of a given dataset hinges critically on the expertise that firms can bring to bear in order to analyze the data. Unfortunately, this combination of learning-by-doing and firmwide capabilities in data-intensive markets has often been mislabeled as a “data network effect.” Finally, unlike an oil company that must first drill and refine oil before it can make sales, large amounts of data often become important only in later stages of a digital platform’s development. At the same time, much of the data used by platforms does not, in any meaningful sense, pre-exist platforms’ interactions with their users; rather, it is created by those interactions. As we have discussed, firms routinely build successful businesses without having access to pre-existing data. Instead, they hope that a strong product on the user side of the market will eventually translate into substantial revenues, notably by leveraging the data that is eventually generated on the platform.

In short, we argue that the advent of data-enabled markets does not support the calls for a significant expansion of antitrust enforcement being made in its name. Contrary to what has sometimes been claimed, data does not present unique—or even uniquely large—anticompetitive risks. Data is not irrelevant, of course, but it is just one amongst a plethora of factors that enforcement authorities and courts should consider when they analyze firms’ behavior.

III.     We’ve Been Here Before: The Microsoft Antitrust Saga

Dystopian and nostalgic discussions concerning the power of successful technology firms are nothing new. Throughout recent history, there have been repeated calls for antitrust authorities to rein in these large companies. These calls for regulation have often led to increased antitrust scrutiny of some form. The Microsoft antitrust cases—which ran from the 1990s to the early 2010s on both sides of the Atlantic—offer a good illustration of the misguided “Antitrust Dystopia.”

In the mid-1990s, Microsoft was one of the most successful and vilified companies in America. After it obtained a commanding position in the desktop operating system market, the company sought to establish a foothold in the burgeoning markets that were developing around the Windows platform (many of which were driven by the emergence of the internet).309See May 26, 1995: Gates, Microsoft Jump on the ‘Internet Tidal Wave,’ Wired (May 26, 2010, 12:00 AM), https://perma.cc/CK6N-LQLX. These included the internet browser and media player markets. The business tactics employed by Microsoft to execute this transition quickly drew the ire of the press and rival firms, ultimately landing Microsoft in hot water with antitrust authorities on both sides of the Atlantic.

This Part analyzes the antitrust cases that were brought against Microsoft and focuses on five main issues. First, Section A addresses the fears that commentators expressed before and during these cases. Second, Section B explores whether antitrust authorities echoed these same fears. Third, Section C analyzes how the market evolved after the antitrust cases and whether the critics’ fears came to pass. Fourth, Section D addresses whether the market’s positive evolution could reasonably be attributed to antitrust intervention. Finally, Section E suggests that Microsoft’s own behavior might have caused rival entry, ultimately preventing it from extending its desktop advantage to adjacent markets.

This Article’s analysis shows that, although there were numerous calls for authorities to adopt a precautionary principle-type approach when dealing with Microsoft, and antitrust enforcers were more than receptive to these calls, critics’ worst fears never came to be. This positive outcome is unlikely to be the result of the antitrust cases that were brought against Microsoft. In other words, the markets in which Microsoft operated seem to have self-corrected (or were subject to competitive constraints that critics failed to apprehend) and, today, are generally seen as being unproblematic. This is not to say that antitrust interventions against Microsoft were necessarily misguided. Instead, our critical point is that commentators and antitrust decisionmakers routinely overlooked or misinterpreted the existing and non-standard market dynamics that ultimately prevented the worst anticompetitive outcomes from materializing.

A.     Popular Fears

From the mid-1990s onwards, a rapidly growing chorus of commentators started drawing the public’s attention to Microsoft’s business practices. For example, The Guardian published an article with the title “Microsoft’s Plan for World Domination.”310Jack Schofield, Microsoft’s Plan for World Domination, The Guardian (May 24, 2001, 5:09 AM), https://perma.cc/C2YW-ZG8Y; see also Suzanna Kerridge, Microsoft – A Bully Exposed?, ZDNet (Nov. 16, 1998, 12:15 AM), https://perma.cc/672Z-YNL5; Steve Lohr & John Markoff, Microsoft’s World: A Special Report; How Software’s Giant Played Hardball Game, N.Y. Times, Oct. 8, 1998, at A1, https://perma.cc/6DSR-P9H2; Herbert Stein, Does Microsoft Play Fair?, Slate (June 19, 1996, 3:30 AM), https://perma.cc/9NWC-6G8U. A commentator in the Wall Street Journal intimated that Microsoft was “a threat to everybody in the industry.”311See George Bittlingmayer & Thomas W. Hazlett, DOS Kapital: Has Antitrust Action Against Microsoft Created Value in the Computer Industry?, 55 J. Fin. Econ. 329, 330 (2000) (“Microsoft’s sway over operating systems and applications puts everyone else in the industry at a disadvantage, said Alan C. Ashton, president of WordPerfect Corp., Orem, Utah. They are a threat to everybody in the industry.” (quoting Richard B. Schmitt, FTC Lawyers Urge the Agency To Seek Court Order on Microsoft, Wall St. J., Dec. 11, 1992, at A4)). And somewhat comically, a piece in the Harvard Crimson called for the company to be broken up (and mocked Bill Gates for having dropped out of Harvard).312Kevin S. Davis, Break Up Microsoft’s Monopoly, Harv. Crimson: Tech Talk (Jan. 5, 1998), https://perma.cc/NH8S-HPJH. But the prize for the most alarmist article, without doubt, goes to the New York Times for a piece titled “Making Microsoft Safe for Capitalism.” The piece concluded that “[i]f the software giant has its way, it will soon be in a position to collect a charge from every airline ticket you buy, every credit card purchase you make, every fax you send, every picture you download, every website you visit. It’s time to draw the line.”313James Gleick, Making Microsoft Safe for Capitalism, N.Y. Times, Nov. 5, 1995 (§ 6), at 50.

So, what was it about Microsoft’s market position that frightened these commentators? A central theme was that Microsoft would leverage its strong position in the consumer operating system market to dominate competitors in adjacent markets, particularly the blossoming online space. Having overthrown these competitors, Microsoft could then levy a tax on the entire digital ecosystem, exploiting consumers who had come to rely on its products. The aforementioned New York Times piece, written by James Gleick, nicely encapsulated these fears:

By making connections among all these levels of modern computing, and by exerting control over the architectures that govern those connections, Microsoft is in the process of transforming the very structure of the world’s computer businesses. “Microsoft is imposing a new verticality on the industry,” says Gary Reback, a Silicon Valley technology lawyer who represented a group of anonymous Microsoft rivals in the antitrust proceedings. “Bill’s been able to exploit the market far better than anybody else has, and I think that’s because he intuitively understands what enormous power he has and how to exploit that power.” . . . With its new Microsoft Network, providing both an on-line service and Internet access, it is focusing on electronic financial-transaction processing—which is to say, all electronic commerce; which is to say, at least in some visions of the future, pretty much all commerce. “Basically what Microsoft is trying to do is tax every bit transition in the whole world,” says a senior executive of a competing software company. “When a bit flips, they will charge you.”314Id. (emphasis added).

The implication was perfectly clear: Microsoft was allegedly using its strong position in the consumer operating system market to become the leading gateway to the internet, thereby gaining the ability to tax every internet user on the planet. Microsoft would purportedly achieve this goal by integrating its “Microsoft Network” dial-up service and web portal (also referred to as “MSN”) into its Windows operating system (these attempts ultimately ended in failure).315Microsoft Network (MSN), Encyclopedia.com, https://perma.cc/N4NN-2HXU. In short, as the same New York Times article concluded:

The Department of Justice does not need to break Microsoft apart. It need only—a far-reaching step in itself—require Microsoft to make its operating system, and the web of standards surrounding it, truly and permanently open. Other companies should be allowed to clone it if they could; Microsoft should be restricted from taking internal advantage of new changes until they were published to the rest of the market.316Gleick, supra note 313 (emphasis added).

This alarmist article offers a perfect illustration of Antitrust Dystopia in action. Its author, James Gleick, hastily concluded that the market would fail, Microsoft would come to dominate online markets, and the internet would inexorably become a closed system. The only solution was government-mandated openness, because market forces allegedly could do nothing to prevent Microsoft from dominating adjacent markets. Of course, Gleick was not alone in reaching this conclusion. For example, another New York Times article reported:

Few people are more familiar with the real-world nature of Microsoft’s market power than Douglas P. Colbeth, the 42-year-old chief executive of Spyglass Inc. His Internet software company has been a supplier to Microsoft, providing the software engine for its Internet Explorer browser. Spyglass has also been a Microsoft competitor, briefly, in the browser market. Both roles proved to be sobering for Spyglass. So much so that the company’s current strategy is mainly to stay out of Microsoft’s path, finding specialized markets for Internet software and services that the software giant ignores.317Steve Lohr, Spyglass, a Pioneer, Learns Hard Lessons About Microsoft, N.Y. Times, Mar. 2, 1998, at D1 (emphasis added), https://perma.cc/7E7L-XZJ9.

And numerous other publications marched to the same beat.318Schofield, supra note 310 (“[D]uring the past five years, Bill Gates’s programmers have been tackling a wider and wider range of devices from home games consoles such as the Xbox to server software for mainframe data centres. . . . Thanks to the web, computing is now moving on, at high speed, and Microsoft wants to be the first company to win two successive platform wars.”); see also Wendy Goldman, “Oh No, Mr. Bill!,” Wired (Apr. 1, 1994, 12:00 PM), https://perma.cc/WRY9-4X3Y (“Mitchell Kapor, founder of Lotus Development Corp. and now chair of the Electronic Frontier Foundation, once said that Bill Gates wants nothing more than to be the Rockefeller of the Information Age. If that’s true, Anne Bingaman just might be his trust-busting Teddy Roosevelt.”). This implication is clear: Bill Gates was seeking to extend his monopoly across numerous markets, just like John D. Rockefeller allegedly sought to do with the Standard Oil Company. For instance, a Harvard Crimson article concluded that:

Gates and company follow an “embrace-and-extend” strategy: incorporate the features of competitors’ products into your operating system and core applications, essentially giving them away, then make them slightly better to keep customers from buying the stand-alone version.

In market after market, Microsoft has followed this strategy successfully. Text editors. Drive compression. Screen savers. Disk tool software. 3-D graphics acceleration. And soon, speech and handwriting recognition. If Microsoft wins against Justice, add Internet browsers to the list of conquests.

Microsoft’s reach into many software markets gives it other advantages that work against a competitive market.

. . . .

The problem is that when Microsoft is done driving its competitors out of business, innovation tends to stop and prices usually rise. Recent hikes in the price of Office and a surprise end to the company’s concurrent licensing policy show what Microsoft will do when it gains hegemony over a market.319Davis, supra note 312 (emphasis added).

There are uncanny similarities between the claims being made in these articles and those that are routinely raised against contemporary tech firms. The calls for Microsoft to open up its platform are almost identical to those that are now being brought against Google and Amazon, twenty years later. For instance, a report by the Stigler Center called for authorities to create a mandatory access regime covering digital platforms’ data.320Stigler Center Report, Policy Brief, supra note 63, at 17 (“The FTC should be empowered to implement a data access mandate: Congress should empower the FTC to: (i) have access to DPs’ internal databases and studies, (ii) perform their own independent research on how platforms impact different areas of our society, and (iii) moderate independent researchers’ access to these databases. The FTC is a well-established agency that is accustomed to conducting in-depth investigations and whose Bureau of Economics and Office of Technology Research is amongst the better staffed in the country.” (emphasis omitted)). Likewise, the assertion that Microsoft copied the products of rivals in adjacent markets is analytically identical to claims that Amazon cannibalizes its online retailers by offering Amazon-branded products that copy their goods.321See, e.g., Khan, supra note 72, at 755 (“By making itself indispensable to e-commerce, Amazon enjoys receiving business from its rivals, even as it competes with them. Moreover, Amazon gleans information from these competitors as a service provider that it may use to gain a further advantage over them as rivals—enabling it to further entrench its dominant position.”). Similarly, claims that Google’s online advertising business is premised on taxing all news outlets are reminiscent of assertions that Microsoft would exert a tax on all online markets.322See, e.g., Sally Hubbard, The Decline of American Journalism Is an Antitrust Problem, ProMarket (June 14, 2019) (“Weak antitrust enforcement set the stage for Facebook and Google to extract the fruits of publishers’ labor. We won’t be able to save journalism and solve our disinformation problem unless we weaken monopolies’ power.”). Moreover, just as past critics did with Microsoft, today’s critics ultimately argue that innovation will grind to a halt if dominant tech platforms are left to their own devices.323Larry Elliott, IMF Warns That Tech Giants Stifle Innovation and Threaten Stability, The Guardian (Apr. 3, 2019, 10:00 AM), https://perma.cc/33DJ-WTF7. Last but not least, the idea that startups should have avoided those markets where Microsoft operated is similar to today’s “kill zone” claims, in which critics argue that the presence of large digital platforms in a given market deters rivals from investing in that same space.324Hal Singer, Inside Tech’s “Kill Zone”: How to Deal With the Threat to Edge Innovation Posed by Multi-Sided Platforms, ProMarket (Nov. 21, 2018), https://perma.cc/BRV7-Y845; Noah Smith, Big Tech Sets Up a ‘Kill Zone’ for Industry Upstarts, Bloomberg (Nov. 7, 2018, 7:00 AM), https://perma.cc/7AB3-UA2U.

One immediate reaction might be that newspapers at the time were ill-equipped to understand the complexities of these rapidly evolving markets, and that their preoccupations were far removed from those of sophisticated antitrust authorities and courts. But nothing could be further from the truth. There were, in fact, significant overlaps between some of the more extreme claims raised by newspapers, before and during the Microsoft antitrust investigations, and the conclusions of some antitrust decisionmakers and courts.

B.     Antitrust Intervention

It was not only press outlets that espoused a highly pessimistic view of Microsoft’s market position; antitrust decisionmakers were no different. After the first version of Windows was released in 1985,325History of Microsoft, Wikipedia, https://perma.cc/95DY-NBMB. Microsoft’s sales rapidly took off. By 1993, Windows was the world’s most used personal computer operating system.326Id. This led several antitrust decisionmakers and courts to the conclusion that, absent antitrust intervention, Microsoft would inexorably crush competitors in adjacent markets.

The first case was brought against Microsoft by the US Department of Justice (“DOJ”) in 1993, leading to a consent decree in 1994.327Press Release, U.S. Dep’t of Just., Microsoft Agrees to End Unfair Monopolistic Practices (July 16, 1994), https://perma.cc/X43A-JL8V. This early case mostly focused on Microsoft’s licensing terms with the original equipment manufacturers (“OEMs”) that assembled Windows-based desktop computers. The DOJ argued that these terms excluded competing operating systems.328Id. In a nutshell, Microsoft licensed its operating system on a “per processor” basis.329Id. As a result, OEMs paid a fee to Microsoft for each PC they sold, regardless of the operating system that was ultimately installed on it.330Id. Microsoft and various OEMs also entered into long-term agreements, sometimes with minimum volume commitments.331See, e.g., Richard J. Gilbert, Networks, Standards, and the Use of Market Dominance: Microsoft (1995), in The Antitrust Revolution: Econ., Competition, and Pol’y 409, 413 (John E. Kwoka, Jr. & Lawrence J. White eds., 3d ed. 1999).

With the benefit of hindsight, some of the conclusions reached by authorities and scholars at the time of the case seem particularly severe.332For a discussion of the merits of this consent decree, see, for example, John E. Lopatka & William H. Page, Microsoft, Monopolization, and Network Externalities: Some Uses and Abuses of Economic Theory in Antitrust Decision Making, 40 Antitrust Bull. 317, 370 (1995) (“[T]here are sufficient questions about the value of the network externalities literature to discount it as a guide to antitrust decision making. Without the support of that literature, the arguments for enjoining the practices become more problematic. In a market that is being transformed by technological change, we suggest, it is unlikely that the courts will improve things by enjoining ambiguous practices.”). See also Robert J. Levinson, Efficiency Lost?: The Microsoft Consent Decree, in The Economics of the Antitrust Process 175, 188 (Malcolm B. Coate & Andrew N. Kleit eds., 1996) (finding that Microsoft practices were procompetitive because “[p]ublicly-available writings suggest that Microsoft’s minimum commitment and per-processor licensing methods are likely consistent with a volume discounting strategy . . . [t]hese could lead to lower prices for OEMs and consumers.”). But see Kenneth C. Baseman, Frederick R. Warren-Boulton & Glenn A. Woroch, Microsoft Plays Hardball: The Use of Exclusionary Pricing and Technical Incompatibility to Maintain Monopoly Power in Markets for Operating System Software, 40 Antitrust Bull. 265, 267 (1995) (“We conclude that, under the conditions present in the operating systems market, such practices can be, and in this instance have been, effective in limiting the growth and threatening the existence of entrants and rivals with very small market shares. We also conclude that Microsoft’s anticompetitive behavior has reduced social welfare.”). For instance, both the DOJ and Amici Curiae agreed that the desktop operating system market was beset by increasing returns to scale and network effects (a mantra that is often cited by recent reports calling for greater antitrust enforcement in the tech sector).333See, e.g., Crémer et al., supra note 53, at 2 (“Extreme returns to scale. The cost of production of digital services is much less than proportional to the number of customers served. While this aspect is not novel as such (bigger factories or retailers are often more efficient than smaller ones), the digital world pushes it to the extreme and this can result in a significant competitive advantage for incumbents.” (emphasis omitted)). They therefore surmised that barriers to entry precluded more efficient rivals from entering the market.334See Lopatka & Page, supra note 332, at 333 (“The government agreed with the amici that the software market is characterized by network externalities with increasing returns to scale, but drew different conclusions from that characterization.”). As the DOJ observed in its competitive impact assessment:

Development, testing, and marketing of a new PC operating system involves considerable time and expense. A new operating system faces additional barriers to entry, including the absence of a variety of high quality applications to run on the system; the small number of people trained on and using the system, which discourages customers from buying it and software companies from writing applications to run on it; and, since the overwhelming majority of PCs are sold with a pre-installed operating system, the difficulty of convincing OEMs to offer and promote the system.335Competitive Impact Statement at 4, United States v. Microsoft Corp., 253 F.3d 34 (D.D.C. 2001) (No. 94-1564).

And although he did not entirely agree with the normative stance of the DOJ and Amici Curiae, Nobel-winning economist Kenneth Arrow concurred that:

[T]he software market is peculiarly characterized by increasing returns to scale and therefore natural barriers to entry. Large-scale operation is low-cost operation and also conveys advantages to the buyer. Virtually all the costs of production are in the design of the software and therefore independent of the amount sold, so that marginal costs are virtually zero. There are also fixed costs in the need to risk large amounts of capital and the costs associated with developing a reputation as a quality supplier. Further, there are network externalities, in particular, the importance of an established product with a large installed base and the related advantage of a product that is compatible with complementary applications.336Memorandum of the United States of America in Support of Motion to Enter Final Judgment and in Opposition to the Positions of I.D.E. Corp. and Amici, at 5–6, United States v. Microsoft Corp., 159 F.R.D. 318 (D.D.C. 1995) (No. 94-1564) (emphasis added) (declaration of Kenneth J. Arrow).

In short, at the time of the first Microsoft investigation in the US, there was a relatively widespread consensus that operating systems were natural monopolies. The actions of antitrust authorities were thus premised on the assumption that market forces alone were unlikely to restore competitive outcomes in these software markets.337See Complaint at 6, Microsoft, 159 F.R.D. 318 (No. 94-1564) (“Microsoft’s anticompetitive contracting practices described below significantly increase the already high barriers to entry and expansion facing competitors in the PC operating system market. These practices reduce the likelihood that OEMs will license and promote non-Microsoft PC operating systems, make it more difficult for Microsoft’s competitors to persuade ISVs to develop applications for their operating systems, and impede the ability of a non-Microsoft PC operating system to expand its installed base of users.”). As this Article discusses below, however, the competition that has since emerged in these markets suggests that the natural monopoly fears expressed at the time were likely overblown.338See infra Section III.C.

Much the same can be said about the second antitrust case brought against Microsoft in the US. The case centered on claims of both monopolization and attempted monopolization.339See United States v. Microsoft, 253 F.3d 34, 47 (D.C. Cir. 2001). On appeal, Microsoft was found liable for maintaining its monopoly over the operating system market by deploying a series of measures designed to slow the uptake of Java and the rival Netscape Navigator browser.340Id. at 77. This was sometimes referred to as the “platform threat” theory of harm.341See Carl Shapiro, Microsoft: A Remedial Failure, 75 Antitrust L.J. 739, 742–43 (2009) (quoting Microsoft, 253 F.3d at 79). Microsoft was also accused of attempting to monopolize the internet browser market by tying Internet Explorer to the Windows operating system (although this part of the district court’s ruling was vacated and remanded on appeal).342Microsoft, 253 F.3d at 84. The findings of the district court are a perfect example of Antitrust Dystopianism. The final paragraph of Judge Jackson’s ruling is perhaps the most revealing:

Most harmful of all is the message that Microsoft’s actions have conveyed to every enterprise with the potential to innovate in the computer industry. Through its conduct toward Netscape, IBM, Compaq, Intel, and others, Microsoft has demonstrated that it will use its prodigious market power and immense profits to harm any firm that insists on pursuing initiatives that could intensify competition against one of Microsoft’s core products. Microsoft’s past success in hurting such companies and stifling innovation deters investment in technologies and businesses that exhibit the potential to threaten Microsoft. The ultimate result is that some innovations that would truly benefit consumers never occur for the sole reason that they do not coincide with Microsoft’s self-interest.343United States v. Microsoft Corp., 84 F. Supp. 2d 9, 112 (D.D.C. 1999) (emphasis added), aff’d in part and rev’d in part, 253 F.3d 34 (D.C. Cir. 2001).

In light of these stark findings, the district court ordered that Microsoft be broken up (the court euphemistically referred to this as a “structural reorganization”).344United States v. Microsoft Corp., 97 F. Supp. 2d 59, 63–64 (D.D.C. 2000), vacated, 253 F.3d 34 (D.C. Cir. 2001). It also imposed various behavioral commitments on the firm.345Id. The structural remedy would effectively have severed Microsoft’s operating systems business from its other products, such as word processors, internet browsers, etc.346Microsoft, 253 F.3d at 48 (“Having found Microsoft liable on all but one count, the District Court then asked plaintiffs to submit a proposed remedy. . . . In their proposal, plaintiffs sought specific conduct remedies, plus structural relief that would split Microsoft into an applications company and an operating systems company. . . . The District Court adopted plaintiffs’ proposed remedy without substantive change.”). The court’s and plaintiffs’ reasoning was clear: absent structural separation and strict non-discrimination commitments, Microsoft inevitably would have leveraged its operating system dominance into adjacent markets. There are striking similarities between this stance and some of the claims that have been levelled against today’s successful tech platforms.347See, e.g., Elizabeth Warren, Here’s How We Can Break Up Big Tech, Medium (Mar. 8, 2019), https://perma.cc/K4JB-TA6G (“We must ensure that today’s tech giants do not crowd out potential competitors, smother the next generation of great tech companies, and wield so much power that they can undermine our democracy.”).

Furthermore, according to Microsoft’s critics, structural separation was not only necessary because of the underlying market conditions, but also because of the company’s “untrustworth[iness].”348Microsoft, 97 F. Supp. 2d at 62. For instance, Judge Jackson’s memorandum and order severely criticized the firm’s track record with regulators:

Microsoft has proved untrustworthy in the past. In earlier proceedings in which a preliminary injunction was entered, Microsoft’s purported compliance with that injunction while it was on appeal was illusory and its explanation disingenuous. If it responds in similar fashion to an injunctive remedy in this case, the earlier the need for enforcement measures becomes apparent the more effective they are likely to be.349Id. This did not go unnoticed by newspapers at the time. See, e.g., “Microsoft Enjoys Monopoly Power…”, Time (Nov. 15, 1999), https://perma.cc/924T-AMHD (“The Microsoft of Judge Jackson’s narrative is a deep-pocketed bully that uses its prodigious market power and immense profits to harm companies that presume to compete with it. And it presents Gates as a law-flouting monopolist who makes a threat to one rival considering getting into the software market and berate[s] and then retaliates against an executive from another company who dares to criticize Windows.”). Astute readers will quickly recognize that a similar rhetoric is often mobilized against contemporary tech firms. For instance, in a Wired interview, Tim Wu observed about Facebook: “We put Facebook under order for privacy violations, and they violated that order so many times we can’t even count it. So why should we trust a recidivist company—a company that ignores government orders—to protect privacy, to protect the security of this country? That doesn’t make any sense to me. That’s why I think we need a shake-up in big tech.” Nicholas Thompson, Tim Wu Explains Why He Thinks Facebook Should Be Broken Up, Wired (July 5, 2019, 7:00 AM), https://perma.cc/BC8R-GT6E.

On a more substantive front, on appeal, the DOJ and several State Attorneys General argued that breaking up Microsoft was essential to restore competition. Endorsing a precautionary principle–type approach to antitrust enforcement, they argued that:

Microsoft’s operating system monopoly and dominance in applications (including IE) give it the incentive and ability to undertake the forms of anticompetitive conduct proved at trial and to pursue future variants of its past anticompetitive strategies that are impossible to predict. . . . Injunctive relief crafted for the long term necessarily would involve complex and highly intrusive restrictions on Microsoft’s conduct, might result in regulation rather than consumer choice determining market outcomes, and would require continued monitoring of Microsoft’s future activities.350Brief for Appellees United States and the State Plaintiffs, United States v. Microsoft, 253 F.3d 34 (D.C. Cir. 2001) (Nos. 00-5212, 00-5213) (emphasis added) (citation omitted).

And it was not just US antitrust authorities that espoused a highly dystopian view of Microsoft. Throughout two long-running competition cases, the European Commission took a similarly dire view of the threat posed by Microsoft.351See Commission Decision, Case COMP/C-3/37.792, Microsoft, ¶ 1074 (Mar. 24, 2004) [hereinafter EC Microsoft]; Commission Decision, Case COMP/C-3/39.530, Microsoft (Tying), ¶ 2 (Dec. 16, 2009). Mario Monti, the Commissioner responsible for competition policy at the time of the investigation, notably argued that “Microsoft is leveraging its overwhelmingly dominant position from the PC into low-end servers, the computers which provide core services to PCs in corporate networks.”352Mario Monti Messing with Microsoft, Forbes (Aug. 6, 2003, 9:47 AM), https://perma.cc/3PUX-EFM8 (quoting Competition Commissioner Mario Monti’s Statement of Objections).

This pessimism regarding rivals’ ability to compete head-on with Microsoft was also plain to see in the first decision that the European Commission adopted against Microsoft (which dealt with Microsoft’s alleged tying of Windows Media Player to the Windows operating system, and Microsoft’s purported refusal to supply interoperability information to its key rivals).353EC Microsoft, at ¶ 5. The European Commission concluded that:

Microsoft’s tying of WMP [i.e., Windows Media Player] also sends signals which deter innovation in any technologies which Microsoft could conceivably take interest in and tie with Windows in the future. Microsoft’s tying instils actors in the relevant software markets with a sense of precariousness thereby weakening both software developers’ incentives to innovate in similar areas and venture capitalists’ proclivity to invest in independent software application companies. A start-up intending to enter or raise venture capital in such a market will be forced to test the resilience of its business model against the eventuality of Microsoft deciding to bundle its own version of the product with Windows.354Id. at ¶ 983 (emphasis added).

Finally, the Commission cited equally alarming press articles and comments from rivals to support the above conclusion:

“The presence of the WM player on top of the Windows XP operating system [is] a major threat to the emergence of competitive technologies. Such problems are common to other applications shipped with Windows XP, such as instant messaging services, which are subject to the same threat.”

. . . .

“‘The mood among venture capitalists gathered two weeks ago was somber,’ says Larry Marcus, partner with the WaldenVC firm in San Francisco, as Microsoft presented details of Office XP, the newest version of its word-processing and spreadsheet software. ‘They continue to expand the number of businesses that they’re going after,’ Marcus says. ‘As an investor in early-stage and private companies, the dance with Microsoft is becoming more important and more dangerous.’ He assumes that dozens of business plans were scrapped after the Office XP demonstration.”355Id. at nn.1248–49 (emphasis added) (citations omitted).

All the above shows that antitrust authorities and courts on both sides of the Atlantic often bought into the dystopian reasoning that was being voiced against Microsoft in popular news outlets at the time of their respective investigations. Indeed, just like their lay counterparts, antitrust policymakers readily concluded that Microsoft would inevitably maintain its operating system monopoly and extend it to adjacent markets, thus deterring rival investments and stifling innovation. However, as explained in the following section, this is not how competition ultimately played out.

C.     What Happened Afterward

The previous sections have shown that mainstream media, antitrust authorities, and courts repeatedly made strong claims about the threats posed by Microsoft. This raises a critical question: did these gloomy prophecies come to pass, or was there alarmism at play?

Answering this query is no easy task. If critics’ predictions did not materialize, this failure could reasonably be attributed to the antitrust interventions that took place. Critics would thus have been right when they urged authorities to act. Moreover, even if critics’ worst fears were unfounded, this does not necessarily mean that antitrust intervention was uncalled for. We simply do not know what a counterfactual world with stronger (or weaker) antitrust or regulatory intervention against Microsoft would have looked like.

Mindful of these difficulties, this Article narrows the discussion to the following two questions. First, did critics’ most severe predictions materialize? And second, in the event that they did not, is there any evidence to suggest this was due to antitrust intervention? Accordingly, this Article does not second-guess the antitrust interventions that took place against Microsoft. Others have already thoroughly dissected that topic.356See, e.g., Stan J. Liebowitz & Stephen E. Margolis, Winners, Losers & Microsoft: Competition and Antitrust in High Technology (1999); Bittlingmayer & Hazlett, supra note 311; Nicholas Economides, The Microsoft Antitrust Case, 1 J. Indus. Competition & Trade 7 (2001); Richard J. Gilbert & Michael L. Katz, An Economist’s Guide to U.S. v. Microsoft, 15 J. Econ. Persps. 25 (2001); Lopatka & Page, supra note 332; see generally David McGowan, Between Logic and Experience: Error Costs and United States v. Microsoft Corp., 20 Berkeley Tech. L.J. 1185 (2005) [hereinafter McGowan, Between Logic and Experience]; David McGowan, Innovation, Uncertainty, and Stability in Antitrust Law, 16 Berkeley Tech. L.J. 729 (2001). Instead, this Article’s goal is to determine whether the criticism that was levelled against Microsoft was overblown or not.

This Article’s analysis shows that critics’ dire predictions mostly failed to materialize and focuses on three main lines of argument (discussed in detail in the previous sections). First, Subsection 1 explores critics’ claims that, because of increasing returns to scale and network effects, more efficient rivals would be unable to enter the operating system market, and that operating system and software markets were natural monopolies. Second, Subsection 2 analyzes the arguments that Microsoft would leverage its operating system dominance to overthrow more efficient rivals in adjacent markets, especially online ones. Finally, Subsection 3 explores detractors’ opinions that innovation would grind to a halt, especially in products that relied on Microsoft’s platform and were thus purportedly vulnerable to leveraging. This Article finds little evidence that any of these predictions were accurate.

1.     Natural Monopoly and Barriers to Entry

Let us start with the claims that operating systems were effectively a natural monopoly and that entry barriers would prevent more efficient rivals from competing in the markets where Microsoft was already present. Although it is true that Microsoft still commands a strong market share in the desktop and laptop segments, critics’ predictions were mostly off the mark.

At the time of writing this Article, Microsoft’s strongest segment is still desktop operating systems, where it currently commands a global market share of roughly 60% to 88%, according to various sources.357Desktop Operating System Market Share in United States of America Jan. 2019–May 2021, Statcounter GlobalStats, https://perma.cc/M2E2-LSDH; Operating System Market Share, NetMarketshare, https://perma.cc/RS9C-5MXU. At best, this marks only a slight decrease compared to the US district court’s ruling (which found that Microsoft had a greater than 80% market share, with MacOS included in the market) and the European Commission’s first decision (which concluded that Microsoft had a market share of 93.8% in the year 2002, also including MacOS).358See United States v. Microsoft Corp., 87 F. Supp. 2d 30, 36 (D.D.C. 2000) (“Microsoft’s share of the worldwide market for Intel-compatible PC operating systems currently exceeds ninety-five percent, and the firm’s share would stand well above eighty percent even if the Mac OS were included in the market.”), vacated, 253 F.3d 34 (D.C. Cir. 2001); see also EC Microsoft, at ¶ 431 (“In 2002, [Microsoft’s market share] had further risen to 93.8% when measured by unit shipments and 96.1% by revenues. Microsoft is forecast to maintain these 90%+ market shares in the coming years.”).

Although, at first sight, these numbers might appear to confirm critics’ fears, they mask a very different competitive reality. For a start, if one avoids static and overly nostalgic market definitions and instead accounts for the emergence of mobile operating systems, Microsoft’s market share has decreased to a far less impressive 31%, behind the 42% commanded by Google Android.359Operating System Market Share Worldwide May 2020–May 2021, Statcounter GlobalStats, https://perma.cc/XH65-ZVBE. This is no small detail.

The biggest implication is that Microsoft’s operating system is no longer a bottleneck through which users must pass to access innovative software (and vice versa). Developers can now choose from a vast array of platforms to reach users. For example, Epic Games’ highly popular “Fortnite Battle Royale” is available on Windows, MacOS, PlayStation 4, Xbox One, Nintendo Switch, iOS, and Android.360FAQ, Fortnite, https://perma.cc/RXV6-X6XP.

Similarly, Microsoft’s Windows operating system is no longer the sole gateway through which mainstream users can access the internet. Today, a majority use a smartphone running iOS or Android, a computer running Windows or MacOS, or a tablet running iOS or Android (though there are countless alternatives, as well).361Joseph Johnson, Number of Internet Users in the United States from 2015 to 2021, by Device, Statista (Jan. 27, 2021), https://perma.cc/8UTY-CYGZ. Readers will recall that Microsoft being the gateway to the internet was an important concern for its critics.362See supra notes 315–319 and accompanying text. This is simply no longer the case.

Finally, the success of rival operating systems, albeit on different platforms, contradicts critics’ claims that network effects and economies of scale ran across platforms.363EC Microsoft, at ¶ 533 (“In summary, there are substantial direct and indirect network effects, not only within each of the two different markets for client PC and work group server operating systems, but also between the two markets.”). In other words, dominating the desktop operating system segment did not give Microsoft a clear advantage when it competed in other segments, such as mobile operating systems (where it struggled to gain a significant foothold).364Claire Reilly, Windows 10 Mobile Gets Its Final Death Sentence, CNET (Oct. 8, 2017, 7:57 PM), https://perma.cc/TET2-WBQV (explaining that there were not enough apps or people using Windows phones to make Windows 10 mobile a real competitor in the mobile operating system market). The upshot is that Microsoft’s market position in its strongest and most secure segment was, and still is, much frailer than its critics could have ever imagined.

Perhaps more important, Microsoft’s market position has also been severely eroded in the server operating system market. Estimates place Microsoft’s market share in the web server segment somewhere between 1.8% and 25% (various iterations of Linux occupy almost all of the remaining market).365Usage Statistics of Operating Systems for Websites, W3Techs, https://perma.cc/2PHU-3NEP; OS Market Share and Usage Trends, W3Cook, https://perma.cc/K9NX-LEUK. Microsoft’s share of the market also seems to be declining at a steady pace; according to one report, it decreased from 27% in 2000 to 5% in 2019.366September 2019 Web Server Survey, Netcraft (Sept. 27, 2019), https://perma.cc/MJP4-3DZE. More importantly, Apache and Microsoft, which once occupied a combined 85% of the market, have both been overtaken by competitors.367Id. If increasing returns to scale, insurmountable network effects, and barriers to entry and expansion were really endemic, the opposite should have occurred.

Things are less clear in the workgroup server segment. The best available estimate places Microsoft’s share at 72.8% of the global workgroup market, slightly up compared to the 61% to 64% that it commanded in Europe at the time of the European Commission’s investigation.368See EC Microsoft, at ¶ 491 (“In 2002, of all servers shipped costing under USD 25,000, Windows’ share measured by unit shipments stood at 64.9% according to IDC. Measured by revenues, the figure was 61.0%.”); see also Thomas Alsop, Share of the Global Server Market by Operating System in 2018 and 2019, Statista (May 13, 2020), https://perma.cc/GZ68-VUX9. As we discuss below, this tends to suggest that one of the remedies imposed by the European Commission was not particularly effective.369See infra Section III.D.

In short, Microsoft’s critics made two important miscalculations. First, operating systems do not appear to have been as close to natural monopolies as they believed—otherwise, Microsoft would not have steadily lost market share in the web server operating system market and failed to gain traction with its Windows Mobile operating system.370Tom Warren, Microsoft Finally Admits Windows Phone is Dead, The Verge (Oct. 9, 2017), https://perma.cc/7K6E-JE4Z. Second, critics failed to recognize that the emergence of new platforms would allow users and developers to bypass Microsoft’s desktop operating system bottleneck, and that new technologies, such as cloud computing, would make the desktop operating system segment far less relevant because many computations would no longer need to be made on a user’s device.371Harold Bell, Desktop-as-a-Service: Work from Anywhere, on Any Device, VentureBeat (Jan. 22, 2020, 12:29 PM), https://perma.cc/5SRK-V5AV. The emergence of these new platforms also seems to have prevented the tipping that critics feared.372This tends to confirm the notion that internet platforms can be analyzed in terms of “moligopoly” competition, where large players compete against each other across different markets. See Nicolas Petit, Big Tech and the Digital Economy: The Moligopoly Scenario 4 (2020). In sum, Microsoft might not have been entirely disrupted—its desktop operating system and workgroup server market shares are still high—but it was defanged. It effectively lost the strong base that purportedly would have enabled it to coerce consumers and developers into adopting its products in adjacent markets.

2.     Leveraging

The picture gets worse for Microsoft and its critics when one looks at potential leveraging in adjacent markets. Microsoft’s critics mainly focused on three adjacent markets: web browsers, media players, and search engines. At the time of writing, Microsoft is a secondary player in all three of them.373Web Browser Usage Trends, W3Counter, https://perma.cc/8A2R-GZR6; Video Streaming Market Size & Share Report, 2021–2028, Grandview Research, https://perma.cc/8J25-7UBV; Joseph Johnson, Global Market Share of Search Engines 2010–2021, Statista (Mar. 12, 2021), https://perma.cc/Z96M-DFXJ. Moreover, Microsoft failed to dominate almost all of the adjacent markets in which it is currently active—the biggest potential exception being office suites.

In the internet browser market, Microsoft’s share has plunged from roughly 67% in 2007 to about 8% in 2019.374Web Browser Usage Trends, supra note 373. But even this 8% figure is misleading. Microsoft’s Edge browser now runs on the open-source Chromium technology.375Preston Gralla, Microsoft’s New Edge Browser: Third Time’s the Charm?, ComputerWorld (Jan. 15, 2020), https://perma.cc/FFS6-4FQ7. It could thus be replicated by rivals with relative ease. Much the same is true in the media player segment. Although Microsoft still bundles a version of its media player with Windows 10, the video streaming market has moved on.376Video Streaming Market Size & Share Report, 2021–2028, supra note 373. Most video streaming takes place either through internet browsers (thanks in part to HTML5377Nikhil Koranne, HTML5 vs. Flash: The Technical Perspective, Chetu, https://perma.cc/3Y5D-8SQE.), or through dedicated apps that are available on most popular platforms (think Netflix, HBO Max, Amazon Prime Video, YouTube, and Disney+). Microsoft is not an important player in this space. Finally, Microsoft failed to convert its operating system dominance into a strong search engine market share. Bing currently hovers around the 6% mark378Johnson, supra note 373. (although some estimates place it as high as 21% when the market is narrowed to desktop search engines in the US379Comscore Releases February 2016 U.S. Desktop Search Engine Rankings, Comscore (Mar. 16, 2016), https://perma.cc/TUU9-XMYG.).

Microsoft’s other services do not appear to have benefited from cross-platform leveraging either. For instance, Azure (Microsoft’s cloud computing service) is only the second largest player in the market (behind Amazon), with a market share of roughly 16%.380Cloud Market Share Q4 2018 and Full Year 2018, Canalys (Feb. 4, 2019), https://perma.cc/DU5B-4RL5; Fourth Quarter Growth in Cloud Services Tops off a Banner Year for Cloud Providers, Synergy Rsch. Grp. (Feb. 5, 2019), https://perma.cc/LYC5-P9E3. More importantly, there is no sign that Microsoft achieved this position by “forcing” users of its other services to choose Azure. The “Azure Portal,” which allows users access to Microsoft’s cloud, runs on all major desktop and tablet devices and is compatible with the latest versions of all the major web browsers.381Supported Devices, Microsoft (May 6, 2021), https://perma.cc/N99H-QS3E. Put simply, Microsoft is not particularly dominant in cloud computing and there is no evidence it has strongly benefited from some type of self-preferencing.

Similarly, Microsoft has apparently not attempted to coerce users of its wider ecosystem into joining the LinkedIn social network (which it acquired in 2016).382Microsoft to Acquire LinkedIn, Microsoft News Ctr. (June 13, 2016), https://perma.cc/8Z2H-336. For example, the LinkedIn app does not come pre-installed on Windows 10.383See New LinkedIn Desktop App for Windows 10, LinkedIn Corp. Commc’ns (July 17, 2017), https://perma.cc/V36D-PCJ6. And although LinkedIn has a large market share in its narrow segment of the social networking industry, Microsoft captures only a tiny share of online advertising. To give readers some idea, LinkedIn and Microsoft’s other websites captured an estimated 4.1% of online advertising spend in 2019, compared to 38.2% for Google, 21.8% for Facebook, and 6.8% for Amazon.384US Digital Ad Spending Will Surpass Traditional in 2019, eMarketer (Feb. 19, 2019), https://perma.cc/QQH5-5PMY.

Microsoft also failed to dominate the game console market. Granted, at the time of writing, it has a comfortable market share of 42% (excluding mobile gaming and platforms such as Steam).385Console Operating System Market Share Worldwide July 2020–July 2021, StatCounter Global Stats, https://perma.cc/87UL-UTEE. Though this is undoubtedly a profitable segment for Microsoft,386Annual Report 2018, Microsoft, https://perma.cc/4N3P-2ZLV (“Gaming revenue increased 14%, driven by Xbox software and services revenue growth of 20%, mainly from third-party title strength.”). it is certainly not the type of dominance its critics feared. Nor, again, is there any apparent sense that Microsoft somehow leveraged its strong position in desktop operating systems to maintain a strong position in the game console market.

And although Microsoft has been more aggressive in its attempts to convert users to its messaging services, it has not managed to thwart its competitors in those markets, either. Skype is pre-installed on Windows 10387Paresh Dave, Skype to Be Pre-Installed in New Version of Microsoft Windows, L.A. Times (Aug. 16, 2013, 9:03 AM), https://perma.cc/KPL4-QP5R. and Microsoft Teams is bundled with Office 365.388Tom Arbuthnot, Microsoft Teams Will Be Installed by Default for New Installations of Office 365 Business SKU, Tom Talks (Jan. 6, 2019), https://perma.cc/T4GD-3NGU (“Microsoft Teams will be installed a default part of Office 365 Business.”). Note that this could simply be a response to consumer demand for these services. However, despite Microsoft’s best efforts, competition in these markets has remained fierce. In the business chat segment, Skype competes against Google Hangouts, Slack, and Facebook, among many others.389See Liam Tung, Microsoft Teams is Killing It in the Business Chat Market, ZDNet (Dec. 11, 2018, 1:29 PM), https://perma.cc/HK4R-DAJU. In video calls, it competes against the likes of Zoom, Google Hangouts, and WhatsApp. Finally, in the instant messaging segment, Skype competes against Facebook (WhatsApp, Facebook Messenger, and Instagram), and Snapchat.390H. Tankovska, Most Popular Global Messaging Apps 2021, Statista (Feb. 10, 2021), https://perma.cc/B6PQ-LYG3. And a broader market definition might even include differentiated services such as Twitter, SMS, and TikTok. All of these are robust rivals.

Finally, Microsoft’s position has been somewhat eroded in the market for office suites (which partly overlaps with the services described in the previous paragraph). Estimates place Microsoft’s Office 365 suite at roughly 40% of the market, behind Google’s “G Suite” of productivity apps (now known as Google Workspace), which takes up roughly 60% of the market.391Shanhong Liu, Office Suites Market Share in the U.S. 2020, Statista (May 10, 2021), https://perma.cc/8DYN-3HH2; Office Suites Market Share, Datanyze, https://perma.cc/UBZ6-DX35. And although G Suite is free for private users, which is not the case for Office 365,392Microsoft 365, Microsoft Store, https://perma.cc/NDZ6-2SGC. both firms charge comparable fees to businesses (Office 365 is priced between $5 and $12.50 per user per month,393Microsoft 365 Business Basic, Microsoft Store, https://perma.cc/73ND-BSU4. while G Suite costs between $6 and $18 per user per month394Google Workspace, Google, https://perma.cc/GZ84-MKBC.). However, Microsoft’s position is stronger if one looks at the market in terms of revenue. In 2016, Office 365 generated roughly ten times more revenue than G Suite.395Paresh Dave, Google’s G Suite May Not Kill Off Microsoft Office 365, But It’s Gaining Ground With Enterprises, Bus. Insider (Feb. 1, 2018, 1:06 AM), https://perma.cc/VD2H-Y8SF. Although Google’s suite has grown significantly since 2016, it likely still lags behind Microsoft in terms of revenue. But even with this caveat, there is little sense that Microsoft has anything close to a monopoly in the market for office suites.

What do all of these numbers tell us? Microsoft’s critics claimed it would use its dominance in the operating system market to crush competitors in adjacent markets.396Jay Greene, Trustbusters Are Bypassing the Biggest Tech Company of Them All, Wash. Post (June 28, 2019, 9:12 AM), https://perma.cc/V5KN-8QV6. Looking at the most important business segments in which it operates reveals that this simply has not been the case. Microsoft has not managed to establish a durable monopoly position, or anything close to it, in any of these markets. Of course, this is not to say that Microsoft has not created highly profitable services in these adjacent markets. But critics’ fears that it would leverage its desktop operating system dominance into adjacent markets failed to materialize (the important question, which we address in Section D, is why).

3.     Innovation

A final important claim was that innovation would slow down because of Microsoft’s size and behavior. For instance, the European Commission concluded that:

Microsoft’s tying instils actors in the relevant software markets with a sense of precariousness thereby weakening both software developers’ incentives to innovate in similar areas and venture capitalists’ proclivity to invest in independent software application companies.397EC Microsoft, at ¶ 983 (emphasis added).

Unfortunately, it is impossible to reproduce counterfactual worlds in which Microsoft might have been subjected to tougher or looser antitrust scrutiny, or where it might have decided not to enter adjacent markets. This Article therefore cannot exclude the possibility that innovation might have been greater if Microsoft’s desktop operating system position had been further reduced, just as we cannot exclude the opposite.

Despite this limitation, it is worth noting that the period leading up to and following the Microsoft antitrust cases has been particularly rich in innovation and startup activity. At the very least, this Article can show that innovation most certainly did not grind to a halt, contrary to what some of Microsoft’s critics feared.

The best evidence for this conclusion comes from significant increases in venture capital spending in the United States, the upward movement of the Nasdaq index, and a steady (if not record-breaking) trend of Initial Public Offerings (“IPOs”). This Article also complements that analysis by identifying a series of groundbreaking technologies that have emerged in the information technology sphere since the mid-1990s.

A first sign that innovation has continued to progress at a strong pace is that venture capital spending in the United States has been steadily increasing since the mid-1990s.398Felix Richter, U.S. Venture Capital Funding Reaches Dot-Com Era Level, Statista (July 17, 2019), https://perma.cc/V4XQ-RNJY. According to some sources, this spending reached an all-time high in 2018.399Kate Rooney, Venture Capital Spending Hits All-Time High in 2018, Eclipsing Dotcom Bubble Record, CNBC (Jan. 10, 2019, 1:00 PM), https://perma.cc/2U39-83TN. These claims are based on a report by PitchBook and the National Venture Capital Association, which found that the total capital invested in 2018 was $131 billion, compared to the previous record of $105 billion in 2000 (just before the dotcom bubble burst).40018 Charts to Illustrate US VC in 2018, PitchBook (Jan. 28, 2019), https://perma.cc/4XTN-CEDJ. Similarly, PwC’s MoneyTree report places 2018 just beneath 2000, with $116 billion and $120 billion, respectively.401PwC/CB Insights MoneyTree Explorer, PwC MoneyTree, https://perma.cc/6NJC-86KY. Whether or not 2018 was the biggest year on record, it is clear that venture capital investment has not declined since Microsoft’s prime in the mid-1990s.

Likewise, the Nasdaq Composite index, which is heavily weighted towards information technology companies, reached an all-time high in the summer of 2018.402Fred Imbert & Alexandra Gibbs, NASDAQ and S&P 500 Reach All-Time Highs as Amazon, Alphabet, and Apple Rally, CNBC (Aug. 29, 2018, 7:16 PM), https://perma.cc/BGZ8-GYHZ. With the notable exception of the dotcom bubble, which burst in 2000, it has been steadily increasing since its launch in 1971.403NASDAQ Composite (^IXIC), Yahoo! Fin., https://perma.cc/A6AC-UL8J (with historical data option).

The picture is a little less clear when one looks at IPOs.404Jay R. Ritter, Initial Public Offerings: Updated Statistics 3 (2021); Lia Der Marderosian, Harv. L. Sch. F. on Corp. Governance, 2017 IPO Report, https://perma.cc/YCP6-MCLW. The years 1999 and 2000 (leading up to the end of the dotcom bubble) remain the largest on record, in terms of both aggregate proceeds and the number of IPOs.405Ritter, supra note 404. During each of these two years, aggregate IPO proceeds were around $64 billion, compared to roughly $33 billion in 2018.406Id. at 4. This gap is even larger if one adjusts for inflation. That said, the expectation is that 2021 will beat those numbers.407Siddharth Venkataramakrishnan, Global IPOs Begin 2021, Fin. Times (Apr. 28, 2021), https://perma.cc/E3SP-55TK.
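
To put rough numbers on the inflation adjustment (an illustrative back-of-the-envelope calculation using approximate U.S. consumer price index figures, not drawn from the sources cited above): the CPI rose from roughly 172 in 2000 to roughly 251 in 2018, so the $64 billion of aggregate proceeds raised in 2000 corresponds to about $64 billion × (251/172) ≈ $93 billion in 2018 dollars, nearly three times the $33 billion actually raised in 2018.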

Some have hypothesized that the slowdown of IPOs might simply be due to more companies choosing to remain private.408Patrick Galleher, Why More Businesses Are Choosing to Stay Private, Forbes (Feb. 26, 2020, 7:15 AM), https://perma.cc/87NL-ETLL. This idea is partly backed by data, which shows that the median IPO size has increased significantly since the end of the 1990s.409Der Marderosian, supra note 404. Nevertheless, the fact that IPO volumes have not (or have only recently) reached their late 1990s high does not mean that innovation has ground to a halt. For a start, it is worth mentioning that the records set in 1999 and 2000 were the product of what is arguably one of the largest economic bubbles in history.410Terry Mullen, Five Biggest Economic Bubbles in History, Truelytics: The Market (Jan. 29, 2018), https://perma.cc/8UAN-MCCX. In other words, the IPO valuations recorded during those years likely overstate the technological contributions of the underlying firms. Second, and more importantly, critics did not claim that innovation would fail to return to its record highs; they argued that Microsoft would cause innovation to slow down considerably.411See, e.g., Mark Cooper, Antitrust as Consumer Protection in the New Economy: Lessons from the Microsoft Case, 52 Hastings L.J. 813, 819 (2001). The healthy IPO activity recorded every year since then suggests they were wrong.

Of course, not all of these venture capital deals, IPOs, and stock market increases took place in the industries where Microsoft was active. Could it be that the rest of the economy was highly innovative while the information technology sector lagged behind? A qualitative analysis rapidly dispels this idea. The first two decades of the twenty-first century have witnessed the commercialization of, among others: the iPod; smartphones and tablets; smartwatches and other connected devices; the Blu-ray disc; online video and music streaming services; social networks (including Facebook, Twitter, LinkedIn, and Slack); e-books; online video conferencing; the Wikipedia encyclopedia (which easily prevailed over Microsoft’s failed “Encarta”412Noam Cohen, Microsoft Encarta Dies After Long Battle with Wikipedia, N.Y. Times: Bits (Mar. 30, 2009, 10:23 PM), https://perma.cc/6VU2-8DK4.); high-performance search engines; online maps; online shopping; Wi-Fi; Bluetooth; 3G, 4G, and 5G; crowdfunding; rudimentary self-driving vehicle technology; tremendous increases in processing power; and blockchains. These are all fields that are closely linked to the markets where Microsoft operates, and yet the innovation in these fields has been nothing short of incredible.413See generally Conner Forrest, Tech Nostalgia: The Top 10 Innovations of the 2000s, TechRepublic (May 1, 2015, 5:00 AM), https://perma.cc/DJD9-94LS; Top Tech Innovations of the Past 20 Years, Indep. Univ.: Better Life Blog (June 23, 2020), https://perma.cc/G52Y-H889.

From a more quantitative standpoint, Susan Houseman, Timothy Bartik, and Timothy Sturgeon, among others, have shown that the computer and electronics sector has witnessed unprecedented productivity growth from the mid-1990s onwards.414See Susan N. Houseman, Timothy J. Bartik & Timothy Sturgeon, Measuring Manufacturing: How the Computer and Semiconductor Industries Affect the Numbers and Perceptions, in Measuring Globalization: Better Trade Statistics for Better Policy 151, 153–54 (Susan N. Houseman & Michael Mandel eds., 2015). In their words:

[T]he extraordinary growth in real value-added in manufacturing and the accompanying productivity growth in the computer and electronic products industry results largely from two sets of products, computers and semiconductors, that, when adjusted for quality improvements, have prices that are falling rapidly.415Id. at 153.

In short, claims that Microsoft would cause innovation to slow down significantly appear to have been far off the mark. Innovation did not slow down, and even if it had, critics would still have to show that this was attributable to Microsoft’s size or behavior.

D.     Was Antitrust Intervention Against Microsoft Responsible for These Positive Outcomes?

This leaves us with one important question to answer. Could the failure of critics’ predictions regarding Microsoft reasonably be attributed to the antitrust interventions that took place between the mid-1990s and today, or simply to the general level of deterrence that existing antitrust laws achieve? This Article argues that neither of these is a convincing explanation. Instead, the more plausible explanations are that Microsoft ultimately lost its bottleneck position, and that its incentives and ability to exclude rivals in adjacent markets were greatly exaggerated.

1.     Actual Remedies Were Relatively Weak

A first important point is that the remedies that were imposed against Microsoft by antitrust authorities on both sides of the Atlantic were ultimately quite weak. It is thus unlikely that these remedies, by themselves, prevented Microsoft from dominating its competitors in adjacent markets.

The European Commission imposed three remedies upon Microsoft. In a 2004 decision, the Commission required Microsoft (1) to sell a version of Windows without its media player bundled, and (2) to provide server interoperability information to its rivals.416EC Microsoft, at ¶¶ 999, 1011. In 2010, the Commission also required that Microsoft introduce a browser choice screen within new versions of the Windows operating system.417European Commission Press Release IP/10/216, Antitrust: Commission Welcomes Microsoft’s Roll-Out of Web Browser Choice (Mar. 2, 2010), https://perma.cc/EG6F-5UCW. But all three of these remedies were, to some extent, failures.

For a start, Microsoft did sell a version of Windows without the media player: “Windows XP N.”418See FACT SHEET: Windows XP N Sales, Microsoft (Apr. 2006), https://perma.cc/G3M6-JP6T. During the first nine months following its introduction, Microsoft sold just 1,787 copies of this version (compared to 35.5 million copies of the regular version)419Id.—clearly too few to have any kind of competitive effect. Second, the browser choice screen remedy was so ineffective that, when Microsoft illegally stopped implementing it, it took authorities and consumers a full fourteen months to notice.420See Commission Decision, Case AT.39530, Microsoft (Tying), ¶ 64 (Mar. 6, 2013). Finally, it took almost four years for Microsoft and the Commission to agree upon the exact contours of the interoperability remedy.421European Commission Memorandum MEMO/08/125, Antitrust: Commission Decision on 27 February 2008 to Impose Penalty Payments on Microsoft—Frequently Asked Questions (Feb. 27, 2008), https://perma.cc/4R9P-EHX9. Microsoft’s workgroup server market position remained roughly constant from the start of the Commission’s investigation until the time of writing.422See supra Section III.C.1. This suggests that the Commission’s remedy did not significantly boost Microsoft’s competitors, as it had hoped (although we do not know the exact counterfactual).

The remedies imposed against Microsoft in the US do not appear to have had a much more meaningful effect. As explained above, from the mid-1990s to the early 2000s, Microsoft was the subject of two high-profile antitrust suits in the US.423See supra Section III.B. Both proceedings resulted in the adoption of consent decrees that attempted to limit Microsoft’s ability to foreclose rivals.424See Sharon Pian Chan, Long Antitrust Saga Ends for Microsoft, Seattle Times (May 11, 2011, 10:40 PM), https://perma.cc/J9HN-LVPN; U.S. v. Microsoft: Timeline, Wired (Nov. 4, 2002, 12:00 PM), https://perma.cc/3W57-HCZE. However, with hindsight, it is unlikely that either of these proceedings ultimately had a meaningful impact on Microsoft’s market position.

The first Microsoft case, in the mid-1990s, led to a consent decree that entered into force in 1995.425See U.S. v. Microsoft: Timeline, supra note 424. The original deal agreed upon by Microsoft and the DOJ’s antitrust division essentially sought to facilitate the entry of rivals in the desktop operating system market. It attempted to do this by preventing Microsoft from entering into “per processor” licenses or imposing “requirement contracts” on OEMs, and by limiting the duration of Microsoft’s license agreements with OEMs to one year.426United States v. Microsoft Corp., No. 94-1561, 1995 U.S. Dist. LEXIS 20533, at *6–8 (D.D.C. 1995).

It is worth noting that these remedies focused solely on the desktop operating system market.427See id. at *2–3. And yet, Microsoft’s desktop operating system market position remained more or less unchanged until 2010, fifteen years after the remedy was implemented.428Alphonse Eylenburg, Operating Systems: Market Shares Since the 1970s, Maps and Tables, https://perma.cc/MX96-HY8S. It thus seems highly unlikely that this first case meaningfully affected the course of competition in the desktop operating system market. More importantly, the first consent decree did not hinder Microsoft’s ability to compete in adjacent markets.429See Norman W. Hawker, Consistently Wrong: The Single Product Issue and the Tying Claims Against Microsoft, 35 Cal. W. L. Rev. 1, 2 (1998) (“Although the consent decree was widely criticized as too weak, two years later, the DOJ seemed to prove the critics wrong when the District Court entered a preliminary injunction against Microsoft after finding that the software behemoth had violated an obscure provision of the consent decree by requiring OEMs to install Microsoft’s internet web browser, Internet Explorer (IE), as a condition for licensing Microsoft’s then current OS, Windows 95. In a stunning set back, however, a split decision of the Court of Appeals reversed the District Court, holding that Internet Explorer and the OS were ‘integrated products’ protected by a proviso in the consent decree’s general prohibition against bundling the OS with other products.”). In other words, it did nothing to assuage fears that Microsoft might leverage its strong desktop operating system market position to foreclose rivals in adjacent markets. In fact, it is this “oversight” that ultimately led the DOJ and State Attorneys General to bring a second, and far broader, case against Microsoft.430See Michael P. Kenny & William H. Jordan, United States v. Microsoft: Into the Antitrust Regulatory Vacuum Missteps the Department of Justice, 47 Emory L.J. 1351, 1388 (1998) (“Following Judge Jackson’s decision, the Division abandoned any pretense of neutrality in the ‘browser wars’ by filing a separate antitrust lawsuit against Microsoft. Perhaps the Division finally realized that, even if it is was ultimately successful in its contempt proceeding and novel tying claim, the effect would be limited to an operating system (Windows 95) that rapid innovation in the computer industry was rendering obsolete. Regardless of its motivation, and the motivation of the twenty states that joined the Division’s efforts, the Division opted to file a much broader, but in some respects more traditional, antitrust complaint against Microsoft on May 18, 1998.” (footnote omitted)).

The remedies imposed following the second major antitrust case do not appear to have been any more effectual. The breakup remedy that was initially imposed by Judge Jackson in June 2000 was vacated on appeal.431See United States v. Microsoft Corp., 97 F. Supp. 2d 59, 64 (D.D.C. 2000), vacated, 253 F.3d 34 (D.C. Cir. 2001). Subsequent remand proceedings led Microsoft and the plaintiffs to agree upon two almost identical consent decrees.432See New York v. Microsoft Corp., 224 F. Supp. 2d 76, 267–70 (D.D.C. 2002), aff’d, 373 F.3d 1199 (D.C. Cir. 2004); Final Judgment, New York v. Microsoft Corp., No. 98-1233, 1998 WL 34097596 (D.D.C. Nov. 1, 2002); Renata B. Hesse, Section 2 Remedies and U.S. v. Microsoft: What Is To Be Learned?, 75 Antitrust L.J. 847, 856–57 (2009) (“The two Final Judgments entered by the court were essentially identical, although the California Group judgment did not have a provision to create a Technical Committee.”). These consent decrees included several noteworthy provisions.433See Hesse, supra note 432, at 857–62. A first set of requirements aimed to prevent Microsoft from foreclosing rival “middleware” providers, such as Java and Netscape.434See Microsoft, 224 F. Supp. 2d at 267–72. Microsoft was also made to share some of its APIs (i.e., application programming interfaces that effectively enable different applications to talk to each other) and communications protocols.435Id. at 268–69. Finally, a technical committee was appointed to monitor Microsoft’s compliance with the remedies.436Id. at 273.

While it is of course difficult to ascertain exactly how much of an effect these measures actually had on competition in the software industry, the general consensus appears to be that they were particularly weak. For instance, Professor David McGowan opined that:

The case produced a peculiar mishmash of liability and remedies, in which the acts that did the most to reinforce Microsoft’s market power were found lawful while the acts found unlawful were effectively trivial. The net result was a tepid tapioca pudding of a consent decree, which almost certainly will do nothing to reduce Microsoft’s market power.437McGowan, Between Logic and Experience, supra note 356, at 1189.

Other commentators have been more positive, though they have largely stopped short of extolling the virtues of the remedies. For instance, former Assistant Attorney General Renata Hesse (who oversaw much of the Antitrust Division’s work on the remedial aspects of the case) noted that, while the remedies may have been relatively positive overall, some of their most important aspects lacked teeth:

[I]t is not terribly surprising that Section III.E has formed the principal basis for claims by some that the Final Judgments have “failed.” The provision attempts to anticipate and address unknown threats to Microsoft’s operating system dominance. No matter how well drafted or vigilantly enforced the provision might be, that goal is difficult to accomplish.

. . . .

Section III.E imposed an affirmative obligation on Microsoft to make technology that it had not previously licensed available via a broad licensing program. As such, the focus on this particular provision was substantial. With each difficulty reported by the plaintiffs regarding Microsoft’s compliance, the perception intensified that the most significant part of the remedy would end up accomplishing nothing.

. . . .

This perception had grown so strong that, in late 2007, certain state plaintiffs advocated extending the Final Judgments in their entirety for an additional five years, in part to account for Microsoft’s delay in implementing Section III.E.

. . . .

It may yet be some time before all parts of the Final Judgments operate together as intended, as the district court—and the government enforcers—hoped would happen.438Hesse, supra note 432, at 867–69 (emphasis added).

Reading between the lines, one of Hesse’s main points seems to be that it is hard to design truly forward-looking remedies, and that the Microsoft litigation was no exception. As was the case with the first Microsoft consent decree, the new wave of remedies mainly addressed the issues identified in the case at hand (namely the challenges that Netscape and Java faced when trying to compete on the Windows platform). However, they did little to prevent Microsoft from flexing its muscles in other adjacent markets. Today, dystopian scholars would likely perceive this as a failure—what was there to stop Microsoft from using its desktop position to dominate the mobile sector, for instance?—but it would be far more accurate to characterize this as a (possibly unintended) success. History tells us that Microsoft never did manage to transfer its desktop market position into other adjacent markets,439See discussion supra Section III.C. despite the absence of remedies preventing it from doing so. Perhaps courts at the time wisely recognized that it is almost impossible to craft remedies that safeguard as-yet-unknown competitive threats, and that this job is better left to market forces. More likely, however, given the underlying analysis that myopically concentrated on backward-looking relevant markets and very limited conceptions of future innovative entry, such an outcome was an inadvertent byproduct.

The upshot is that the remedies imposed upon Microsoft in the EU and the US did very little to prevent the type of conduct critics had been warning against,440And some commentators have gone so far as to argue that the cases had a negative effect because they deteriorated the experience of Microsoft Windows users. See, e.g., Ed Bott, How a Decade of Antitrust Oversight Has Changed Your PC, ZDNet (June 8, 2010, 11:54 PM), https://perma.cc/U4WT-A8UX. and did not hamper Microsoft’s ability to compete in the mobile internet segment. Yet, as this Article explains in the following subsection, this is precisely where Microsoft ultimately lost its grip over online markets.

2.     Microsoft Lost Its Bottleneck Position

A second important factor suggests that it was not antitrust enforcement that enabled Microsoft’s competitors to flourish. As the previous subsection already suggested, one of the biggest changes that took place in the digital space was the emergence of alternative platforms through which consumers could access the internet. Because Microsoft was unable to beat competitors in all of these markets, it is no longer a bottleneck to the internet.

According to one source, roughly 94% of all internet traffic came from Windows-based computers in January 2009.441Operating System Market Share Worldwide May 2020–May 2021, supra note 359. Eleven years later, that number had fallen to about 31%.442Id. Android, iOS, and OS X now have shares of roughly 41%, 16%, and 7%, respectively.443Id. Consumers can thus access the web via a large number of platforms. The emergence of these alternatives surely reduced the extent to which Microsoft could use its bottleneck position to force its services upon consumers in online markets. The result is well known: the early twenty-first century witnessed the emergence of innovative and successful internet companies such as Google, Amazon, Facebook, eBay, Spotify, and TikTok. Microsoft could only sit and watch as other firms conquered these new online markets.

This again raises the critical question: would this market outcome have been substantially different without the antitrust cases that were brought against Microsoft? While any answer will necessarily be speculative, the short answer appears to be no. And the reason, once again, lies in the importance of capabilities—something that antitrust analysis simply ignores.

As Ben Thompson has argued, Microsoft did not have the right capabilities to compete in the search and mobile phone markets—two markets with enormous strategic importance.444Ben Thompson, Tech and Antitrust, Stratechery (June 10, 2019), https://perma.cc/4D39-EZZC (“It is my contention that Microsoft failed to compete on the Internet and in mobile because the company was fundamentally unsuited to do so, both in terms of culture and capability. The implication of this conclusion is that the antitrust case against Microsoft was largely a waste of time: the company would have been surpassed by Google and Apple regardless.”). For a start, a company built on providing high-end software was unlikely to successfully compete with Apple in the mobile handset industry. Apple had a track record of producing outstanding hardware—culminating with the groundbreaking iPhone—that Microsoft simply couldn’t match.445Ben Thompson, More on Chrome and AMP, the Case Against Google, Decentralization and Paradigm Shifts, Stratechery (Feb. 22, 2018), https://perma.cc/TS5L-LCP3. Similarly, by the time Microsoft decided to enter the search engine industry, Google had already acquired vast experience in this area.446Ben Thompson, Where Warren’s Wrong, Stratechery (Mar. 12, 2019), https://perma.cc/7KM2-XVCD. This is not to say that subsequent entrants can never overthrow industry incumbents. But, in this case, there is little evidence to suggest that Microsoft ever came close to doing so. In the words of Benedict Evans:

The end of Microsoft’s dominance of tech actually came in two phases. First, as above, it lost the development environment to the web, but it still had the client (the Windows PC) and it then provided lots and lots of clients to access the web and so became a much bigger company. But second, a decade or so later, Apple proposed a better client model with the iPhone, and Google picked that up and made a version for every other manufacturer to use. Microsoft lost dominance of development to the web, and then lost dominance of the client to smartphones.447Benedict Evans, How to Lose a Monopoly: Microsoft, IBM, and Antitrust, Benedict Evans (Jan. 1, 2020) (emphasis added), https://perma.cc/U6ZW-42GZ.

In response to these arguments, some former Microsoft executives, including Bill Gates, have argued that antitrust intervention distracted Microsoft, thus preventing it from effectively competing in the mobile phone industry.448Jordan Novet, Bill Gates Says People Would Be Using Windows Mobile If Not For the Microsoft Antitrust Case, CNBC (Nov. 6, 2019, 5:05 PM), https://perma.cc/HWP4-C5EH; Thompson, supra note 445. Benedict Evans, however, offers three compelling counterarguments to that theory.449See Evans, supra note 447. First, Microsoft competed aggressively in the mobile phone segment before and after the launch of the iPhone.450Id. However, none of its offerings came anywhere close to the successful formula employed by Apple.451Id. Second, many other large firms failed to successfully navigate the mobile internet market, even though they had no antitrust cases to worry about.452Id. Why should Microsoft have fared differently? Finally, the successful formula identified by Google—relying heavily on widely compatible open-source software—was completely at odds with Microsoft’s entire corporate culture.453Id.

In short, it is very difficult to make the case that antitrust intervention caused Microsoft to lose its strong “gateway to the internet” position.

3.     If Antitrust Deterrence Was Sufficient for Microsoft, Then It Also Applies to Today’s Tech Firms

More fundamentally, if this Article is wrong, and antitrust enforcement did indeed prevent Microsoft from dominating online markets, then there is arguably no need to reform the antitrust laws on either side of the Atlantic, nor even to adopt a particularly aggressive enforcement position. As argued above, the remedies that were imposed on Microsoft were relatively localized. Accordingly, if antitrust enforcement did indeed prevent Microsoft from dominating other online markets, then it is antitrust enforcement’s deterrent effect that is to thank, and not the remedies that were actually imposed.

This raises an inconvenient prospect for those who would reform antitrust laws today. As this Article has argued above, the antitrust fears that were voiced about Microsoft bear a strong resemblance to those that are leveled against other big tech firms today. More importantly, many of the market features that are said to cause failures in digital markets were already present during Microsoft’s heyday. For example, Microsoft could also call upon network effects and potentially increasing returns to scale. However, under the same antitrust laws that exist today, the result was not the total domination of the internet but thriving competition.

In short, if antitrust laws did indeed prevent Microsoft from harming burgeoning competitors, they still have that same effect today. At the very least, critics have not shown how the characteristics of the tech firms they would target—for instance, Google and Facebook—markedly differ from Microsoft’s. They thus fail to make the case that antitrust law is in need of reform.

E.     Microsoft, Pathways to Entry, and Lessons for Antitrust Scrutiny of Data-Intensive Firms Today

Finally, it is important to consider the alternative: it is possible that Microsoft’s own behavior ultimately sowed the seeds of its relative demise. In particular, the alleged barriers to entry (rooted in nostalgic market definitions and skeptical analysis of “ununderstandable” conduct) that were essential to establishing the antitrust case against the company may have been pathways to entry as much as barriers.

There is a fundamental underlying error in the entire barriers-to-entry enterprise: it is rooted in the idea that barriers tend to determine the number of firms, and that the number of firms determines competitiveness. But this view is far too simplistic:

[A]s markets grow in size, the industry structure that can emerge is not one of atomistic competition with constant quality but rather one where concentration remains high but product quality increases. Therefore, competition along nonprice dimensions can explain why concentration does not necessarily diminish as industries grow. The significance of this point cannot be overstated. Models that focus on only price competition may fail miserably to correctly predict industry concentration and consumer welfare when there are other product dimensions along which competition occurs. This is likely to be particularly true in industries requiring investment and creation of new products. It is no coincidence that many of the most controversial antitrust and regulatory cases have arisen in high technology industries (e.g., computers and telecommunications) where competition in research and development and new products is paramount.454Dennis W. Carlton, Barriers to Entry, in 1 Issues in Competition Law and Policy 604 (Wayne D. Collins, ed., 2008).

Confusion surrounding the meaning of “barriers to entry” often arises because the precise consequence of conduct characterized as an entry barrier is unclear. If there are such “barriers,” is anticompetitive conduct a result of them? The proper analysis doesn’t end with entry barriers; it starts with an analysis of what would happen without them, and then assesses whether the barriers change anything. In so doing, it must also account for the benefits of existing conduct, including the barriers themselves. Where it does not, it again tilts the assessment toward protection of the status quo.

A key status quo bias problem in the analysis of entry barriers is the assumption of essentiality of inputs or other relationships created by the early movers. Consider this error in the Microsoft court’s analysis of entry barriers: the court pointed out that new entrants faced a barrier that Microsoft didn’t face, in that Microsoft didn’t have to contend with a powerful incumbent impeding its entry by tying up application developers.455United States v. Microsoft Corp., 253 F.3d 34, 56 (D.C. Cir. 2001) (“When Microsoft entered the operating system market with MS-DOS and the first version of Windows, it did not confront a dominant rival operating system with as massive an installed base and as vast an existing array of applications as the Windows operating systems have since enjoyed.”).

But while this may be true, Microsoft did face the absence of any developers at all, and had to essentially create (or encourage the creation of) businesses that didn’t previously exist. In fact, although the court dismissed this argument in a slightly different context, it noted that, “[a]ccording to Microsoft, it had to make major investments to convince software developers to write for its new operating system, and it continues to ‘evangelize’ the Windows platform today.”456Id. Yet, the court also noted:

Because the applications barrier to entry protects a dominant operating system irrespective of quality, it gives Microsoft power to stave off even superior new rivals. The barrier is thus a characteristic of the operating system market, not of Microsoft’s popularity, or, as asserted by a Microsoft witness, the company’s efficiency.457Id.

The point about quality may be true, and it may even be true that the extent of the purported barrier didn’t correlate with Microsoft’s popularity or efficiency. But it is not true that the applications barrier to entry was independent of Microsoft’s efforts or investment; it was not merely a “characteristic of the operating system market,” as if exogenous to any conduct undertaken by Microsoft in order to obtain its scale in the first place. Rather, as noted, Microsoft invested heavily to create that network of developers.

Crucially, having done so, Microsoft created a huge positive externality for new entrants: existing knowledge and organizations devoted to software development, industry expertise, reputation, awareness, and incentives for schools to offer courses. It could well be that new entrants in fact faced lower barriers with respect to app developers than did Microsoft when it entered. In other words, whatever Microsoft’s intent with its efforts to create and capture the app developer input market, the effect was to create an entire ecosystem supportive of app and software development. Microsoft failed to internalize this externality (or did not have the capability to do so), and subsequent entrants were able to build their platforms more expeditiously and at less relative expense than Microsoft.

This dynamic is arguably crucial in considering the distinction between data pre- and post-entry. Much of the discussion of data as a barrier to entry casually speaks as if, because an incumbent has data, new entrants must also have data in order to compete. But the reality is that incumbents entered without data and produced it subsequent to entry—again, sometimes creating entirely new businesses and business models around it. Facebook is an obvious example of this dynamic, but so are Uber, Google, and many others.

Data in this respect is like reputation. Nearly all new entrants suffer reputational disadvantages. And yet new entry happens all the time. Likewise, the more successful the incumbent—the larger its network, the stronger its reputation, the better its product—the more difficult new entry becomes for rivals. In the US, courts have consistently rejected the idea that reputation operates as a barrier to entry. The Court of Appeals for the Ninth Circuit, for example, has noted:

We agree with the unremarkable proposition that a competitor with a proven product and strong reputation is likely to enjoy success in the marketplace, but reject the notion that this is anticompetitive. It is the essence of competition.458Omega Env’t, Inc. v. Gilbarco, Inc., 127 F.3d 1157, 1164 (9th Cir. 1997) (citing Am. Pro. Testing Serv., Inc. v. Harcourt Brace Jovanovich Legal & Pro. Publ’ns, Inc., 108 F.3d 1147, 1154 (9th Cir. 1997) (“[R]eputation alone does not constitute a sufficient entry barrier in this Circuit.”); see also United States v. Syufy Enters., 903 F.2d 659, 669 (9th Cir. 1990) (citing United States v. Waste Mgmt., Inc., 743 F.2d 976, 984 (2d Cir. 1984)) (“We fail to see how the existence of good will achieved through effective service is an impediment to, rather than the natural result of, competition.”).

Similarly, the Court of Appeals for the Third Circuit has noted:

New entrants and customers in virtually any market emphasize the importance of a reputation for delivering a quality good or service. . . . [Plaintiff’s] argument, without some limiting principle (that it fails to supply), implies that there are barriers to entry, significant in an antitrust sense, in all markets. We find this proposition implausible and . . . precluded by Supreme Court precedent.459Advo, Inc. v. Phila. Newspapers, Inc., 51 F.3d 1191, 1201–02 (3d Cir. 1995) (emphasis omitted).

It is possible that, under some conditions, reputation or product differentiation can operate as a barrier to entry.460See id. at 1202. But there must be special circumstances for that to be true: most notably, it has arisen in cases where incumbents undertook actions to prevent or preclude new entrants from developing their own brand reputation in order to compete.461See U.S. Philips Corp. v. Windmere Corp., 861 F.2d 695, 704 (Fed. Cir. 1988). Yet it can’t be always and everywhere true, or else every market would be characterized by anticompetitive barriers.

The same holds true for data. Data is typically generated by companies after they enter markets, as a by-product or intended consequence of their operations, or in some cases it is purchased beforehand.462See Rubinfeld & Gal, supra note 212, at 357 (“More commonly, data are collected as a (valuable) side-effect of other productive activities.”). It cannot be the case that generating data in this way creates, in the abstract, an entry barrier, or else every market would be marked by entry barriers and the risk of antitrust liability for incumbents—including offline markets. By definition, data produced as a consequence of ongoing market operations is something that only incumbents will have, and that incumbents will always have. Defining the possession of data in this context as an entry barrier would be tantamount to inviting antitrust challenges on the basis of a company’s mere existence (and even more so, its success) in a market that competitors wish to enter.

For data to be treated as a potential entry barrier, it seems the data at issue must be some combination of essential, unique, exclusive, and rivalrous. If a suitable dataset can be created by new entrants or obtained elsewhere, or if other data can be used in its stead, or if alternatives other than data can be used (e.g., synthetic data or artificial intelligence), then it is hard to see any relevant competitive significance from data, regardless of the amount.

A key aspect of the mistake here is an availability heuristic (another form of nostalgia bias): it is often assumed that the successful way something has been done, and is done today, is the only way to do it, or the only way new entrants can do it and be competitive. But, of course, that is never actually true. As noted, Facebook uses a very different method and different data than does Google to match advertisers and users, and yet it entered the online advertising/matchmaking market and became enormously successful without adopting Google’s model (and without obtaining Google’s, or anyone else’s, existing data). Uber entered the transportation network market with a business model that didn’t require capital outlay on a large fleet of vehicles. Digital cameras made film irrelevant and didn’t need to rely on suppliers of film to enter. Fax machines went through a series of improvements until email and cloud services completely replaced them.

The examples are endless. But they are key to understanding the non-essentiality of data: for some entrants—those adopting incumbents’ business models, minimizing their own innovations, or even piggybacking on incumbents—data may seem indispensable. And they may find a willing ear at some antitrust agencies. But innovation has never required implementation of the same business model as incumbents, and especially not access to the particular proprietary inputs incumbents have created.

But most importantly, as noted above, new entrants may face even more welcoming environments because of incumbents. Consider how much Google contributed to the creation of the online advertising industry, consumer acceptance of advertising-financed websites, and web page and app developers’ expectations that advertising would need to be accommodated. Whatever the data used to deliver it, there can be no doubt that a new provider of online advertising today faces an environment in which its product is known and even invited. That was not always true in the past. Knowing precisely how much benefit this advantage confers on new entrants is extremely difficult. The point, however, is that there is no basis for assuming it is irrelevant, yet the fact that it is simply ignored by traditional antitrust analysis implies exactly that conclusion. And, of course, in some cases it could well be more significant than the impediment that accompanies it.

Conclusion

This Article has argued that dystopian antitrust prophecies are generally doomed to fail, just like those belonging to the literary world. The reason is simple: while it is easy to identify what makes dominant firms successful in the present day (i.e., what enables them to hold off competitors in the short term), it is almost impossible to cognize the near-infinite ways in which the market could adapt. Indeed, it is today’s supra-competitive profits that spur the efforts of competitors. Surmising that the economy will come to be dominated by a small number of successful firms is thus the same as believing that all market participants can be outsmarted by a few successful ones. This might occur in some cases (or for some time), but, as this Article has argued, it is bound to happen far less often than pessimists fear.

Dystopian and nostalgic commentators reject this premise. Today, they have set their sights on the digital economy. The crux of their case is that data-intensive markets share common features that, purportedly, pose severe threats to competition and innovation. These features include the ubiquity of network effects, increasing returns to scale, and data-related network effects. This has led some to call for a complete overhaul of the current antitrust paradigm, including a move away from the consumer welfare standard and the error-cost framework. The proposals that have, so far, been put forward would notably involve blanket prohibitions against self-preferencing by dominant platforms, mandatory data access regimes that would purportedly prevent big tech firms from leveraging their market positions, shifting the burden of proof in merger proceedings that involve digital markets, and upending antitrust law’s light-touch approach to vertical restraints. All of these changes would be achieved by either amending existing antitrust statutes or by adopting ad hoc regulations. And because they target harms that are merely hypothetical, these proposals would effectively lead enforcers to apply a form of precautionary principle within antitrust policy.

However, both this Article’s review of the economic literature and its case study of the Microsoft litigation suggest that dystopian scholars have not successfully made the case for precautionary antitrust. Indeed, the economic features of data make it highly unlikely that today’s tech giants could anticompetitively maintain their advantage for an indefinite amount of time—and much less leverage this advantage in adjacent markets. Because data is just information, it is almost impossible for a single firm to either hoard it for a prolonged period or prevent its competitors from obtaining similar data from alternative sources. Moreover, a burgeoning body of empirical evidence suggests that, while data can be very valuable, it also involves diminishing returns at the margin. Finally, data is of little value without complementary firm-level capabilities. As a result, there is little reason to believe that today’s tech giants have acquired a self-reinforcing competitive advantage. This might explain why these same firms managed to outcompete incumbents who had access to much vaster datasets.

The idea that dystopian scholars have overshot the mark is also supported by this Article’s case study of the Microsoft antitrust litigation. Indeed, the fears that were voiced at the time of the Microsoft antitrust proceedings are strikingly similar to those that are being raised about the digital economy today. Chief among them was a concern that Microsoft would use its desktop operating system market position to dominate adjacent online markets. These fears did not pan out. While the reasons for this are hard to ascertain, there is a strong case to be made that this had little to do with antitrust enforcement and that competitive forces—particularly Microsoft’s inability to offer compelling products in the online space—were the chief cause of this “failure.” The bottom line is that critics have not successfully made the case for replacing current antitrust doctrine with something akin to the precautionary principle. This is not to say that antitrust authorities should never intervene in digital markets. Our argument is merely that competition authorities should focus on behavior that clearly harms consumer welfare, rather than try to prevent hypothetical future harms.

With this in mind, there is one dystopian novel that offers a fitting metaphor to end this Article. The Man in the High Castle tells the story of an alternate present, where Axis forces triumphed over the Allies during the Second World War.463See generally Philip K. Dick, The Man in the High Castle (1962). This turns the dystopia genre on its head: rather than argue that the world is inevitably sliding towards a dark future, The Man in the High Castle posits that the present could be far worse. In other words, we should not take any of the luxuries we currently enjoy for granted. In the world of antitrust, critics routinely overlook that the emergence of today’s tech industry might have occurred thanks to, and not in spite of, existing antitrust doctrine. Changes to existing antitrust law should thus be dictated by a rigorous assessment of the various costs and benefits they would entail, rather than a litany of hypothetical concerns. The most recent wave of calls for antitrust reform has so far failed to clear this low bar.

 
