11 June 2020

Antitrust investigations have deep implications for AI and national security

Dakota Foster

In late March, Attorney General William Barr announced that “decision time” was looming for America’s leading tech firms. By early summer, Barr expects the Department of Justice to reach preliminary conclusions about possible antitrust violations by Silicon Valley’s largest companies. The DOJ’s investigation is just one of several probes scrutinizing potential abuses by Facebook, Google, Amazon, Apple, and Microsoft. While concerns over consumer protections, anti-competitive practices, and industry concentration have fueled these antitrust investigations, their results will almost certainly have national-security ramifications. 

Secretary of Defense Mark Esper has argued that artificial intelligence is likely to shape the future of warfare, and the national-security community has largely backed that conclusion. The most recent National Defense Strategy, released in 2018, highlights AI’s importance, noting that the Pentagon will seek to harness “rapid application[s] of commercial breakthroughs…to gain competitive military advantages.” With defense officials arguing that U.S. military superiority may hinge on artificial intelligence capabilities, antitrust action aimed at America’s largest tech companies—and leading AI innovators—could affect the United States’ technological edge.


But the effects of such action are highly uncertain. Will a less concentrated tech sector composed of somewhat smaller firms fuel innovation and create openings for a new generation of tech companies? Or will reductions in scale significantly hurt leading tech firms' ability to leverage the traditional building blocks of AI innovation—like computing power and data—into breakthroughs? The answers to these questions aren't clear-cut, but they offer a way to begin thinking about how antitrust enforcement could affect artificial intelligence innovation and national security more broadly.

Unlike with some earlier national-security technologies, the commercial sector plays an outsize role in AI development. As a result, government access to both AI products and innovation hinges, in large part, on industry. While academia, private research labs, and AI start-ups offer important contributions to AI development, major American technology companies have traditionally led the field. Last year, Microsoft, Facebook, Amazon, Google, and Apple ranked among the ten largest recipients of U.S. artificial intelligence and machine learning (ML) patents.

Changes to the composition of America's tech sector might boost net AI innovation. From 2013 to 2018, 90 percent of successful Silicon Valley AI start-ups were purchased by leading tech companies. This is a potentially worrisome trend for AI innovation. After all, incumbent firms and emerging companies can have very different incentives. Entrenched tech giants may be more focused on maintaining market share than on disrupting markets altogether.

As Big Tech increasingly moves to acquire AI start-ups, individual firm dynamics also shift. Instead of "building for scale," start-ups begin to "build for sale," adopting a mentality that may be ill-suited for moonshot innovations. Would a company like DeepMind (now owned by Google parent company Alphabet), for example, have developed AlphaGo—the groundbreaking computer program that became the first to defeat a professional human Go player—if the firm's primary goal was to be acquired by a bigger player?

Antitrust action could shift these incentives and spur competition, potentially opening the door for new AI innovations—and for a new wave of AI companies. Smaller in stature, some of these firms might focus on more niche AI applications, including defense-related products, as start-ups like Anduril and Shield AI have done. Today's tech giants have every financial incentive to cater to foreign markets and the average consumer, not to the U.S. federal government. Indeed, with its global user base, it is hard to imagine Google tailoring its AI innovation decisions to U.S. defense needs. The same may not hold in an AI ecosystem where some companies are built in the mold of Palantir (a data-analytics company with clear national-security applications), consider the government their primary customer, and concentrate on its demands accordingly.

National-security agencies, from the Pentagon to the U.S. intelligence community, could stand to benefit from more targeted innovation—and from an industrial base better attuned to their needs. As Christian Brose points out, only a fraction of the U.S.’s billion-dollar tech “unicorns” have operated in the defense sector, leaving the U.S. military “shockingly behind the commercial world in many critical technologies.”

As Silicon Valley's largest companies consolidate AI talent and novel ideas through acquisitions, they gain an ever-larger say in the future of AI. This consolidation, which antitrust action could disrupt, may not favor innovation. But breaking up major tech firms also has potential pitfalls for AI innovation. With scale comes resources, and AI innovation is resource-intensive, requiring large quantities of data, diverse data stores, and vast computing power—known as "compute" in industry jargon.

American tech giants' huge revenues uniquely equip them to fund costly AI research. Google's DeepMind, arguably the world's leading AI research organization, is billions of dollars in debt and lost over $500 million in 2018 alone. Google's fortress-like balance sheet can easily absorb the costs associated with such cutting-edge research, but smaller firms likely cannot. The economics of compute offer a concrete example of this dynamic. The rapidly increasing volume of compute required for deep learning research, coupled with its prohibitively high cost, creates significant barriers to entry and innovation for smaller AI firms. As leading AI researchers warned in 2019, the "exponentially higher" costs of compute may leave the U.S. with only "a handful of places where you can be on the cutting edge." Even the most well-funded independent AI organizations rely on Big Tech's compute resources. OpenAI's billion-dollar compute partnership with Microsoft, reached after OpenAI spent millions renting compute from leading tech firms, offers one example.

Changes to firms' scale may also affect their access to data, another key resource for AI innovation. Studies have linked the performance of deep learning models to the quantity of data fed into them. At present, tech giants have access to unprecedented volumes of data about their users. Google, for example, can harness data from Google Search, Maps, YouTube, Gmail, and other sources. If antitrust enforcement leads to divestment or broader break-ups, that access to data may diminish, slowing innovation.

Would reduced access to large, internal data stores hurt U.S. tech companies' ability to innovate relative to China, whose biggest firms have largely avoided antitrust action? Big Tech executives, including Mark Zuckerberg, have argued that antitrust action could hinder U.S. competitiveness. Data access is a growing point of concern along these lines. The U.S. National Security Commission on AI has reportedly discussed the possibility of data pooling among allied countries to "offset" any data advantage held by China. However, it remains unclear just how central big data will be to the future of AI innovation (promising ML techniques like few-shot learning are not data-intensive) and how well big companies can utilize their large datasets in the first place.

National security and antitrust are rarely part of the same conversation. The realities of today's AI ecosystem should challenge that dynamic. American AI innovation is concentrated in the private sector—particularly within its largest, most dominant firms. As these firms face antitrust scrutiny, policymakers and lawmakers alike need to consider the AI ecosystem that they will have a hand in creating. They will need to contemplate its competitiveness, its innovativeness, its responsiveness to defense and national-security needs, and its accessibility to government. Will its companies have the resources to access and acquire key inputs for AI innovation like compute and data? Will the sector's composition encourage competition at every level? Or will it stifle new growth and entrench anti-innovative practices? American leadership in AI—a key national security technology—may hinge on an AI ecosystem shaped by antitrust action. It will be imperative that innovation considerations play a role in forging it.

Dakota Foster is a graduate student at Oxford University and a former visiting researcher at the Center for Security and Emerging Technology.
