14 January 2021

Data: Governance and Geopolitics


Big data is often perceived as the black gold of the twenty-first century. Despite its fundamental differences from oil, it is indeed as critical, a fact tragically underscored by the lack of data on testing and tracking during the pandemic, especially in the United States. Yet the ways in which data is governed—or not—are still not well understood. Data, information, and big data overlap, and so do the issues involved in governing them. Those issues range from the seemingly prosaic (in what country will data centers be located?) to questions bearing on the nature of democracy itself (how will false news and hate speech be policed, and by whom?). To the extent that data is governed at all, that governance is a fractal of how the internet is governed—scattered, bottom-up, and driven by loose coordination among many actors, most of them in the private sector.

Given its importance, how data is collected, stored, protected, used, and transferred across national borders is becoming a geopolitical issue. Moreover, governing data inevitably runs into differences in ideological visions of the internet and fundamental cultural divides. China’s insistence on internet sovereignty could be seen as a legitimate effort to control harmful or hateful information. Yet, from another perspective, it is a non-tariff barrier that limits foreign access to China’s digital market.

How data is governed can be thought of along several lines of activity: legislating privacy and data use, regulating content, using antitrust laws to dilute data monopolies, self-regulation by the tech giants, regulating digital trade, addressing intellectual property rights (IPR) infringement, assuring cybersecurity, and practicing cyber diplomacy. Of these, antitrust, content regulation, and privacy are most immediately in the spotlight and are the focus of this commentary, though it also touches briefly on their connections to the other issues.

The Immediate Agenda

The recent announcement of the Federal Trade Commission (FTC) suit to break up Facebook and the ongoing antitrust investigation of Google dominate the tech news at home. Abroad, Germany’s highest court ruled that Facebook broke concentration (antitrust) laws when it combined data from its different platforms, especially WhatsApp and Instagram, as well as from other sites and apps. The ruling, which will be appealed, was a direct challenge to Facebook’s business model, for it permitted users to block the company from combining their Facebook data with data from other sources.

The FTC case is twofold: that Facebook acquired Instagram and WhatsApp to strangle budding competition, and that, for lack of competition in the sector, customers were hurt by being unable to choose a social media platform that extracted less data. Legal experts disagree on the strength of the case, particularly the second charge. The one certainty is that the cases will drag on. Moreover, U.S. antitrust actions seldom break up companies—the last major attempt, the Justice Department’s case against Microsoft in the late 1990s, ended without a breakup. Instead, regulators usually opt for a less dramatic remedy—for instance, compelling tech giants to share their data in some way with competitors or new entrants, a reminder that data is the critical element.

The challenge of regulation—that internet technologies move fast while government processes are slow, especially if the action sought requires an agreement or treaty among nations—also afflicts legislation. As a result, there is always the risk that by the time a regulation is enacted, it will be obsolete or, worse, counterproductive. That challenge is illustrated by one old piece of legislation that has come under new scrutiny: Section 230 of the U.S. Communications Decency Act of 1996. Enacted in the early years of the web, Section 230 was meant to promote innovation, not to protect decency or privacy. As a result, the regulatory regime it established was permissive: platforms were given broad immunity from lawsuits over the words, images, and videos posted on their sites.

Section 230 is increasingly the target of criticism across the political spectrum. Trump and his supporters believe Twitter, Facebook, and their kin muzzle conservative views; without the 230 protections, those who felt they had been denied a platform could sue. The other side of the political spectrum, including House Speaker Nancy Pelosi, maintains that Section 230 has permitted a slew of disinformation and harassment; absent it, they argue, the sites would have to be much more careful in policing their content. A 2019 bill introduced by Senator Josh Hawley (R-MO) proposed ending legal protections for tech companies that did not agree to an independent audit ensuring there was no political bias in their monitoring of content.

The issue has become intensely political, but the fact is that the tech giants are caught in an impossible situation: on the one hand, they can no longer claim to be mere platforms, with no responsibility for what appears on them; on the other, they are neither willing nor probably able to become publishers, fully responsible for their content. Regulating them like public utilities is one suggested approach. Another, somewhat akin to Hawley’s bill, is the model of the financial services industry’s watchdog, the Financial Industry Regulatory Authority (FINRA), which is licensed by Congress but is a private nonprofit organization. Tech companies themselves could create something similar, with guarantees that its judgments would be independent and, ideally, with a license from Congress.

In a striking demonstration of how much global geometry has changed, neither of the two most noted pieces of data privacy legislation so far has been enacted by a nation-state. Most important is the European Union’s General Data Protection Regulation (GDPR), in force since May 2018. The GDPR stipulates how data controllers and processors must collect and process data from EU citizens, regardless of where those controllers and processors are located. EU users who visit a site, wherever it is hosted, must be told what data the site collects from them via cookies, and they must explicitly consent.

The other major piece of legislation is the California Consumer Privacy Act (CCPA), which came into force at the beginning of 2020. In contrast to the GDPR, which in effect requires consumers to opt in to data collection, the CCPA allows consumers to opt out. In that sense, it is less stringent than the GDPR. The CCPA gives users the right to ask a company to produce all the personal information it has gathered on them over the years, as well as the categories of businesses it got that information from or sold it to. Under both the GDPR and the CCPA, if a consumer asks, companies must delete all the information they hold on that consumer, and if they have shared personal data with another company, they must ensure that any company subsequently processing that data deletes it too.

The GDPR was designed to serve as a template for other countries, and it is being used that way. Indeed, 132 countries have put in place some legislation to secure data and privacy. Brazil, Japan, and South Korea have followed Europe’s lead, and, in general, any nation interested in a trade agreement with the European Union must address data privacy on its terms. As Dean C. Garfield, president of the Information Technology Industry Council, put it, “in the absence of another approach, it’s easier for other markets to follow what Europe has done.” In fact, Microsoft allows users to manage their data according to GDPR rules even if they are not EU citizens.

Toward a Digital-20

Beyond the financial-page headlines are the other lines of activity that govern data—or don’t. There is no globally agreed-upon definition of digital trade and thus no set of international laws to govern it. Key issues are treated differently in different trade agreements. The World Trade Organization’s General Agreement on Trade in Services (GATS), for instance, predates the explosion of global data flows, but because it does not distinguish how services are delivered, it covers digital services. Most other agreements, however, cover physical goods and intellectual property but make no provision for digital goods.

In this context, data localization, which explicitly aims to limit flows across borders by requiring companies to store and process data within national borders, could be seen as a non-tariff barrier, reducing efficiency by increasing costs and decreasing scale—effects that spill over into the entire global supply chain. In recent years, infringements of intellectual property rights have surged, mainly because digital technology makes counterfeiting and its distribution cheap, relatively easy, and hard to trace. Cyber-enabled theft of trade secrets has been particularly concerning for the United States, especially with regard to China.

Neither the government nor the private sector anticipated the speed of this technological revolution and the challenges it would pose. As a result, the internet still operates on vulnerable protocols that date back to the 1960s. Cybercrime alone is predicted to cost the global economy $6 trillion annually by 2021. Cyber intrusions threaten not only business operations and supply chains but also financial and communications infrastructure, national security, privacy, trade, and commerce. The costs of cyber espionage and cyberwarfare are hard to estimate, but these practices are widespread. States involved in espionage, like criminals involved in crime, try to hide their identities or at least maintain plausible deniability. Moreover, even when attribution is possible, nations can prosecute those responsible for cybercrimes but not for espionage—such activities remain the murky domain of clandestine operations. And even when a no-espionage agreement is achieved, it is often ineffective.

The ubiquity of data and artificial intelligence (AI) will also transform diplomacy by increasing both the number of non-governmental actors who influence formal diplomacy and the purposes for which that data is employed. Imagine, for instance, if the ethnic cleansing in the former Yugoslavia during the 1990s had occurred in the presence of ubiquitous cellphone cameras. Fresh gravesites, such as those from the massacre at Srebrenica, would have been documented immediately for the world to see. More data for more participants will make formal diplomacy messier and less predictable: witness the 2013 disclosure by Edward Snowden of National Security Agency surveillance programs, which he justified as whistle-blowing and which did, in the end, play some role in the public pressure that led to the GDPR. Yet data experts and increasing amounts of data will also create new relationships and thus new opportunities for diplomacy.

What is clear is that the laws, treaties, agreements, regulations, and self-regulation that make up this mare’s nest, as two analysts put it, “lack transparency and coherence: The combination drives up the cost of innovation and doesn’t go far enough to encourage healthy competition or to protect the billions of people worldwide who now rely on the products and services tech companies produce.” The digital age presents geopolitical and philosophical problems whose complexity and speed are beyond the reach of an existing global architecture that underrepresents both emerging powers, like China, and ever more influential private-sector actors, from the tech giants to the Gates Foundation. We are addressing twenty-first-century problems with a twentieth-century mindset, approach, and toolkit.

This worrisome geopolitical context calls for an urgent gathering to ensure that the most transformative technologies of our time do not spiral out of control into a world order we will come to regret. The Digital-20 (D-20), proposed by the Global TechnoPolitics Forum, aims to serve as a bridge between the existing global architecture and the new geopolitical context. It would build upon the important work led by the Bretton Woods institutions, the founding internet organizations, and think tanks in establishing international codes and standards as well as demonstrating leadership. The D-20 is in many respects modeled on the G20, but the new group would broaden the dialogue to include new stakeholders and shift the focus to the key geopolitical challenges posed by emerging digital technologies. Because the D-20 would be an autonomous group with no executive power and no binding decisions, its primary impact would lie in creating trust and peer-to-peer intimacy among members as they develop, in small convenings, a shared diagnosis of potential problems and a common analytical framework. Building on this trust, the D-20 would strive to produce actionable and measurable outcomes.

Gregory F. Treverton is a senior adviser (non-resident) with the Transnational Threats Project at the Center for Strategic and International Studies (CSIS) in Washington, D.C., professor of the practice of international relations and spatial sciences at the University of Southern California, and chair of the Global TechnoPolitics Forum. Pari Esfandiari is president of the Global TechnoPolitics Forum, a member of the At-Large Advisory Committee at the Internet Corporation for Assigned Names and Numbers (ICANN), and a non-resident senior fellow at the Atlantic Council’s GeoTech Center.
