22 October 2020

Artificial Intelligence Cold War on the horizon

By RYAN HEATH

Welcome to POLITICO’s new Global Public Tech Spotlight — an extension of the Global Translations newsletter. Each week we track major issues facing the globe. Sign up here.

The United States is the world’s leading force in artificial intelligence (AI), for now, but China is rapidly catching up, making partnerships among democracies critical to staying ahead. Alongside those competitive and security tensions, the world lacks a common rulebook for the ethical use of AI.

Speaking at a POLITICO AI Summit on Thursday, Eric Schmidt, chairman of the National Security Commission on Artificial Intelligence and former CEO at Google, said the U.S. urgently needs a national AI strategy based on the principle of "whatever it takes." Schmidt said Americans could not relax on AI issues because even consumer AI innovations have the potential to be “used for cyber war” in ways that aren’t always evident or anticipated. Schmidt has previously warned against "high tech authoritarianism."

While the U.S. has lacked a central organizing effort for its AI development, it has an advantage in its flexible tech industry, said Nand Mulchandani, the acting director of the U.S. Department of Defense Joint Artificial Intelligence Center. Mulchandani is skeptical of China’s efforts at “civil-military fusion,” saying that governments are rarely able to direct early-stage technology development.

Tensions over how to accelerate AI are driven by the prospect of a tech cold war between the U.S. and China, amid improving Chinese innovation and access to both capital and top foreign researchers. “They’ve learned by studying our playbook,” said Elsa B. Kania of the Center for a New American Security.

“Many commentators in Washington and Beijing have accepted the fact that we are in a new type of Cold War,” said Ulrik Vestergaard Knudsen, deputy secretary general of the Organization for Economic Cooperation and Development (OECD), which is leading efforts to develop global AI cooperation. But he argued that “we should not abandon hope of joining forces globally.” Leading democracies want to keep the door open: Ami Appelbaum, chairman of the Israel Innovation Authority, said “we have to work globally and we have to work jointly. I wish also the Chinese and the Russians would join us.” Schmidt said coalitions and cooperation would be needed, but to beat China rather than to include it. "China is simply too big," he said. "There are too many smart people for us to do this on our own."

Absent clear frameworks for protecting privacy and other rights at home and abroad, the invasive nature and scale of many AI technologies could hinder companies from growing civilian markets and leave the public skeptical of national security efforts.

A Global Partnership on AI (GPAI), started by leaders of the Group of Seven (G7) countries and now managed by the OECD, has grown to include 13 countries including India. The U.S. is coordinating an AI Partnership for Defense, also among 13 democracies, while the OECD published a set of AI Principles in 2019 supported by 43 governments.

Knudsen said that it is important for global AI cooperation to move cautiously. “Multilateralism and international cooperation are under strain,” he said, making a global agreement on AI ethics difficult. “But if you start with soft law, if you start with principles and let civil society and academics join the discussion, it is actually possible to reach consensus,” he said.

Data and cultural dividing lines

Major divisions exist over how to handle data generated by AI processes. “In Europe, we say that it’s the individual that owns the data. In China, it’s the state or the party. And then there’s a divide in the rest of the world,” said Knudsen. “There is a right to privacy that accrues to everyone,” according to Courtney Bowman, director of privacy and civil liberties engineering at data-mining and surveillance company Palantir Technologies. But “we have to recognize that privacy does have a cultural dimension. There are different flavors,” he said.

Most experts agree there is scope to regulate how data is used in AI. Palantir’s Bowman said that AI success isn’t about unhindered access to the biggest datasets. “To build competent, capable AI it’s not just a matter of pure data accumulation, of volume. It comes down to responsible practices that actually align very closely with good data science,” he said.

“The countries that get the best data sets will develop the best AI: no doubt about it,” said Nand Mulchandani. But he said that partnerships are the way to get that data. “Global partnerships are so incredibly important” because they give access to global data, which in aggregate is better than even a huge dataset from within a single country such as China.

How can government boost AI?

Rep. Cathy McMorris Rodgers (R-Wash.), a leading Republican voice on technology issues, wants the U.S. government to create a foundation for trust in domestic AI via measures such as a national privacy standard. “We need to be putting some protections in place that are pro-consumer so that there will be trust” in “pro-American” technology, she said.

U.S. Rep. Pramila Jayapal (D-Wash.) wants both government regulation and private sector standards while AI technologies — particularly facial recognition — are still young. “The thing about technology is, once it's out of the bottle, it's out of the bottle,” she said. “You can't really bring back the rights of [Michigan resident Robert Williams, who was arrested based on a faulty ID by facial recognition software], or the rights of Uighurs in China, who are bearing the brunt of this discriminatory use of facial recognition technology.”

Some experts argue that while regulation is needed, it must be sector-specific: AI is not a single concept but a family of technologies, each requiring a different regulatory approach.

Government has a role in making data widely available for the development of AI, so that smaller companies have a fair opportunity to research and innovate, said Charles Romine, Director of the Information Technology Laboratory (ITL) within the National Institute of Standards and Technology (NIST).

On the question of government AI funding, Elsa Kania said that it’s not possible to make direct comparisons between U.S. and Chinese government investments. The U.S. has more venture capital, for example, while “eye-popping investment figures” from China’s central government don’t mean an awful lot if they aren’t matched by investments in talent and education, she said. “We shouldn’t be trying to match China dollar-for-dollar if we can be investing smarter.”
