25 October 2021

Artificial Intelligence and Big Data in the Indo-Pacific

Jongsoo Lee

What is the impact of artificial intelligence (AI) and big data on societies in the Indo-Pacific? How are countries using AI and big data to enhance their national security and advance their national interests? And what are the major regulatory issues? For a perspective on these and other matters, Jongsoo Lee interviewed Simon Chesterman, dean and provost’s chair professor at the National University of Singapore Faculty of Law and senior director of AI Governance at AI Singapore.

What are nations in the Indo-Pacific doing to develop their artificial intelligence (AI) and big data capabilities? Which countries are successful, and which are not?

The importance of technological innovation to economic development has long been a feature of the Asian tiger economies. Wealthy, internet-savvy countries like Japan, South Korea, and Singapore leveraged the benefits of high tech, and their consumers embraced it. More recently, China made AI a strategic priority, and that was a game changer.

China’s size, its growing cohort of tech unicorns, and its relaxed approach to the personal data of its citizens quickly saw it overtake the United States in research papers published and patents filed.

Early definitions of “success” focused very much on growth. Indeed, it was striking that China’s 2017 strategy talked about having regulations to govern AI – by around 2025. I think that view of AI is changing now, with a more nuanced vision of success encompassing scale but also responsible deployment of the technology. That’s certainly how Singapore has been trying to position itself.

How are AI and big data impacting societies in the Indo-Pacific? In what ways is the impact positive or negative?

To the extent that AI enables optimization of services, efficient allocation of resources, and so on, there’s a correlation between adoption of AI and distribution of those benefits to consumers. What’s more troubling is the way in which it concentrates power in the hands of a limited number of companies and how it has facilitated state surveillance.

On the power of companies, contrast the hand-wringing responses to Facebook et al. in the United States with the iron fist of China when companies are seen as stepping out of line.

On surveillance, however, there are far fewer levers in the region to limit government access to data on a scale previously unimaginable – partly because historical views of privacy have been narrower than in, say, Europe, but also because populations were never really given the option.

How are countries in the Indo-Pacific using AI and big data to strengthen their national security and advance their national interests? Which countries are successful in doing this and what explains their success?

I would break it down into offense and defense.

On offense, the real winners have been China and, in particular, North Korea. Though it’s often more a question of cybersecurity than AI or big data as such, both countries have developed a reputation for hacking. The secret appears to be a strong talent base, almost unlimited resources, and a relaxed attitude to the consequences of getting caught.

On defense, there’s a lot more going on. Most countries are now trying to work out how to defend against hostile information campaigns, for example. Singapore recently passed the Foreign Interference (Countermeasures) Act (FICA) that gives the government extraordinary powers to address threatened or actual attacks. (There is an ongoing debate about how broadly “foreign interference” should be construed, and whether it’s appropriate to exclude the judiciary from challenges to its application.)

Another application, of course, is the manner in which governments are using these tools against their own populations.

Are the governments of China and other states in the Indo-Pacific using AI and big data to enhance surveillance of their populations?

It would be strange if they were not doing so.

Twenty years ago, the September 11 attacks transformed the way in which surveillance was understood in many parts of the world. Part of that was collecting vast amounts of data. What we’re seeing now is an increased ability to analyze that data in real time. It was said in 2001 that the U.S. National Security Agency intercepted a message warning of the attacks on September 10 but that it was manually translated only on September 12. Whether or not that would have made a difference, it’s certainly true that both data collection and analytical capacity have increased massively in the 20 years since.

China is widely seen as being at the forefront, with lots of reporting on its social credit system and surveillance of its Uyghur population, in particular. That said, it’s also the case that China this year has adopted a Personal Information Protection Law. It’s not the GDPR, but it’s a step towards recognizing that personal data isn’t just something that can be vacuumed up without consequences.

How can governments in the Indo-Pacific regulate AI and big data in ways that maximize their benefits while minimizing their harmful use and impact?

There’s a growing realization that some form of regulation is necessary to gain the benefits of AI while minimizing or mitigating harm. Until recently, many governments were like China – pushing regulation off into the distance for fear of stifling innovation or driving it abroad. Now, there’s a serious conversation going on about governance, with countries like Singapore adopting a model framework and China starting to crack down on its tech companies.

The challenge here is what’s known as the Collingridge dilemma. Any effort to control new technology faces a double bind. During the early stages, when control would be possible, not enough is known about its harmful social consequences to warrant slowing its development. By the time those consequences are apparent, however, control has become costly and difficult.

In my book, “We the Robots?: Regulating Artificial Intelligence and the Limits of the Law,” I try to map out how regulators might approach these questions.

Is there a rivalry in AI and big data between China and other nations such as the United States? If yes, who is winning this rivalry and what explains their success? Is there a cost to their success?

Of course! Rivalry can be healthy for technology – think of the space race between the U.S. and the Soviet Union.

Rivalry can also stimulate regulation of dangerous technology. Even before the first atomic weapons landed on Hiroshima and Nagasaki, scientists were trying to work out how to get the benefits of nuclear energy while limiting its destructive potential. They would have been pleasantly shocked to learn that no other nuclear bombs have been dropped in anger since then, and only a handful of states even possess them.

The danger of this rivalry is if we start seeing the internet being cut up and the world partitioned in a cyber cold war. Or if we see the massive concentration of power and wealth in the hands of a small number of people in just a couple of countries. All of that would run counter to the vision of the internet and of AI as a liberating technology that can enhance rather than diminish human freedoms.

Are AI and big data a game changer when it comes to intelligence and the information war between nations? Which nations in the Indo-Pacific are effectively using AI and big data to bolster their national intelligence capabilities?

This is, I think, one of the real challenges that will define power in the 21st century and beyond. We’re still used to thinking of geopolitics in terms of “geo” – which literally means land. For decades, there has been an on-and-off discussion within the United States about whether it can contain China. Even in the context of the internet, there is talk that a divide might emerge – as when the U.S. decided to join the Global Partnership on AI (GPAI) precisely in order to exclude China.

But the speed, autonomy, and opacity of modern AI systems are effacing that territorial notion of power. There won’t be a Berlin Wall, a line dividing the world into blocs. It will be a messy competition for power across platforms. And to me the real danger is that this will distract us from the need to maintain some red lines on where that technology goes – in terms of maintaining human control of some classes of decisions, and human involvement in determinations of rights and obligations.

And, though it’s science fiction for the time being, we’re going to need red lines to stop the development of AI systems that are uncontainable or uncontrollable. That means some kind of international cooperation. Elsewhere, I’ve suggested that we need the equivalent of the International Atomic Energy Agency for AI – an International Artificial Intelligence Agency. That is partly a thought experiment, but it carries a serious argument: coordination at the global level is needed if the benefits of AI are to be distributed in anything like an equitable fashion, and if we’re serious about limiting the development of AI to things that will help humanity rather than harm it.

There’s a tendency to think that regulation of AI requires that we start from scratch and come up with entirely new laws. Rather than come up with new rules, what we need are institutions and procedures – and the will – to apply existing rules. That will require nation states to step in, and industry to regulate itself. But without some kind of global coordination, it will be too easy to avoid the rules, or else we’ll see a race to the bottom.

Coming out of the ashes of World War II, there was political will to build new institutions to uphold norms that, intuitively, most of us accept – even if we know they are not always upheld in practice. My hope is that we can create some kind of institution that will help prevent the first true AI emergency. Otherwise, we’ll have to do it in a hurry to prevent the second.
