4 August 2023

Democracy and the AI Revolution

Francis Fukuyama, Mathilde Fasting

Courtesy of our friends at Civita, American Purpose is republishing an excerpt from Mathilde Fasting’s conversation with Francis Fukuyama. This transcript has been lightly edited, and a recording of the full conversation is available.

Mathilde Fasting: Twenty years ago, you wrote a book called Our Posthuman Future: Consequences of the Biotechnology Revolution. How do you see the ability to modify human behavior or biology affecting liberal democracy?

Francis Fukuyama: In the late '90s, I was running a seminar on new technologies and their impact, focusing on both IT and biotech. At the time, I believed that the biotech revolution could have the more significant consequences of the two, and I still think that may be true. While we have witnessed the downsides of social media and information technology in the interim, the biotech revolution has the potential to alter human behavior and biology itself, and thus to influence liberal democracy.

The reason I believe biotech could have significant political effects is that it provides tools for certain individuals to control the behavior of others. Twentieth-century totalitarianism demonstrated how highly centralized governments attempted to control the behavior of their populations using techniques like agitation, propaganda, re-education, and police-state enforcement. However, these methods proved insufficient in the long run, as seen in the breakdown of the Soviet Union and China's struggles to control its population.

Biomedical technologies, particularly germline interventions that can alter heritable human characteristics, could have a profound impact on our understanding of human rights. Human rights are based on our implicit or explicit understanding of human nature, and the most crucial rights are those that respond to the core aspects of being human. Manipulating human nature through biotechnology could ultimately change the nature of rights.

It's important to note that germline engineering is not the only technology with the potential to control behavior. Psychopharmacological interventions have already revolutionized our ability to regulate mood and behavior. For instance, we currently use drugs like the selective serotonin reuptake inhibitor (SSRI) Prozac, the benzodiazepine Xanax, and stimulants such as Ritalin to control behavior, including in children. These interventions will likely increase in the future.

However, the IT revolution has had more noticeable short-term impacts, particularly in the realm of social media. Initially, many of us believed that increased access to information would be democratizing: spreading power through broader access to information would be good for democracy. While power has indeed devolved, we have also witnessed the elimination of the hierarchical structures that certified and verified the quality of information. As a result, a lack of trust permeates societies, and we now struggle not only with disagreements about values but also with an inability to agree on simple factual information.

This lack of trust has fueled polarization, as different factions hold contrasting understandings of reality. In the United States, for example, there is a deep divide where one group believes that the 2020 election was stolen from Donald Trump, and no amount of contrary information can sway that belief. This polarization based on divergent understandings of reality poses a danger, and the forthcoming AI revolution may exacerbate the problem. Verifying digital artifacts will become increasingly challenging, making it difficult to authenticate information.

For instance, the advent of deep fakes and advanced image-manipulation tools, such as Photoshop's generative fill, raises concerns about the authenticity of digital documents. The decline in trust in digital evidence will extend to social institutions in general.

In the future, this lack of trust and the increasing sophistication of interventions will pose challenges in various domains, including legal proceedings. Evidence such as photographic proof in court cases may be met with skepticism and accusations of manipulation or fabrication.

MF: Can you say something about biotech and the posthuman future? I remember that you wrote something about differences between the West and China when it comes to manipulating human nature, and I'd like you to elaborate a bit on that.

FF: I think that one of the problems is regulation. If you look at the history of technology, there's always been a race between technological development and the ability of societies to regulate it, and it always takes a long time for that to happen. Think of the printing press: Gutenberg's invention of movable type had a great impact on the Protestant Reformation, for one thing, and on the spread of ideas that challenged the Catholic Church. That led to 150 years of religious warfare before people eventually reconciled themselves to printing. Or think about radio and television and their relationship to the rise of fascism and Stalinism; these were technologies that allowed dictators to connect to mass audiences. It took time, but we've more or less figured out how to regulate and deal with those kinds of technologies.

There's going to be a lag between the time a technology is introduced and the time society can catch up with it. And biotech is very hard to regulate. I have a colleague at Stanford who works with high school students. There's a kind of standardized biotech lab that fits inside a shipping container and can perform different kinds of interventions using CRISPR technology. Many high schools are holding competitions for students who want to do this kind of genetic manipulation. What my colleague does is create a set of norms that these student groups will follow, because it is impossible to monitor what they're doing. So, how do we regulate nuclear weapons and other dangerous technologies? Well, we've got overhead satellite photographs, we've got nuclear inspectors, and so forth. And because it requires a country to actually engage in the industrial production of nuclear fuel, we can pretty much monitor what's going on in places like North Korea and Iran. With biotech, that's impossible. The technology does not require large facilities, it's very widespread, and the only way you can hope to regulate it is through some kind of normative intervention that gives researchers standards they need to impose on themselves.

And a lot of the work being done is very scary. I have another Stanford colleague who runs a biomedical research lab and has been really worried about this. You're probably following the controversy over the Wuhan lab leak, which was dismissed by many people as right-wing propaganda, but there is now evidence indicating that the whole COVID epidemic was the result of sloppy security at the Wuhan Institute of Virology. We can expect further things: This colleague of mine gave another scary briefing about an American researcher who downloaded the genetic code for a monkeypox virus and used it to create the virus in his lab. The starting point wasn't another physical virus; it was simply digital information available to anyone on the Internet. And he used it to create a completely novel variant of monkeypox that could be much more virulent than the existing one. What's going to prevent the spread of this? The only way to monitor whether anyone in the world is doing this kind of research is to depend on the responsibility of the organizations and individual scientists working on it. And it's very hard to know at what point somebody will breach those norms and do very dangerous things.

I spent a long time in the early 2000s thinking about how to regulate biotechnology. One thing I observed and felt was that this stuff is going to happen in Asia long before it's going to happen in Europe or in the United States because, frankly, this is a cultural thing. In countries with a Christian religious heritage, there is a belief in a fairly sharp dichotomy between human and non-human nature. There is a belief that God endowed human beings with a certain degree of dignity that non-human nature doesn't have. This belief forms the basis for our understanding of human rights or the universality of human rights. But it also means a kind of downgrading of the natural world on the other side of this dichotomy.

In most Asian cultural traditions, this dichotomy doesn't really exist. There is a continuum from non-human nature to human nature, and the ability to manipulate one spills over into the other. In fact, China was the first country where a germline experiment on a human embryo was conducted. Although the experiment was shut down and the scientist involved was punished, I believe that these cultural inhibitions will be much stronger in Europe and North America than in many parts of Asia.

So, this is the larger problem of regulating technology of any sort. You can decide to regulate it within your territorial jurisdiction, but it's going to happen somewhere else. This is also the problem with the idea of regulating AI. If we do it in Europe or the United States, we still have competition with China and other big countries. They might pull ahead, and we'll ask ourselves, "Are we self-limiting this critical technology that will then be developed by somebody else and used against us?" Unless we have a global regulatory scheme, we're stuck in this competition.

MF: Previously, you wrote in American Purpose on the possible social impacts of AI. Even if you say it's a fool's errand to predict the long-term social consequences of AI and technology, can you tell me what we know so far about the overall effect of social media, the Internet, and technology on democracy?

FF: If you think about the transformation that has occurred over the last fifty years with the rise of a whole class of information technologies, the broad conclusion among economists is that it has increased socioeconomic inequality. There's something to this, because right now the main social divide in most countries, certainly in most advanced democracies, is one based on education. If you have a higher education, if you've gone to the University of Oslo or have a degree, you're doing well, your income is much higher, and the gap between those people and people with just a high school education or less has grown enormously, almost everywhere. There's no question that much of the current populism we see around the world is fueled by that division: the people who vote for populist politicians are usually less educated, they don't live in big cities, and they are not connected to the larger global economy. That is upsetting our democratic politics.

However, one consequence that I don't think people have recognized sufficiently is a massive decrease in inequality brought about by the transition from an industrial to a post-industrial economy, and that concerns gender relations. The replacement of physical labor by machines and the shift in the nature of work, in which most people, instead of working in factories or lifting heavy objects, are sitting in front of computer screens all day in service industries, has had an enormous impact on the role of women in the economy. Beginning in the late 1960s, virtually every advanced society began to see significant increases in female labor force participation. Now, some people might say this was an ideological or cultural change, that a movement for women's equality appeared at that time. This is one of those areas where I think cause and effect are very hard to disentangle. It is certainly the case that you could not have had that degree of female empowerment if you didn't have occupations in which women could earn salaries, support families, and be independent of their husbands. And this began to happen as tens, even hundreds, of millions of women all over the world entered the workplace in the late 20th century.

And so the consequences of technological change are complicated and hard to foresee. The shift to digitization has had both positive and negative impacts. But it's very hard to say that overall, it's simply increased inequality.

Audience member: Do you think that artificial intelligence will further accelerate democratic decline, or do you think that there's light at the end of the tunnel?

FF: I think there are two clear challenges that the current generation of artificial intelligence poses for democracy. Firstly, there is the general dissolution of our certainty about the information we receive. Deep fakes make it difficult to determine the authenticity of anything we see on the Internet. However, there are AI-based authentication technologies that can be used to certify the provenance of a particular digital artifact. Regulation needs to catch up with technology because once we recognize the general problem of mistrust, we need technological means to verify the authenticity of digital content. The solution is not to ban the technology but to use it for control and verification.

The second challenge is an intensification of what already exists. Social media has been effective at manipulating people through targeted advertising. With artificial intelligence, targeting can become smarter and more adaptive: once people realize they are being targeted, machines can automatically adjust the manipulation, which makes it hard to detect. I've observed this on Twitter, where the coverage of the Ukraine war shifted subtly after Elon Musk took over, showing less pro-Ukrainian content and more content sympathetic to Russia. This subtle manipulation can have cumulative effects over time.

On the question of totalitarian control, China's social credit system represents a new level of individual monitoring made possible by machine learning and large-scale data analysis. With the integration of COVID monitoring into the general social control system, China now has extensive knowledge of an individual's whereabouts, social interactions, and conversations. While it is a powerful form of control, even such high levels of monitoring have not always yielded the desired outcome, as seen when people protested against zero-COVID measures.

It is challenging to predict the effectiveness of these technologies in undermining democracy. In many cases, using technology to counter the negative impacts of technology might be our best solution.

Francis Fukuyama is chairman of the editorial board of American Purpose and Olivier Nomellini Senior Fellow and director of the Ford Dorsey Master’s in International Policy program at Stanford University’s Freeman Spogli Institute for International Studies.

Mathilde Fasting is a project manager and fellow at Civita, a Norwegian think tank dedicated to liberal ideas, institutions, and policies based on individual liberty and personal responsibility.
