
8 August 2023

Artificial Intelligence and Digital Diplomacy

Lala Jafarova

The coronavirus pandemic (COVID-19) gave a strong impetus to the development of science, to the general process of digitalization, and to the introduction of a growing number of electronic services. In healthcare, these processes manifested themselves in the creation of tracking applications, information-sharing platforms, telemedicine, and more. However, the boom in such technologies also showed the need to develop dedicated policies and legal mechanisms to regulate their implementation: although they can bring benefits, their use can also pose risks, such as cyberattacks. Digital technologies have also become widely used in politics. Because of the lockdowns around the world during 2020 and 2021, many ministerial meetings and meetings between heads of state were held online, and international organizations such as the United Nations (UN) resorted to hybrid event formats that allowed heads of state to speak remotely.

The possibilities of the Internet and the application of digital technologies are not new. However, their entry into the political arena, where everything is permeated with diplomatic protocol and a certain secrecy, causes some concern. Perhaps the most apparent concern is the use of “deepfake” technology to digitally manipulate another person’s appearance; with modern AI technology, voice imitation is also possible.

Diplomatic channels may be scrutinized by the intelligence agencies of other countries and by criminal groups that can gain access to specific technologies, such as wiretapping. Quite often, “secret” data (photos, videos, audio recordings), as well as “fake news” whose veracity an ordinary person cannot verify in any way, appear in the press. Such manipulations pose a significant threat to social stability and affect public opinion. Modern technologies can also be used in the political struggle against competing forces. Therefore, there is a need to rethink the “familiar” political process in light of the new realities, and possibly to develop new “digital” or “electronic” diplomatic protocols.

The study of the application of AI in politics is a young field. A search of Google Scholar as of June 23, 2023, for the query “artificial intelligence in politics” returns 61 results, and “AI in politics” returns 77. The same queries in the Google search engine for the same period produce 152,000 and 95,600 results, respectively. The publication sources are generally not political-science journals; more often, they are journals on new technologies that deal with the ethical aspects of AI use (Vousinas et al., 2022).

What, then, is the modern understanding of the concept of AI? Kaplan and Haenlein (2019) define it as “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.” If this definition is interpreted in relation to politics, we think AI can be described as a system that allows politicians to process information received from different sources and to generalize it into a single database used in the decision-making process.

AI can also be used for domestic political goals. A study in Portugal (Reis and Melão, 2013) suggested that the introduction of e-services supports an “active role of governments in responding to the needs of their citizens,” contributing to the development of “e-democracy.” The article points to increased transparency and trust in political institutions due to their widespread use. In our opinion, the paper lacks an analysis of possible counter-effects, where AI can become a weapon for falsification and for lowering the level of democracy. What are the mechanisms of interaction between the population and political institutions once the process is fully digitalized? This issue requires a detailed assessment, especially in countries where democratic principles are often violated.

The possibility of AI bias poses a potential risk and presents a new challenge for the global community, including politicians. In 2018, a study published by the Council of Europe assessed possible risks of discrimination resulting from algorithmic decision-making and other types of AI use (Zuiderveen Borgesius, 2018). Today, with advanced technologies, transferring decision-making power to algorithms can lead to discrimination against vulnerable groups, such as people with disabilities. Therefore, programmed decision-making should not always be based on cost-effectiveness principles: caring for vulnerable population groups is, in budgetary terms, a “burden” for the state, yet in a state governed by the rule of law it is obligatory and ethically justified.
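To make the risk concrete, here is a minimal, hypothetical sketch in Python (not drawn from the cited study) of how a purely cost-effectiveness ranking can quietly push the most expensive cases to the bottom of the queue; the applicants, benefit scores, and costs are invented for illustration.

```python
# Hypothetical illustration: a naive allocation rule that ranks social-support
# cases purely by "benefit per unit of cost". Because vulnerable groups often
# have higher per-case costs, such a rule systematically deprioritizes them.

from dataclasses import dataclass

@dataclass
class Case:
    applicant: str
    expected_benefit: float  # estimated benefit score (invented)
    cost: float              # estimated cost to the budget (invented)

cases = [
    Case("applicant_a", expected_benefit=8.0, cost=1_000),   # low-cost case
    Case("applicant_b", expected_benefit=9.0, cost=12_000),  # e.g. requires disability accommodations
]

# The purely "rational" rule: maximize benefit per unit of cost.
ranked = sorted(cases, key=lambda c: c.expected_benefit / c.cost, reverse=True)

for c in ranked:
    print(c.applicant, round(c.expected_benefit / c.cost, 4))
# applicant_b, despite offering the higher benefit, is ranked last; a legal and
# ethical constraint layer (human review, mandatory coverage rules) must be
# able to override this ordering.
```

The point of the sketch is not the arithmetic but the design choice: if cost-effectiveness is the only objective handed to the algorithm, the discrimination is built in by construction rather than introduced by a malfunction.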

The possibility of heads of state and government making political decisions based entirely on AI proposals is also quite controversial, since even the most rational decision from the algorithm’s point of view may be devoid of ethical grounds, contradict the political slogans of the leaders, or go against the objectives of the government or the provisions of the law. Therefore, human control and policy adjustment are mandatory at this stage of scientific development, and we believe they will remain so in the future.

From the point of view of the use of AI within an individual state, the possibilities are also wide. For example, online search engines such as Google hold significant information about users and their preferences. This information can be used for relatively “harmless” purposes, such as the targeted advertising of political campaigns. Likewise, by processing requests from the population, the most pressing issues requiring a response can be identified, and with the help of AI, dedicated tools for collecting feedback can be developed, improving communication between the government and the population – the potential voters. Accelerating and automating the delivery of services, such as issuing a necessary document or a certificate of employment, is also among the potential beneficial results of applying AI. It should be noted that, to varying degrees, these opportunities are already actively used in countries with high levels of economic development.
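As a rough sketch of the “identifying pressing issues” idea, the Python fragment below counts how often invented citizen queries touch a handful of hypothetical topics; a real feedback tool would rely on multilingual language models rather than keyword matching, but the aggregation principle is the same.

```python
# Minimal sketch (hypothetical data and keywords) of surfacing the most
# frequently raised topics from aggregated citizen queries.

from collections import Counter

queries = [
    "how do I get a certificate of employment online",
    "road repairs on my street",
    "certificate of employment processing time",
    "school enrollment deadline",
    "when will road repairs be finished",
]

topics = {
    "documents": ("certificate", "document"),
    "roads": ("road", "repairs"),
    "education": ("school", "enrollment"),
}

counts = Counter()
for q in queries:
    for topic, keywords in topics.items():
        if any(k in q.lower() for k in keywords):
            counts[topic] += 1

# Most frequently raised topics first, e.g. [('documents', 2), ('roads', 2), ('education', 1)]
print(counts.most_common())
```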

However, AI can also be used to spread misinformation and manipulate public opinion. AI tools are already being used to launch mass disinformation campaigns and to disseminate fake content, and fake news is sometimes observed during election campaigns.

Today, the advent of new technologies creates new challenges. The GDPR (General Data Protection Regulation), applicable in the EU since 2018, obliges organizations to inform individuals about data collection. Moreover, in 2021 a new proposal for broad “AI regulation” within the EU was put forward; if adopted, the document will become “the first comprehensive AI regulatory framework” (Morse, 2023). Adopting such a law puts the need for international regulation on the agenda, and various countries around the world may soon begin developing and adopting similar laws. The development and adoption of any law, however, require the participation of political institutions, which creates a new direction of activity and research within political science.

The global application of AI laws is also a political issue. A similar document – the UN cybercrime convention – is already under discussion. However, such laws, especially at the global level, will also have to be grounded in the protection of human rights, so as to exclude the legitimization of increased political control over the population on the Internet. Moreover, in the context of globalization, the mechanisms for policing AI-related crimes and enforcing punishment also remain unclear.

The use of digital platforms for diplomatic processes, such as negotiations, networking, and information exchange, has created a new field in the scientific literature – digital diplomacy. The “digitalization” of diplomacy takes place on different levels. Ministries and politicians create profiles on social networks, where they share their opinions on specific issues; it is no longer necessary to wait for an official briefing from the Foreign Ministry. Diplomats often express their position online, which can be considered a “semi-official” approach: ultimately, a post can always be deleted or dismissed with the claim that “the page has been hacked,” and in modern conditions such a risk does exist.

Recently, with the launch of ChatGPT, the media have been filled with articles about its role in the “future of diplomacy.” Diplomats can use AI to automate some of their work, such as preparing press releases. Another possibility is that prepared information can be distributed to all information platforms simultaneously with “one click,” which simplifies and speeds up the process. This matters because today most people receive information via the Internet, often directly on their smartphones. However, full automation in this case is also not without risks.
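The “one click” idea can be sketched as follows; the channel names and publish functions below are stand-ins, and real integrations would go through each platform’s official API with its own authorization step.

```python
# Minimal sketch: one approved press release pushed to every configured
# channel at once. The publishers are placeholders, not real platform APIs.

from typing import Callable, Dict

def publish_to_website(text: str) -> None:
    print(f"[website] {text}")

def publish_to_social_media(text: str) -> None:
    print(f"[social media] {text}")

def publish_to_press_list(text: str) -> None:
    print(f"[press mailing list] {text}")

CHANNELS: Dict[str, Callable[[str], None]] = {
    "website": publish_to_website,
    "social_media": publish_to_social_media,
    "press_list": publish_to_press_list,
}

def distribute(press_release: str) -> None:
    """Send one approved text to all channels in a single step."""
    for name, publish in CHANNELS.items():
        publish(press_release)

distribute("Ministry statement: talks concluded successfully.")
```

The convenience is obvious, but so is the risk: the same single step that speeds up publication also removes the intermediate checkpoints at which a human would normally catch an error before it reaches every platform at once.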

Although AI can be used to generate ideas, there is some concern about the confidentiality of information processing. There have already been reports of leaks of data entered into ChatGPT (Derico, 2023; Gurman, 2023). How safe is this in the case of secret or diplomatic documents, or the personal information of the diplomat who uses the platform? Moreover, the language of diplomacy is very sensitive to the wording and expressions used: text generated by a program may be ideal in terms of grammar but unacceptable in terms of diplomacy.
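One partial mitigation sometimes discussed is redacting obviously sensitive fragments before a draft is ever sent to an external service; the Python sketch below uses two illustrative regular-expression patterns and an invented example sentence, and it is no substitute for a policy that keeps classified material off external platforms altogether.

```python
# Illustrative redaction pass: strip simple personal identifiers from a draft
# before sending it to any external AI service. The patterns are examples
# only and would not catch classified content.

import re

REDACTION_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
}

def redact(text: str) -> str:
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

draft = "Contact the attaché at attache@example.org or +1 202 555 0100 before release."
print(redact(draft))
# Contact the attaché at [REDACTED EMAIL] or [REDACTED PHONE] before release.
```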

The use of AI and the general digitalization of society also affect diplomacy. Nevertheless, are we ready for “politics generated by AI”? AI opens a new page in politics and creates new challenges. Diplomacy has always required a certain flexibility from diplomats, and it must now be adapted to digital realities. Politicians and diplomats should be prepared for the possibility of data leaking onto the Internet and should double-check incoming information.

The potential for bias in AI algorithms is also a significant issue. Moreover, an answer’s veracity may be zero, since the program is designed to produce a response whether or not it is correct, and the content of that response depends on the algorithms specified by the developers. Automating the collection of information in political processes is therefore not always justified. On the one hand, the human brain cannot physically remember and process the enormous amount of information generated daily, and if a political officer collects information from official resources, automation can simplify the work. On the other hand, a reference to an unconfirmed resource may distort the original data and, accordingly, adversely affect the preparation of a report. Still, such tools can be extremely useful to politicians when addressing public inquiries and identifying the most pressing issues.
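The sourcing concern can be made concrete with a small sketch: before collected references are folded into a report, they are checked against an allowlist of official domains, and everything else is flagged for human verification. The domains and links below are placeholders chosen for illustration.

```python
# Minimal sketch: separate references from an (assumed) official-domain
# allowlist from references that a human must verify before use.

from urllib.parse import urlparse

OFFICIAL_DOMAINS = {"un.org", "europa.eu", "gov.example"}  # hypothetical allowlist

def is_official(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

references = [
    "https://press.un.org/en/statement",       # official source
    "https://random-blog.example.net/rumour",  # unconfirmed source
]

usable = [r for r in references if is_official(r)]
flagged = [r for r in references if not is_official(r)]

print("cite:", usable)
print("needs human verification:", flagged)
```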

The regulation of AI in practice has some peculiarities. At this stage of historical development, AI still cannot implement decisions independently in the real world; it can only carry out the tasks that people have assigned to it. We can analyze the benefits of its use, but ChatGPT and similar models only process information obtained from sources such as the Internet. Yet the prospect of global politics being regulated by AI, or of AI being programmed for that purpose, exposes us to the threat of “digital totalitarianism,” in which control begins to interfere with privacy and human rights. Therefore, legal regulation of AI use is crucial, and its algorithms should undergo an ethical and political assessment before implementation. Furthermore, various countries are interested in obtaining intelligence information in real-life conditions, and given the development of science, intelligence services will gain new opportunities for intervention; in practice, however, regulation in this area is rarely possible. Finally, AI is developing fast, and how it will be applied in practice once it reaches “independence” is an issue we will still have to solve.
