Trisha Ray
Introduction
Sovereign AI has gained a foothold in several capitals around the world. As Michael Kratsios, the Trump administration’s acting director of science and technology policy, stated in 2024, “Each country wants to have some sort of control over our [sic] own destiny on AI.”1 Analysts have mapped the modes and methods for achieving sovereign AI, as well as its interplay with antecedents like data sovereignty.2 However, a critical gap remains: analysis of the stated goals of these initiatives and of the core pillars that distinguish sovereign AI from related concepts.
The goals outlined by governments are varied and wide-reaching: some center on preserving values or culture;3 others focus on the privacy and protection of citizens’ data;4 some initiatives emphasize economic growth and others national security;5 and finally, there is a set of concerns around the current global governance vacuum, where, in the absence of global frameworks, AI companies must be held accountable through physical presence.
However, each of these stated goals requires a different level of indigenized capability and control, and will therefore have varied consequences. This paper will:
- Outline the various stated goals of sovereign AI, suggesting illustrative categories.
- Hypothesize the reasons for the emergence of sovereign AI as a concept, with an analysis of industry buy-in for this concept.
- Propose a streamlined definition of sovereign AI and suggest policy implications.