28 February 2026

Is AI sovereignty possible? Balancing autonomy and interdependence

Brooke Tanner, Cameron F. Kerry, Andrew W. Wyckoff, Nicoleta Kyosovska, Andrea Renda, and Elham Tabassi

The concept of artificial intelligence (AI) sovereignty has entered policy discussions as governments confront the strategic importance of AI infrastructure, data, and models amid rising dependence on a small number of firms and jurisdictions. This report defines AI sovereignty as a spectrum of strategies to enhance a country’s capacity to make independent decisions about critical AI infrastructure deployment, use, and adoption, rather than literal autarky. Motivations vary, from protecting national security and resilience and supporting economic competitiveness, to ensuring cultural and linguistic inclusion in model training and datasets and strengthening influence in global governance. These aims are often legitimate, but “sovereign AI” can also become a vehicle for protectionism, fragmented markets and standards, and duplicative or stranded public investment.

The central finding is that full-stack AI sovereignty is structurally infeasible for almost any country because AI is a transnational stack with concentrated choke points across minerals, energy, compute hardware, networks, digital infrastructure, data assets, models, applications, and the cross-cutting enablers of talent and governance. The practical alternative is “managed interdependence,” an approach that relies on strategic alliances and partnerships to reduce risks throughout the AI stack. Countries can operationalize managed interdependence by mapping dependencies by layer, prioritizing feasible interventions, diversifying suppliers and partners, and embedding interoperability and portability through technical standards, procurement, and governance. Done well, managed interdependence can strengthen resilience and agency while preserving the benefits of open markets and cross-border collaboration.