23 June 2025

Trust Issues: The Push for AI Sovereignty

Averill Campion

A willingness to be vulnerable is central to building trust. Yet for nations competing in the AI ecosystem, a key tension has emerged: how to manage reliance on US and Chinese infrastructure while upholding local norms and values. This challenge has pushed decision-makers to design risk management strategies that avoid compromising domestic ideals, especially where legal regimes collide: the EU's GDPR restricts transfers of personal data outside the bloc, while the US CLOUD Act compels American providers to disclose data regardless of where it is stored.

In practice, full AI sovereignty is unrealistic due to the deep interdependencies within the global AI ecosystem. Still, efforts to assert sovereignty reflect underlying trust concerns, national security interests, and a desire for innovation-led prestige. For its part, the US should acknowledge that its allies may seek autonomy in targeted AI domains to achieve national wins without severing ties to the broader ecosystem.

Meanwhile, US hyperscalers must confront persistent skepticism about data privacy, particularly given the US government's authority under the CLOUD Act to access data held by American service providers, wherever it resides. Acknowledging this limitation openly can foster a more honest dialogue and mutual understanding. Choosing local infrastructure, in this light, is often less about rejecting global collaboration and more about aligning AI development with national security priorities and cultural values. That, too, is a form of sovereignty.

Nations Weigh Competition Against Norms and Risk

Three main approaches to sovereign AI are emerging. The first seeks to eliminate as much external influence as possible, aiming for full national control. The second favors a more open model based on strategic autonomy—balancing collaboration with independence. The third focuses on leveraging local norms and regulations to shape the behavior of external hyperscalers, even amid unresolved data privacy concerns. More broadly, the idea of sovereign technology spans multiple domains, including cybersecurity, digital infrastructure, data governance, and AI.
