25 February 2024

India’s AI Strategy: Balancing Risk and Opportunity

AMLAN MOHANTY, SHATAKRATU SAHU

India’s national strategy on artificial intelligence (AI) pursues a delicate balance between fostering innovation and mitigating risk.

At the Global Technology Summit (GTS) 2023, India’s AI strategy was a key topic of discussion: India’s ministerial representative spoke about the need for policy enablers and guardrails, industry leaders presented a use-case-led AI strategy, and global policymakers emphasized the value of India’s governance model for the rest of the world.

In this essay, we dig deeper into these discussions and share insights on the key elements of India’s national AI strategy and the trade-offs involved in balancing risks and opportunities.

INDIA AND THE AI OPPORTUNITY

Over the years, the Indian government has actively encouraged AI applications for social welfare. Some examples include applications to detect diseases, increase agricultural productivity, and promote linguistic diversity.

India’s proposed model to deliver impact at scale using technology is compelling. For example, its recent efforts to leverage digital public infrastructure (DPI) to promote financial inclusion have been endorsed by the World Bank in glowing terms.

Now, with the world’s attention quickly moving to the transformative potential of AI, India’s national strategy will be influential. In particular, India’s pro-innovation and welfare-based approach to AI holds immense value for developing countries in the Global South.

On the global stage, India has delivered this message emphatically. The recent G20 leaders’ declaration, signed in New Delhi, advocates a “pro-innovation governance approach” to AI. Subsequently, at the Global Partnership on Artificial Intelligence Summit, which was also hosted by India, the concept of “collaborative AI” was formulated, wherein member states agreed to promote equitable access to AI resources for the developing world.

KEY ELEMENTS OF INDIA’S AI STRATEGY

In this section, we outline three factors that policymakers should consider as they seek to translate these principles into policy as a part of India’s national AI strategy.
  1. Data: India already views data as an enabler of innovation. It has created technical protocols for “data empowerment” and drafted national policies to encourage the sharing of anonymized data for public benefit. Recently, India enacted a personal data protection law that incorporates key privacy principles but excludes publicly available personal data from its scope, which may enable the use of such data for training AI models. From an AI perspective, an immediate challenge is the lack of structured data in local Indian languages, which has led to issues of bias and underrepresentation. Therefore, India should prioritize initiatives that make digital content more accessible. At the same time, in the spirit of “collaborative AI,” it should partner with countries on data sharing to ensure that foundational AI models are representative of Indian culture.
  2. Compute: India’s proposal to enhance its computing power, also referred to as “compute,” is likely to face capital, labor, and infrastructure challenges due to the high cost of advanced graphics processing units, the shortage of skilled labor, and market concentration. To ensure that AI-led innovation continues unhindered, India should focus on the development of a “compute stack” that is scalable, self-sufficient, and sustainable. As a first step, policymakers should arrive at a reliable measure of India’s current compute capacity and projected needs. This would help inform strategy on, for example, which types of semiconductors should be incentivized and locally manufactured. India’s policymakers should also evaluate proposals to democratize access to compute, such as those presented at the GTS. This process can continue alongside exploring new technology paradigms to promote global access to compute.
  3. Models: Based on discussions at the GTS, an emerging debate is whether India can meet its national AI objectives through small, open-source models customized for specific use cases as opposed to proprietary models that are general-purpose, compute-intensive, and mostly of foreign origin. The consensus appears to be that this is a false binary—there will be room for both open-source and proprietary models in India’s AI future. Some technologists say that small, open-source models trained on relevant datasets will suit the proposed applications in India. Others believe that closed-source systems may be better suited for military applications and other sovereign purposes. Historically, India has taken a strong position on this issue. In 2014, it announced an official policy to encourage the adoption of open-source software in government organizations. Whether this policy will extend to AI systems is an important strategic decision.

RECOGNIZING THE RISKS

Despite the attention on AI-led innovation, India’s policymakers remain sensitive to the risks at hand. At the AI Safety Summit in November 2023, India’s ministerial representative stated that innovation should not get ahead of regulation and signed the Bletchley Declaration, which outlines the safety risks of “frontier models.” In addition, India has endorsed the need for further engagement on fairness, accountability, transparency, privacy, intellectual property, and the development of trustworthy and responsible AI.

With that said, India’s domestic approach to regulating AI remains underdeveloped. While the principles of openness, safety, trust, and accountability have been a core part of the government’s regulatory agenda, there does not appear to be a coherent strategy to regulate AI at present. For example, the current approach to combating deepfakes has been to issue ad hoc advisories and legal threats even as the problem persists. Some critics have described this approach as “surface-level, rushed, and lacking deep research.”

Instead, the government should adopt a holistic approach to AI governance through the prism of risk and safety. It could issue technical guidance on how the responsible AI principles published in 2021 may be implemented to address the issue of misinformation generally. This may prove more helpful than issuing responses in the wake of specific incidents involving public figures. By doing so, the government would establish thresholds for transparency and accountability that can be transposed to different contexts.

India should also have a clear strategy to address emerging regulatory issues involving AI systems. This will entail developing a risk-based taxonomy, an updated platform classification framework for different actors in the AI value chain, and appropriate liability frameworks, including safe harbor protections for AI systems.

FINDING THE RIGHT BALANCE

Governments around the world are looking for a model that strikes the right balance between innovation and safety. As India looks to formalize its strategy with a national AI program that is expected to cost more than a billion dollars, there is significant global interest.

India’s success in deploying technology at scale through innovative uses of DPI has captured the world’s attention. At the same time, its proposed light-touch approach to AI regulation is likely to resonate with countries in the Global South, which do not want the developed world’s obsession with “existential risk” to hinder their march to progress.

There is a lot riding on India’s AI strategy, both for India and the rest of the world.
