Prachi Bansal
India’s push for digital IDs and algorithms is creating a hidden layer of inequality. The Aadhaar system, intended to enable effective and transparent delivery of several government welfare schemes, has become “more a barrier than an enabler,” especially for women in the informal sector.
About 36 percent of 200 migrant women workers interviewed for a study said they faced biometric authentication failures during pregnancy-related hospital visits. What will be the human cost if the future of welfare schemes is direct benefit transfers enabled by authentication mechanisms, biometrics and artificial intelligence?
Algorithmic bias is not new. Several years ago, a celebrated book examined the severe gender and racial biases embedded in Google’s autosuggestions. Most of the book’s illustrations remain valid today. For instance, a Google Images search for the term “beautiful” returns not paintings or landscapes but hundreds of women’s faces — young, light-skinned, and slim.
A second example is how ChatGPT generates different letters of recommendation for male and female students with identical scholarly achievements. Men are described as “ambitious,” “driven,” and “leaders” while women are “compassionate,” “supportive,” and “team players.” Similarly, Amazon scrapped an AI hiring tool that downgraded applicants whose resumes contained the word “women’s” (for example, “women’s chess club”).
Even digital platforms such as Google Maps and Wikipedia reflect stark geographic inequalities, with significant under-representation of the Global South. Despite their high population densities, regions such as South Asia and Africa remain digitally marginalized.
The examples cited above are technology-specific, but they hit home when digital governance becomes the primary mechanism for welfare delivery.