9 February 2026

Dehumanisation, powered by AI

Nishtha Sood, Jagpreet Singh

There were reports in January 2026 that IIT Bombay, in association with the Maharashtra government, is working on an AI-based tool that could detect and identify so-called “illegal” Bangladeshi nationals and Rohingya refugees. The tool, according to Maharashtra Chief Minister Devendra Fadnavis, is reportedly 60 per cent accurate. In other words, out of every 10 individuals tested with this tool, four could be falsely identified. Even if we were to assume that the tool is 100 per cent accurate—a highly unlikely scenario—its very purpose is problematic and dehumanising. It is intended not merely to classify individuals but to deny some of the most persecuted populations of the world their sense of belonging and legal existence.
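The arithmetic behind the article's claim can be sketched in a few lines. This is a minimal illustration using only the figures reported in the article (60 per cent accuracy); the function name is ours, and the calculation assumes "accuracy" means the simple share of correct classifications:

```python
def expected_misidentifications(accuracy: float, people_screened: int) -> int:
    """Expected number of wrong classifications at a given overall accuracy.

    Assumes 'accuracy' is the plain fraction of correct outputs,
    as the reported 60 per cent figure appears to be.
    """
    return round((1 - accuracy) * people_screened)

# At 60 per cent accuracy, 4 of every 10 people screened are misidentified,
# and the error count scales linearly with the number screened.
print(expected_misidentifications(0.60, 10))
print(expected_misidentifications(0.60, 1000))
```

The point the arithmetic makes is that a fixed error rate does not stay small: applied at population scale, a 40 per cent error rate produces hundreds of false identifications per thousand people screened.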

The application of artificial intelligence in state surveillance is often presented as value-free, objective, and efficient. However, as history has repeatedly shown us, technologies are not applied in a vacuum. Rather, they are shaped by the politics, prejudices, and power structures of the societies that apply them.
