The World Economic Forum’s Tunis Hub is exploring emotion recognition technology, specifically facial expression mapping, to identify citizen dissatisfaction with government services in real time. The initiative aims to address inefficient government bureaucracy and inadequate public services in Tunis. An algorithm analyzes facial expressions during interactions with government services, flagging high dissatisfaction levels immediately and enabling timely interventions. The software generates a dashboard with various reporting capabilities, such as assessing the sense of safety by capturing emotions like fear and distress, and evaluating treatment by civil servants by detecting emotions like anger and happiness.
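The flagging step described above can be sketched in code. This is an illustrative sketch only: the emotion labels, weights, and threshold below are assumptions, not values published by the WEF or the Tunis Hub.

```python
# Illustrative sketch of the flagging step: map per-frame emotion
# probabilities (as an upstream classifier might emit them) to a
# dissatisfaction score, and flag interactions that cross a threshold.
# Labels, weights, and threshold are invented for illustration.

NEGATIVE_WEIGHTS = {"anger": 1.0, "fear": 0.8, "distress": 0.9, "sadness": 0.6}

def dissatisfaction_score(emotions: dict) -> float:
    """Weighted sum of negative-emotion probabilities, offset by happiness."""
    negative = sum(NEGATIVE_WEIGHTS.get(k, 0.0) * v for k, v in emotions.items())
    return max(0.0, negative - emotions.get("happiness", 0.0))

def flag_interaction(frames: list, threshold: float = 0.7) -> bool:
    """Flag an interaction if the mean score across frames exceeds the threshold."""
    scores = [dissatisfaction_score(f) for f in frames]
    return sum(scores) / len(scores) > threshold

frames = [
    {"anger": 0.8, "happiness": 0.05},
    {"anger": 0.6, "distress": 0.5, "happiness": 0.1},
]
print(flag_interaction(frames))  # sustained negative emotion across frames
```

A real deployment would of course face the accuracy and bias questions discussed later in this document; the sketch only shows the aggregation logic.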

According to WEF, this approach is part of a broader trend where artificial intelligence and biometric technologies are being explored for governance and public service improvement. While the specific implementation in Tunis focuses on emotional recognition from facial expressions, similar technologies are being developed and debated globally for various applications, including security, customer service, and public sentiment analysis. The ethical implications and potential for bias in such systems are often subjects of discussion, particularly concerning privacy and the accuracy of emotion detection across diverse populations.
WEF Framework for action Facial recognition 2020: https://www3.weforum.org/docs/WEF_Framework_for_action_Facial_recognition_2020.pdf
The Tunis Hub is one of the oldest and most active chapters of the Global Shapers Community, a youth-led network founded by the World Economic Forum (WEF) in 2011.
The Global Shapers Community, an initiative of the World Economic Forum, operates through city-based hubs in India, each led by young leaders (typically under 30) driving local impact projects in areas like sustainability, education, health, and civic engagement. As of November 2025, India has over 20 active hubs, part of a global network of 500+ hubs across 150 countries. Membership is selective, with 20–30 Shapers per hub, but full rosters are not public for privacy reasons; curators, alumni, and notable contributors are typically the only members highlighted in reports, on LinkedIn, and in WEF documents.
Below is a comprehensive list of known Indian hubs, compiled from official WEF sources, hub websites, and reports. I’ve included founding years (where available), brief overviews, and publicly documented members (curators, project leads, or alumni). For complete rosters, visit individual hub pages on globalshapers.org/hubs or LinkedIn.
| Hub Name | City/State | Founding Year | Notable Members (Names & Roles) |
|---|---|---|---|
| Ahmedabad Global Shapers | Ahmedabad, Gujarat | 2013 | Hub focuses on entrepreneurship; members include young innovators in urban development (no specific names in recent records). |
| Bangalore Global Shapers (Bengaluru I) | Bengaluru, Karnataka | 2012 | Arundhuti Gupta (Founding Curator, Founder of Mentor Together); Alok Medikepura Anil (Curator, climate initiatives); Tony Dsouza (Co-founder, 10x Impact Labs); Arjun (Partnerships Associate, Pravaig EV). |
| Bangalore Global Shapers II | Bengaluru, Karnataka | 2018 | Nithya J. Rao (Curator, education focus); Sanjana (Event Lead, climate projects). |
| Bhopal Global Shapers | Bhopal, Madhya Pradesh | 2013 | Raman Singh Saluja (Early Curator/Alumni, entrepreneur). |
| Chennai Global Shapers | Chennai, Tamil Nadu | 2012 | Smruthi Swaminathan (Curator 2022–2023, manifesto project); Agraja Mahesh (Curator 2024–2025); Vrithika Joseph (Curator 2024–2025); Nitya J. (Curator 2024–2025); Sharath Balasubramanian (Project Head, Volunteer Guide); Siddarth Hande (Alumni, Kabadiwalla Connect founder). |
| Dehradun Global Shapers | Dehradun, Uttarakhand | 2011 | Gaurav (Early Shaper, animation specialist; led art-for-cause projects). |
| Guwahati Global Shapers | Guwahati, Assam | 2017 | Members elected annually (names not public); notable: team leads in climate and mental health (e.g., collaborations with Vartaaman). |
| Gurugram Global Shapers | Gurugram, Haryana | 2014 | Hub emphasizes tech and policy; members include startup founders. |
| Hyderabad Global Shapers | Hyderabad, Telangana | 2013 | Focus on health equity; no specific names in records. |
| Indore Global Shapers | Indore, Madhya Pradesh | 2013 | Raman Singh Saluja (Founding Curator, represented at 2014 Geneva Meet). |
| Jaipur Global Shapers | Jaipur, Rajasthan | 2014 | Notable: leads rural digital-divide projects. |
| Kochi Global Shapers | Kochi, Kerala | 2013 | Collaborates with Chennai; members in sustainability (no names specified). |
| Kolkata Global Shapers | Kolkata, West Bengal | 2012 | Anis (Early Shaper/Alumni, Vidya Hub founder, entrepreneur). |
| Lucknow Global Shapers | Lucknow, Uttar Pradesh | 2015 | Members elected annually; notable: web-literacy campaign leads (per the 2022–2023 report). |
| Mumbai Global Shapers | Mumbai, Maharashtra | 2012 | Lokesh Todi (Alumni, Young Global Leader 2024). |
| Mysuru Global Shapers | Mysuru, Karnataka | 2018 | Alok Medikepura Anil (Founding Curator, 1t.org and Yes Cities initiatives). |
| Navi Mumbai Global Shapers | Navi Mumbai, Maharashtra | 2016 | Collaborates on volunteering; no specific names. |
| New Delhi Global Shapers | New Delhi, NCT | 2012 | Swarnima Bhattacharya (Project Lead, Health Awareness); Kanika Rajput (Campaign Partner); Mridu Gupta (Health Advocate); Pranidhi Sawhney (Fellow); Rintu Kutum (Fellow). |
| Pondicherry Global Shapers | Puducherry | 2014 | Ties with Chennai; focuses on coastal sustainability. |
| Pune Global Shapers | Pune, Maharashtra | 2013 | Innovation in education and startups. |
Additional Notes
- Total hubs: 22 and expanding (e.g., emerging hubs in Tier-2 cities such as Coimbatore).
- Notable Alumni Across Hubs: Murchana Roychoudhury (Kolkata/Delhi, air pollution expert); Saurabh Shah (Delhi, youth entrepreneurship); Gulika Reddy (Chennai, legal consultant).
- Recruitment: Hubs recruit via applications on globalshapers.org; criteria include leadership potential and local ties.
- Impact: Indian hubs have delivered 100+ projects, e.g., Chennai’s Kabadiwalla Connect (waste management) and Delhi’s #FightHPV campaign.
WEF’s 10 Principles for Action on Facial Recognition Technology (FRT)
The World Economic Forum (WEF), through its Centre for the Fourth Industrial Revolution and the Artificial Intelligence and Machine Learning Platform, developed the “10 Principles for Action” as part of its Responsible Limits on Facial Recognition project. Launched in March 2020 and refined through pilots (e.g., at Tokyo-Narita Airport with NEC Corporation), these principles provide an actionable governance framework for the ethical deployment of FRT, particularly in use cases like flow management (e.g., airports, public services). They aim to balance innovation with safeguards for privacy, human rights, and trust, going beyond general guidelines to operationalize compliance via self-assessment questionnaires and third-party audits (e.g., by AFNOR Certification).
The principles were co-drafted with multistakeholder input from governments, industry, civil society, and experts, emphasizing proportionality, transparency, and risk mitigation. They are not legally binding but serve as a blueprint for policymakers, engineers, and organizations to certify responsible use. Below is a detailed list of the 10 principles, drawn from WEF’s official framework documents.
| Principle | Description |
|---|---|
| 1. Necessity and Proportionality | FRT deployment must be essential for the intended purpose and limited to the minimum necessary scope, avoiding overreach that could infringe on privacy or rights. Assess alternatives first and ensure the technology’s benefits outweigh risks. |
| 2. Purpose Limitation | Data collected via FRT should only be used for the explicitly defined purpose (e.g., identity verification in flow management), with no repurposing without explicit consent or legal basis. |
| 3. Data Minimization | Collect and retain only the minimal data required, deleting it immediately after use (e.g., no long-term storage of facial scans unless justified and secured). |
| 4. Lawful Basis | Implementation must comply with applicable laws, including data protection regulations (e.g., GDPR equivalents), and obtain a clear legal foundation for processing biometric data. |
| 5. Transparency and Accountability | Clearly inform users about FRT use via signage, notices, or interfaces; maintain audit trails and accountability mechanisms for decisions influenced by the technology. |
| 6. Accuracy and Reliability | Systems must achieve high accuracy rates (e.g., via diverse training datasets to reduce bias) and include human oversight for critical decisions, with regular testing and error-handling protocols. |
| 7. Fairness and Non-Discrimination | Mitigate biases that could disproportionately affect groups (e.g., by race, gender, or age) through inclusive datasets, impact assessments, and equitable design. |
| 8. Security | Protect FRT systems and data against breaches, unauthorized access, or tampering using state-of-the-art encryption, access controls, and cybersecurity standards. |
| 9. Consent and User Rights | Where possible, obtain informed, granular consent; respect user rights to access, rectify, or delete data, and provide opt-out options without penalizing users. |
| 10. Oversight and Redress | Establish independent oversight (e.g., ethics boards or regulators) and accessible mechanisms for complaints, appeals, and remedies in case of misuse or harm. |
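Principle 3 (data minimization) translates directly into engineering practice: a verification routine should return only the match decision and let the raw capture go out of scope. Below is a minimal sketch under that assumption; the `embed` function is a stub standing in for a real face-embedding model, not an actual FRT API.

```python
# Sketch of Principle 3 (data minimization): verify, then discard.
# `embed` is a stub so the example runs; a real system would run a
# face-embedding network here. Nothing biometric is retained after the call.
import math

def embed(image_bytes: bytes) -> list:
    # Stub embedding: scale the first few bytes into a feature vector.
    return [b / 255.0 for b in image_bytes[:4]]

def cosine_similarity(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def verify_and_discard(live_capture: bytes, enrolled: list, threshold: float = 0.9) -> bool:
    """Return only a boolean; the live capture and its embedding go out of scope."""
    match = cosine_similarity(embed(live_capture), enrolled) >= threshold
    del live_capture  # no storage, no logging of the raw scan
    return match

enrolled = embed(bytes([200, 180, 160, 140]))
print(verify_and_discard(bytes([200, 180, 160, 140]), enrolled))  # identical capture matches
```

The design choice worth noting is the return type: exposing only a boolean (rather than the score or the embedding) keeps downstream systems from repurposing biometric data, which also supports Principle 2 (purpose limitation).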
Implementation and Impact
- Toolkit and Certification: Accompanying the principles is a self-assessment questionnaire for organizations to evaluate compliance, plus a certification scheme piloted in 2020–2022. This has been adapted for specific sectors, like law enforcement (e.g., a 2022 white paper with INTERPOL and UNICRI, expanding to 9 principles tailored for investigations).
- Global Adoption: Tested at sites like Narita Airport, the framework influences policies in Europe (e.g., EU AI Act alignments) and beyond, promoting “trust by design.” WEF invites stakeholders to refine it further.
- Broader Context: These principles address FRT’s dual-use potential—enhancing efficiency (e.g., contactless verification post-COVID) while curbing risks like surveillance or discrimination.
WEF Events and Endorsements
- TradeTech Forum (2024): WEF President Børge Brende praised Aadhaar as “key for unlocking” India’s economic growth, noting its role in seamless verification for trade and services.
- Societal Platforms Discussion (2017, with World Bank/CGD): Featured UIDAI Founding Chairman Nandan Nilekani on distilling Aadhaar’s architecture for platforms like EkStep (education), emphasizing its “Cambrian” innovation potential.
- G20 and Global DPI Guide (2024): WEF collaborated with G20, World Bank, and GPFI on a policy guide citing India Stack (Aadhaar + UPI) as a blueprint for financial inclusion in developing nations, extending to health and welfare.
Indirect Ties via Global Partners

WEF’s work intersects with UIDAI through shared collaborators:
- World Bank: Praised Aadhaar in its 2016 Development Report as an advanced DPI tool; UIDAI partnered with the Bank (2021) to export Aadhaar architecture globally, aligning with WEF’s DPI advocacy.
- UN and G20: UIDAI’s 2021 UN collaboration for DPI deployment mirrors WEF’s 2019 UN Strategic Framework on digital cooperation, where Aadhaar is a case study.
- Private Sector: WEF-endorsed innovations like NEC’s adjusted algorithms for Aadhaar biometrics (2024) enhance accuracy for UIDAI’s systems. Recent UIDAI partnerships (e.g., 5-year R&D with the Indian Statistical Institute in 2025 for fraud detection) echo WEF’s call for data-driven security.
UIDAI Offline Face Authentication (Face Scan) Under Scrutiny
The Unique Identification Authority of India’s (UIDAI) offline face authentication system—commonly referred to as “Face Scan” or “Face Authentication”—enables contactless, QR code-based identity verification without internet access. Launched in beta in April 2025 and slated for full rollout by late 2025 via a new e-Aadhaar mobile app, it matches live selfies against stored enrollment photos for uses like hotel check-ins, office entry, event access, exams (e.g., NEET), and welfare services. While UIDAI promotes it for convenience and privacy (e.g., no data storage post-verification, selective sharing), it has drawn scrutiny over potential privacy invasions, biometric risks, and expansion to everyday scenarios, echoing broader Aadhaar concerns from the 2018 Supreme Court ruling.
As of November 29, 2025, opposition is emerging but not yet formalized into major lawsuits. Criticisms focus on surveillance creep, consent coercion, accuracy biases (e.g., for rural/older users or varying lighting), and data security amid deepfake threats. UIDAI’s November 2025 draft rules allowing “Offline Verification Seeking Entities” (OVSEs) like restaurants or societies to request scans have amplified debates, with calls for stricter audits.
Key Opponents and Their Concerns
Opposition stems from digital rights groups, legal experts, and civil society, building on historical Aadhaar critiques. Below is a table of notable entities and individuals raising issues, based on 2025 reports.
| Opponent/Group | Description & Role | Specific Concerns Raised | Key Actions/Statements (2025) |
|---|---|---|---|
| Internet Freedom Foundation (IFF) | Leading Indian digital rights NGO advocating for privacy and free expression. | Unauthorized biometric collection; risks to vulnerable groups (e.g., data leaks in welfare schemes); potential for “ID policing” in private spaces like societies. | In April 2025, IFF wrote to regulators citing a Chennai firm’s illegal data grab for welfare verification; flagged NEET face-scan tests as violating consent norms under Aadhaar Act. |
| Civil Society Organizations (Collective) | Umbrella of NGOs like Common Cause and Centre for Internet and Society (CIS), petitioners in past Aadhaar suits. | Expansion to non-essential uses (e.g., restaurant entry) blurs voluntariness; exclusion errors for marginalized (e.g., 1-2% biometric failures denying access). | Raised alarms in May 2025 over NEET pilot, urging pause until bias audits; referenced 2018 SC ruling on data minimization. |
| Legal Experts (e.g., Apar Gupta, Anupam Gulati) | Privacy lawyers and former SC advisors; Gulati co-authored critiques on Aadhaar expansions. | Legal overreach in draft OVSE rules; inadequate safeguards against spoofing/deepfakes across demographics/devices. | Gupta (EFF India affiliate) warned in ET op-ed (June 2025) of “surveillance normalization”; Gulati petitioned for review of 2019 Amendments enabling private use. |
| Former SC Judge B.N. Srikrishna | Chaired 2018 privacy committee; critiques govt data policies. | Unconstitutional broadening of biometrics; risks fraud if QR codes are forged or scans stored covertly. | In July 2025 interview, called offline scans “privacy Trojan horse,” urging DPDPA enforcement for granular consent. |
| Media & Tech Analysts (e.g., Medianama, The Wire) | Investigative outlets tracking DPI. | Privacy vs. convenience imbalance; rural rollout challenges (e.g., poor lighting causing failures). | Medianama (May 2025) reported NEET test concerns from experts; The Wire (Nov 2025) highlighted draft rules as “easing mass surveillance.” |
Public Outcry Against UIDAI’s Offline Aadhaar Face-Scan System: Overview
As of November 29, 2025, the Unique Identification Authority of India’s (UIDAI) proposed offline Aadhaar verification system—using QR codes and face scans for “proof of presence”—has sparked early but growing concerns, particularly over its potential routine use for entry into restaurants, offices, housing societies, hotels, events, and exams. Announced in mid-November 2025 after consultations with 250+ stakeholders, the system aims to replace physical/photocopy Aadhaar cards with a privacy-focused, internet-free method via a new e-Aadhaar app (beta launched earlier in 2025, full rollout imminent). UIDAI emphasizes security and convenience, but critics warn it could normalize biometric surveillance in everyday life, echoing past Aadhaar privacy battles (e.g., 2018 Supreme Court ruling).
Who Is Opposing and Why?
Opposition is primarily from digital rights advocates, legal experts, and civil society, building on historical Aadhaar scrutiny. They argue the system risks violating the 2018 SC mandate for data minimization and voluntariness, potentially enabling “ID policing” in non-essential scenarios. Below is a table of key opponents, their stances, and reasons, drawn from recent reports and social media.
| Opponent/Group | Who They Are | Key Reasons for Opposition | Notable Actions/Statements (2025) |
|---|---|---|---|
| Internet Freedom Foundation (IFF) | Leading Indian digital rights NGO; past Aadhaar litigants. | Privacy erosion via routine biometric scans; consent coercion in private spaces (e.g., restaurants denying entry); bias/exclusion for marginalized groups (e.g., poor lighting causing failures). | April–November: Flagged NEET exam pilots as “unconsented surveillance”; urged pause on offline rollout until bias audits under DPDPA, 2023. |
| Centre for Internet and Society (CIS) & Common Cause | Civil society groups; co-petitioners in 2012–2018 Puttaswamy cases. | Expansion to “everyday movement” (e.g., society gates) blurs public/private lines; deepfake/spoofing risks; demographic biases (e.g., false negatives for women/elderly). | May–November: Submitted feedback to UIDAI on draft OVSE rules; warned of “profiling” in events/hotels, citing 2018 SC dissent on surveillance. |
| Legal Experts (e.g., Apar Gupta, Anupam Gulati) | Privacy lawyers; Gupta (EFF affiliate), Gulati (Aadhaar amendment challenger). | Unconstitutional overreach (violates proportionality test); inadequate safeguards against data leaks/hacks; “Trojan horse” for mass tracking. | June–November: Op-eds in ET/The Wire called it “surveillance normalization”; Gulati’s pending petition on 2019 Amendments may incorporate offline scans. |
| Former SC Judge B.N. Srikrishna | Chaired 2018 privacy committee; data protection critic. | Weak enforcement of consent; risks fraud if QR codes forged; expands biometrics beyond welfare essentials. | July–November: Interviews labeled it a “privacy Trojan horse”; urged DPDPA-mandated granular consent before rollout. |
| Media/Tech Analysts (e.g., Medianama, The Wire, India Today) | Investigative outlets tracking DPI/privacy. | Imbalance of convenience vs. rights; rural/tech access barriers; potential for “function creep” into surveillance. | May–November: Articles questioned “proportional use” in restaurants/societies; India Today noted “raising important questions.” |
| Individual Voices on X (e.g., @skbytes, @kachatterjee, @shobhitic) | Tech users/privacy advocates. | Glitches (e.g., timeouts); general fears of compulsory IDs denying access (e.g., food/welfare); spam/privacy from linked services. | October–November: @skbytes warned of “freedom vanishing” from glitches; @kachatterjee highlighted deepfakes/surveillance; @shobhitic criticized restaurant OTP spam as a “shitshow.” |
Why the Backlash? Core Concerns
- Privacy & Surveillance: Scans could track movements without oversight, turning Aadhaar into a “backdoor” for profiling (e.g., linking restaurant visits to behaviour data).
- Consent & Voluntariness: “Routine” use (e.g., denied entry to offices/societies) implies coercion, violating SC’s 2018 emphasis on optionality.
- Technical/Security Risks: Deepfakes, demographic biases (the cited 95% accuracy claims are unverified across diverse demographics), and breaches (e.g., the reported 2023 leak of 815 million Indians’ identity records).
- Exclusion: Fails for elderly/rural users (e.g., lighting issues); could deny access to essentials, as in past welfare denials.
- Overreach: From welfare tool to “everyday ID,” raising “Big Brother” fears without robust DPDPA enforcement.
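One of these concerns, that aggregate accuracy claims can mask group-level failures, is easy to demonstrate numerically. The figures below are invented for illustration, not measured UIDAI error rates; the point is that false non-match rates (FNMR) should be reported per demographic group, not only overall.

```python
# Illustrative only: why an aggregate accuracy claim can hide bias.
# The numbers are invented; the method (per-group FNMR) is the point.
from collections import defaultdict

# (group, genuine_attempt_succeeded) pairs from a hypothetical evaluation
attempts = (
    [("urban_young", True)] * 97 + [("urban_young", False)] * 3 +
    [("rural_elderly", True)] * 88 + [("rural_elderly", False)] * 12
)

def fnmr_by_group(attempts):
    """False non-match rate (failed genuine attempts / total) per group."""
    totals, failures = defaultdict(int), defaultdict(int)
    for group, ok in attempts:
        totals[group] += 1
        if not ok:
            failures[group] += 1
    return {g: failures[g] / totals[g] for g in totals}

overall = sum(not ok for _, ok in attempts) / len(attempts)
print(f"overall FNMR: {overall:.3f}")  # looks acceptable in aggregate
print(fnmr_by_group(attempts))         # but one group fails four times as often
```

Here the overall failure rate is 7.5%, yet the hypothetical rural/elderly group fails at 12% versus 3% for the other group, which is exactly the kind of disparity critics say aggregate claims obscure.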
Government/UIDAI Response
UIDAI counters that the system enhances privacy (no photocopies, no server-side data sharing) and is “voluntary,” with OVSE onboarding fees and consent screens. CEO Bhuvnesh Kumar announced that rules due in December 2025 would “discourage” misuse and ban the collection of Aadhaar copies. The SITAA scheme (launched October 2025) invites anti-deepfake innovations.
Outcry could grow post-December rules; watch for IFF/CIS petitions. For real-time updates, check UIDAI.gov.in or #AadhaarOffline on X.

How the offline facial recognition system will work
- Aadhaar App: A new version of the Aadhaar app will power the system, which is expected to launch in the coming months.
- QR Codes: Secure QR codes will be a key part of the new system for offline verification.
- Proof of Presence: A face scan will be used to confirm a user’s identity through a “proof of presence” feature, without needing to connect to central UIDAI servers in real-time.
- Benefits: The goal is to speed up verification, reduce the misuse of physical Aadhaar cards, and improve privacy by giving users more control over what data is shared.
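In principle, the flow described above combines two local checks: verifying the QR payload’s signature and matching a live face embedding against data carried in the QR, neither of which requires contacting UIDAI servers. The sketch below is an assumption-laden illustration, not UIDAI’s actual protocol: it uses a symmetric HMAC where a real deployment would use asymmetric signatures, and an invented payload format.

```python
# Sketch of an offline "proof of presence" flow:
# (1) check the QR payload's signature without any network call, then
# (2) compare a live face embedding against the embedding in the QR.
# The signing scheme (HMAC) and payload format are stand-ins, not UIDAI's.
import hmac, hashlib, json, math

ISSUER_KEY = b"demo-issuer-key"  # in practice: an asymmetric key pair

def make_qr_payload(name, embedding):
    body = json.dumps({"name": name, "embedding": embedding}).encode()
    sig = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verify_offline(payload, live_embedding, threshold=0.9):
    # Step 1: integrity of the QR data (no server contact needed).
    expected = hmac.new(ISSUER_KEY, payload["body"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, payload["sig"]):
        return False
    # Step 2: local face match against the embedding carried in the QR.
    enrolled = json.loads(payload["body"])["embedding"]
    dot = sum(a * b for a, b in zip(enrolled, live_embedding))
    norm = (math.sqrt(sum(a * a for a in enrolled))
            * math.sqrt(sum(b * b for b in live_embedding)))
    return dot / norm >= threshold

qr = make_qr_payload("resident", [0.2, 0.9, 0.4])
print(verify_offline(qr, [0.2, 0.9, 0.4]))  # matching embedding passes
```

A real system would also bind the payload to an expiry time and use public-key signatures so that verifiers never hold the signing key, which is presumably where the “secure QR codes” in UIDAI’s description come in.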
Potential use cases
- Hotel and lodge check-ins
- Entry to gated housing societies
- Access to offices and data centers
- Hospital admissions
- Student verification for exams
- Verification of service providers like cab drivers and delivery personnel
India continues to grapple with the widespread and largely unregulated deployment of facial recognition technology (FRT) by both government agencies and private entities, leading to significant privacy concerns and ongoing legal challenges.
The use of FRT has proliferated across India over the past seven years. This growth has been driven by factors such as the debates surrounding the national biometric ID system (Aadhaar), failures of other verification methods, increased street surveillance, and government efforts to modernize law enforcement and national security operations.
In India, the use of face authentication and facial recognition technologies has drawn significant scrutiny, leading to court cases and legal challenges centered on privacy, accuracy, the absence of a robust legal framework, and the technologies’ impact on fundamental rights.
Comparative Analysis
The document compares the Aadhaar system with biometric regulations in the EU and US:
- European Union: The General Data Protection Regulation (GDPR) provides robust protections, demanding explicit consent for biometric data usage and imposing strict obligations on data processors, which contrasts sharply with India’s lax approach.
- United States: The US operates under a patchwork of sectoral laws without comprehensive protections specifically for biometrics. Although some states have enacted privacy laws, the absence of an overarching federal framework leaves many areas insufficiently regulated.
- Consent and Control: The issues of consent in biometric data collection are crucial. Aadhaar’s policies fall short of ensuring informed consent, raising concerns about control and autonomy for individuals.
- Need for Legislative Action: The analysis emphasizes that comprehensive data protection legislation is essential to address the vulnerabilities of biometric identity systems, preventing their use as tools for oppression.
Legal Frameworks Governing Digital Identity and Privacy
An overview of various legislative frameworks related to digital identity ecosystems, with an emphasis on privacy concerns and technical identity-management statutes. Notably, certain statutes, such as the EU’s Electronic Identification and Trust Services (eIDAS) Regulation (Regulation (EU) No 910/2014), establish critical standards for electronic transactions within the EU. The regulation addresses electronic signatures, electronic seals, time stamps, and trust services, all key components in fostering secure digital interactions within the EU’s economic jurisdictions.
Key Statutes and Their Focus Areas
- Driver’s Privacy Protection Act (1994): Restricts the use of personal information from motor vehicle records for commercial purposes. Because it applies only to motor vehicle records, it is narrowly applicable.
- E-Government Act of 2002: Introduces privacy provisions designed to protect personal data in electronic government services, emphasizing the need for regulations that uphold privacy in information sharing, especially concerning biometric data.
- REAL ID Act of 2005: Effectively mandates that states adhere to federal standards for driver’s licenses and identification cards, creating uniformity in identity verification. It also raises concerns about the potential creation of a national ID database.
- Illinois Biometric Information Privacy Act (2008): Recognized as one of the most stringent biometric privacy laws, it requires consent from individuals before collecting biometric data, enhancing user transparency and control over personal information.
Emerging Technologies and Their Regulatory Challenges
The document discusses the burgeoning use of biometric technology in healthcare and other sectors, emphasizing the potential for rapid adoption in both private and governmental contexts. A report by the Biometrics Research Group forecasts that adoption will happen quickly in private clinics in the U.S. and that international markets may lead in large-scale implementation, particularly in developing nations like India and Ghana.
Intersection of Privacy Laws
The interplay among privacy laws such as the Family Educational Rights and Privacy Act (FERPA) and the Health Insurance Portability and Accountability Act (HIPAA) complicates the legal landscape concerning biometric data collected by educational institutions. It is noted that health records in educational settings are not covered by HIPAA but rather by FERPA, raising questions about data rights and protections.
Government Accountability Office (GAO) Reports
- A GAO report (GAO-17-489) indicates deficiencies in ensuring privacy and accuracy concerning facial recognition technologies used by DOJ and FBI, suggesting the need for enhanced regulatory oversight.
International Perspectives
The framework also expounds on international standards, with the EU-GDPR articulating robust definitions of consent regarding personal data processing. The emphasis on adequate protection proves fundamental, as the EU recognizes several nations with sufficient data processing protections, facilitating secure cross-border data flows.
This section synthesizes critical insights into the legal statutes governing digital identity, emphasizing the need for rigorous privacy protections in an increasingly digital world. The analysis makes clear that while frameworks like eIDAS and state-specific laws such as Illinois’s Biometric Information Privacy Act establish vital safeguards, ongoing dialogue on the implications of biometric technologies and privacy is essential for adapting legal standards to modern realities.
One of the most prominent examples of this widespread adoption and the resulting legal challenges comes from the state of Telangana. The Telangana police rolled out its facial recognition network in 2018, which now includes a CCTV network with over 600,000 cameras. They have also been using smartphones and tablets to collect photos of individuals on the street for their facial recognition database.
Legal Challenge Against Facial Recognition in Telangana, India
On January 20, 2022, a landmark case was initiated by activist S Q Masood in Hyderabad, Telangana, challenging the constitutionality of facial recognition technology deployed in the state — the foremost user of such systems in India. This lawsuit marks a significant legal endeavor in India amidst growing privacy concerns regarding surveillance technologies.
During a COVID-19 lockdown, Masood was stopped by police in Hyderabad, who requested he remove his face mask for a photograph, sparking concerns over the implications of how such images might be used. Following a lack of response from law enforcement on his inquiries regarding the photograph’s usage, Masood sought legal recourse, claiming that the indiscriminate use of facial recognition technology infringes on individual privacy rights.
Scope of Surveillance in Telangana
Telangana has been described as “the most surveilled place in the world,” with over 600,000 CCTV cameras, predominantly in Hyderabad. These cameras are integrated into a comprehensive facial recognition system that police can operate via mobile applications. Such extensive surveillance has prompted fears among marginalized communities, particularly Muslims, Dalits, and transgender individuals, who face increased risk of targeted policing and harassment.
Legal Framework and Arguments
Masood’s petition asserts that the state’s deployment of facial recognition technology is unconstitutional, unnecessary, and disproportionately invasive, lacking essential safeguards against misuse. The lawsuit underscores the absence of a robust data protection law in India, raising questions about accountability and transparency in surveillance practices. Critics argue that the purported benefits of crime prevention through this technology remain unsubstantiated, with increasing evidence that it often misidentifies women and individuals with darker skin tones.
Government Defense of Technology Use
The Hyderabad police have defended their use of facial recognition systems, asserting that it serves as a deterrent to crime and facilitates the apprehension of suspects while claiming to respect privacy rights. Police Commissioner C V Anand stated that the technology is only employed for monitoring criminals, not the general public.
Broader Implications and Activist Response
The rapid rollout of facial recognition technology in India has not gone unnoticed, with activist organizations like the Internet Freedom Foundation (IFF) advocating for public awareness of surveillance practices. Anushka Jain, a representative from IFF, emphasizes the need to question the narrative that constant surveillance is a public good.
Internationally, the resistance to facial recognition technology is gaining momentum, with major corporations reevaluating their practices and regions such as the European Union considering bans. Meanwhile, in India, increasing pushback is evident among students and communities concerned about privacy violations, especially as more aspects of life become digitized and data-dependent.
Masood’s lawsuit is poised to act as a critical test case in the ongoing debate over privacy rights versus state surveillance in India. The outcome could have far-reaching implications not only for the use of facial recognition in Telangana but also for broader national policies on data protection and privacy rights in the country.
This practice is at the heart of a public interest litigation petition filed by social activist S. Q. Masood in collaboration with the Internet Freedom Foundation. Masood challenges the police’s action of stopping him and photographing him without consent in May 2021, arguing that the program infringes on the public’s right to privacy, especially given the lack of accountability and of a clear legal mandate. A division bench of the state high court has backed his request for an explanation from the police. A hearing was scheduled for January 15, 2022, and the case’s outcome is expected to set a benchmark for similar FRT deployments across India.
The Unique Identification Authority of India (UIDAI) is developing an offline facial recognition system for Aadhaar, primarily through a new Aadhaar app and an offline e-KYC process.
The Unique Identification Authority of India (UIDAI) currently does not mandate offline facial recognition for authentication. The primary Aadhaar authentication methods remain biometric (fingerprint and iris scan), OTP-based, and demographic authentication. While facial recognition has been introduced as an additional authentication factor, particularly for those facing difficulties with other biometrics, it is generally used in conjunction with other methods and not as a standalone, mandatory offline process.
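For context on the "offline" part of the e-KYC process mentioned above: in offline verification schemes of this kind, the resident carries a digitally signed data file that a verifier can validate without contacting the issuer's servers. The sketch below is a deliberately simplified, hypothetical illustration of that pattern; it uses a shared-key HMAC tamper check in place of UIDAI's actual XML digital-signature scheme, and the `ISSUER_KEY`, `issue_offline_kyc`, and `verify_offline_kyc` names are invented for this sketch, not part of any real Aadhaar API.

```python
import hashlib
import hmac
import json

# Hypothetical stand-in for the issuer's signing infrastructure: the real
# offline e-KYC flow uses an XML document signed with the issuer's private
# key and verified against its published certificate, not a shared secret.
ISSUER_KEY = b"issuer-demo-key"  # invented for this sketch

def issue_offline_kyc(record: dict) -> dict:
    """Package a KYC record with a tamper-evident tag (issuer side)."""
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}

def verify_offline_kyc(package: dict) -> bool:
    """Check the record's integrity with no network call (verifier side)."""
    payload = package["payload"].encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, package["tag"])

pkg = issue_offline_kyc({"name": "A. Citizen", "yob": "1990"})
print(verify_offline_kyc(pkg))   # True: untampered package verifies offline
pkg["payload"] = pkg["payload"].replace("1990", "1991")
print(verify_offline_kyc(pkg))   # False: any edit invalidates the tag
```

The key design point this illustrates is that verification requires no live connection to the issuer, which is precisely what distinguishes offline e-KYC from online authentication.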
If UIDAI were to mandate offline facial recognition, several significant public concerns and legal questions would arise in India.
Public Concerns and Questions
- Privacy Violations: Mandating offline facial recognition would raise serious privacy concerns. Storing and processing facial data offline, especially without robust security protocols, could lead to data breaches, misuse, and unauthorized surveillance. Citizens would question the necessity and proportionality of such a measure, particularly given the sensitive nature of biometric data.
- Data Security and Storage: Questions would be raised about how this offline facial data would be secured, who would have access to it, and for how long it would be stored. The potential for hacking, data leaks, and the creation of comprehensive facial databases without explicit consent would be a major point of contention.
- Accuracy and Bias: Facial recognition technology, while advanced, is not infallible and can exhibit biases, particularly concerning different demographics (e.g., gender, race, age). Mandating its use could lead to authentication failures for certain individuals, denying them access to essential services and raising questions about discrimination and fairness.
- Lack of Consent and Transparency: Citizens would question the process by which such a mandate was introduced, particularly if it lacked public consultation and transparent justification. The absence of clear opt-out mechanisms or alternatives would also be a significant concern.
- Scope Creep and Surveillance State: There would be fears that mandating offline facial recognition could be a step towards a pervasive surveillance state, where individuals’ movements and activities could be tracked without their knowledge or consent. This would undermine democratic freedoms and civil liberties.
- Technical Infrastructure and Accessibility: Implementing offline facial recognition across a vast and diverse country like India would require significant technical infrastructure. Questions would arise about the accessibility of such technology, especially in remote areas or for individuals with disabilities, potentially excluding them from essential services.
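The accuracy-and-bias concern above can be made concrete with a simple disparity check: if authentication failure (false non-match) rates are logged per demographic group, comparing those rates shows whether one group is disproportionately locked out of services. The sketch below uses invented numbers purely to illustrate the calculation; it is not UIDAI data.

```python
# Hypothetical per-group authentication logs: (attempts, false_rejections).
# The figures are invented to illustrate the disparity calculation.
logs = {
    "group_a": (10_000, 150),   # 1.5% false non-match rate
    "group_b": (10_000, 600),   # 6.0% false non-match rate
}

def false_non_match_rate(attempts: int, rejections: int) -> float:
    """Fraction of genuine users wrongly rejected by the matcher."""
    return rejections / attempts

rates = {g: false_non_match_rate(a, r) for g, (a, r) in logs.items()}

# Disparity ratio: how many times more often the worst-served group
# fails authentication compared with the best-served group.
disparity = max(rates.values()) / min(rates.values())
print(rates)                    # {'group_a': 0.015, 'group_b': 0.06}
print(round(disparity, 3))      # 4.0
```

A disparity ratio well above 1.0, as here, is exactly the kind of evidence that would support a discrimination challenge to a mandatory system.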
Laws and Sections in India
If UIDAI were to mandate offline facial recognition, the following laws and legal principles would be highly relevant:
- Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act, 2016:
- Section 7: This section deals with the requirement of Aadhaar for receiving subsidies, benefits, and services. Any mandatory offline facial recognition would need to align with the principles of necessity and proportionality outlined in this act.
- Section 8: This section outlines the authentication process. Any new authentication method, especially a mandatory one, would need to be consistent with the provisions for authentication and the protection of identity information.
- Section 29: This section prohibits the sharing of core biometric information. While facial recognition might not be considered “core biometric information” in the same vein as fingerprints and iris scans under the current definition, its sensitive nature would bring it under scrutiny regarding data sharing and protection.
- Section 54: This section empowers the UIDAI to make regulations. Any mandate would need to be issued under proper regulatory authority and adhere to the principles of natural justice.
- Section 37: This section penalises the unauthorised disclosure of identity information. Mandating offline facial recognition would significantly increase the volume and sensitivity of data exposed to such disclosure risks.
- Information Technology Act, 2000 (and its amendments):
- Section 43A: This section deals with compensation for failure to protect data. If offline facial data were compromised due to negligence, this section could be invoked.
- Section 72A: This section pertains to punishment for disclosure of information in breach of lawful contract. While not directly applicable to a government mandate, it highlights the legal framework for data protection.
- Right to Privacy (as established by the Supreme Court in Justice K.S. Puttaswamy (Retd.) and Anr. vs Union of India and Ors.):
- The Supreme Court’s landmark judgment declared privacy a fundamental right under Article 21 of the Constitution. Any mandatory offline facial recognition would be subject to a strict test of legality, necessity, and proportionality. The state would need to demonstrate a legitimate aim, that the measure is proportionate to that aim, and that there are procedural safeguards against abuse.
- Digital Personal Data Protection Act, 2023:
- The Digital Personal Data Protection Act, 2023 (enacted in August 2023, with implementing rules still being operationalised) provides a comprehensive framework for the processing of digital personal data, including biometrics. Any mandatory offline facial recognition would need to comply with its principles of consent, purpose limitation, data minimization, and accountability. It would likely require explicit consent for processing such data and impose strict obligations on data fiduciaries (like UIDAI) regarding data protection and breach notification.
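Purpose limitation, one of the principles named above, can be sketched in code: a data fiduciary records the purpose for which consent was given and refuses processing requests for any other purpose. The `ConsentRegistry` class and the purpose strings below are invented for illustration and do not reflect any actual UIDAI system.

```python
# Minimal sketch of purpose-limited consent, assuming the fiduciary
# stores one consent grant per (subject, purpose) pair.
class ConsentRegistry:
    def __init__(self):
        self._grants: set[tuple[str, str]] = set()

    def grant(self, subject_id: str, purpose: str) -> None:
        """Record the data principal's consent for one specific purpose."""
        self._grants.add((subject_id, purpose))

    def withdraw(self, subject_id: str, purpose: str) -> None:
        """Withdrawal must be as easy as granting consent."""
        self._grants.discard((subject_id, purpose))

    def may_process(self, subject_id: str, purpose: str) -> bool:
        # Processing is allowed only for the exact purpose consented to;
        # reuse for any other purpose is refused by construction.
        return (subject_id, purpose) in self._grants

registry = ConsentRegistry()
registry.grant("subject-42", "kyc_verification")
print(registry.may_process("subject-42", "kyc_verification"))  # True
print(registry.may_process("subject-42", "marketing"))         # False
registry.withdraw("subject-42", "kyc_verification")
print(registry.may_process("subject-42", "kyc_verification"))  # False
```

The point of the sketch is structural: when the purpose is part of the consent record itself, "scope creep" becomes a visible, auditable policy violation rather than a silent default.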
Whom to Write to Oppose Such a Mandate
To raise concerns and potentially stop a mandate for offline facial recognition by UIDAI, a letter should be addressed to the following key authorities:
- The Chairman, Unique Identification Authority of India (UIDAI): As the primary authority responsible for Aadhaar, the UIDAI Chairman is the most direct recipient for concerns regarding its policies and implementation.
- The Secretary, Ministry of Electronics and Information Technology (MeitY), Government of India: MeitY is the nodal ministry for IT, electronics, and internet policy in India, under which UIDAI operates.
- The Chief Justice of India / Supreme Court of India: For matters concerning fundamental rights, particularly the right to privacy, a Public Interest Litigation (PIL) can be filed, or a letter can be addressed to the Chief Justice.
- The National Human Rights Commission (NHRC): If the mandate is perceived to violate human rights, including the right to privacy and non-discrimination.
- Members of Parliament (MPs): Especially those on relevant parliamentary committees (e.g., Standing Committee on Communications and Information Technology) to raise the issue in legislative forums.
- Civil Society Organizations and Privacy Advocates: Engaging with these groups can amplify the voice of public concern and provide legal and advocacy support.
Sample Letter
[Your Name]
[Your Address]
[Your City, Pin Code]
[Your Email]
[Your Phone Number]
[Date: 2025-11-28]

To,
The Chairman, Unique Identification Authority of India (UIDAI), Bangla Sahib Road, Kali Mandir, Gole Market, New Delhi – 110001
Subject: Urgent Concerns Regarding the Potential Mandate of Offline Facial Recognition for Aadhaar Authentication
Dear Sir/Madam,
I am writing to express my profound concerns regarding reports and potential considerations by the Unique Identification Authority of India (UIDAI) to mandate offline facial recognition as a compulsory authentication method for Aadhaar. As a concerned citizen, I believe such a mandate would have severe implications for privacy, data security, and fundamental rights of individuals across India.
The introduction of mandatory offline facial recognition would raise significant questions about the proportionality and necessity of such a measure. While I understand the need for robust authentication mechanisms, the sensitive nature of biometric data, particularly facial recognition, demands the highest level of scrutiny and protection.
My primary concerns are as follows:
- Violation of Privacy: Mandating offline facial recognition could be perceived as a direct infringement on the fundamental Right to Privacy, as established by the Hon’ble Supreme Court in Justice K.S. Puttaswamy (Retd.) and Anr. vs Union of India and Ors. The collection, storage, and processing of facial data without explicit, informed consent and robust safeguards could lead to widespread surveillance and misuse of personal information.
- Data Security Risks: Storing facial biometrics offline, potentially on local devices or distributed systems, significantly increases the risk of data breaches, unauthorized access, and identity theft. The current technological landscape presents numerous vulnerabilities, and any compromise of such sensitive data could have irreversible consequences for citizens.
- Accuracy and Bias Issues: Facial recognition technology is not infallible and has been shown to exhibit biases across different demographics. Mandating its use could lead to authentication failures for a significant portion of the population, thereby denying them access to essential services and benefits, which would be discriminatory and unjust.
- Lack of Transparency and Public Consultation: Any decision to mandate such a far-reaching technology should be preceded by extensive public consultation, impact assessments, and transparent justification. The absence of such a process would undermine public trust and democratic principles.
- Potential for Surveillance State: There is a legitimate fear that mandatory offline facial recognition could pave the way for a pervasive surveillance infrastructure, eroding civil liberties and enabling unwarranted tracking of individuals.
I urge the UIDAI to reconsider any plans to mandate offline facial recognition. Instead, I request that the Authority prioritize the development and implementation of authentication methods that are privacy-preserving, secure, accurate, and voluntary. Any new authentication mechanism must strictly adhere to the principles of legality, necessity, and proportionality, as well as the provisions of the Aadhaar Act, 2016, and the Digital Personal Data Protection Act, 2023.
I request a clear statement from the UIDAI clarifying its stance on this matter and assuring citizens that their privacy and data security will not be compromised. I also request an opportunity for public dialogue and consultation on any proposed changes to Aadhaar authentication policies.
Thank you for your time and consideration of this critical issue.
Sincerely,
[Your Name]

Ref:
- UIDAI [ https://uidai.gov.in/ ]
- Internet Freedom Foundation, Facial Recognition Technology in India: A Primer [ https://internetfreedom.in/facial-recognition-technology-in-india-a-primer/ ]
- The Economic Times [ https://economictimes.indiatimes.com/tech/technology/data-security-concerns-rise-with-facial-recognition-tech/articleshow/70000000.cms ]
- Amnesty International [ https://www.amnesty.org/en/latest/news/2021/05/facial-recognition-human-rights-perspective/ ]
- Livemint [ https://www.livemint.com/opinion/online-views/why-india-needs-a-robust-data-protection-law-11623000000000.html ]
- The Hindu [ https://www.thehindu.com/opinion/op-ed/the-perils-of-facial-recognition-technology/article32000000.ece ]
- NITI Aayog, National Strategy for Artificial Intelligence (discussion paper) [ https://www.niti.gov.in/sites/default/files/2019-06/NationalStrategy-for-AI-Discussion-Paper.pdf ]
- The Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act, 2016 [ https://www.indiacode.nic.in/handle/123456789/2163?locale=en ]
- The Information Technology Act, 2000 [ https://www.indiacode.nic.in/handle/123456789/1999?locale=en ]
- Supreme Court of India, Puttaswamy judgment (24 August 2017) [ https://main.sci.gov.in/supremecourt/2012/35071/35071_2012_Judgement_24-Aug-2017.pdf ]
- The Digital Personal Data Protection Act, 2023 [ https://prsindia.org/billtrack/the-digital-personal-data-protection-bill-2023 ]
- WEF, Facial Recognition Technology project [ https://www.weforum.org/projects/facial-recognition-technology/ ]
- MIT News, Artificial Intelligence topic [ https://news.mit.edu/topic/ai ]
- ACLU, Facial Recognition Technology [ https://www.aclu.org/issues/privacy-technology/surveillance-technologies/facial-recognition-technology ]
- WEF, Framework for Action: Facial Recognition (2020) [ https://www3.weforum.org/docs/WEF_Framework_for_action_Facial_recognition_2020.pdf ]
- Biometric Update, UIDAI signs 5-year R&D pact to strengthen Aadhaar [ https://www.biometricupdate.com/202508/uidai-signs-5-year-rd-pact-to-strengthen-aadhaar ]