Tuesday, June 24, 2025

BAN!! Bill Gates’ AI facial recognition technologies: Government officials are relying on AI facial recognition technologies, but the imperfect nature of AI poses serious risks

Is the government taking a backseat and letting AI take over its responsibilities? In recent years, there has been a growing trend of government officials relying on AI facial recognition technologies to perform tasks that were once done by human employees. However, the use of AI in such critical roles has raised concerns about the reliability and accuracy of these technologies.

Government Officials Are Letting AI Do Their Jobs. Badly.

One of the main issues with AI facial recognition technologies is their imperfection. While AI has advanced rapidly in recent years, it is still far from being flawless. Facial recognition algorithms can often make mistakes, misidentifying individuals or failing to recognize faces altogether. This can have serious consequences, especially when AI is used in law enforcement or security settings.

Bill Gates, through his various investment vehicles, has been involved with several companies that utilize or develop facial recognition technology.

Microsoft’s venture capital arm, M12, invested in AnyVision, an Israeli facial recognition company. However, Microsoft later divested its stake in AnyVision due to concerns about the company’s use of its technology and the challenges of oversight as a minority investor. Microsoft also offers facial recognition technology through its Azure cloud platform, called Face, which allows developers to integrate facial recognition into their applications.

Bill Gates also backs Evolv Technology, maker of an AI-enabled touchless security screening system that uses digital sensors and artificial intelligence, including facial recognition, to detect threats. Evolv Technology went public through a SPAC deal.

https://techstartups.com/2021/03/08/bill-gates-backed-ai-security-startup-evolv-go-public-via-1-7-billion-spac-deal/

Evolv Technology, a company backed by Bill Gates, went public through a merger with a special-purpose acquisition company (SPAC) called NewHold Investment Corp. This merger, which was finalized in 2021, resulted in Evolv becoming a publicly traded company on the NASDAQ exchange under the ticker symbol “EVLV”. The deal valued Evolv at approximately $1.7 billion. 

Evolv, a company partially funded by Gates, chose to go public via a SPAC, a route that bypasses the traditional initial public offering process.

• Bill Gates is using his funds to scale the company’s product, much as he scaled vaccine production. 
• Gates’ ownership and control of assets beyond Microsoft, including farmland, vaccine production facilities, and ties to government institutions, raise questions about his influence over these entities.

Emotion Recognition Technology in Security
• Gates’ company is focusing on replacing the screening systems that scan people as they walk into and out of venues. 
• Emotion recognition technology, claimed to be 99.9% accurate in the US and China, is being used to flag potential drug users and undocumented migrants. 

Evolv Express
• The company’s free-flow weapons detection system screens 3,600 people per hour, more than 10 times faster than traditional metal detectors. 
• It uses AI to instantly differentiate personal items from threats, providing a real-time image of a threat’s location. 
• The system also includes Evolv Pinpoint, allowing venues to easily identify persons of interest.

Bill Gates: we collect data and figure out who should use it.

Bill Gates lobbies to keep Microsoft’s A.I. megalab in Shanghai open – despite fears it could create weapons that are used against America – Dailymail

  • Bill Gates’s company has invested over $1 billion in R&D in China in the past decade
  • Alumni from Microsoft’s Chinese AI lab have left to work on AI facial recognition for China’s vast national surveillance system of over 200 million cameras
  • Gates firmly backs the lab, with President Xi calling him China’s ‘American friend’

Following his achievement with the fake virus lab leak…

Microsoft has been quietly debating the future of its advanced AI lab in China, sources say.
The lab was opened in 1998 and has become one of the most important artificial intelligence hubs in the world, leading to advancements in the company’s speech, image and facial recognition software.
Microsoft Research Lab Asia (MSRA) opened at a time of optimism about China as an emerging democracy, but as tensions between the US and the communist state have intensified, internal pressure has mounted to shut or scale it down. That pressure has only intensified in recent months, after the Biden administration banned US investments in Chinese tech ventures that might aid the rival superpower’s ‘military, intelligence, surveillance, or cyber-enabled capabilities.’
But the tech giant’s founder Bill Gates continues to defend the lab and has pushed to keep it open, alongside Microsoft’s research leaders and its current president.

Furthermore, the Bill & Melinda Gates Foundation, which is the primary vehicle for Bill Gates’ philanthropic activities, has identified biometrics, which includes facial recognition, as one of the three key technologies for digital inclusion in developing nations. The Gates Foundation supports the Modular Open Source Identity Platform (MOSIP), which is an open-source digital identity platform that can incorporate biometric data, including facial recognition, for identity verification.

Microsoft’s internal handwringing has reportedly intensified as the Biden administration prepared its ban on any new US investments in Chinese tech ventures that might aid the rival superpower’s ‘military, intelligence, surveillance, or cyber-enabled capabilities’ 

The tech giant’s roughly $1 billion worth of AI investments in the Asian country has been mutually beneficial, netting Microsoft $212 billion in revenue last fiscal year and nurturing local talent highly valued by China’s military-industrial complex. 

Alumni of Microsoft’s AI lab in China have gone on to key posts at the nation’s domestic tech giants, with two heading up the facial recognition developer Megvii, which has helped China power its vast national surveillance system. 

When Bill Gates visited China last June, after a three-year absence due to the coronavirus pandemic, President Xi Jinping warmly described the mogul as ‘the first American friend I’ve met with this year.’

Four current and past Microsoft employees, who spoke to the New York Times anonymously, said that the company’s top leaders are split on the future of Microsoft’s MSRA, which has offices in both Shanghai and Beijing.

Microsoft’s current CEO, Satya Nadella, and its president, Brad Smith, have been debating what to do with the lab over the past year.

But, the tech giant’s founder Bill Gates and the company’s research leaders — including Chief Technology Officer Kevin Scott and Microsoft’s head of research Peter Lee — remain strident defenders of the lab.

Scott and Lee, according to two sources who spoke to the Times, have argued that the lab has yielded critical breakthroughs in AI via MSRA’s Shanghai AI/ML Group.

In a statement to the Times, Microsoft president Brad Smith appeared to echo the line pushed by Gates, who still regularly advises the company’s executives.

‘The lesson of history is that countries succeed when they learn from the world,’ Smith said. ‘Guardrails and controls are critical, while engagement remains vital.’ 

Those guardrails include restrictions on work related to quantum computing, facial recognition software and synthetic media, a catch-all term that includes ‘deep fakes’ and AI-generated vocal impersonations, according to Microsoft.

The tech giant also stated that its policies prevent the hiring of Chinese students and researchers whose CVs have placed them at any university affiliated with the Chinese military.

But there have nevertheless been gaps: MSRA’s satellite lab in Vancouver grants researchers free access to the supercomputing power and OpenAI systems needed for cutting-edge AI research, two sources said.

While Microsoft has bristled in response to some of China’s surveillance and censorship expectations, shutting down LinkedIn within the Asian country over compliance issues, it has sometimes bowed to those same demands.  

Microsoft’s Bing search engine, now the only foreign search engine in China, has followed the Chinese government’s censorship policies, for example.

And the company offers corporate clients in China access to regulated versions of the Windows operating system, its cloud computing and applications as well.

Adding to the tensions, Microsoft revealed that Chinese hackers had targeted ‘critical’ infrastructure in the US territory of Guam, this past May, sparking fears that Beijing might be testing US cyber-defenses in preparation for a communications ‘black out’ needed to launch a rumored assault on Taiwan.

The US Cybersecurity and Infrastructure Security Agency (CISA) confirmed that China was behind the breach, which struck multiple government and private sector organizations.

Tom Burt, head of Microsoft’s threat intelligence unit, said his team found that the attack hit key nodes in Guam’s telecommunications sector.

China’s apparent focus on Guam is of particular concern, as the US territory is a key military base in the Pacific, and would be a major staging ground for any American response in the event of a conflict in Taiwan or the South China Sea.

Despite Microsoft’s active role in these international tensions, Gates maintains his own ties to China via his philanthropic organization, the Gates Foundation. 

The foundation recently pledged $50 million to Beijing’s municipal government and a top university in China. 

While Gates began stepping away from Microsoft in 2008 to devote more time to these philanthropic projects, and is no longer even a member of Microsoft’s board of directors, the billionaire remains the company’s largest individual shareholder. 

Alongside his advising of Microsoft’s executives, Gates owns an estimated $35 billion worth of stock in the firm, according to financial data analysts with FactSet.

Ban Bill Gates AI Facial Recognition Technologies

Facial recognition technology is “dangerously inaccurate”

Facial recognition technology used by the Metropolitan Police and South Wales Police has been found to be dangerously inaccurate, with misidentification rates as high as 98% and 91% respectively.

The use of facial recognition technology has resulted in innocent people being misidentified as criminals, leading to the storage of their biometric data without their knowledge.

Civil liberties organization Big Brother Watch has raised concerns about the authoritarian surveillance implications of real-time facial recognition, emphasizing the potential risks to public freedoms.

The campaign against the use of facial recognition technology has garnered support from various rights and race equality groups, as well as shadow home secretary Diane Abbott and shadow policing minister Louise Haigh.

The UK’s data protection authority, the ICO, has faced challenges in dealing with complaints and has expressed interest in automation to alleviate increased workloads.

How Racial Biases can Corrupt Facial Recognition Technology

Lawsuit filed in US cross-country arrest allegedly based on face biometrics

 A Black man filed a federal suit in the U.S. claiming he was falsely accused of credit-card fraud based on a police facial recognition algorithm.

The plaintiff alleges that facial recognition software was the sole “credible source” used by Louisiana police to issue a warrant for his arrest.

At least five Black plaintiffs have filed similar suits, with three alleging misuse of or flawed facial recognition software in Michigan.

The latest plaintiff, referred to as Quran, is suing a detective for false arrest, malicious prosecution, and negligence, and the sheriff for failing to implement adequate biometric software protocols.

The plaintiff maintains that he has never been to Louisiana and can prove he was not in that state when the crime was committed.

Racial Bias In Facial Recognition Algorithms

One of the most pressing threats to human rights and racial justice is the proliferation of racist facial recognition technology. March 21st is the International Day Against Racial Discrimination. This is a day to commit to understanding how systemic racism operates and take action toward a better future for all of us.

In Amnesty International Canada’s new podcast series, Rights Back At You, we focus on anti-Black racism. We examine how policing, surveillance and technology collide to perpetuate racial discrimination.

Global demonstrations against police violence in 2020 renewed questions about the impacts of surveillance technology on protesters, particularly Black protesters. In New York, one common protest route had nearly 100 per cent police CCTV coverage, according to Amnesty International’s Ban the Scan campaign.

Facial recognition technology

Facial recognition is a biometric tool designed to recognize faces. It’s software that uses photos to identify a face.

It maps your facial features, measuring things like the shape of your nose or the distance between your eyes, and then compares the results to another image for verification, kind of like how you might unlock your phone. Or, it compares it to many images in a database, like comparing a photo of a person from a protest to a database of driver’s license photos connected to an address.
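To make those two modes concrete, here is a minimal Python sketch of 1:1 verification versus 1:N identification. It assumes face embeddings have already been produced by some upstream model; the function names, the 0.6 threshold, and the database layout are hypothetical illustrations, not any vendor’s actual implementation.

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    # Compare two face "templates" (embedding vectors from some face model).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

THRESHOLD = 0.6  # hypothetical cut-off; real systems tune this per use case

def verify(probe: np.ndarray, enrolled: np.ndarray) -> bool:
    """1:1 verification, like unlocking a phone against one stored face."""
    return cosine_sim(probe, enrolled) >= THRESHOLD

def identify(probe: np.ndarray, database: dict):
    """1:N identification, like matching a protest photo against a
    database of driver's licence embeddings."""
    scores = {name: cosine_sim(probe, emb) for name, emb in database.items()}
    best = max(scores, key=scores.get)
    # Below the threshold the system should say "no match", not guess.
    return (best, scores[best]) if scores[best] >= THRESHOLD else (None, None)
```

Every failure mode discussed below lives in that threshold: set it too low and innocent people are flagged as matches; set it too high and the system fails to recognize anyone.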

Is facial recognition racist?

The results of facial recognition algorithms are notoriously inaccurate and racist.

A study done by the federal government in the United States showed that African American and Asian faces were up to 100 times more likely to be misidentified than white faces, and the highest false-positive rate was among Native Americans.

Research from the University of Essex in the UK showed that the technology they tested was accurate in just 19% of cases.

And a groundbreaking study by a trio of Black women (Joy Buolamwini, Timnit Gebru, and Deborah Raji) showed the facial recognition technology they tested performed the worst when recognizing Black faces, especially Black women’s faces.
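The disparities these studies report come from straightforward counting. The sketch below shows, with hypothetical audit records, how a false match rate is computed per demographic group; this is the kind of metric in which the gaps above were measured.

```python
from collections import defaultdict

# Hypothetical audit records: (group, is_genuine_pair, system_said_match).
# A real audit would load thousands of labelled comparison trials here.
trials = [
    ("group_a", False, True), ("group_a", False, False),
    ("group_b", False, False), ("group_b", False, False),
]

stats = defaultdict(lambda: {"false_matches": 0, "impostor_trials": 0})
for group, genuine, flagged in trials:
    if not genuine:  # impostor pair: any "match" is a false match
        stats[group]["impostor_trials"] += 1
        stats[group]["false_matches"] += int(flagged)

for group, s in sorted(stats.items()):
    fmr = s["false_matches"] / s["impostor_trials"]
    print(f"{group}: false match rate = {fmr:.2%}")
```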

In Canada, law enforcement agencies have violated privacy rights by using facial recognition software and by monitoring the public on social media, and police departments across the country are rolling out body cameras (portable surveillance video devices) to gather video footage of police interactions. Some body cameras have the potential to run facial recognition technology.

These surveillance activities raise major human rights concerns when there is evidence that Black people are already disproportionately criminalized and targeted by the police. Facial recognition technology affects Black protesters in many ways.

Discrimination in facial recognition technology

The use of facial recognition technology in policing can perpetuate and exacerbate existing racial biases and discrimination. Black protesters are often subjected to greater scrutiny and suspicion, leading to heightened levels of harassment and arrest. Carding (or “street checks”) is a persistent racial discrimination problem in Canada where Black people, Indigenous people, and people of colour are more frequently arbitrarily stopped and questioned by the police than white people.

Misidentification in facial recognition technology

Facial recognition systems misidentify Black faces at a high rate. Facial recognition is less accurate in identifying people with darker skin tones—especially women. This can result in the misidentification of Black protesters or false positive matches in image databases. In some cases, police have wrongfully arrested people.

Surveillance and risk to freedom of expression

Law enforcement agencies may use facial recognition technology to monitor and track protesters, which can have a chilling effect on exercising freedom of expression and freedom of assembly rights. Black protesters are more likely to be targeted for surveillance due to racial profiling and systemic racism, especially in the context of the Black Lives Matter movement against police violence and community calls to defund the police.

Facial recognition technology in the context of protests and policing raises serious concerns about human rights and racial discrimination.

Technology-facilitated discrimination

The inaccuracy of facial recognition technology means that Black people are at increased risk of discrimination and human rights impacts, especially when paired with systemic racism in policing. Research suggests that the poor identification of facial recognition technology when it comes to people with darker skin is because the images used to train the systems are predominantly white faces.

You might think the easy solution is to just include more diversity in the training systems to make them work better. But there’s a bigger question here: do we want it to work well for policing Black people? For policing anyone? And fundamentally, do we want it to exist at all?

In Detroit, in 2020, the police arrested a Black man named Robert Williams based on a facial recognition identification. He hadn’t done anything wrong.

Introducing biased technology into contexts where racial discrimination already occurs will only exacerbate the problem.

Black people in Canada experience disproportionate levels of police violence and incarceration. Indigenous people in Canada are also disproportionately street checked, harmed, and incarcerated.

Groups that are already over-policed and criminalized will experience negative impacts from facial recognition technology.

Facial Recognition Tech and Racist Policing in Brooklyn NYC

In New York City, activist and Black Lives Matter protester Dwreck Ingram was seemingly subjected to a facial recognition search by the NYPD, while housing activists Tranae Moran and Fabian Rogers from the Atlantic Towers complex in Brownsville, Brooklyn, were threatened with eviction for resisting the installation of facial recognition in their building.

Overall, more surveillance does not mean more safety

We are often told that more cameras and more surveillance make us safer. This is not necessarily true. Surveillance and facial recognition technology threaten our rights to privacy, non-discrimination, freedom of assembly, freedom of association, and freedom of expression.

This constitutes a threat to democracy and the ability to freely participate in social movements. This, of course, disproportionately affects people from groups that experience high levels of policing and discrimination.

Facial recognition threatens fundamental rights when it works, and also when it doesn’t work

Facial recognition is biometric technology which can be used to identify individuals by their face from millions of images in a database.

Facial recognition threatens the rights of minority communities, and people with darker skin, who are at risk of false identification and false arrests. But even when it correctly identifies someone, facial recognition threatens to put discriminatory policing on steroids. 

Despite these human rights concerns, the Indian government has spent 9.6 billion rupees on facial recognition technology. 

The technology is developed through scraping millions of images from social media profiles, police databases, and publicly accessible sources like newspapers without permission or consent. 


Minority communities are at risk of being misidentified and falsely arrested – for instance, Delhi police’s facial recognition system was reported to be accurate only 2% of the time.

Algorithmic Governance and the Rule of Law: Legal Challenges in AI-Driven Decision Making

Algorithmic governance—the use of AI systems to automate state decisions—is reshaping India’s legal landscape. From facial recognition in policing to Aadhaar-linked welfare exclusions, these tools promise efficiency but risk eroding constitutional safeguards. This article explores how opaque algorithms threaten privacy, equality, and due process, drawing on cases like Puttaswamy and Shreya Singhal. It argues that India’s outdated laws and lack of AI regulation leave citizens vulnerable, urging reforms for transparency, accountability, and human oversight.  

Introduction

In a small village in Rajasthan, a widow is denied her monthly ration because an Aadhaar-based algorithm flags her biometric data as “mismatched.” Across India, algorithms are quietly making decisions that alter lives—decisions once made by humans. As someone who has studied digital rights in the Puttaswamy and Pegasus cases, I’ve seen how technology can outpace the law. Now, as AI infiltrates governance, we must ask: Can India’s legal framework protect citizens from the biases and errors of machines?

The Rise of AI in Indian Governance

India’s turn toward algorithmic governance began with ambitious digitization projects. In policing, facial recognition systems like Delhi’s FRT and Hyderabad’s TSCOP scan crowds in real time, aiming to identify criminals. Yet studies show these tools disproportionately misidentify women and darker-skinned individuals, turning innocent citizens into suspects. Predictive policing models, part of the Crime and Criminal Tracking Network System (CCTNS), analyze historical crime data to deploy officers—a practice that risks reinforcing biases against marginalized communities already over-policed for decades.  

Meanwhile, Aadhaar’s integration into welfare systems has created a different crisis. In 2021, Rajasthan’s food subsidy program excluded 1.2 million families due to biometric errors or server glitches, leaving many without rations for months. Courts, too, are experimenting with AI to prioritize cases, but critics fear this could sideline vulnerable litigants whose disputes require human empathy. The common thread? Citizens harmed by these systems often have no way to challenge the algorithm’s logic—or even understand it.  

Legal Issues Raised by Algorithmic Governance

The constitutional cracks in India’s AI experiment are widening. Take privacy: the Puttaswamy judgment (2017) made privacy a fundamental right, requiring state surveillance to be necessary and proportionate. Yet facial recognition systems collect data indiscriminately, scanning millions to find a handful of suspects. This dragnet approach, critics argue, fails the proportionality test, treating every citizen as a potential criminal.  
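A back-of-the-envelope calculation shows why the dragnet approach is statistically fraught even when the system sounds accurate. The numbers below are illustrative assumptions, not figures from any cited deployment:

```python
# Base-rate arithmetic for a 1:N dragnet search (all numbers illustrative).
population = 1_000_000     # faces scanned in a dragnet
suspects = 100             # actual wanted individuals in that crowd
false_match_rate = 0.001   # 0.1% false positives: an optimistic assumption
hit_rate = 0.99            # 99% of real suspects correctly flagged

false_alarms = (population - suspects) * false_match_rate  # ~1,000 innocents
true_hits = suspects * hit_rate                            # ~99 suspects

precision = true_hits / (true_hits + false_alarms)
print(f"innocent people flagged: {false_alarms:,.0f}")
print(f"probability a flagged person is a suspect: {precision:.0%}")  # ~9%
```

Roughly ten innocent people are flagged for every genuine suspect, which is the precise sense in which a dragnet treats every citizen as a potential criminal.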

Equality is another casualty. AI systems trained on biased data replicate societal prejudices. Predictive policing tools, for instance, direct officers to low-income neighborhoods based on past arrests—ignoring that over-policing, not crime rates, drives those numbers. The Supreme Court’s Navtej Singh Johar verdict (2018) condemned laws perpetuating stereotypes, but who holds algorithms accountable for doing the same?  
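How historical bias feeds on itself can be seen in a toy simulation (all numbers hypothetical): two districts with identical true crime rates, one of which starts with more recorded arrests simply because it was historically over-policed.

```python
# Toy model of the predictive-policing feedback loop (hypothetical numbers).
recorded = {"district_a": 60, "district_b": 40}  # historical arrests, not crime
true_crime_rate = 0.05                           # identical in both districts

for year in range(1, 6):
    total = sum(recorded.values())
    # Patrols are allocated in proportion to past recorded arrests...
    patrols = {d: 100 * n / total for d, n in recorded.items()}
    # ...and new arrests scale with patrols, so the skew feeds itself.
    for d in recorded:
        recorded[d] += round(patrols[d] * true_crime_rate * 10)
    share = recorded["district_a"] / sum(recorded.values())
    print(f"year {year}: district_a share of recorded arrests = {share:.0%}")
```

Although both districts offend at the same rate, district A’s 60% share of recorded arrests never corrects toward the true 50/50; the algorithm faithfully preserves the original over-policing.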

Free speech suffers too. When Hyderabad police used FRT to monitor protests in 2022, activists reported self-censorship, fearing retribution. The Shreya Singhal judgment (2015) struck down vague internet censorship laws, but AI-driven surveillance operates in secrecy, leaving citizens guessing how their data is used.  

Most alarmingly, automated decisions lack due process. If a welfare algorithm denies benefits, there’s no human to plead with, no form to contest the error. The “black box” nature of AI—what scholars call algorithmic opacity—means even judges struggle to scrutinize these systems.  

Key Case Laws

India’s courts are beginning to grapple with these challenges. The Puttaswamy verdict laid the groundwork, declaring privacy a right and mandating proportionality in state actions. In 2022, the Telangana High Court heard S.Q. Masood v. State of Telangana, a PIL challenging the legality of facial recognition technology (FRT) used by the state police. Petitioners argue that without adequate legal safeguards, FRT violates privacy and enables mass surveillance—a concern echoed in the Aadhaar judgment (2018), where the Supreme Court warned against exclusionary technologies.

The recent Pegasus proceedings add another layer to this debate. In April 2025, the Supreme Court questioned, “What’s wrong if a country is using a spyware?”, emphasizing that the legality of surveillance hinges on its targets—national security versus civil society. While the Court declined to publicly disclose the technical committee’s findings (citing security risks), it noted individual grievances could be addressed. This stance underscores a recurring tension: the state’s security claims often override transparency, even as procedural safeguards under Puttaswamy demand accountability. The Court’s reluctance to confront systemic surveillance excesses, as seen in its deferral of the Pegasus hearing to July 2025, leaves citizens in limbo, reliant on patchwork remedies rather than systemic reform.  

Internationally, the U.S. case Loomis v. Wisconsin (2016) offers a cautionary tale. A defendant sentenced using a proprietary AI tool argued the algorithm’s secrecy denied him a fair trial. Though the court upheld the sentence, it urged transparency—a principle India’s lawmakers have yet to embrace.  

Is India Legally Ready for AI?

The short answer: no. India’s Digital Personal Data Protection Act (2023) focuses on data privacy but sidesteps algorithmic accountability. The IT Act, 2000, is a relic of the dial-up era, irrelevant to AI’s ethical dilemmas. Courts, meanwhile, often defer to the executive on technical matters, as seen in the Pegasus case (2021), where the government refused to disclose surveillance details.  

To bridge this gap, experts propose a three-pronged approach. First, a “right to explanation” would let citizens demand clarity on AI decisions. Second, human oversight could prevent algorithmic errors from becoming human tragedies—imagine a welfare officer reviewing automated denials. Third, independent audits, akin to financial audits, could expose biased code before it harms marginalized groups. Above all, India needs a law regulating AI in governance, balancing innovation with constitutional values.  
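As one illustration of what a “right to explanation” plus human oversight could look like in software, the sketch below routes every denial and every low-confidence automated decision to a human review queue, and records machine-readable reasons the applicant could demand. All names, fields, and thresholds here are hypothetical design choices, not a description of any existing system.

```python
from dataclasses import dataclass, field

@dataclass
class WelfareDecision:
    applicant_id: str
    approved: bool
    confidence: float                                 # model's self-reported certainty
    reasons: list[str] = field(default_factory=list)  # "right to explanation"

REVIEW_THRESHOLD = 0.90  # hypothetical: below this, a human must decide

def route(decision: WelfareDecision, review_queue: list) -> str:
    """No denial, and no shaky approval, takes effect without human sign-off."""
    if not decision.approved or decision.confidence < REVIEW_THRESHOLD:
        review_queue.append(decision)
        return "pending human review"
    return "auto-approved"

queue: list = []
d = WelfareDecision("RJ-1043", approved=False, confidence=0.97,
                    reasons=["biometric mismatch at ration shop"])
print(route(d, queue))  # -> "pending human review": a person, not a server, decides
```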

In 2020, a farmer in Jharkhand died of starvation after Aadhaar glitches blocked his ration card. His story is a grim reminder: when algorithms govern, human lives hang in the balance. India’s legal framework, built for a pre-digital age, must evolve to ensure technology serves justice—not the other way around. As the Puttaswamy Court affirmed, dignity is non-negotiable. No one should lose their rights because a machine got it wrong.  

FAQs

  1. What is algorithmic governance?

   It’s the use of AI systems by governments to automate decisions in areas like policing, welfare, and public services.

  2. Are AI tools like facial recognition legal in India?

   Currently, there’s no specific law regulating FRT. Courts are reviewing its constitutionality in cases like S.Q. Masood v. State of Telangana.

  3. What problems can AI cause in government decision-making?

   Bias, lack of transparency, errors with no appeal process, and chilling effects on free speech.

  4. Has any court in India looked into this issue yet?

   Yes. The Delhi High Court is hearing challenges to facial recognition, while the Supreme Court’s Aadhaar and Puttaswamy judgments set key privacy principles. The Pegasus case, now scheduled for July 2025, highlights ongoing tensions between surveillance and rights.

  5. What can be done to make AI use in governance safer?

   Enact laws mandating transparency, human oversight, and bias audits, and empower citizens to challenge algorithmic decisions.

References

1. Justice K.S. Puttaswamy (Retd.) vs. Union of India (2017) 10 SCC 1

Supreme Court of India judgment affirming the right to privacy as a fundamental right.

Link: https://indiankanoon.org/doc/91938676/

2. Shreya Singhal vs. Union of India (2015) 5 SCC 1

Landmark case that struck down Section 66A of the IT Act, reinforcing free speech rights.

Link: https://indiankanoon.org/doc/110813550/

3. Navtej Singh Johar vs. Union of India (2018) 10 SCC 1

Supreme Court ruling that decriminalized homosexuality and emphasized dignity and equality.

Link: https://indiankanoon.org/doc/168671544/

4. Facial Recognition in India – Internet Freedom Foundation

A detailed analysis of India’s use of FRT in policing and civil liberties concerns.

Link: https://internetfreedom.in/the-facial-recognition-project/

5. Aadhaar and Welfare Exclusion – The Wire

Article highlighting exclusion from welfare schemes due to Aadhaar-linked biometric errors.

Link: https://thewire.in/rights/aadhaar-biometric-exclusion-welfare

6. Predictive Policing and CCTNS – Vidhi Centre for Legal Policy

Analysis of CCTNS and ethical concerns surrounding predictive policing.

Link: https://vidhilegalpolicy.in/research/the-cctns-and-the-future-of-predictive-policing-in-india/

7. Algorithmic Governance and Due Process – Internet Governance Project

General concepts and concerns about black-box AI and legal accountability.

Link: https://www.internetgovernance.org/2020/07/14/algorithmic-governance-and-the-rule-of-law/

8. Pegasus Row: Supreme Court Says Won’t Disclose Report That Touches Country’s Security, Sovereignty – The Hindu

Report on the Supreme Court’s stance regarding disclosure in the Pegasus surveillance case.

Link: https://www.thehindu.com/news/national/pegasus-row-supreme-court-says-wont-disclose-report-that-touches-countrys-security-sovereignty/article69504285.ece

Finally,

Facial Recognition technology and The Right to Privacy in India

Facial recognition technology is being used in India for various purposes, including law enforcement, border control, public safety, and public services.

Concerns about the potential misuse of facial recognition technology include mass surveillance, tracking individuals without consent, and the lack of transparency and accountability in its deployment.

Ethical concerns related to facial recognition technology in India include potential discrimination and bias, lack of clear legal framework and regulatory oversight, and the potential for disproportionate impact on marginalized communities.

The right to privacy in India is a fundamental right enshrined in the Constitution, and the legal framework surrounding facial recognition technology is still developing.

The Indian government has been actively promoting the use of facial recognition technology for various purposes, including law enforcement, security, and public services, but these initiatives have raised significant concerns about the right to privacy.

The Indian legal framework, though evolving, faces challenges in effectively addressing the complexities of facial recognition technology and its impact on privacy. The Information Technology Act (2000) and the Aadhaar Act (2016) provide some framework for data protection, but they fall short in addressing the specific concerns related to facial recognition. The absence of clear and comprehensive legislation governing the collection, storage, and use of facial data leaves a significant gap in safeguarding individual privacy.


Furthermore, the ethical implications of facial recognition technology raise serious questions about its potential for bias, discrimination, and misuse. The technology’s reliance on algorithms trained on specific datasets can lead to inaccurate or biased results, potentially impacting individuals from marginalized communities disproportionately. The potential for misuse by authorities or private entities for surveillance, profiling, and even manipulation poses a significant threat to individual autonomy and freedom.


The Legal Framework


The legal framework surrounding facial recognition technology and the right to privacy in India is currently under development and faces several challenges. While some existing laws offer partial protection, a comprehensive and specific legal framework is urgently needed to address the unique concerns raised by this technology.


The Information Technology Act (2000) provides a broad framework for data protection, including provisions on sensitive personal data and consent requirements. However, its provisions are not explicitly tailored to address the specific challenges posed by facial recognition technology. The Act primarily focuses on data protection in the context of online platforms and transactions, leaving a gap in regulating the collection, storage, and use of facial data in other contexts.


The Aadhaar Act (2016), which governs the use of the Aadhaar biometric identification system, has provisions for the collection and use of biometric data, including facial recognition. However, the Act has been criticized for its lack of clarity on data protection and privacy safeguards.

Moreover, the Act’s focus on biometric identification for specific purposes, such as social welfare schemes, does not adequately address the broader implications of facial recognition technology for other sectors.


The Right to Privacy, recognized as a fundamental right by the Supreme Court of India in 2017, offers a crucial legal foundation for protecting individual privacy against intrusive technologies like facial recognition. However, the application of this right in the context of facial recognition technology remains unclear. The lack of specific legal provisions outlining the scope and limitations of facial recognition technology leaves individuals vulnerable to potential misuse and abuse of their biometric data.


The Personal Data Protection Bill (2019) aimed to establish a comprehensive framework for data protection, including provisions related to biometric data, but it was withdrawn in 2022 and succeeded by the Digital Personal Data Protection Act (2023). How that Act is implemented, and whether it is supplemented by rules specific to facial recognition, will be crucial in determining its effectiveness in safeguarding privacy.


Ethical Concerns


The ethical concerns surrounding facial recognition technology in India are deeply intertwined with the right to privacy. While this technology offers potential benefits in various sectors, its deployment raises serious ethical questions that require careful consideration.

One primary concern is the potential for misuse and abuse of facial recognition technology. The technology can be used for surveillance and tracking individuals without their consent, potentially leading to violations of their privacy and freedom of movement. The lack of transparency and accountability in the use of facial recognition technology further exacerbates these concerns.


Another ethical concern is the potential for discrimination and bias. Facial recognition algorithms are trained on datasets that may reflect existing societal biases, leading to inaccurate or discriminatory outcomes. This can result in unfair treatment and profiling of individuals based on their race, ethnicity, gender, or other protected characteristics.


The lack of clear legal framework and regulatory oversight for facial recognition technology also poses significant ethical challenges. The absence of specific guidelines and regulations creates a void where the technology can be deployed without adequate safeguards for privacy and human rights.


Furthermore, the use of facial recognition technology in sensitive contexts such as law enforcement and criminal justice raises ethical concerns. The potential for misidentification and wrongful arrests due to errors in the technology can have severe consequences for individuals. The use of facial recognition technology in these contexts also raises concerns about the potential for disproportionate impact on marginalized communities.


Privacy Rights


The right to privacy in India is a fundamental right enshrined in the Constitution.  While the Supreme Court of India has recognized the right to privacy as a fundamental right, the legal framework surrounding facial recognition technology is still developing.


The use of facial recognition technology raises concerns about the right to privacy because it involves the collection, processing, and storage of biometric data, which can be used to identify individuals without their consent. This raises several concerns:


* Surveillance and Tracking: Facial recognition technology can be used for surveillance and tracking individuals without their knowledge or consent. This can have a chilling effect on freedom of expression and assembly.


* Data Security and Breaches: The storage and processing of sensitive biometric data raise concerns about data security and breaches.  If this data falls into the wrong hands, it can be misused for identity theft, fraud, or other malicious purposes.


* Lack of Transparency and Accountability: The use of facial recognition technology often lacks transparency and accountability. There is a need for clear guidelines and regulations to ensure that the technology is used responsibly and ethically.


* Discrimination and Bias: Facial recognition algorithms can be biased, leading to discriminatory outcomes. This can result in unfair treatment and profiling of individuals based on their race, ethnicity, gender, or other protected characteristics.


The Indian government has issued guidelines for the use of facial recognition technology in the public domain, but these guidelines have been criticized for being insufficient to protect privacy rights. There is a need for stronger legal frameworks and regulations to ensure that the use of facial recognition technology is aligned with the right to privacy and other fundamental rights.


Government Initiative


The Indian government has been actively promoting the use of facial recognition technology for various purposes, including law enforcement, security, and public services. However, these initiatives have raised significant concerns about the right to privacy, particularly in the absence of robust legal frameworks and regulations.


Here are some key aspects of the government’s initiatives and the associated privacy concerns:


* Aadhaar-Based Facial Recognition: The government has been integrating facial recognition technology with the Aadhaar biometric database, which contains the personal information of more than a billion Indians. This has raised concerns about the potential for mass surveillance and misuse of sensitive biometric data.


* CCTV Surveillance: Facial recognition technology is being deployed in CCTV cameras across various cities and public spaces. While this is being promoted for crime prevention and public safety, it raises concerns about constant monitoring and potential for profiling based on facial features.


* Law Enforcement: The police are increasingly using facial recognition technology for crime investigation and identification of suspects. This raises concerns about the accuracy of the technology, potential for false positives, and the lack of transparency in its use.


* Public Services: The government is promoting the use of facial recognition for accessing public services like healthcare and welfare schemes. This raises concerns about the potential for exclusion of individuals who may not have access to technology or who may have privacy concerns.

While the government argues that facial recognition technology is a valuable tool for improving security and public services, there are significant concerns about the potential for abuse and the need for stronger safeguards to protect the right to privacy.


The Supreme Court of India has recognized the right to privacy as a fundamental right, but the legal framework surrounding facial recognition technology is still developing. There is a need for clear guidelines and regulations to ensure that the technology is used responsibly and ethically.


Case Studies


Here are some notable case studies that highlight the intersection of facial recognition technology and the right to privacy in India:


* The 2020 Delhi Riots: Facial recognition technology was reportedly used by the Delhi Police to identify and apprehend suspects involved in the communal riots. This raised concerns about the potential for misuse of the technology to target specific communities or individuals based on their appearance.


* The Aadhaar-Based Facial Recognition System: The Indian government has been promoting the use of facial recognition technology for various public services, including Aadhaar-based authentication. This has raised concerns about the potential for mass surveillance and misuse of sensitive biometric data. In 2018, the Supreme Court of India ruled that mandating Aadhaar for non-essential services was unconstitutional, but the government continues to promote the use of facial recognition technology for various purposes.

https://www.hindustantimes.com/india-news/govt-mandates-facial-recognition-for-mid-day-meal-poshan-beneficaries-101750272585688.html


* The Use of Facial Recognition in Schools: Some schools in India have started using facial recognition technology for attendance and security purposes. This has raised concerns about the privacy of children and the potential for surveillance by school authorities.


* The Use of Facial Recognition in Public Spaces: Several cities in India have deployed facial recognition technology in CCTV cameras for crime prevention and public safety. This has raised concerns about constant monitoring and the potential for profiling based on facial features.

These case studies highlight the complex and evolving landscape of facial recognition technology in India. While the technology has the potential to benefit society, it is crucial to ensure that its use is balanced with the right to privacy and other fundamental rights. The Indian government and judiciary are grappling with these issues and are working to establish clear guidelines and regulations for the use of facial recognition technology.


Conclusion


The use of facial recognition technology in India presents a complex dilemma. While it holds promise for enhancing security and public safety, its potential for misuse and infringement on privacy rights raises serious concerns. Striking a balance between these competing interests is crucial.

The Indian government must implement robust regulations that address the ethical and legal implications of facial recognition technology. This includes ensuring transparency, accountability, and oversight in its deployment. Furthermore, the right to privacy must be enshrined as a fundamental right, providing individuals with control over their biometric data. Ultimately, the goal should be to ensure that facial recognition technology is used responsibly and ethically, safeguarding both public safety and individual liberties.


FAQS 


Here are some FAQs related to facial recognition technology and the right to privacy in India:
1. What is facial recognition technology? 
Facial recognition technology is a computer-based system that identifies individuals by analyzing their facial features. It uses algorithms to compare a live image or video of a person’s face to a database of known faces.


2. How is facial recognition technology being used in India?
Facial recognition technology is being used in India for various purposes, including security, surveillance, and identification. Some examples include:
    * Law enforcement: Identifying suspects in crime investigations.
    * Border control: Verifying the identities of travelers.
    * Public safety: Monitoring public spaces for suspicious activity.
    * Banking and finance: Verifying the identities of customers.

Facial Recognition Threatens Your Human Rights

Bengaluru, May 2025:

Surveillance City

Hyderabad City in Telangana state is already ranked as one of the most surveilled cities in the world.  Yet, the government has initiated the construction of a “command and control centre” (CCC), a building that connects the city’s vast CCTV infrastructure in real time. Situated in Hyderabad’s Banjara Hills area, the CCC reportedly supports the processing of data from up to 600,000 cameras at once with the possibility to increase beyond this scope across Hyderabad city, Rachakonda, and Cyberabad.  These cameras can be used in combination with Hyderabad police’s existing facial recognition cameras to track and identify individuals across space.

Facial Recognition Can Amplify Discriminatory Policing and Threaten the Right to Protest

  • Even when it “works”, facial recognition can further exacerbate discriminatory policing that disadvantages individuals who belong to historically disadvantaged sections of society – Muslims, Dalits, Adivasis, Transgender communities, and others – even when they are exercising constitutionally protected rights. It can also prevent the free and safe exercise of peaceful assembly by acting as a tool of mass surveillance. 

– But that’s not all –

  • Law enforcement agencies promote facial recognition under the guise of protecting women and children, but it is invariably used as a system of mass surveillance.
  • But this could happen anywhere (and it is already happening)
  • India’s Automated Facial Recognition System proposed by the Home Ministry contemplates a nationwide centralized database that will enable State actors to use facial recognition to track your every move – in the absence of basic privacy, security and rights protections. 

That’s Why Amnesty Is Calling on Vendors and Telangana Law Enforcement to Ban the Use of Facial Recognition

Amnesty International, Internet Freedom Foundation, and Article 19’s research unearthed documents by Hyderabad Police disclosing the technical specifications of their cameras, which capture imagery at a minimum of 2 megapixels. In practice, this means the cameras have a field of vision with a radius of at least 30 meters.

With the help of a group of Telangana-based volunteers, we mapped the locations of immediately visible outdoor CCTV infrastructure in two sampled neighbourhoods – Kala Pathar and Kishan Bagh – surveying areas of approximately 988,123.5 square meters and 764,207.9 square meters, respectively. Based on this data, our analysis estimated that in these two neighbourhoods at least 530,864 and 513,683 square meters, respectively, were surveilled by CCTV cameras – that’s 53.7% and 62.7% of the total area covered by volunteers.

In addition to earlier deployments of facial recognition capable devices beyond CCTV cameras, such as tablets and other “smart” cameras, the construction of the Command and Control Centre risks supercharging the already rampant rights-eroding practice, with no regulation in place to protect civilians.

Given that the Indian authorities have a record of using facial recognition tools in contexts where people’s human rights are at stake, such as to enforce lockdown measures, identify voters in municipal elections, and, in other states in India, police protests, the CCC is a worrying development. There is currently no safeguarding legislation which would protect the privacy of the citizens of Hyderabad, nor a law which would regulate the use of remote biometric surveillance, which further exacerbates the danger these technologies present. In such a situation, the deployment and use of this technology is harmful and must be stopped.

Even without the CCC, Telangana state has in recent years been the site of increased usage of dangerous facial recognition technologies against civilians and, according to a study by the Internet Freedom Foundation, has the highest number of facial recognition projects in India. 

Automating Harassment

Using open source intelligence, in this case publicly available videos from Twitter and news sites, Amnesty International’s Digital Verification Corps discovered dozens of purported incidents filmed from November 2019 to July 2021 showing Hyderabad police using tablets to photograph civilians in the streets while refusing to explain why. In one case, an alleged offender was subjected to unexplained biometric conscription. Other cases have shown the random solicitation of both facial and fingerprint reads from civilians. 

In May 2021, videos purportedly showed Hyderabad police asking civilians to, inexplicably, remove their masks, to obtain a biometric capture on an accompanying tablet.

Under the Identification of Prisoners Act, 1920, police are not permitted to photograph persons unless they have been arrested or convicted of a crime, nor to share such photographs with other law enforcement agencies. As the IFF has already stated, the photographic capture of civilians by police following mask removal would, in this case, be considered a violation. The IFF, together with Article 19 and Amnesty International, is concerned that these tablets may be equipped with facial recognition software, against the backdrop of its increased usage by law enforcement.

When biometric surveillance is deployed for as diverse contexts as municipal elections, Covid-19 restriction compliance and unexplained stop and frisk, there’s a slippery slope towards the complete surveillance of civic life, threatening our human rights to privacy, freedom of assembly, autonomy and dignity. 

Who Is Behind Facial Recognition In India?

Meanwhile, the vendors of facial recognition technologies to government agencies in India have remained woefully quiet. Amnesty reached out to the following companies to ask about their known facial recognition related activities in India, and to request they share any human rights policies they may have:

  • IDEMIA – For the deployment of the VisionPass system, for controlling contactless access under the conditions of COVID-19, using facial recognition. 
  • NEC India – It has been widely reported that NEC India provided Surat City Police in Gujarat with NEC’s NeoFace Reveal and NeoFace Watch facial recognition products in 2015.  More recently, the company was awarded contracts for providing facial recognition products at airports, including in Varanasi, Vijayawada, Pune and Kolkata.  This also comes after the company announced that its facial recognition algorithms were capable of identifying faces, even while wearing facemasks. 
  • Staqu – Staqu is known to have sold facial recognition technologies to law enforcement in at least eight states, including Uttar Pradesh – a state in which facial recognition was deployed by the police against anti-CAA protestors – as part of the Police Artificial Intelligence System. 
  • Vision-Box – Vision Box is known to be developing facial recognition products to be used in combination with a digital health pass system in India.  The company is already known for providing facial recognition technologies towards “paperless” travel at Delhi International Airport Limited. 
  • INNEFU Labs – Innefu Labs has sold facial recognition technologies to law enforcement in New Delhi, India, as part of the police’s Advanced Facial Recognition System (AFRS).  Innefu documents this particular relationship in a case study available on its website.  In Amnesty International’s letter from 25 September 2020, we already inquired about Innefu’s sales of surveillance tools to government agencies, as well as its human rights due diligence. In Innefu’s response, the company noted that it did not in fact have a stated human rights policy.

Out of the companies listed, only INNEFU responded to the letter, originally dated July 2021, stating that ‘the user agency is not under any obligation to adhere to any terms and conditions of the vendor’, without granting further responses to any of the 14 questions posed by Amnesty. Furthermore, in an earlier letter responding to questions by Amnesty for another investigation, the company made it clear that it did not have a “stated human rights policy”, but that it was in fact ‘follow[ing] the Indian laws and guidelines’.

Under international standards, such as the UN Guiding Principles on Business and Human Rights, all companies have a responsibility to respect human rights, meaning they must have a human rights policy in place, and take steps to identify, prevent, mitigate and account for the risks to human rights posed by their operations and any risks they are linked to through their business relationships, products or services. Facial recognition technology inherently poses a high risk to human rights, and these five vendors have failed to demonstrate they are adequately addressing and mitigating the risks of providing this technology to government agencies.

Share Resources to Help Your Community Resist!

Documenting uses and abuses of facial recognition? Reach out to legal groups like IFF, or campaigning and advocacy organizations like Article 19 and Amnesty International, to seek recourse, accountability, and redress.

Ref:

  1. Digital IDs are an effective tool against poverty. [https://www.gatesfoundation.org/ideas/articles/mosip-digital-id-systems]
  2. Gates Foundation identifies biometrics as one pillar of tech “trinity” for digital inclusion. [https://www.biometricupdate.com/201909/gates-foundation-identifies-biometrics-as-one-pillar-of-tech-trinity-for-digital-inclusion]
  3. Microsoft will no longer make minority investments. [https://www.fool.com/investing/2020/03/30/microsoft-will-no-longer-make-minority-investments.aspx]
  4. Gates Foundation commits $200M to digital ID and other public infrastructure. [https://www.biometricupdate.com/202209/gates-foundation-commits-200m-to-digital-id-and-other-public-infrastructure]
  5. Bill Gates-backed touchless screening startup Evolv to go public via a $1.7 billion SPAC deal. [https://techstartups.com/2021/03/08/bill-gates-backed-ai-security-startup-evolv-go-public-via-1-7-billion-spac-deal/]
  6. Facial recognition technology is “dangerously inaccurate”. [https://www.itpro.com/data-protection/31117/facial-recognition-technology-is-dangerously-inaccurate]
  7. Racial bias in facial recognition algorithms. [https://amnesty.ca/features/racial-bias-in-facial-recognition-algorithms/]
  8. Lawsuit filed in US cross-country arrest allegedly based on face biometrics. [https://www.biometricupdate.com/202309/lawsuit-filed-in-us-cross-country-arrest-allegedly-based-on-face-biometrics]
  9. Algorithmic governance and the rule of law: legal challenges in AI-driven decision making. [https://lawfullegal.in/algorithmic-governance-and-the-rule-of-law-legal-challenges-in-ai-driven-decision-making/]
  10. Facial recognition technology and the right to privacy in India. [https://lawfullegal.in/facial-recognition-technology-and-the-right-to-privacy-in-india/]
  11. Govt mandates facial recognition for mid-day meal POSHAN beneficiaries. [https://www.hindustantimes.com/india-news/govt-mandates-facial-recognition-for-mid-day-meal-poshan-beneficaries-101750272585688.html]
