Sunday, February 16, 2025

Discover the impact of artificial intelligence on human rights, focusing on areas such as surveillance, healthcare, and education.

In today’s world, artificial intelligence (AI) has become a prominent part of our daily lives. From virtual assistants to smart home devices, AI technology has revolutionized the way we interact with the world around us. 

One of the primary concerns about the widespread adoption of AI in schools is the potential impact on critical thinking skills. When students and teachers rely heavily on AI for tasks that require problem-solving and analysis, there is a risk that they may become less adept at thinking critically on their own. This over-reliance on AI could hinder students’ ability to develop valuable skills such as creativity, adaptability, and independent thought.

Additionally, the use of AI in schools could have adverse effects on students’ health. Excessive screen time and exposure to digital devices have been linked to a range of health issues, including eye strain, insomnia, and reduced physical activity. With the integration of AI technology into classrooms, there is a concern that students may spend more time in front of screens, leading to potential negative health outcomes.

To address these challenges, it is crucial for both government and schools to set clear boundaries around the use of AI technology. Governments can play a key role in regulating the use of AI to ensure that it aligns with human rights principles. By implementing laws and policies that govern the deployment of AI systems, governments can help protect individuals’ rights to privacy, freedom of expression, and non-discrimination.

Schools, on the other hand, can take proactive steps to mitigate the negative effects of AI on students’ learning and development. Educators can emphasize the importance of critical thinking skills and provide opportunities for students to engage in hands-on, experiential learning experiences that do not rely on AI technology. By instilling a culture of inquiry and curiosity in the classroom, schools can help students cultivate the skills they will need to do well in a world where AI plays a big role.

In the healthcare sector, AI has the potential to revolutionize patient care by improving diagnostics, treatment plans, and overall efficiency. However, there are concerns about the ethical implications of relying too heavily on AI in medical decision-making. Doctors and healthcare professionals must maintain a critical role in the decision-making process and not solely rely on AI algorithms for diagnosis and treatment recommendations.

AI algorithms can sometimes be biased, which might cause unequal healthcare outcomes for those who are part of marginalized groups.

As artificial intelligence continues to advance and integrate into various aspects of our daily lives, concerns about its impact on human rights have become more prevalent. From privacy issues to bias in decision-making algorithms, the influence of AI on human rights is a complex and evolving topic.  

The development of artificial intelligence technology has advanced at an unprecedented pace, revolutionizing various industries and sectors. However, as the capabilities of AI systems continue to expand, concerns about the potential infringement of fundamental rights have come to the forefront. 

While the benefits of artificial intelligence are undeniable, it is essential to set boundaries on its full adoption to prevent potential human rights violations.

AI AND INFRINGEMENT OF FUNDAMENTAL RIGHTS:

The integration of AI into various aspects of society has undoubtedly brought about numerous benefits and advancements. However, as with any technological innovation, the adoption of AI also raises concerns regarding its impact on human rights.

  1. Privacy Concerns: AI systems that collect and analyze personal data can pose a threat to privacy rights. For example, facial recognition technology used in surveillance systems can track individuals’ movements without their consent, raising concerns about mass surveillance and the right to privacy.
  2. Bias and Discrimination: AI algorithms can perpetuate bias and discrimination, leading to unfair treatment of individuals based on race, gender, or other characteristics. For example, AI-powered hiring tools may inadvertently discriminate against certain groups in the recruitment process.
  3. Freedom of Expression: AI has the potential to limit freedom of expression by censoring content or manipulating information. Social media platforms using AI algorithms to moderate content may inadvertently suppress dissenting opinions or promote misinformation.
  4. Automated Decision-Making: The use of AI in decision-making processes, such as in criminal justice or financial services, can raise concerns about accountability and transparency. Biased algorithms might result in unjust outcomes, infringing upon individuals’ rights to due process and fair treatment.
  5. Legal and Ethical Concerns: The lack of clear legal frameworks makes it difficult to hold AI systems accountable for rights violations.

Artificial Intelligence (AI) is revolutionizing numerous aspects of our lives, spanning technology, industry, healthcare, and governance. Tools such as ChatGPT, Bard, and Midjourney are replacing older ways of writing content, developing software, managing logistics, and much more. To ensure responsible AI implementation and address the associated legal challenges, India has begun framing laws and regulations.

This article provides a comprehensive overview of the laws governing AI in India, supplemented by relevant case laws and legal provisions.

What is ‘AI’?
Artificial Intelligence (AI) is the replication of human intelligence in machines, where they are programmed to mimic human-like thinking and learning abilities. It involves the development of computer systems that can undertake tasks traditionally requiring human intelligence, including comprehension of natural language, pattern recognition, decision-making, problem-solving, and adaptability to new circumstances.

Increase in Use of AI in India
According to Hindustan Times, India’s AI market is likely to see a 20% rise over the next few years. Former CJI S.A. Bobde has also spoken about adopting AI in the Indian legal system, which could help clear a large case backlog. Gone will be the days when it took 20 years to dispose of a criminal case or obtain a divorce. Many law firms, such as Cyril Amarchand Mangaldas and Fox Mandal, have already adopted this technology.

Legal AI platforms are also growing, such as Onelaw AI, Legal Robot, LeGAI, PatentPal, and Latch. According to IBEF, the number of AI start-ups in India increased 14-fold between 2000 and 2022.

How Not to Fully Adopt AI: The Dangers of Unchecked Implementation

While AI offers numerous benefits in terms of efficiency and productivity, its unchecked implementation can have severe consequences for human rights. Without proper regulations and ethical guidelines, AI systems may perpetuate existing inequalities and injustices in society. For example, biased algorithms used in predictive policing can disproportionately target marginalized communities, leading to violations of their rights to freedom from discrimination and arbitrary detention.

How AI affects human rights

Issues with Data Developed Using AI & AI Laws in India

Who shall be held responsible for erroneous data developed using AI? Although there is no specific AI legislation in India, there have been attempts to interpret and address such concerns through other legal provisions and the Fundamental Rights.

Artificial intelligence (AI) is arguably the next major technological leap the world is taking. By simulating human intelligence across different processes, AI enables seamless workflows and greater efficiency, with software trained to perform a variety of tasks.

AI has allowed companies to improve several aspects of their businesses. They do so by training AI systems on large volumes of data, teaching them to analyze the data for correlations and patterns, and to draw on those patterns to make apt predictions when encountering similar data in the future.
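This train-on-patterns, predict-on-similar-data idea can be sketched in a few lines of plain Python. The sketch is purely illustrative — the function names and data are invented for the example, and real systems use far richer statistical models:

```python
# Toy illustration: "training" records which feature values co-occurred with
# which outcomes; "prediction" votes using those recorded patterns.
from collections import Counter

def train(records):
    """Count how often each feature value co-occurred with each outcome."""
    patterns = {}
    for features, outcome in records:
        for f in features:
            patterns.setdefault(f, Counter())[outcome] += 1
    return patterns

def predict(patterns, features):
    """Each recognized feature contributes its historical outcome counts."""
    votes = Counter()
    for f in features:
        votes.update(patterns.get(f, Counter()))
    return votes.most_common(1)[0][0] if votes else None

# Hypothetical purchase history: (observed features, what the customer bought)
history = [
    ({"weekend", "rainy"}, "umbrella"),
    ({"weekend", "sunny"}, "sunglasses"),
    ({"weekday", "rainy"}, "umbrella"),
]
model = train(history)
print(predict(model, {"rainy"}))  # -> umbrella
```

The point of the sketch is that the system has no understanding of umbrellas or weather: it merely replays correlations found in its training data, which is also why the quality of that data matters so much.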

However, it is vital to note that although there are a wide range of benefits of using AI, there are certain issues that must be addressed: 

  • Who shall be held responsible for any erroneous data developed using AI?
  • How does the law in India address issues related to data developed by AI?

Who shall be responsible for erroneous data developed using AI? 

Although we know about AI’s remarkable capabilities, a question often asked is: ‘who shall be responsible for any incorrect data developed using AI?’ This question usually arises with opaque forms of AI, which rely on many parameters to analyze data before the system takes a decision, action, or inaction.

During development and deployment, an AI system is shaped by many different individuals and entities, and the development environment itself can have a substantial effect on the system’s behavior.
As the saying goes, ‘too many cooks spoil the broth.’

  • The involvement of several entities during the development and deployment of AI systems complicates the task of assigning responsibilities and determining everyone’s liability.
  • In civil suits, the cause of action must first be established; however, an opaque AI system, coupled with the huge number of factors behind each of its decisions, makes it quite difficult to identify any error in the data produced and to determine who is liable for it.

For example, in 1981, an engineer working at a Kawasaki Heavy Industries plant in Japan was killed by a robot used to perform certain tasks in the plant. It is regarded as the world’s first death caused by a robot. The robot had not been switched off while the engineer was carrying out repairs; it mistook him for an obstruction on the manufacturing line and swung its hydraulic arm to clear the ‘obstruction’, pushing him into an adjacent machine and killing him instantly. Although decades have passed since this incident, there is still no criminal framework for cases where robots are involved in a crime or cause injury.

Another case of negligence occurred in 2018, when Elaine Herzberg was struck by a test vehicle operating in self-driving mode. This became known as the first recorded death caused by a self-driving car. Uber’s Advanced Technologies Group (ATG) had modified the vehicle and added a self-driving system. Although a human operator was sitting in the car as backup, they were looking at their phone when the collision occurred. After the incident, the National Transportation Safety Board (NTSB) investigated the matter and found the following.

  • ATG’s self-driving system had sensed the individual 5.6 seconds before impact. Although the system continued to observe and record her until impact, it never clearly classified what was crossing the road (a pedestrian) or predicted the path she was taking.
  • If the vehicle operator had been attentive, they would have had ample time to avoid the crash or at least reduce the damage.
  • Although Uber ATG could supervise the behavior of their backup vehicle operators, they seldom did so. Their decision to remove the second operator from test runs of the self-driving system exposed their ineffective oversight.

However, irrespective of the findings in this case, it was quite difficult to determine liability for the damage done: should it rest with the safety driver, Uber ATG, or the technology itself?

Another case occurred in 2017, when Hong Kong-based Li Kin-kan assigned an AI-led system the task of managing USD 250 million of his own cash, along with additional leverage taken from Citigroup Inc., bringing the total to USD 2.5 billion.

The AI system had been built by a company based in Austria and was handled by Tyndaris Investments based in London. The system was designed to work by scanning online sources such as real-time news and social media platforms and make relevant predictions on US stocks.

However, by 2018 the system was losing money regularly, at times more than USD 20 million in a single day. The investor chose to sue Tyndaris Investments for allegedly overstating the AI’s capabilities. Again, however, determining the liable entity (developer, marketer, or user) was not simple.

It is vital to understand that AI can be biased and produce results that disadvantage certain sections of society, leading to inconsistent outcomes and potential conflict. For example, in 2015, Amazon experimented with a machine learning-based tool that assessed applicants by analyzing old resumes submitted to the company. The system went on to rate male applicants higher than female applicants: because the resumes it was trained on came mostly from men, it learned that male candidates had historically been preferred.
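A minimal sketch of how this happens: when a model does little more than summarize skewed historical data, its “scores” simply reproduce the skew. The data and group labels below are entirely hypothetical:

```python
# Toy illustration of how skewed historical data yields a skewed model.
# The "model" merely learns historical hire rates per group.
from collections import defaultdict

def learn_hire_rates(past_applications):
    """Summarize history as a hire rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in past_applications:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

# Hypothetical history dominated by one group's successful applications:
history = [("A", True)] * 80 + [("A", False)] * 20 + \
          [("B", True)] * 2 + [("B", False)] * 8

rates = learn_hire_rates(history)
# The model now scores group A far higher, purely reflecting past skew:
print(rates)  # {'A': 0.8, 'B': 0.2}
```

Nothing in the code is malicious; the discrimination is inherited entirely from the training data, which is why audits of training datasets matter as much as audits of the algorithms themselves.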

It must further be noted that incorrect decisions by AI systems can exclude individuals from accessing certain services or benefits. Because AI systems are fundamentally probabilistic, an incorrect decision in consequential situations, such as identifying a criminal or a welfare beneficiary, can cause serious harm. A beneficiary named incorrectly may be barred from availing certain services or benefits, while a person wrongly identified as a criminal can have their life ruined by a system error.

Legal Approaches for Managing AI in India

There are specific rules and guidelines for products and services in high-risk sectors such as healthcare and finance. Simply inserting AI into decision-making roles in these systems may therefore not be appropriate.

For example, not only is there no anti-discrimination law that directly governs decision-making by AI; the existing laws do not specify the means of decision-making they govern either.

Even so, regulating decisions made using AI would fall squarely within the remit of anti-discrimination legislation, especially where the entity deploying the AI has a constitutional or legal obligation to remain fair and unbiased.

Some existing laws aim to protect against AI-related harms in certain cases; however, they need to be suitably adapted to tackle the challenges AI raises. In addition, the distinct characteristics of different sectors create a need for sector-specific AI laws. Given the rapid pace of the technology’s development, AI ethics principles and guidelines will also need to be reviewed on an ongoing basis.

Machine learning models learn by identifying patterns across datasets: they are trained on one portion of the data (the training set) and evaluated on a held-out portion (the test set). Neither portion necessarily represents real-world scenarios. When the connection between input features and outputs is not properly understood, it becomes hard to foresee a model’s performance in a new environment with uncontrolled data, making such AI systems difficult to deploy and scale reliably.

For example, suppose a system is trained to identify animals using various datasets, but fails to deliver correct results when deployed in an uncontrolled environment. On inspection, it turns out the system classified images by their backgrounds rather than by the animals themselves. Although effective on its test datasets, such a system is incapable of handling real-world inputs.
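The background shortcut can be illustrated with a toy classifier that scores perfectly on a curated test set yet fails completely on real-world data, because it keys on the background rather than the animal. All images and labels here are invented for illustration:

```python
# Toy illustration of a spurious correlation: in the curated datasets the
# background always matches the animal, so a classifier keyed on the
# background looks perfect - until deployment breaks the correlation.

def background_classifier(image):
    # Shortcut "learned" from the data: grass means cow, snow means wolf.
    return "cow" if image["background"] == "grass" else "wolf"

def accuracy(classifier, dataset):
    """Fraction of (image, label) pairs the classifier gets right."""
    hits = sum(classifier(img) == label for img, label in dataset)
    return hits / len(dataset)

# Curated test set: background happens to correlate with the animal.
test_set = [
    ({"background": "grass"}, "cow"),
    ({"background": "snow"}, "wolf"),
]
# Real-world images: the correlation no longer holds.
real_world = [
    ({"background": "snow"}, "cow"),    # cow in a snowy field
    ({"background": "grass"}, "wolf"),  # wolf in a meadow
]
print(accuracy(background_classifier, test_set))    # 1.0
print(accuracy(background_classifier, real_world))  # 0.0
```

A high score on a test set drawn from the same curated distribution therefore says nothing, by itself, about how the system will behave once deployed.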

Besides, when AI systems are trained on large amounts of data that include individuals’ personal information, privacy concerns are bound to arise. Absent privacy protection, AI systems may record and retain personal details without individuals’ explicit consent, harming their interests by disregarding their preferences about how their data is used.

The Indian judiciary interprets ‘right to life and personal liberty’ under Article 21 of the Constitution of India to include several fundamental and vital aspects of human life.

In the case of R. Rajagopal vs. State of Tamil Nadu, the right to privacy was held to flow from Article 21, making it relevant to the privacy questions that arise when AI processes personal data.

Further, in the case of K.S. Puttaswamy vs. Union of India, the Supreme Court of India stressed the need for a comprehensive legislative framework for data protection capable of addressing the issues arising from AI usage. Since AI can also be unfair and discriminatory, Articles 14 and 15, which guarantee the right to equality and the right against discrimination respectively, may be attracted in such cases too.

India currently has no comprehensive framework governing the use of AI systems, although some sector-specific frameworks do address the deployment and development of AI.

In 2018, the National Institution for Transforming India (NITI Aayog) introduced the National Strategy on Artificial Intelligence (NSAI), which discussed several provisions related to the usage of AI systems. NITI Aayog’s suggestions are listed below:

  • Setting up a panel that includes the Ministry of Corporate Affairs and the Department of Industrial Policy and Promotion to monitor the changes needed in intellectual property laws.
  • Framing suitable IP procedures for AI innovations.
  • Introducing legal frameworks for data protection, security, and privacy.
  • Developing sector-specific ethics guidelines.

The Ministry of Electronics and Information Technology launched four committees to assess various ethical issues. In addition, the Bureau of Indian Standards has launched a new committee on AI standardization, and the Government is working on safety parameters to reduce the risks associated with the technology.

In January 2019, the Securities and Exchange Board of India (SEBI) issued a circular to stockbrokers, depository participants, recognized stock exchanges, and depositories on reporting requirements for AI and machine learning (ML) applications and systems offered and used; in May 2019, a similar circular went to all mutual funds (MFs), asset management companies (AMCs), trustee companies, Boards of Trustees of Mutual Funds, and the Association of Mutual Funds in India (AMFI).

In 2020, NITI Aayog prepared documents proposing an oversight body and the enforcement of responsible AI principles, covering the following key aspects:

  • Evaluating and applying principles related to responsible AI.
  • Forming legal and technical frameworks.
  • Setting specific standards through clear design, structures, and processes.
  • Educating individuals and raising awareness about responsible AI.
  • Creating new tools and techniques for responsible AI.
  • Representing India’s position on responsible AI at the global level.

Legal provisions governing AI in India


At this point, India has no specific provisions that deal with AI, and the government has acknowledged the gap. IT Minister Ashwini Vaishnaw has stated that there are no regulations for AI in India and that the government is not currently planning to bring in provisions regulating AI, given the many moral and ethical issues surrounding its growth. Policy responsibility for the sector rests with the Ministry of Electronics and Information Technology (MeitY).

Here are some provisions that more or less deal with AI:
Information Technology Act, 2000:
The Information Technology Act, 2000 (IT Act) serves as the fundamental legislation governing electronic transactions and digital governance. Although it does not explicitly mention AI, specific provisions within the Act apply to AI-related activities. Section 43A enables compensation in case of a breach of data privacy resulting from negligent handling of sensitive personal information, a provision particularly relevant to AI systems that process user data. Section 72A, which penalizes disclosure of personal information in breach of a lawful contract, is another relevant provision.
 
Case Law: In the landmark case of Justice K.S. Puttaswamy (Retd.) v. Union of India (2017), the Supreme Court of India recognized the right to privacy as a fundamental right under the Indian Constitution. This ruling emphasizes the need to safeguard personal data from AI-based systems.
 
Personal Data Protection Bill, 2019:
The Personal Data Protection Bill, 2019 (PDP Bill) aimed to establish a comprehensive framework for protecting personal data. The bill introduced principles and obligations for entities processing personal data, including consent, purpose limitation, data localization, and accountability, and proposed a Data Protection Authority to oversee and enforce its provisions. It also addressed profiling and automated decision-making, mandating explicit consent from individuals when personal data is processed using AI algorithms that significantly impact their rights and interests. (The bill was withdrawn in August 2022 and has since been superseded by the Digital Personal Data Protection Act, 2023.)
 
Indian Copyright Act, 1957:
The Indian Copyright Act, 1957 safeguards original literary, artistic, musical, and dramatic works, granting exclusive rights to creators and prohibiting unauthorized use or reproduction. The rise of AI-generated content has prompted discussions regarding copyright ownership and infringement liability.

Case Law: In Gramophone Company of India Ltd. v. Super Cassettes Industries Ltd. (2011), the Delhi High Court determined that AI-generated music produced by a computer program lacks human creativity and is therefore ineligible for copyright protection, clarifying the copyrightability of AI-generated content in India.
 
National e-Governance Plan:
The National e-Governance Plan aims to digitally empower Indian society by providing online government services. AI plays a vital role in enhancing the efficiency and accessibility of e-governance. Various government departments have integrated AI systems to automate processes, improve decision-making, and enhance citizen services.
 
New Education Policy:
The Indian government recently launched its New Education Policy (NEP), which includes provisions for coding classes for students from the 6th standard onwards. The government is focused on establishing India as the next innovation hub.
 
AIRAWAT:
Recently, NITI Aayog (the Government of India’s policy think tank, successor to the Planning Commission) also launched AIRAWAT, which stands for AI Research, Analytics and Knowledge Assimilation platform, to address the computing requirements of AI research in India.

Loopholes in the legal system of AI in India


The Indian legal system has several gaps when it comes to AI. Some of them are:
Insufficiency of Comprehensive AI-Specific Legislation:
Presently, India lacks dedicated legislation that specifically caters to AI. While certain provisions within existing laws, like the Information Technology Act, 2000, and the Digital Personal Data Protection Act, 2023, touch upon AI-related aspects, they do not comprehensively address the unique challenges and complexities posed by AI technologies.
 
Absence of Clear and Enforceable Ethical Guidelines:
The absence of well-defined and enforceable ethical guidelines for AI development and usage in India poses a challenge. This dearth of comprehensive guidelines may lead to inconsistent practices and potential misuse of AI systems.
 
Bias and Discrimination Concerns:
AI systems can inadvertently perpetuate biases and discrimination, as they heavily rely on historical data that may reflect existing societal biases. The current legal framework in India does not explicitly tackle issues related to bias and discrimination in AI algorithms, leaving room for potential discriminatory practices.
 
Accountability and Liability Challenges:
The complexity and autonomy of AI systems make it difficult to assign liability in case of harm or errors caused by these systems. Determining responsibility and accountability for AI-related incidents or accidents can pose legal challenges under the existing laws.
 
Lack of Sufficient Regulatory Oversight:
While the data protection framework provides for a dedicated data authority, there is a need for a regulatory body to comprehensively oversee AI technologies. The absence of a specific regulatory authority for AI can result in fragmented oversight and limited enforcement of AI-related regulations.
 
Intellectual Property Rights (IPR) Ambiguity:
The existing intellectual property laws in India may not adequately address the protection of AI-generated content, inventions, and innovations. Questions regarding copyright ownership and the patentability of AI-generated works can create ambiguity and uncertainty. These issues are known as ‘Attribution issues.’

By highlighting how not to fully adopt AI in schools, medicine, and healthcare, this article aims to shed light on the importance of maintaining critical thinking and ethical standards in the use of artificial intelligence. Governments, schools, and hospitals must collaborate to establish clear boundaries and regulations that protect human rights and ensure accountability in the age of AI.

Ref: Law Firm in India, Legal Service India
