AI is being misused for mass surveillance by governments and corporations:
In this session, Professor Amitav Banerjee introduces a discussion of the implications of artificial intelligence (AI), referencing the book “AI Snake Oil” by Princeton University professors Arvind Narayanan and Sayash Kapoor. The term “snake oil” describes the deceptive practices surrounding AI, and the discussion weighs both its benefits and the significant risks it poses.
What is “Snake Oil” in AI?
The term “snake oil” suggests that some AI technologies might be overhyped or misused—much like a questionable remedy that promises too much. While AI has incredible potential, it also raises significant issues, especially regarding ethics and legality.
The Dual Nature of AI
- Benefits of AI:
- AI can help create new content by analyzing vast amounts of data, essentially learning from patterns.
- For instance, text-to-image programs like Stable Diffusion generate images from text prompts, having been trained on a dataset (LAION-5B) of over 5 billion image–text pairs.
- Potential Harms:
- Because AI learns from existing images, it sometimes reproduces them, or close derivatives of them, without crediting the original artists, which raises copyright issues. Imagine a student copying an entire essay without referencing the author: it is taking someone else’s hard work.
- Many artists have expressed their concerns over this technology, fearing it could replace their work, leading to a decline in artistic creativity.
The Issue of Consent and Copyright
Using AI involves ethical dilemmas, particularly surrounding consent. AI systems are often trained on artists’ images without asking their permission. This situation isn’t just unfair; it may also be illegal. In the U.S., the core copyright framework is the Copyright Act of 1976, in effect since 1978, and it does not address the complexities introduced by AI.
- Creative Commons licenses permit certain uses of artwork with attribution, but many AI systems use images for commercial purposes without adequate attribution, creating legal gray areas.
Surveillance Concerns
AI isn’t only impacting creative fields; it’s also being used for surveillance. This poses serious risks to privacy. Technologies like facial recognition are being employed by governments and companies, often without the consent of individuals. For example:
- Authorities in Telangana, India, have used facial recognition during routine activities such as traffic stops without informing the people being photographed, prompting legal challenges.
Additionally, companies like Clearview AI scrape billions of images from social media to build their facial recognition tools, a practice that raises concerns about privacy invasion and misuse.
The Need for Regulation
With these issues in mind, it’s clear we need regulations to protect individual privacy and creative rights. Public pressure and advocacy can lead to stricter laws governing how AI is used for surveillance. In several countries, regulators have already fined companies for misusing facial recognition technologies, highlighting the urgent need for rules that keep pace with the evolving digital landscape.