Thursday, April 2, 2026

India’s 2024 election is being influenced by AI, particularly deepfake technology, impacting the democratic process.

Deepfake democracy: Behind the AI trickery shaping India’s 2024 election

Want an opponent to campaign for you? Confuse voters between a real and a fake video? As India prepares for the world’s largest elections, parties are turning to AI for novel – and dangerous – strategies.

Qvive Editor: Deepfake technology, a form of artificial intelligence that can create highly realistic fake videos and audio recordings, is becoming increasingly prevalent in the political landscape of India. With the 2024 election on the horizon, the use of deepfakes is posing a serious threat to the integrity of the democratic process.

The spread of deepfakes has raised concerns about the authenticity of information in the digital age. With the ability to create convincing fake content, it has become increasingly difficult for the public to discern what is real and what is fake. This has the potential to undermine trust in the electoral process and erode the foundation of democracy.

The use of deepfake technology in the upcoming election is a stark reminder of the power of AI to manipulate and deceive. It is imperative that steps are taken to safeguard the democratic process and ensure that the voices of the people are not drowned out by the deceptive tactics of technology.

In a first, BJP leader used AI-generated videos during campaigning

Modi’s BJP has been both a pioneer in the use of AI in campaigning and a victim of deepfakes

Indian political parties are using deepfakes for the 2024 Lok Sabha Election campaigns, sparking worries about digital deception.

Al Jazeera: As voters queued up early morning on November 30 last year to vote in legislative elections to choose the next government of the southern Indian state of Telangana, a seven-second clip started going viral on social media.

Posted on X by the Congress party, which is in opposition nationally and was in opposition in the state at the time, the clip showed KT Rama Rao, a leader of the then-ruling Bharat Rashtra Samithi, calling on people to vote in favour of the Congress.

The Congress shared it widely on a range of WhatsApp groups “operated unofficially” by the party, according to a senior leader who requested anonymity. It eventually ended up on the official X account of the party, viewed more than 500,000 times.

It was fake.

“Of course, it was AI-generated though it looks completely real,” the Congress party leader told Al Jazeera. “But a normal voter would not be able to distinguish; voting had started [when the video was posted] and there was no time for [the opposition campaign] to control the damage.”

The astutely timed deepfake was a marker of the flood of AI-generated, or manipulated, media that marred a series of elections in India’s states in recent months, and that’s now threatening to fundamentally shape the country’s coming general elections.

Between March and May, India’s nearly one billion voters will pick their next national government in the world’s, and history’s, biggest elections. The threats posed by deceptive AI-generated media caught the world’s attention when faked sexually explicit images of the artist Taylor Swift appeared on social media platforms in January. In November, Ashwini Vaishnaw, India’s information technology minister, called deepfakes a “threat to democracy” and Prime Minister Narendra Modi has echoed those concerns.

But with the increased availability of handy artificial intelligence tools, teams across India’s political parties, including Modi’s Bharatiya Janata Party and the Congress, are deploying deepfakes to influence voters, managers of nearly 40 recent campaigns told Al Jazeera. While several AI tools used to generate deepfakes are free, others are available on subscription for as little as 10 cents per video.

‘Creating perception’

The BJP, arguably India’s most technologically sophisticated party, has been at the forefront of using technological illusions for campaigning. As far back as 2012, the party used 3D hologram projections of Modi so that he could “campaign” in dozens of places simultaneously. The strategy was deployed widely during the 2014 general elections that brought Modi to power.

Narendra Modi’s first 3D holographic projection speech in Ahmedabad, Gujarat

Watch: Narendra Modi’s hi-tech Election Campaign with 3-D

There was little deception involved there, but in February 2020, Manoj Tiwari, a BJP member of parliament, became among the world’s first to use deepfakes for campaigning. In three videos, Tiwari addressed voters in Delhi ahead of the capital’s legislative assembly elections in Hindi, Haryanvi and English – reaching three distinct audiences in the multicultural city. Only the Hindi video was authentic: The other two were deepfakes, where AI was used to generate his voice and words and alter his expressions and lip movement to make it almost impossible to detect, just on viewing, that they were not genuine.

In recent months, the Dravida Munnetra Kazhagam (DMK), which rules the southern state of Tamil Nadu, has used AI to “resurrect” its iconic late leader M Karunanidhi, playing lifelike videos of the former screenwriter and veteran politician at campaign events.

Now, consultants and campaign managers say the 2024 elections could turbocharge the use of deepfakes even further.

“Politics is about creating perception; with AI tools [of voice and video modulation] and a click, you can turn the perception on its head in a minute,” said Arun Reddy, the national coordinator for social media at the Congress, who oversaw the party’s tech-savvy Telangana election. He added that the team was bursting with ideas to incorporate AI in campaigning, but that they didn’t have enough “trained people” to execute them all.

Reddy is strengthening his team – as are other parties.

“AI will have a resounding effect in creating the narrative,” Reddy told Al Jazeera. “The political AI-manipulated content will increase multifold, much more than what it ever was.”

‘Campaigns are getting weirder’

From the desert town of Pushkar in western India, 30-year-old Divyendra Singh Jadoun runs an AI startup, The Indian Deepfaker. Launched in October 2020, his company cloned the voice of Rajasthan state’s Congress chief ministerial candidate Ashok Gehlot for his team to send personalised messages on WhatsApp, addressing each voter by their name, during November assembly elections. The Indian Deepfaker is currently working with the team of Sikkim’s Chief Minister Prem Singh Tamang for holograms during upcoming campaigns. Sikkim is one of India’s smallest states in the northeast, perched on the Himalayas between India, Bhutan and China.
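The voice-cloning campaign described above paired a synthesised voice with a message personalised for each voter by name. Voice synthesis itself is beyond a short sketch, but the personalisation step is simple templating. The snippet below is a minimal illustration only; the template text and voter names are invented, not taken from the actual campaign.

```python
# Toy sketch of the per-voter personalisation step described above.
# The real campaign reportedly delivered these as cloned-voice audio on
# WhatsApp; here we only render the text, one message per voter.
# Template wording and names are hypothetical.

MESSAGE_TEMPLATE = (
    "Namaste {name}, this is a message from the campaign. "
    "Please remember to vote on {date}."
)

def personalise(voters, date):
    """Return one rendered message per voter on the roster."""
    return [MESSAGE_TEMPLATE.format(name=v, date=date) for v in voters]

if __name__ == "__main__":
    roster = ["Asha", "Ravi", "Meena"]  # hypothetical roster
    for msg in personalise(roster, "25 November"):
        print(msg)
```

At campaign scale, the same loop would simply run over a full voter roster, with the rendered text fed to a speech-synthesis model before distribution.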

That’s the clean, official work, he said. But in recent months, he has been swamped by what he describes as “unethical requests” from political campaigns. “The political parties reach out indirectly via international numbers on WhatsApp, burner handles on Instagram, or connect on Telegram,” Jadoun told Al Jazeera in a phone interview.

In the November election, his company denied more than 50 such requests, he said, where potential clients wanted videos and audio altered to target political opponents, including with pornography. As a startup, Jadoun said his company is particularly careful to avoid any legal trouble. “And it is a very unethical use of AI,” he added. “But I know many people who are doing it for very low prices and are readily available now.”

During the election campaigns for the state legislatures of Madhya Pradesh in central India and Rajasthan in the west last November, police registered multiple cases over deepfake videos targeting senior politicians including Modi, Shivraj Singh Chauhan, Kailash Vijayvargiya (all BJP) and Kamal Nath (Congress). Deepfake production is often outsourced to private consulting firms, which rely on social media networks, chiefly WhatsApp, for distribution.

A political consultant who requested anonymity told Al Jazeera that numbers of ordinary citizens with no public profile are registered on WhatsApp and used for the campaigns to make it harder for anyone to directly trace them back to parties, candidates, consultants and AI firms.

This consultant ran six campaigns in assembly elections last year for both the BJP and Congress. “In Rajasthan, we were using phone numbers of construction labourers to run our network on WhatsApp,” they said, “where deepfakes were primarily circulated.”

Meanwhile, AI-manipulated audio is a particularly valuable tool in smaller constituencies, “targeting candidates with forged call recordings about arranging ‘black money’ for elections or threatening someone to buy votes,” said the consultant, whose own candidate was targeted with one such recording. The recordings are typically overlaid with a candidate’s cloned voice to pass them off as evidence of corruption.

“Manipulating voters by AI is not being considered a sin by any party,” they added. “It is just a part of the campaign strategy.”

India has 760 million internet users – more than 50 percent of the population – behind only China.

Among all the requests, one from a constituency in southern Rajasthan stood out to Jadoun. Ahead of the state election in November, the caller requested that Jadoun alter a problematic but authentic video of their candidate, whose party he did not disclose, to make a realistic deepfake. The aim: to claim that the original was a deepfake, and the deepfake the original.

“The opposition had a troubling video of their candidate and they wanted to spread it quicker on social media to claim it is a deepfake,” he said, bursting into awkward laughter. “Political campaigns are getting weirder.”

Threats to election integrity

Indian laws currently do not clearly define “deepfakes,” said Anushka Jain, a policy researcher at the Goa-based Digital Futures Lab. The police have been using laws against defamation, fake news or violation of a person’s modesty, combined with the Information Technology Act, to try to tackle individual cases. But often, they are playing whack-a-mole.

“The police are prosecuting on the effect of the deepfake and not because it is a deepfake itself,” she said.

Analysts say that the Election Commission of India (ECI), an autonomous body that conducts polling, needs to catch up with the shifting nature of political campaigns.

In the days leading up to the voting in Telangana state elections last year, ruling Bharat Rashtra Samithi party leaders repeatedly warned their followers on social media to stay alert against deepfakes deployed by the Congress party. They also appealed to the ECI against the deepfake clip that the Congress shared on the morning of the vote.

But the video remains online and the party never received notice from the ECI, two Congress leaders aware of the issue told Al Jazeera.

Al Jazeera has sought comments from the ECI but is yet to receive a response.

“Even if one person is misled into believing something and that changes his mind, it vitiates the purity of the election process,” said SY Quraishi, former chief election commissioner of India. “Deepfakes have made the problem of rumour-mongering during the polls graver by a thousand times.”

Quraishi said that deepfakes need to be moderated in real time to minimise the damage they can cause to Indian democracy.

“The ECI needs to take action before the damage is done,” he said. “They need to be a lot more prompt.”

‘Truth is out of reach’

The Indian government has been pressing major tech companies, including Google and Meta, to actively moderate deepfakes on their platforms. IT minister Rajeev Chandrasekhar has met officials from these firms as part of deliberations over the threats posed by deepfakes.

By asking the tech sector to take the lead, the government escapes criticism that it is selectively censoring deepfakes, or that it is cracking down on emerging AI technologies more broadly.

But by passing the buck to private companies, the government is raising questions about the sincerity of its intent to regulate manipulative content, said Prateek Waghre, the executive director of India’s Internet Freedom Foundation, a leading New Delhi-based tech policy think-tank. “It is almost wishful thinking,” he said.

Arguing that the tech companies have not been able to deal with the existing problems with content moderation, Waghre said that “the rise of AI now” has compounded challenges. And the current approach to content moderation ignores what’s really at the heart of the problem, he said.

“You are not solving the problem,” he said. “The design [of algorithms] is just flawed.”

On February 16, major tech companies signed an accord at the Munich Security Conference to voluntarily adopt “reasonable precautions” to prevent artificial intelligence tools from being used to disrupt democratic elections around the world. But the vaguely worded pact left many advocates and critics disappointed.

YouTube has announced that it will enable people to request the removal of AI-generated or altered content that simulates an identifiable person, including their face or voice, using its privacy request process.

“I’m not very optimistic about the platforms’ capabilities to detect deepfakes,” said Ravi Iyer, managing director of the Neely Center for Ethical Leadership and Decision-Making at the University of Southern California’s Marshall School of Business. “With low digital literacy and rising consumption of videos, this poses a grave risk to India’s election integrity.”

Identifying every piece of AI-manipulated media is not a realistic task, Iyer said, so companies need to redesign their algorithms so that they do not promote polarising content. “Companies are the ones with the money and resources, they need to take reasonable steps to tackle the rise of deepfakes,” he said.
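The design change Iyer points to, ranking that does not reward polarising content, can be sketched as a scoring function that subtracts a penalty for predicted polarisation instead of ranking on raw engagement alone. This is only a toy illustration; the field names, weights and scores are invented, and real ranking systems are vastly more complex.

```python
# Toy sketch of a feed-ranking rule that penalises predicted polarisation
# rather than rewarding raw engagement. All scores and weights below are
# hypothetical illustrations, not any platform's actual formula.

def rank_score(post, polarisation_penalty=2.0):
    """Engagement minus a penalty proportional to predicted polarisation."""
    return post["engagement"] - polarisation_penalty * post["polarisation"]

def rank_feed(posts):
    """Order posts by the penalised score, highest first."""
    return sorted(posts, key=rank_score, reverse=True)

if __name__ == "__main__":
    feed = [
        {"id": "calm",    "engagement": 5.0, "polarisation": 0.1},
        {"id": "outrage", "engagement": 8.0, "polarisation": 2.0},
    ]
    # Despite higher raw engagement, the polarising post ranks below
    # the calmer one once the penalty is applied.
    print([p["id"] for p in rank_feed(feed)])  # → ['calm', 'outrage']
```

The point of the sketch is the sign of the second term: under pure engagement ranking the “outrage” post would come first, while the penalised score inverts that ordering.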

The Internet Freedom Foundation has published an open letter urging electoral candidates and parties to voluntarily refrain from using deepfake technology ahead of the national elections. Waghre isn’t confident that many will bite, but he said it’s worth a try.

Meanwhile, political campaigns are bolstering their AI armouries – and some, like Reddy, the national coordinator for social media at Congress, concede that the future looks dark.

“Most people using AI are out there to distort the facts. They want to create a perception that’s not based on truth,” said Reddy. “Combine the penetration of social media in India with the rise of AI, the truth will be out of reach of people in the elections now.”

How AI-Generated Images Took Centre Stage In Telangana Elections

During the Telangana elections, AI-generated images were widely circulated. What will this trend mean for the 2024 Lok Sabha elections?

Ref: https://www.boomlive.in/decode/how-ai-generated-images-took-centre-stage-in-telangana-elections-24076

India is committed to responsible and ethical use of AI: PM Modi

PM Modi addressed the Global Partnership on Artificial Intelligence (GPAI) Summit at Bharat Mandapam, New Delhi. “The development mantra of India is ‘Sabka Saath Sabka Vikas’,” the Prime Minister said, underlining that the government has drafted its policies and programmes in the spirit of AI for All. He said the government strives to take maximum advantage of AI’s capabilities for social development and inclusive growth, while also committing to its responsible and ethical use.

Source: Al Jazeera; images: Pune Pulse, Creativeblock, Businesstoday, Decode
