India has announced a new proposal that could reshape how the internet handles artificial intelligence.

The government plans to make tech companies clearly label all AI-generated content, including deepfake videos, synthetic audio, and manipulated images, to combat rising online misinformation.

The proposed law requires labels to be visible on at least 10% of an image or to appear in the first 10% of an audio or video clip, according to details shared in a recent report.
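The reported thresholds can be made concrete with a minimal sketch. This is an illustrative interpretation only; the function names and the pixel-area reading of "10% of an image" are assumptions, not text from the proposal:

```python
# Illustrative sketch of the reported labeling thresholds (assumptions,
# not official guidance): label covers >= 10% of an image's pixel area,
# and appears within the first 10% of an audio/video clip's duration.

def min_label_area(width_px: int, height_px: int) -> int:
    """Minimum label area in pixels: 10% of the image's total pixels."""
    return (width_px * height_px) // 10

def label_deadline_seconds(clip_seconds: float) -> float:
    """Latest point (in seconds) by which the label must have appeared."""
    return clip_seconds * 0.10

# Example: a 1920x1080 image and a 60-second clip.
print(min_label_area(1920, 1080))      # 207360 pixels
print(label_deadline_seconds(60.0))    # 6.0 seconds into the clip
```

Under this reading, a Full HD image would need a label covering about 207,360 pixels, and a one-minute video would need its label shown within the first six seconds.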

The push follows a surge in AI misuse across India’s social platforms, where fake videos and cloned voices have started to blur the line between fact and fiction.

Just last month, a fabricated video showing Punjab Chief Minister Bhagwant Mann making false statements went viral, prompting an official police investigation into its origin.

The viral clip, later confirmed to be a deepfake created with generative AI, is now part of a broader national conversation about how far technology can be allowed to go, as detailed in an ongoing investigation.

Authorities have been equally concerned about the damage done by fake advertising and identity theft.

In one notable case, a Delhi court ordered the removal of AI-generated promotional content featuring false images of Sadhguru, warning platforms to take “immediate responsibility” for misleading material.

The court’s decision has intensified calls for accountability and stricter moderation, a point echoed in a judicial directive earlier this month.

The new rules also tie into a global trend of governments grappling with AI’s power to deceive.

Similar steps are being discussed in Europe and the United States, with countries experimenting with watermarking systems for generative media.

Analysts point out that India’s regulation could become a model for other democracies, given its enormous online population and rapidly expanding tech sector.

A recent analysis noted that as India moves toward major elections, curbing AI-generated misinformation may prove essential for protecting public trust.

Under the proposed framework, major platforms like Google, Meta, and X would have to deploy advanced detection systems to automatically tag AI-made content and ensure compliance.

Startups, however, worry the move could raise operational costs and create barriers to entry in the fast-growing AI industry.

Still, for policymakers, the priority is clear: safeguard digital authenticity, even if it means rewriting how online content is produced and shared.

Public consultation on the proposal remains open until early November, but officials have hinted that enforcement could begin as soon as next year.

In a world where truth can now be generated, India is taking its first real swing at making sure everyone knows when it is.
