India has taken a bold step toward reining in artificial intelligence misuse with a new proposal that would force tech platforms to visibly label all AI-generated content, including deepfakes, manipulated videos, and synthetic audio.
The proposed framework mandates that an AI-generated image carry a clear marker covering at least 10 percent of its surface area, while AI-generated audio or video must display the tag within the first 10 percent of playback, as outlined in a recent report.
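To make the arithmetic concrete, here is a minimal sketch of how a platform might check those two thresholds. The draft itself prescribes no implementation, so the function names, the rectangular-label assumption, and the reading that the audio/video tag runs from the start of playback are all illustrative, not part of the proposal.

```python
# Minimal compliance sketch for the draft's two thresholds.
# Assumptions (ours, not the draft's): the label is a simple
# rectangle, and the audio/video tag is a contiguous interval
# beginning at t = 0.

def image_label_compliant(image_w: int, image_h: int,
                          label_w: int, label_h: int) -> bool:
    """An image label must cover at least 10% of the surface area."""
    return (label_w * label_h) >= 0.10 * (image_w * image_h)


def av_label_compliant(duration_s: float, label_end_s: float) -> bool:
    """The tag must appear within the first 10% of playback; here we
    require it to stay on through the 10% mark -- one plausible
    reading of the draft, not the only one."""
    return label_end_s >= 0.10 * duration_s


if __name__ == "__main__":
    # A 1920x1080 frame has 2,073,600 px, so the label needs at
    # least 207,360 px -- e.g. a full-width 1920x108 banner.
    print(image_label_compliant(1920, 1080, 1920, 108))  # True
    # A 60-second clip needs the tag through at least the 6 s mark.
    print(av_label_compliant(60.0, 6.0))                 # True
```

Even in this toy form, the rule's precision stands out: compliance reduces to a single inequality per asset, which is far easier to audit than a vague "clearly visible" standard.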
Officials say the move comes amid growing alarm over AI-driven misinformation and deepfake scandals that have rocked the country’s political and entertainment circles.
Just last month, a fabricated clip of Punjab Chief Minister Bhagwant Mann making false claims went viral, prompting police to launch an investigation into the creator of the video, according to ongoing coverage of the incident.
Cases like this have become frequent enough that regulators are racing to implement guardrails before the next election cycle.
The Ministry of Electronics and Information Technology has opened the draft for public feedback until early November, but officials have signaled they want quick enforcement.
This effort aligns with India’s broader crackdown on digital manipulation, which also saw the Election Commission issue a formal warning to political parties not to misuse AI for campaigning or satire, as detailed in the commission’s statement.
Experts say India’s quantified approach—requiring fixed percentages for visibility—is one of the most precise standards in the world.
Similar policies in the European Union and Spain have leaned on metadata or invisible watermarking systems, but none is as visually explicit as India’s proposal.
A comparative analysis from the World Economic Forum highlights that these laws could reshape how the global tech ecosystem approaches AI transparency.
From where I stand, the intention feels right: we need transparency in an age when digital deception is practically effortless.
Still, it’s hard not to wonder how this will play out in practice. Policing billions of uploads across languages and platforms is no easy feat.
And while the idea of mandatory labeling sounds solid, it risks being only as strong as the systems that detect violations.
For India, this could be a defining experiment in AI governance. If these rules work, they might become the blueprint for the rest of the world.
But if enforcement falters or compliance turns cosmetic, it could simply add another layer of bureaucracy to an already tangled digital landscape.
One thing’s clear—how India handles this will echo far beyond its borders.