India’s latest push on synthetic media is not a brand-new “AI law” so much as a firmer, more explicit warning to platforms that the obligations already sitting inside the country’s intermediary framework will be enforced against AI-generated misinformation and deepfakes. Reporting, including Times Now’s coverage of the directive, frames it as an order for social media companies to detect and label synthetically generated content—especially where it could mislead people—rather than leaving users to guess what is real.
Prior to this month, the underlying policy vehicle was an advisory from the Ministry of Electronics and Information Technology (MeitY), which reminded intermediaries (social platforms, messaging services, and other online services that host user content) of “due diligence” requirements and urged them to deploy measures to curb deepfakes and misinformation. The advisory itself is publicly available as MeitY’s “Advisory to intermediaries on deepfakes and misinformation” PDF, and it repeatedly points back to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
In practical terms, the government is telling platforms: you already have to stop users from uploading unlawful material and take it down quickly once you have “actual knowledge” (including via official notice); now apply that posture to synthetic media at scale, and make potentially misleading synthetic content more identifiable to ordinary users.
The legal spine: IT Rules, “safe harbour”, and why platforms care
MeitY’s advisory is built around the idea of “safe harbour”: the conditional legal protection intermediaries receive for third-party content. Under Indian law, that protection is tied to compliance with due diligence and takedown obligations; lose it, and a platform can be exposed to legal liability for what users post.
The baseline protection is in Section 79 of India’s Information Technology Act, 2000, which sets out when an intermediary is exempt from liability. MeitY’s message is that if a platform does not follow the rules—particularly around removing certain unlawful content after “actual knowledge” or official notice—it risks losing that exemption.
The operative requirements live in the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Those rules oblige intermediaries to publish user rules, act on prohibited categories of content, and operate grievance and compliance functions. MeitY’s advisory links synthetic media risks to these existing obligations, including content that misleads, impersonates, or is otherwise unlawful (common features of harmful deepfakes).
The government’s own summary of the advisory, via the Press Information Bureau, underscores that the “safe harbour” protection can be jeopardised for non-compliance—see PIB’s release on the deepfakes and misinformation advisory. That matters because safe harbour sets much of the boundary between hosting user content and being treated, in some circumstances, more like a publisher.
Labelling synthetic media sounds simple—until you try to do it at scale
“Detect and label” is an appealing policy phrase, but it is technically and operationally difficult. Detection can involve watermarking, metadata checks, provenance systems, machine-learning classifiers, user reporting, and cross-platform threat intelligence. Each approach has trade-offs.
Even best-in-class AI detectors are imperfect, and accuracy can shift with new model releases, compression artefacts, edits, re-uploads, translation, and adversarial tactics designed to evade identification. The risk is two-sided: false negatives (deepfakes slipping through) and false positives (legitimate content being labelled as synthetic, potentially chilling speech).
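To make those trade-offs concrete, here is a minimal sketch, in Python, of how a platform might fold several imperfect signals into one triage decision. Everything in it is a stated assumption: the signal names, weights, and thresholds are invented for illustration, not any platform’s actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class DetectionSignals:
    """Hypothetical per-item signals a platform might collect."""
    has_provenance_manifest: bool   # e.g. C2PA-style metadata survived upload
    watermark_detected: bool        # output of a (lossy) watermark decoder
    classifier_score: float         # ML model's estimated P(synthetic), 0.0..1.0
    user_reports: int               # "this looks fake" reports received

def triage(s: DetectionSignals) -> str:
    """Map noisy signals to one of three outcomes.

    The middle band exists because both error directions are costly:
    auto-labelling on a weak signal risks false positives, while
    ignoring it risks false negatives.
    """
    # Provenance metadata or a watermark that explicitly declares AI
    # generation is the strongest signal, but it can disappear under
    # re-encoding, edits, or screenshots.
    if s.has_provenance_manifest or s.watermark_detected:
        return "label_as_synthetic"
    # Illustrative thresholds; real systems tune these per content type.
    score = s.classifier_score + min(s.user_reports, 10) * 0.02
    if score >= 0.90:
        return "label_as_synthetic"
    if score >= 0.60:
        return "queue_for_human_review"
    return "no_action"

if __name__ == "__main__":
    item = DetectionSignals(False, False, classifier_score=0.72, user_reports=4)
    print(triage(item))  # -> queue_for_human_review
```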
MeitY’s advisory leans on a “reasonable efforts” framing—an expectation that platforms implement appropriate measures rather than guaranteeing perfect detection. It also signals urgency: several media reports say the government expects fast action on reported deepfakes, including removal within a specified period. For example, TechCrunch’s report on the advisory notes an expectation that platforms act on flagged synthetic content within no more than 3 hours after receiving notice. (In practice, timeframes and compliance triggers can vary depending on the type of notice, the category of platform, and the facts of a case, so platforms will typically interpret obligations with legal counsel.)
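To illustrate why that caveat matters operationally, the sketch below computes a response deadline from a notice timestamp. Only the three-hour figure comes from the reporting cited above; the notice categories and the other windows are placeholder assumptions, since the applicable window depends on the notice type and platform category.

```python
from datetime import datetime, timedelta, timezone

# Placeholder response windows keyed by notice type. Only the 3-hour
# figure comes from reporting on the advisory; the other entries are
# assumed values for illustration.
RESPONSE_WINDOWS = {
    "government_deepfake_notice": timedelta(hours=3),
    "user_grievance": timedelta(hours=72),
    "court_order": timedelta(hours=36),
}

def response_deadline(notice_type: str, received_at: datetime) -> datetime:
    """Deadline by which the platform must act on a notice."""
    return received_at + RESPONSE_WINDOWS[notice_type]

received = datetime(2025, 11, 12, 9, 30, tzinfo=timezone.utc)
print(response_deadline("government_deepfake_notice", received))
# -> 2025-11-12 12:30:00+00:00
```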
Labelling itself is not one uniform thing. Platforms can add visible “AI-generated” tags, attach context panels, limit distribution, restrict monetisation, or require disclosures at upload time. The more prominent the label, the more likely users are to notice it—but also the higher the chance of disputes when creators believe their work has been misclassified.
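One way to picture that design space is as a mapping from detection confidence and uploader disclosure to a bundle of treatments. The tiers and treatment names below are invented for illustration; no mandated scheme looks like this.

```python
def label_treatment(confidence: float, uploader_disclosed: bool) -> list[str]:
    """Pick label treatments for one item (hypothetical policy tiers).

    confidence: platform's estimated probability the media is synthetic.
    uploader_disclosed: whether the uploader self-declared AI use.
    """
    treatments: list[str] = []
    if uploader_disclosed:
        # Self-declared content gets a visible tag but no penalty.
        treatments.append("visible_ai_tag")
    elif confidence >= 0.95:
        # High-confidence detections get the most prominent treatment.
        treatments += ["visible_ai_tag", "context_panel", "limit_distribution"]
    elif confidence >= 0.75:
        # Medium confidence: informative but softer, to reduce
        # misclassification disputes with creators.
        treatments.append("context_panel")
    return treatments

print(label_treatment(0.97, uploader_disclosed=False))
# -> ['visible_ai_tag', 'context_panel', 'limit_distribution']
```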
Why Delhi is moving now: deepfakes, elections, and the attention economy
India’s regulatory pressure arrives amid a global rise in concern about AI-generated political propaganda, non-consensual synthetic pornography, financial scams, and misinformation that can spread faster than fact-checking can respond. Deepfakes can be particularly persuasive in a high-volume, video-first attention economy, especially when they exploit people’s tendency to trust what appears to be a recording.
Reuters, citing the advisory, reported that India asked platforms to label AI-generated content and warned of consequences for failing to prevent misinformation—see Reuters’ coverage of India’s request to label AI-generated content. While the advisory itself is technology-neutral, the policy logic presented by government communications is that platforms should invest in integrity systems that reduce harms associated with synthetic manipulation.
The timing may also reflect political realities. India is one of the world’s largest online markets, with extensive messaging and social media penetration. In such an environment, synthetic content can amplify communal tensions, distort public debate, or swamp timelines with plausibly misleading material. MeitY’s approach indicates a view that the platform layer—rather than individual posters alone—is a practical point for detection and friction at population scale.
The compliance burden will land unevenly
Large, well-resourced global platforms can respond by expanding trust-and-safety teams, deploying automated classifiers, integrating provenance standards, and adding new reporting tools. Smaller Indian platforms, startups, forums, and niche communities may find it harder to implement detection pipelines, content moderation at scale, and rapid-response processes—especially where moderation relies on small teams or volunteers.
This is one reason “detect and label” obligations can push the ecosystem towards consolidation: the cost of compliance is easier to absorb when a company has mature tooling, extensive resources, and specialist legal support. Over time, that may reshape competition, potentially disadvantaging smaller services even when they are not major sources of harmful deepfakes.
There is also a product and accessibility question: how should a platform design labels so they are meaningful in India’s multilingual context? A subtle English tag may do little for a Hindi, Tamil, Bengali, or Marathi user. If labels are too intrusive, some creators may shift to encrypted or fringe platforms; if labels are too subtle, they may be ignored. The government has not published a single mandated label format, leaving implementation to intermediaries—at least for now.
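Even the narrow problem of serving label text in the viewer’s language has an implementation shape. A minimal sketch, assuming a hypothetical locale table with an English fallback (the translations here are illustrative placeholders, not professionally reviewed strings):

```python
# Hypothetical translations; a real deployment would carry professionally
# reviewed strings across many more Indian languages.
AI_LABEL_TEXT = {
    "en": "AI-generated content",
    "hi": "एआई-निर्मित सामग्री",
    "ta": "AI உருவாக்கிய உள்ளடக்கம்",
    "bn": "এআই-নির্মিত কনটেন্ট",
}

def label_for(locale: str) -> str:
    """Return the label in the viewer's language, falling back to English."""
    return AI_LABEL_TEXT.get(locale.split("-")[0].lower(), AI_LABEL_TEXT["en"])

print(label_for("hi-IN"))  # Hindi label
print(label_for("mr"))     # falls back to English until Marathi is added
```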
Civil liberties, over-removal, and who decides what is “misleading”
Rules aimed at misinformation can collide with legitimate speech. Synthetic media includes clearly harmful uses (impersonation, fraud) but also satire, parody, dubbing, accessibility edits, and creative expression. A broad push to label “AI content” can end up stigmatising benign uses, particularly if labels are treated by users as a proxy for “untrustworthy”.
The advisory model—rather than a narrow statute defining prohibited synthetic acts—can also raise questions about consistency and process. Who decides whether a piece of media is sufficiently “synthetic” to be labelled? What appeal pathways exist for creators? How transparent must platforms be about their detection methods and error rates? These questions matter because automated enforcement can skew towards removal when penalties are high, especially if safe harbour is perceived to be on the line.
Some Indian digital rights commentators have argued that intermediary obligations can incentivise over-removal to avoid liability. MeitY’s deepfakes advisory, and the government’s public messaging about safe harbour, may intensify that dynamic—particularly during high-stakes periods when disinformation concerns are elevated.
What users will notice next on their feeds
Over the coming months, Indian users are likely to see more explicit “synthetic” or “AI-generated” labels on images, audio and video; more prompts asking uploaders to disclose AI use; and more friction around reposting content flagged as manipulated. A parallel change may be less visible: expanded logging, quicker response workflows, and tighter coordination between platforms and government channels for notice-and-action.
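The less visible half of that change is largely record-keeping. As a sketch, a structured notice-and-action log entry might capture fields like the following (the field names are assumptions, not a prescribed schema):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class NoticeActionRecord:
    """One hypothetical audit entry for a notice received and acted on."""
    content_id: str
    notice_type: str                  # e.g. "user_grievance", "official_notice"
    received_at: datetime
    action_taken: str                 # e.g. "labelled", "removed", "no_action"
    acted_at: datetime | None = None
    reviewer_notes: list[str] = field(default_factory=list)

    def response_hours(self) -> float | None:
        """Hours between notice and action, for SLA reporting."""
        if self.acted_at is None:
            return None
        return (self.acted_at - self.received_at).total_seconds() / 3600

rec = NoticeActionRecord(
    content_id="vid_123", notice_type="official_notice",
    received_at=datetime(2025, 11, 12, 9, 0, tzinfo=timezone.utc),
    action_taken="labelled",
    acted_at=datetime(2025, 11, 12, 11, 15, tzinfo=timezone.utc),
)
print(rec.response_hours())  # -> 2.25
```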
If implemented well, labelling could help normalise a simple habit: do not treat content as authentic just because it looks like a recording. If implemented poorly, it could become another noisy tag users ignore, or a blunt instrument that removes lawful content without adequate explanation.
But the amended intermediary rules don’t just say “label it” and walk away. They add several obligations, including:
- Technical measures: intermediaries must deploy technical measures so users don’t create/share unlawful synthetic content (and the rules explicitly contemplate automated tools).
- Prominent labels + audio disclosure: labels must be prominent for visual content; for audio or audiovisual content there is an “appropriate” notice, including the concept of a disclosure prefixed to the start of the content.
- Provenance/traceability: “permanent metadata or provenance mechanisms,” including a unique identifier to identify the computer resource used to create/modify the synthetic content (to the extent technically feasible); see the sketch after this list.
- Anti-removal: platforms shouldn’t enable suppression/removal of labels/metadata/identifiers.
- User declarations + verification: significant social media intermediaries must require users to declare whether content is synthetic, and take technical measures to verify those declarations and ensure labelling.
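To ground the provenance and declaration bullets, here is a minimal sketch of what “permanent metadata with a unique identifier” plus a declare-and-verify check could look like. It is an illustration of the concept only: the manifest format, field names, and hashing scheme are all assumptions, since the rules describe the outcome rather than a schema and leave implementation to intermediaries.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def build_provenance_manifest(media_bytes: bytes, tool_name: str) -> dict:
    """Attachable provenance record for a synthetically generated file.

    Hypothetical format: the rules speak of "permanent metadata or
    provenance mechanisms" and a unique identifier for the computer
    resource used, without mandating a schema.
    """
    return {
        "synthetic": True,
        "unique_id": str(uuid.uuid4()),           # identifier for this artefact
        "creating_tool": tool_name,               # resource used to create/modify
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

def verify_declaration(user_declared_synthetic: bool, manifest: dict | None) -> str:
    """Cross-check a user's upload declaration against embedded provenance.

    A toy version of the "declare and verify" obligation for significant
    social media intermediaries.
    """
    detected = bool(manifest and manifest.get("synthetic"))
    if user_declared_synthetic or detected:
        return "apply_synthetic_label"
    return "no_label"

media = b"...generated video bytes..."
manifest = build_provenance_manifest(media, tool_name="example-video-model")
print(json.dumps(manifest, indent=2))
print(verify_declaration(user_declared_synthetic=False, manifest=manifest))
# -> apply_synthetic_label (the declaration is contradicted by provenance)
```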
India appears to be signalling—through enforcement messaging and its intermediary advisory framework—that “we just host it” is less likely to be accepted as a sufficient posture when synthetic media can be used to convincingly impersonate people. The core test is whether platforms can build detection and labelling systems that are robust, transparent, and fair, without turning the response into overbroad moderation that unnecessarily affects ordinary users and creators.
