Seeing Through the Fake: India’s New Labelling Rules for AI-Generated Content

A CMPR research-led analysis of India’s new AI labelling rules (DPDP), exploring how disclosure requirements for synthetic media affect elections, newsrooms, creators, and public trust.

Artificial intelligence is rapidly reshaping how media is produced, circulated, and consumed. From news visuals and advertising creatives to entertainment trailers and social media posts, AI-generated content is becoming a routine part of the digital ecosystem. At the Centre for Media and Policy Research (CMPR), our ongoing media monitoring shows a sharp increase in the everyday use of synthetic images, audio, and video across Indian digital platforms.

While these tools enable faster production and new creative possibilities, they also raise serious concerns around misinformation, authenticity, and public trust. In response, India has proposed new labelling requirements under the evolving Digital Personal Data Protection (DPDP) framework and IT governance rules. These measures aim to ensure that audiences can clearly identify when content has been created or altered using AI. This article examines the new rules from a media research perspective and outlines their implications for platforms, creators, and newsrooms.

What Counts as AI-Generated Content?

The Ministry of Electronics and Information Technology (MeitY) defines synthetically generated information as content that is:

“Artificially or algorithmically created, generated, modified, or altered using a computer resource in a manner that appears reasonably authentic or true.”

This definition covers a wide range of outputs, including AI-generated images, deepfake videos, cloned voices, synthetic news visuals, and algorithmically altered audio. The emphasis is not just on creation but on realism: content that appears authentic enough to mislead audiences.


The Growing Problem of Synthetic Media

CMPR’s research tracking visual and audio trends shows that synthetic media is no longer experimental. Newsrooms increasingly use AI-generated illustrations and explainers. Entertainment studios rely on generative tools for pre-visualisation and marketing assets. Short-form video platforms host thousands of daily uploads featuring AI filters, face swaps, and voice cloning.

Most of this content is benign. However, our misinformation studies consistently show spikes in AI-generated political content, especially during elections, protests, or crises. Deepfaked public figures and fabricated visuals often circulate without clear context, creating a gap between creator intent and audience interpretation. This gap has become a key driver behind India’s push for clearer labelling standards.

What India’s New Rules Propose

Under the proposed framework aligned with the DPDP Act and future IT Rules amendments, AI-generated or significantly altered content must be clearly labelled or watermarked. Platforms will be expected to identify synthetic media and display visible notices when users encounter it.
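
To make the idea of a visible notice concrete, the short Python sketch below shows one way a platform pipeline could stamp a disclosure strip onto an image before it is served. It is an illustrative example using the Pillow imaging library; the label text, placement, and styling are our own assumptions, since the proposed rules describe the requirement for a visible label rather than its exact format.

    # A minimal sketch of platform-side visible labelling, assuming a Python
    # image pipeline with the Pillow library installed. The label wording,
    # placement, and styling are illustrative assumptions, not prescribed rules.
    from PIL import Image, ImageDraw

    def add_ai_label(input_path: str, output_path: str,
                     label: str = "AI-generated content") -> None:
        """Overlay a visible disclosure strip along the bottom of an image."""
        img = Image.open(input_path).convert("RGB")
        draw = ImageDraw.Draw(img)
        width, height = img.size
        # Draw a dark strip so the label stays readable on any background.
        strip_height = max(24, height // 20)
        draw.rectangle([(0, height - strip_height), (width, height)], fill=(0, 0, 0))
        draw.text((10, height - strip_height + 4), label, fill=(255, 255, 255))
        img.save(output_path)

    # Hypothetical file names, for illustration only.
    add_ai_label("synthetic_visual.jpg", "synthetic_visual_labelled.jpg")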

Creators, influencers, and publishers will be required to disclose when AI tools have been used in creating or modifying content. Special attention is given to politically sensitive material. Any AI-generated content involving elections, public figures, or government communication must carry explicit disclaimers.

From a research perspective, this signals a growing policy focus on information integrity during election cycles rather than a blanket restriction on AI creativity.

Why Labelling Matters

Our media audits show how quickly manipulated visuals can influence public sentiment, even when later debunked. While labelling is not a complete solution, it provides an immediate contextual cue for audiences.

The proposed rules serve three key public interests:

  • Protecting elections from AI-fabricated political messaging
  • Reducing misinformation by distinguishing synthetic media from authentic content
  • Rebuilding trust in digital information environments where visual proof is increasingly unreliable

Implications for Newsrooms

Verification workflows will need to evolve. Traditional checks, such as metadata analysis and reverse image searches, are often ineffective against synthetic media, which may lack metadata or convincingly mimic real footage.
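
To illustrate this limitation, the hedged Python sketch below reads an image’s EXIF record with the Pillow library. The file name is hypothetical, and an empty result is common both for synthetic outputs and for legitimately stripped images, which is exactly why metadata checks cannot carry verification on their own.

    # A minimal sketch showing why metadata checks alone are unreliable:
    # many AI-generated images simply carry no EXIF record at all.
    # The file name below is hypothetical.
    from PIL import Image

    def summarise_exif(path: str) -> None:
        """Print basic EXIF fields if present, or flag their absence."""
        exif = Image.open(path).getexif()
        if not exif:
            print(f"{path}: no EXIF metadata found; absence alone proves nothing,")
            print("but it removes one traditional verification signal.")
            return
        for tag_id, value in exif.items():
            print(f"{path}: tag {tag_id} = {value}")

    summarise_exif("suspect_image.jpg")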

CMPR recommends that newsrooms adopt:

  • Mandatory AI-detection tools during verification
  • Clear disclosure policies for AI-generated visuals and reconstructions
  • Cross-functional training for editorial, design, and social media teams


What This Means for Creators and Platforms

Creators will need to integrate labelling and disclosure into their production pipelines. Influencers and advertisers using AI-assisted visuals will also fall under these requirements, making transparency essential for credibility.

Platforms face greater challenges. They must invest in detection systems, automated labelling, and reporting mechanisms. Our analysis suggests that smaller platforms may struggle with compliance costs, potentially leading to uneven enforcement across the ecosystem.
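
As an illustration of what automated labelling could look like in practice, the sketch below routes a content item through a hypothetical synthetic-media detector and records whether a visible notice was attached. The detector, threshold, and record format are assumptions made for this example; production systems would combine multiple signals with human review.

    # A minimal sketch of an automated labelling decision on the platform side.
    # The detector, threshold, and record format are assumptions for illustration.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class LabelDecision:
        content_id: str
        synthetic_score: float
        labelled: bool

    def route_content(content_id: str,
                      detector: Callable[[str], float],
                      threshold: float = 0.8) -> LabelDecision:
        """Score an item with a (hypothetical) detector and decide whether
        to attach a visible 'AI-generated' notice."""
        score = detector(content_id)
        return LabelDecision(content_id, score, labelled=score >= threshold)

    # Stand-in detector that returns a fixed score, for demonstration only.
    decision = route_content("post-123", detector=lambda _id: 0.91)
    print(decision)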

How India Compares Globally

India’s approach broadly aligns with international trends:

  • EU: The AI Act mandates disclosure of deepfakes, especially in political contexts
  • US: Platform-led voluntary labelling dominates, with state-level variations
  • China: Strict enforcement with mandatory watermarking and creator identification

India’s model sits between rights-based regulation and enforcement-heavy governance, with a clear emphasis on electoral integrity and platform responsibility.

Challenges Ahead

Detection technology is still evolving, watermarking standards vary, and public awareness remains limited. CMPR’s audience research shows that many Indian users are unfamiliar with terms like “deepfake” or “synthetic media.” Without parallel investments in media literacy, labelling alone may have a limited impact.

References

  1. Ministry of Electronics and Information Technology (MeitY), Government of India
     Advisories on AI-generated content and misinformation
     https://www.meity.gov.in
  2. Digital Personal Data Protection Act, 2023 (India)
     Official text and explanatory materials
     https://www.meity.gov.in/data-protection-framework
  3. Press Information Bureau (PIB), Government of India
     Government statements on deepfakes and AI misuse
     https://pib.gov.in
  4. Election Commission of India
     Guidelines on misinformation and digital campaigning
     https://eci.gov.in
  5. European Commission – Artificial Intelligence Act
     Transparency and disclosure requirements for synthetic media
     https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
  6. White House – Executive Order on Artificial Intelligence (USA)
     Voluntary and regulatory approaches to AI transparency
     https://www.whitehouse.gov/briefing-room/presidential-actions
  7. Cyberspace Administration of China
     Regulations on deep synthesis and AI-generated content
     https://www.cac.gov.cn
  8. UNESCO
     AI, media integrity, and information disorder
     https://www.unesco.org/en/artificial-intelligence
  9. Brookings Institution
     Deepfakes, democracy, and policy responses
     https://www.brookings.edu

Author: Bilvraj Mangutkar
