India tightens IT Rules to rein in AI deepfakes

India has strengthened its IT Rules to regulate AI-generated and deepfake content, introducing tighter compliance requirements for social media platforms such as X, Facebook, Instagram and Telegram. The new rules define “synthetically generated information” and mandate prominent labelling along with permanent metadata to ensure traceability. Platforms must seek user declarations and deploy verification tools for AI content, with non-compliance risking loss of safe-harbour protection.

Takedown timelines have been sharply reduced, with certain orders requiring action within three hours and some complaints within two hours. The changes reinforce India’s assertive online regulation amid concerns over censorship and platform accountability.

India has introduced sweeping changes to its internet governance framework to regulate AI-generated and deepfake content more strictly, requiring online platforms to ensure clear labelling, embed permanent metadata and comply with significantly faster takedown timelines. By formally notifying new rules to curb the misuse of artificial intelligence (AI), the Centre has ushered in a stricter compliance regime for social media platforms such as X, Facebook, Instagram and Telegram.

On February 10, the Ministry of Electronics and Information Technology (MeitY) notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026. The amendments explicitly expand the scope of the law to cover what is termed “synthetically generated information.”

The revised rules will come into force on February 20, 2026, according to the official Gazette notification.

Definition and labelling of synthetic content

The notification defines synthetically generated information as any audio, visual or audio-visual content that is artificially or algorithmically created, generated, modified or altered using a computer resource in a manner that makes it appear real, authentic or true. However, the government has clarified that routine editing, formatting, technical corrections and the good-faith preparation of documents, PDFs, research outputs or educational materials will not fall within this category.

Under the amended framework, intermediaries that enable the creation or dissemination of such content must prominently label it. The rules require that synthetically generated information be marked in a manner that ensures clear and immediate identification by users. In addition, platforms must embed permanent metadata or technical provenance mechanisms, including a unique identifier, to trace the computer resource used to generate or modify the content.
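
The notification does not prescribe a specific metadata format or provenance standard, so the implementation of this traceability is left to platforms (content-provenance frameworks such as C2PA are one option). Purely as an illustrative sketch, and not a description of any mandated mechanism, the Python snippet below attaches a JSON provenance record carrying a unique identifier and the generating tool to a PNG file using the Pillow library, then reads it back. The function names and record fields are hypothetical, and genuinely "permanent" metadata would require cryptographically signed, tamper-evident manifests rather than a plain text chunk.

```python
import json
import uuid
from datetime import datetime, timezone

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def embed_provenance(src_path: str, dst_path: str, generator_tool: str) -> str:
    """Attach a provenance record to a PNG as an embedded text chunk.

    The record carries a unique identifier and names the computer resource
    (tool) used to generate the content, so the file can later be traced.
    """
    record = {
        "synthetic": True,                           # declares AI-generated content
        "content_id": str(uuid.uuid4()),             # unique identifier for traceability
        "generator": generator_tool,                 # computer resource used to create it
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

    image = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("synthetic_provenance", json.dumps(record))
    image.save(dst_path, pnginfo=meta)
    return record["content_id"]

def read_provenance(path: str) -> dict | None:
    """Read back the embedded provenance record, if present."""
    chunks = Image.open(path).text   # PNG text chunks exposed by Pillow
    raw = chunks.get("synthetic_provenance")
    return json.loads(raw) if raw else None
```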

Social media intermediaries are expressly barred from enabling the removal or suppression of these labels or embedded metadata.

Social media platforms will also be required to seek user declarations on whether content being uploaded is AI-generated, and to deploy automated tools to verify such disclosures. Where user declarations or technical verification confirm that the content is synthetically generated, platforms must ensure that it is clearly and prominently labelled.
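
The rules likewise leave the choice of verification tooling to platforms. A minimal sketch of the decision logic described above, with a purely hypothetical looks_synthetic() stub standing in for whatever automated detector or provenance check a platform actually deploys, might look like this:

```python
from dataclasses import dataclass

@dataclass
class Upload:
    content_id: str
    user_declared_synthetic: bool  # declaration collected from the uploader

def looks_synthetic(content_id: str) -> bool:
    """Stand-in for an automated check (e.g. scanning embedded provenance
    metadata or running an AI-content classifier). Purely illustrative."""
    return False

def moderate(upload: Upload) -> dict:
    """Require a 'synthetically generated' label if either the user's
    declaration or the automated check indicates AI-generated content."""
    is_synthetic = upload.user_declared_synthetic or looks_synthetic(upload.content_id)
    return {
        "content_id": upload.content_id,
        "label_required": is_synthetic,
        "label_text": "Synthetically generated" if is_synthetic else None,
    }

# Example: the uploader declares the content as AI-generated.
print(moderate(Upload(content_id="abc123", user_declared_synthetic=True)))
```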

Failure to comply could have serious consequences. If an intermediary is found to have knowingly allowed such content in violation of the rules, it may be deemed to have failed its due diligence obligations, thereby risking the loss of its safe-harbour protection.

Reduced takedown and grievance timelines

The amendments also tighten compliance timelines. Intermediaries must now act on certain lawful orders within three hours, compared to the earlier 36-hour window. User grievance redressal timelines have been reduced from 15 days to seven days, with specific categories of complaints requiring action within two hours.

Furthermore, AI-generated content involving child sexual abuse material, non-consensual intimate imagery, false documents, or misleading depictions of real individuals or events is explicitly prohibited.

Violations may result in immediate content removal, suspension or termination of user accounts, disclosure of user identities to affected individuals, and mandatory reporting to law enforcement under applicable criminal laws.

In recent years, India has issued thousands of takedown directives, as reflected in platform transparency reports. Meta alone blocked access to over 28,000 pieces of content in the country during the first half of 2025 in response to government requests.

Table: New timelines for social media platforms to act on prohibited content

| Action/requirement | Previous timeline | New timeline |
| --- | --- | --- |
| Removal of non-consensual sexual content (content depicting nudity, sexual acts, or morphed images/deepfakes) | Within 24 hours | Within 2 hours |
| Removal of other unlawful content upon govt/court order intimation | Within 36 hours | Within 3 hours |
| General grievance resolution | Within 15 days | Within 7 days |
| Action on specific prohibited content (defamation, harassment, privacy invasion etc.) | Within 72 hours | Within 36 hours |
| User notification of rules & policies | At least once every year | At least once every 3 months |

The move underscores India’s standing as one of the most assertive regulators of online content, compelling platforms to navigate compliance in a market of nearly one billion internet users amid rising concerns about potential government overreach. The notification, however, did not specify any reason for revising the takedown timelines. The amended rules also eased an earlier proposal that would have required platforms to display AI-generated labels across 10% of a post’s surface area or duration; instead, such content must now be “prominently labelled.”

Conclusion

The amendments signal a new phase in India’s digital governance, seeking to deter deepfakes and harmful AI misuse through traceability, tighter deadlines and stricter platform accountability. By mandating labelling and embedded metadata while scrapping the 10% watermarking proposal, the government has attempted to balance enforcement with industry concerns. The effectiveness of these rules will depend on implementation, oversight and their impact on innovation, user rights and freedom of expression.

 FAQ

  1. What are the new IT Rules aimed at?
    The amended IT Rules are designed to regulate AI-generated and deepfake content by introducing stricter compliance obligations for social media platforms, including labelling, metadata embedding and faster takedown timelines.
  2. What is “synthetically generated information”?
    It refers to audio, visual or audio-visual content that is artificially or algorithmically created or altered using computer resources in a way that makes it appear real or authentic.
  3. What are platforms required to do under the new rules?
    Platforms must prominently label AI-generated content, embed permanent metadata for traceability, seek user declarations, deploy verification tools and act within shortened compliance timelines.
  4. How have takedown timelines changed?
    Certain government or court orders must now be acted upon within three hours, while some categories of complaints—such as non-consensual intimate imagery—require action within two hours.
  5. What happens if platforms fail to comply?
    Non-compliance may result in loss of safe-harbour protection under the IT Act, exposing platforms to potential legal liability for user-generated content.

 
