The Ministry of Electronics and Information Technology (MeitY) has officially notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026. This isn’t just a policy update; it is a fundamental shift in how India defines “truth” in the digital age.

At the heart of this regulatory push is a brand-new legal category: “Synthetically Generated Information”.

By defining exactly what a deepfake is and stripping away the “safe harbor” protections for platforms that fail to police them, the Indian government has signaled that the honeymoon period for unregulated AI is over.

What is Synthetically Generated Information?

According to the notification, “Synthetically Generated Information” refers to any audio, visual, or audio-visual content—including voice notes, images, or videos—that is created or modified using computer resources.

The government’s “Litmus Test” for this content is its realism: if the information portrays a natural person or a real-world event in a way that is indistinguishable from reality, or is likely to be perceived as such, it falls under the new regulatory net.

However, the law provides a “Safe Zone” for professionals. Routine editing (color adjustment, noise reduction), standard document creation (PDFs, PPTs), and AI used solely for translation or accessibility are exempt, provided they do not “materially alter” the meaning or create a false record.

The “3-Hour” Kill Switch

In a dramatic tightening of timelines, the government has slashed the window for platforms to act. Previously, intermediaries had 36 hours to remove content following a court or government order. As of February 20, 2026, that window is now just three hours.

For complaints involving non-consensual sexual content or nudity, the response time has been reduced from 24 hours to a mere two hours.
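
For platform engineering teams, these windows translate directly into queue deadlines. Below is a minimal Python sketch of that arithmetic, assuming the two categories and hour values summarized above; the category names are illustrative placeholders, not a restatement of the rule text.

```python
from datetime import datetime, timedelta, timezone

# Hour values as summarized in this article; category keys are illustrative placeholders.
REMOVAL_WINDOWS = {
    "official_takedown_order": timedelta(hours=3),        # court or government order
    "non_consensual_sexual_content": timedelta(hours=2),  # nudity / intimate imagery complaints
}

def removal_deadline(category: str, received_at: datetime) -> datetime:
    """Latest time by which the platform must act on the complaint or order."""
    return received_at + REMOVAL_WINDOWS[category]

# Example: an order received now must be actioned within three hours.
print(removal_deadline("official_takedown_order", datetime.now(timezone.utc)))
```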

Comparison: Penalties and Consequences Under BNS 2023

The 2026 rules explicitly link IT violations to the Bharatiya Nyaya Sanhita (BNS), 2023. Users who create or disseminate illegal synthetic information are no longer just breaking “internet rules”—they are facing criminal charges.

Violation Type | Platform Action (Immediate) | Legal Liability (BNS 2023 / POCSO)
--- | --- | ---
Deepfake Pornography (Non-consensual imagery) | Removal within 2 hours; immediate account termination. | Prosecution under BNS 2023 and the POCSO Act (if involving minors).
Impersonation/Deception (Voice/Action cloning) | Removal of content; disclosure of user identity to the victim. | Charges for Forgery and Cheating under BNS 2023.
False Evidence/Records (AI-generated fake documents) | Removal and preservation of evidence for law enforcement. | Prosecution for creating False Electronic Records.
Incitement/Public Order (Synthesized riots/fake events) | Removal within 3 hours of official notice. | Prosecution for Inciting Public Mischief under BNS 2023.
Unlabeled AI Content (Benign but realistic) | Mandatory labeling; repeated failure leads to account suspension. | Violation of Due Diligence; platform loses immunity.

The Metadata “Digital Fingerprint”

The rules introduce a revolutionary requirement: Technical Provenance. Every piece of synthetic information must carry permanently embedded metadata and a unique identifier, and this digital fingerprint must identify the specific computer resource used to create the content.

Crucially, the law forbids platforms from allowing users to remove or suppress this metadata. If you make it with AI, the “who” and “how” will be baked into the file forever.
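
What “embedded permanent metadata” looks like in practice will depend on the provenance standards the industry settles on (C2PA-style content credentials are the most likely direction). Purely as an illustration, the Python sketch below writes a provenance record into a PNG text chunk with Pillow; the field names and the tool_id parameter are hypothetical placeholders, not anything prescribed by the rules.

```python
import hashlib
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def embed_provenance(src_path: str, dst_path: str, tool_id: str) -> str:
    """Embed a hypothetical provenance record into a PNG and return its identifier."""
    img = Image.open(src_path)
    # Unique identifier derived from the pixel data.
    content_hash = hashlib.sha256(img.tobytes()).hexdigest()

    meta = PngInfo()
    meta.add_text("SyntheticContent", "true")         # flag the file as AI-generated
    meta.add_text("ProvenanceTool", tool_id)          # the "computer resource" used
    meta.add_text("ContentIdentifier", content_hash)  # permanent unique identifier
    img.save(dst_path, pnginfo=meta)
    return content_hash
```

Note that plain text chunks like these can be stripped by any re-encode, which is precisely the behaviour the rules bar platforms from enabling; a production implementation would lean on cryptographically signed manifests rather than loose key-value pairs.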

The “Identify the Creator” Clause

In a move that will likely spark privacy debates, the new rules empower victims of deepfakes. If a person’s likeness is misused in synthetic media, platforms are now mandated to disclose the identity of the offending user to the complainant or victim, subject to applicable laws.
