The Indian government overhauls intermediary rules to enhance oversight of AI-generated content, imposing strict labelling requirements and reducing content removal timelines to combat manipulation and deepen platform accountability.
The Indian government has overhauled its intermediary rules to tighten oversight of content produced or altered by artificial intelligence, imposing mandatory labelling and much faster takedown obligations on large online platforms. According to coverage of the new measures, firms that host user material will have an explicit duty to mark synthetic audio, visual and audiovisual items so audiences can distinguish manipulated material from original content. (Sources: Times of India, Business Today)
The amendments, to take effect on 20 February 2026, shrink the window for platforms to remove content deemed unlawful by competent authorities to three hours in most cases and to two hours for especially sensitive categories such as non‑consensual intimate imagery and deepfakes. Industry reporting says the change represents a substantial acceleration from the previous 24–36 hour compliance period. (Sources: Times of India, India Today)
The rules formally define “synthetically generated information” as audio, visual or audiovisual material created or altered to appear authentic, bringing such material squarely within the scope of the IT Rules’ unlawful content provisions. Government notices and reporting also make clear that routine camera edits, accessibility adjustments and bona fide educational or design work are excluded from that definition. (Sources: Business Today, Onmanorama)
Regulators are demanding not only visible labelling but, where technically practicable, embedding persistent metadata and unique identifiers to support traceability. Draft proposals earlier called for more prescriptive coverage of labels, but the finalised amendments soften some of those requirements while retaining obligations for platforms to obtain disclosures from users and to prevent removal of labels or identifiers once applied. (Sources: Times of India, Times of India (business))
Enforcement is being stepped up: platforms that fail to comply risk forfeiting safe harbour protections that shield intermediaries from liability for user‑posted material, and the rules instruct companies to demonstrate due diligence in monitoring, detection and removal. The government has also encouraged the use of automated tools to curb the spread of illegal, deceptive or sexually exploitative synthetic content. (Sources: Business Today, Onmanorama)
Practical adjustments accompany the tougher deadlines. The ministry has allowed multiple designated officers in populous states to issue takedown directions to avoid bottlenecks, and the regulations include carve‑outs for minor automated edits applied by smartphones. Observers say the package reflects a broader push to balance online safety, accountability and technical feasibility as AI‑generated material becomes more widespread. (Sources: Times of India, Onmanorama)
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 10
Notes: The article reports on recent amendments to India’s IT intermediary rules, notified on 10 February 2026 and set to take effect on 20 February 2026. This is the earliest known publication date for this specific information, indicating high freshness. The narrative does not appear to be recycled from other sources, and there are no discrepancies in figures, dates, or quotes. The content is original and timely.
Quotes check
Score: 10
Notes: The article does not include direct quotes, which is appropriate for a factual news report. The information is paraphrased from the original sources, and no identical quotes appear in earlier material. This approach ensures originality and avoids potential reuse of content.
Source reliability
Score: 10
Notes: The article is sourced from reputable publications: The Times of India, Business Today, and Onmanorama. These are established news outlets with a history of reliable reporting. The lead source, The Times of India, is a major news organisation, enhancing the credibility of the information.
Plausibility check
Score: 10
Notes: The claims made in the article align with the reported amendments to India’s IT rules, including mandatory labelling of AI-generated content and accelerated takedown timelines. These developments are corroborated by multiple reputable sources, confirming the plausibility of the information.
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): HIGH
Summary: The article provides timely and original information on India’s new IT rules regarding AI-generated content, sourced from reputable and independent news outlets. The content is factual, with no discrepancies or concerns identified. All checks have been passed with high scores, indicating strong credibility.