DECODING 3-HOUR DEADLINE FOR COMPANIES TO AXE FLAGGED AI-MADE POSTS

The latest amendments to the Information Technology rules have brought artificial intelligence within the legal ambit for the first time, while also mandating drastically shorter timelines for technology intermediaries to take down flagged unlawful content.

Focussed on bringing in a labelling mandate for AI-generated content in India, the rules have gone through a series of changes since the government floated draft amendments in October last year. Meanwhile, the Ministry of Electronics and Information Technology argued for much quicker compliance in a series of cases, necessitated by unlawful content and deepfakes targeting women and children going viral within hours of being posted. Subhayan Chakraborty decodes the shifting legal landscape.

Key terms:

Intermediaries: Entities that receive, store, or transmit electronic records on behalf of another person, or provide services concerning such records.

Synthetically Generated Information (AI content): Audio, visual or audio-visual information which is artificially or algorithmically created, generated, modified or altered using a computer resource in a way that makes it appear real, authentic or true. Also, information depicting any individual or event in a manner likely to be perceived as indistinguishable from a natural person or real-world event.

Good Faith Edits: Edits that use AI only to format or enhance content quality, such as technical correction, colour adjustment or noise reduction, without materially altering the underlying information.

Unlawful content: Any information prohibited under any law, including information relating to national sovereignty and integrity, state security, friendly relations with foreign countries, public order, decency or morality, contempt of court, defamation and incitement to an offence.

Changes from the draft rules released in October 2025:

Good faith and routine edits exempted from mandatory labelling

Reason: Officials say monitoring all instances of AI-generated or -modified content is unnecessary and would divert intermediaries' resources from combating deepfakes.

Condition that a minimum of 10% of the surface area of images, and of audio, be devoted to labelling dropped in favour of 'prominent labelling'

Reason: According to officials, industry argued that the 10% rule would take up too much space, making content difficult to view, especially on small screens.

Child sexual exploitation and abuse material, non-consensual intimate imagery, and obscene, pornographic and paedophilic content clearly spelt out

Reason: Rising instances of deepfakes targeting vulnerable groups seen across platforms. In January, the Centre sent notices to social media platform X over its AI chatbot Grok churning out controversial images.

Major changes in compliance timeline:

For mandatorily taking down flagged content, whether AI-generated or not, that is used to commit an unlawful act prohibited under any law in force: 3 hours (down from 36 hours)

For resolving all user complaints received by the grievance officer: 7 days (down from 15 days)

For resolving grievances specifically related to content which is pornographic, invades another person’s privacy, harms a child, impersonates another person, contains a virus, is a misleading communication, or advertises banned online games: 36 hours (down from 72 hours)

For removing content showcasing nudity, sexual acts, or non-consensual intimate imagery of an individual after receiving a complaint from that individual: 2 hours (down from 24 hours)

For informing users that their access rights may be terminated in case of non-compliance with the intermediary’s rules and regulations, privacy policy or user agreement: once every 3 months (earlier, at least once a year)

For more news like this visit The Economic Times.

February 12, 2026