Internet companies may be required to set up automated filters only for content related to child pornography under the draft rules for technology intermediaries, which are expected to be notified in the next few weeks.
The Ministry of Electronics and IT (MeitY), in its latest draft sent to the law ministry for vetting, had proposed that content related to terrorism and child pornography should be weeded out by built-in automated technology tools.
However, content related to terrorism could be dropped from automatic filtering as the definition is “broad”, a top government official told ET.
“We don’t want the guidelines to be misused by anyone,” the official said.
The final call will be taken after completing discussions with the law ministry, the official added.
This is a climbdown from MeitY’s December 2018 draft IT intermediary guidelines, according to which any “unlawful content” should be identified automatically by deploying software tools.
This had led to protests by social media companies, who argued that the rules were a means of censorship since unlawful content had not been clearly defined.
The narrowed definition is, however, expected to reduce ambiguity and assuage these companies, which had argued that they were only intermediaries hosting user-generated content on their platforms.
Many social media companies already have technology tools to weed out such content, but the government has been asking for more, especially from encrypted platforms such as WhatsApp, where the origin of messages is difficult to track.
More than 25,000 pieces of suspected child pornography material were uploaded across social media platforms in India in the last five months, the US-based National Center for Missing and Exploited Children told India’s National Crime Records Bureau, according to a report in the Indian Express.
A Rajya Sabha panel headed by Congress leader Jairam Ramesh — which looked into the issue of child sexual abuse material on social media — has also recommended changes to the Information Technology Act and its intermediary rules. It has recommended that law enforcement agencies be allowed to break encryption to trace the people distributing such material.
Internet service providers must bear liability for detecting and blocking websites showing such content, and search engines must ensure that such websites are blocked and report them to the authorities, the panel said.
ET reported last month that the government was expected to notify within two weeks the revised IT intermediary guidelines that seek to make social media companies more responsible for content on their platforms.
The latest version of the guidelines may propose additional responsibilities for social media companies compared to other intermediaries. These include verifying users through mobile numbers, tracing the origin of messages when required by a court order, and building automated tools to identify child pornography and terror-related content.
MeitY released the draft guidelines in December 2018, seeking to amend the Intermediary Guidelines Rules of 2011. The rules fall under Section 79 of the Information Technology Act, 2000, which provides internet companies safe harbour from liability for content hosted on their platforms, although it requires them to carry out a certain level of due diligence.
The guidelines, which have seen several rounds of changes, are keenly awaited, since they may require social media firms such as WhatsApp, TikTok and Facebook to significantly overhaul their operating models to comply with the law.