In recent months, TikTok has been adding mechanisms to keep teenagers from seeing certain types of content. These efforts are part of a broader push to make the platform safer for young users. The developers previously introduced the “Content Levels” rating system to filter out videos unsuitable for a young audience, and at the end of the year they announced additional filtering mechanisms.
Image Source: NurPhoto / Getty Images
The announcement concerns a new version of the technology used to automatically detect “sexually explicit, obscene, or potentially harmful” material. According to the developers, the updated system is much better at spotting so-called “potentially dangerous” content: videos that do not violate the platform’s rules but may nonetheless be unsuitable for a teenage audience.
Note that TikTok is not the only platform trying to filter certain types of content out of recommendations: Instagram* has long used a similar system to weed out potentially harmful material. The developers did not specify how accurate the new system is, but noted that over the past 30 days it had kept more than one million sexually explicit videos out of teenagers’ feeds. In addition, the platform now lets content creators restrict viewing of their videos if they are aimed at an adult audience. Previously this option was available only during live broadcasts, but it can now be applied to short videos as well.
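To make the two mechanisms concrete, here is a minimal illustrative sketch of how an age gate combining an automated maturity score with a creator’s adult-only flag could work. All names, thresholds, and the scoring scale (VideoRating, is_viewable_by, TEEN_MATURITY_LIMIT) are hypothetical assumptions for illustration; TikTok has not published its actual implementation.

# Illustrative sketch only: a hypothetical maturity-score gate,
# not TikTok's actual system. All names and thresholds are invented.
from dataclasses import dataclass

TEEN_MATURITY_LIMIT = 0.4   # hypothetical cutoff for 13-17 accounts
ADULT_MATURITY_LIMIT = 1.0  # adults can see anything that passed moderation

@dataclass
class VideoRating:
    video_id: str
    maturity_score: float          # 0.0 (all ages) .. 1.0 (adult-only), from a classifier
    creator_adult_only: bool = False  # creator opted to restrict the video to adults

def is_viewable_by(rating: VideoRating, viewer_age: int) -> bool:
    """Hide borderline videos from teens even if they break no platform rule."""
    if viewer_age >= 18:
        return rating.maturity_score <= ADULT_MATURITY_LIMIT
    # Teen accounts: both the creator's opt-out and the maturity threshold apply.
    if rating.creator_adult_only:
        return False
    return rating.maturity_score <= TEEN_MATURITY_LIMIT

# Example: a video scored 0.55 violates no rule but is filtered for a 15-year-old.
print(is_viewable_by(VideoRating("v1", 0.55), 15))  # False
print(is_viewable_by(VideoRating("v1", 0.55), 21))  # True

In this sketch a single threshold separates teen-visible from teen-hidden videos; a real system would presumably use several tiers, along the lines of the “Content Levels” feature mentioned above.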
* Included in the list of public associations and religious organizations in respect of which the court made a final decision to liquidate or ban activities on the grounds provided for by Federal Law No. 114-FZ of July 25, 2002 “On countering extremist activity.”