TikTok is taking action against the continued spread of harmful challenges and hoaxes on its platform. The company said it surveyed over 10,000 teens, parents, and teachers across several countries, including the U.S. and UK.
The survey found that at least 31% of teens have participated in an online challenge of some kind. Teens were also asked about the risk level of each challenge they encountered online: roughly 48% said the challenges were safe, 32% said they carried some risk, and 14% deemed them risky or dangerous. Around 3% of respondents described the challenges as “very dangerous.”
Only 0.3% of the teens participating in the survey said they took part in a dangerous challenge. TikTok’s study further finds that 46% of teens on the platform seek more information and want to understand the risks involved.
TikTok is also updating its Safety Center with tips on identifying hoaxes and dangerous challenges
Moreover, 31% of respondents said that hoaxes related to suicide and self-harm had a negative impact on them personally. This concern isn’t just limited to teens, however. Nearly 37% of adult respondents said they couldn’t debunk or discuss self-harm-related hoaxes without attracting unnecessary attention.
TikTok says it removes hoaxes from its platform and takes the necessary steps to limit their reach. But the company also acknowledges that it needs to do more.
“The research showed how warnings about self-harm hoaxes — even when shared with the best of intentions — can impact the well-being of teens since they often treat the hoax as real,” TikTok said in a release (via).
“While we already remove and take action to limit the spread of hoaxes of this nature, to further protect our community we will begin to remove alarmist warnings about them as they could cause harm by treating the self-harm hoax as real. We will continue to allow conversations to take place that seek to dispel panic and promote accurate information.”
TikTok is also updating its Safety Center with information on spotting hoaxes and harmful challenges. Additionally, the platform is adding technology that can detect a spike in “violating content linked to hashtags.”
So whenever a user searches for content that has been identified as a dangerous challenge or hoax, they will immediately see a warning label with links to the Safety Center. TikTok said it worked closely with a behavioral scientist and a clinical child psychiatrist to make the new warning labels easier to understand.