Ece Akdogan

Social Media Platforms Keep Failing to Stop the Spread of Harmful Content

The ongoing failure of social media platforms to curb harmful content is alarming. Social media has played a huge role in shaping modern societies, enhancing communication and providing platforms for self-expression on a scale the world had never experienced before, helping people share ideas and take part in social and cultural discourse. For many users, these platforms now drive entertainment, communication, and much of everyday life. But great power goes hand in hand with great responsibility, and alarming reports from this week underline a distressing trend: the social media giants have failed abysmally at removing explicit content such as child sexual abuse imagery, dangerous self-harm material, and other forms of hostile content. This article looks at what went wrong at the major platforms, the consequences of their inaction, and how much more must be done to safeguard vulnerable users.


WhatsApp: A Haven for Child Sex Abuse Imagery?

WhatsApp, owned by Meta, has been in hot water over its failure to stop the spread of child sexual abuse images on its platform. The Internet Watch Foundation (IWF) has accused Meta of failing to put proper controls in place to prevent such explicit material from being widely shared. The case of former BBC broadcaster Huw Edwards, who was found with indecent images of children sent via WhatsApp, highlights the seriousness of the issue.

End-to-end encryption has long been one of WhatsApp's cornerstones, but for the same reason it is now a double-edged sword. On the one hand, it offers users strong guarantees of privacy; on the other, it prevents both law enforcement and the platform itself from scanning messages to detect and remove harmful content. Critics argue that Meta must develop technology that balances the protection of privacy with ensuring that explicit content is never shared, and do so in a way that does not undermine encryption.


Statistics show the magnitude of the issue. According to the IWF, "no technology at WhatsApp stops the known abusive content at the moment." This is not only an institutional failure on Meta's part but a betrayal of the children who are exploited through these platforms. Stronger detection techniques are therefore not just a legal requirement but a moral obligation.

Facebook and Instagram: Weak Response to Suicide, Self-Harm Content

Facebook and Instagram, both owned by Meta, are also under fire for their weak response to suicide and self-harm material. According to a study by the Molly Rose Foundation, of more than 12 million content moderation decisions on suicide and self-harm content, Facebook and Instagram together accounted for only 1%. Given the size of their user bases and the time people spend on these platforms, this is a scandalous finding.


According to Meta, more than 50.6 million pieces of harmful content were taken down worldwide. Critics counter that these efforts are patchy and inadequate. Content moderation on platforms like Instagram typically combines automated AI systems with human review. AI moderation is good at flagging and removing harmful content quickly, but it risks over-moderation, in which benign posts are unfairly censored, raising free speech concerns. Under-moderation, in which dangerous content falls through the cracks, like the one in fifty suicide and self-harm posts that end up as Reels, exposes the systemic flaws of content moderation. This twofold challenge points to the need for a more balanced approach that secures safety without smothering free speech.
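
To make that hybrid approach concrete, here is a minimal sketch of how a moderation pipeline of this kind is commonly structured: an automated classifier scores each post, high-confidence cases are removed automatically, borderline cases are queued for human review, and everything else is left up. The thresholds, names, and scoring function below are illustrative assumptions, not a description of Meta's actual systems; shifting the thresholds is exactly what trades over-moderation against under-moderation.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Post:
    post_id: str
    text: str


# Illustrative thresholds: raising them reduces over-moderation (fewer benign
# posts removed) but increases under-moderation (more harmful posts slip through).
AUTO_REMOVE_THRESHOLD = 0.95
HUMAN_REVIEW_THRESHOLD = 0.60


def route_post(post: Post, score_fn: Callable[[str], float]) -> str:
    """Route a post based on a classifier's harm score in [0, 1]."""
    score = score_fn(post.text)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"    # high confidence: take the post down immediately
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"   # borderline: queue for a human moderator
    return "allow"              # low risk: leave the post up


if __name__ == "__main__":
    # Stand-in scores for demonstration; a real system would use an ML classifier.
    sample_scores = {"p1": 0.97, "p2": 0.70, "p3": 0.10}
    for pid, s in sample_scores.items():
        decision = route_post(Post(pid, "example text"), lambda _text, s=s: s)
        print(pid, decision)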


What all of this demonstrates is the need for coherent, far stronger moderation practices and for more moderators to assess and filter material. The current system, in the Foundation's assessment "inconsistent, uneven, and unfit for purpose," leaves young and vulnerable users exposed to long-standing, preventable harm. The sad case of Molly Russell, the 14-year-old who took her own life after viewing harmful material on social media, brings the consequences of this inaction and of the normalisation of online hate into stark reality.


Snapchat: Low Moderation, High Stakes

Content moderation on Snapchat is structured very differently. The platform has no open newsfeed and therefore performs no pre-distribution content moderation. Reviews have nevertheless criticised the platform for failing to detect and remove suicide and self-harm content efficiently once it surfaces. This approach may limit how far harmful material spreads, but it does not prevent it.


Snapchat says it acts quickly to remove content that encourages suicide or self-harm, shares prevention resources, and notifies emergency services when a situation calls for it. Most of the time, however, the platform's actions are reactive rather than proactive, triggered only once harmful content has been reported, which leaves users exposed in the meantime. Dangerous content therefore circulates before it is removed and action is taken.


Snapchat has unveiled a range of new features designed to protect users from online harms and sextortion attempts, including expanded in-app warnings, strengthened friending protections, simplified location-sharing options, and improved blocking capabilities. These updates aim to ensure that Snapchat users can only be messaged by individuals they have added as friends or phone contacts. The in-app warnings, originally introduced last November, have been upgraded to alert teens when they receive messages from non-mutual contacts, and now include new alerts for messages from users reported or blocked by others, enhancing the platform’s ability to prevent scams and online harm.


TikTok and X: Enforcement Inconsistencies

TikTok and X have both drawn recent criticism over how they enforce their policies on dangerous content. In TikTok's case, reporting showed that the platform, despite its substantial user base and the prevalence of such videos, had detected nearly 3 million items related to suicide and self-injury yet suspended only two accounts. This raised serious concerns about the efficacy of the platform's enforcement. The gap between identification and action suggests that, although TikTok may have the tools to detect harmful content, it does not consistently enforce its own rules.


By contrast, X made only 700 content moderation decisions related to suicide and self-injury, an almost negligible number given the platform's size. Content moderation is necessary to keep users safe from one another, yet overzealous moderation runs afoul of commitments to freedom of expression. It is a tricky balancing act, as international experience shows: France's Avia Law was introduced to tackle harmful content but was later scaled back over free speech concerns. X's shortfall highlights a broader problem: far too many platforms lack either the ability or the willingness to implement and enforce their own policies, leaving many users exposed to harmful content of one kind or another.

The Big Picture of Social Media: A Call to Action

The challenges facing these platforms are not isolated cases but symptoms of a larger issue affecting the entire social media industry. A survey conducted by the Ministry of Digital Development and Information found that two-thirds of users had encountered harmful content on these services and that 61% ignored it. Such indifference, combined with the platforms' irregular enforcement, creates a dangerous atmosphere for the most vulnerable users. The indifference can have many causes: the normalization of online hate, the dehumanization of people behind screens, or simply platforms lacking the resources and commitment to enforce their own rules effectively. Together, these factors create a context in which harmful behavior goes unchecked, putting vulnerable users at risk.

Social media companies must therefore act quickly and decisively to resolve the problem. That means developing technologies that can detect and remove harmful content without breaching user privacy, enforcing existing policies more effectively, and working closely with governments and organisations to help create a safer online environment.


The current state of content moderation on social media represents a failure of technology, but more fundamentally a failure of responsibility. Platforms have to realize that their responsibility goes far beyond earning profit and engaging users. They are not merely able to look after their users; they are morally obliged to protect them, especially children, from harm. Until these companies take this responsibility seriously, the internet will remain a dangerous place for many.
