From the vantage point of a child protection agency, it is impossible to grasp the full scope of safety failures on the social media platforms that our kids and teens use every day.
Many factors make it difficult to unpack, not the least of which is the technology industry’s lack of transparency about its content moderation and safety design practices.
This is why the latest report from the office of Australia’s eSafety Commissioner, which scrutinizes the child protection practices and resource investments of five major technology companies — Google, TikTok, Twitch, Discord, and X (formerly Twitter) — provides crucial information for holding these companies accountable. The results reinforce why the self-regulatory model simply does not work and is seriously harming children and survivors.
Thanks to its willingness to apply its legislative powers to request detailed information from these platforms, eSafety’s report details many concerning findings.
Some key findings include:
- Discord and Twitch — both of which rely partly on community content moderation — are not automatically notified when volunteer moderators find child sexual abuse material (CSAM). These self-appointed moderators, among other users, can also set up dedicated channels for the exploitation and abuse of children;
- Companies are not making use of widely available URL block lists for sites devoted to known CSAM;
- Platforms are opting to forgo the use of widely available tools for blocking and preventing the upload of CSAM by users on certain services; and,
- Discord, a platform designed in part for livestreaming, is not deploying harm detection tools on livestreamed content, citing “prohibitively expensive” costs.
What’s more, the eSafety Commissioner has now fined X just over US$380,000 for non-compliance, while Google has been put on notice for providing insufficient information. This continues to underscore a general defiance demonstrated by the technology industry when basic and reasonable expectations are imposed upon it by duly elected governments.
These are just a few examples of the woefully inadequate patchwork of safety mechanisms and policies captured by eSafety’s second instalment of its world-leading Basic Online Safety Expectations reports. These findings also build upon the information revealed in its inaugural report in 2022.
These findings are sadly consistent with C3P’s experience supporting survivors of online abuse and operating Project Arachnid, an international tool for disrupting the distribution of known CSAM and other harmful and abusive material involving children online. Numerous C3P research reports highlight the ongoing failures of technology companies to prioritize child safety.
On behalf of those working on the front line of combatting online exploitation, of parents, and, most importantly, of survivors — we are grateful for the office of the eSafety Commissioner’s ongoing pursuit of accountability. More than ever, we need governments to urgently act and protect their citizens online — particularly those most vulnerable to sexual abuse and exploitation.