
Blog: New Twitter policy elevating right to privacy a pivotal shift toward online safety


Written by the Executive Director of the Canadian Centre for Child Protection.

It was an unexpected shift in the content moderation doctrine of one of the largest social media platforms in the world.

On November 30, Twitter announced that going forward, sharing private images or information about a person without their consent is banned on the platform. The news came just a day after Twitter’s co-founder and CEO, Jack Dorsey, stepped down.

Why this is a fundamental shift toward a safer ecosystem on Twitter is difficult to articulate. We’ve been conditioned to accept the broad idea that once our privacy is invaded on the internet, if no criminal law is broken, there’s nothing to be done. Accept your fate or make what is likely to be a futile plea to a faceless abuse reporting inbox; those are generally your options.

Rinse, recycle, repeat

The rigid fixation on positioning criminality at the centre of how we view what is and is not acceptable to distribute online about private citizens is a woefully broken lens; a lens that leaves behind a trail of devastation, especially among youth. Definitions of criminality are unique to each country and enforced within its borders, while online harm propagates in a seemingly extrajudicial fashion.

It’s little wonder the deck is stacked so heavily against victims and survivors.

The continuum of harm resulting from the invasion of one’s privacy extends staggeringly far before ever reaching the threshold of the unambiguously criminal. Resetting our minds to view and action online harm through a corrected lens, one that does away with this problematic approach, is the only way to protect children, and quite frankly every one of us.

Many electronic service providers (ESPs) the Canadian Centre for Child Protection intersects with accept at face value, and remove, the prepubescent child sexual abuse material (CSAM) we flag. In nearly all cases, there is no doubt to the observer that these images are criminal in nature. However, these images account for only a fraction of the overall harm online. Convincing ESPs, as well as the many organizations that purport to have the best interests of children in mind, to move away from a criminal law mentality for content removal has proven to be a massive challenge.

Faced with the evolving nature of online harm reported to us by the public and discovered online by Project Arachnid, we broadened the scope of content we action in late 2019. In addition to CSAM, under what we call our “Child Protection and Rights Framework”, we began taking ESPs to task for making harmful or abusive content involving children available to the public, even if, in isolation, the depictions would not normally be considered criminal. This framework is grounded in the best interests of the child, and their right to dignity, privacy and protection from harm. It does not focus on whether a judge would deem the content illegal.

Most mainstream ESPs have created their own terms of service or so-called “community standards” to address content posted by users. Of course, these standards are designed to balance the need to demonstrate some form of social responsibility against the continued economic viability of business models that largely revolve around monetizing mostly unfettered user-generated content. It should surprise no one that, under these circumstances, these policies are often inadequate or selectively applied.

All of this brings me to why Twitter’s shift in content moderation policy gives me cautious optimism.

To the person—the victim—having their private life violated on Twitter through the unwanted distribution of information: they no longer need to justify or demonstrate criminality, harm or abuse when requesting content be taken down.

It’s privacy by default, as it should have always been.

Some have already criticized Twitter, noting certain bad actors are taking advantage of the situation to cleanse their online presence. Some seemingly legitimate information has also been caught up in the nascent policy. But it’s much too soon to judge Twitter’s commitment to getting it right based on these early moderation hiccups.

Nearly all platforms err on the side of harm for the sake of frictionless information flow. Little consideration is ever given to the collateral damage done to others. By making the right to individual privacy a core tenet of the platform, Twitter is shifting the balance toward safety, even for bad or insincere people. This is the trade-off, and it doesn’t give wrongdoers a free pass. But it does mean the rules of online engagement cannot put others at risk.

Let me be clear: this change in no way exculpates Twitter for the harms survivors have been raising publicly—harms we gave voice to earlier this year. I am also skeptical of their commitment toward enforcing the policy and adequately resourcing it. Other problems also persist on the site.

Although there is now a right to privacy on Twitter, that right must be policed by the victims and survivors. Assuming they are even aware of violations against them, the burden to intercept and prevent harm, as always, rests with the individual, not with the platform.

Twitter’s acceptance of adult pornography within the same ecosystem as otherwise innocuous social content is also very troubling, and it puts the platform at odds with its tech peers. Discussion of the mass collateral damage to children’s sexual health as a result of unfettered access to pornography is a topic unto itself for another day.

Countries, including Canada, are increasingly grappling with how to define and regulate online harm. As part of their resistance to policies that may undermine their business models, many technology companies are exploiting the fact that neatly defining “criminal” and “harmful” online content is largely impossible. Along with feigned concerns over “free speech”, this is then weaponized to delay change and sow doubt in the minds of policymakers.

Perhaps governments should consider bypassing this thorny issue entirely, dispensing with attempts at defining online harm outside of what is unquestionably criminal, and instead make the right to personal privacy the backbone of how we protect citizens online.

-30-

About the Canadian Centre for Child Protection: The Canadian Centre for Child Protection (C3P) is a national charity dedicated to the personal safety of all children. The organization’s goal is to reduce the sexual abuse and exploitation of children through programs, services, and resources for Canadian families, educators, child serving organizations, law enforcement, and other parties. C3P also operates Cybertip.ca, Canada’s tipline to report child sexual abuse and exploitation on the internet, and Project Arachnid, a web platform designed to detect known images of child sexual abuse material (CSAM) on the clear and dark web and issue removal notices to industry.
