When Brandi Morin, an Indigenous journalist, received her first death threat in an email last fall, she immediately contacted the police in fear for her and her children’s safety.
“They told me they couldn’t do anything unless the situation ‘escalated,'” Morin, a French, Cree and Iroquois journalist from Treaty 6 in Alberta, recalled in a video testimony given at a People’s Tribunal — essentially a mock trial — on June 14.
“Escalated to what? To someone showing up at my door? To my life being threatened in person? Before I hung up with the officer, she told me ‘I’d advise you to keep your doors locked.'”
Morin’s is one of six testimonies from women describing similarly disturbing experiences on the receiving end of online hatred and harassment, directed at them as a direct result of their anti-racism advocacy.
The People’s Tribunal is part of a national campaign running from June 14 to 17 by Informed Opinions, a non-profit collecting data about the prevalence and real-life effects of these online attacks. Shari Graydon, the organization’s director and catalyst, says the goal ultimately is to pressure the government into regulating social media platforms.
“The big picture is that we are right now letting the social media companies that profit from the dissemination and amplification of hate set the guidelines,” she told New Canadian Media.
“They’re the ones who dictate what’s okay and what’s not….The platforms have the capacity to do much, much better than they are doing, (but) it will take regulation.”
Increase in online hate
During the pandemic, because people turned to the internet en masse for work, school, shopping, health care and social interaction, there was also a greater “risk for different types of criminal offences facilitated by the Internet, including instances of harassment, discrimination or hate,” a 2020 Statistics Canada report found.
As a result, the proportion of police-reported hate crime incidents also recorded as cybercrimes “has been increasing in recent years, from 5.1 per cent in 2018, to 6.9 per cent in 2019 and 7.1 per cent in 2020,” the report states.
Accordingly, more than 90 per cent of the 200-plus women surveyed as part of the Toxic Hush campaign, which works to end online hate and abuse — particularly those with intersectional identities — said they’ve experienced or witnessed an increase in online abuse since the pandemic began in 2020.
Rosel Kim is a staff lawyer with the Women’s Legal Education and Action Fund (LEAF) who served as an expert witness during the Tribunal. She said when these acts of “tech-facilitated violence” get reported to the police, women are often told to stop posting content and avoid using certain digital platforms.
That not only limits women’s ability to engage in public life, she said at the Tribunal — in itself a “threat to democracy, which is premised upon free and equal participation” — but is virtually impossible “when we are connected to and rely on these social networks for many aspects of our lives.”
Additionally, Graydon said, this not only puts the onus on the victim — something she calls a “toxicity tax” — but demonstrates the reactive rather than proactive nature of the authorities.
“I think the solution to this problem is not to make it about after-the-fact reports or measures (but) to implement the kind of systemic response that would stop the abuse from the beginning,” she says.
Further, one of the “particularly detrimental aspects of tech-facilitated violence,” Kim says, is that the acts of violence aren’t limited to physical spaces but are cast far and wide via social media — beyond the individual target to people’s families, friends, and co-workers.
Consequently, “the globalized and amplified nature of the harm can also lead to other devastating consequences for women and gender diverse people experiencing the violence, including social isolation and withdrawal, damage to their reputation and careers, deterioration of their mental health, and in extreme cases, dying by suicide.”
Online hate speech seems to have hit a peak between November 2015 and November 2016, with the Canadian Women’s Foundation reporting a 600 per cent increase during that time.
Graydon says the timing coincides with the election of Donald Trump as president of the United States, who helped normalize the use of disinformation and online hate speech as a strategy to attack and discredit ideological opponents. She also cites white supremacy as an underlying driver of the kind of violence seen online.
The pandemic exacerbated that by pushing everyone online and emboldening the “disgruntled” to pin the blame on BIPOC, immigrants and other visible minorities – people who “don’t belong here,” she explains.
According to Statistics Canada, “of the 575 hate crimes that were also recorded by police as cybercrimes between 2016 and 2020, these most commonly targeted the Muslim population (16%), the Black population (15%), a sexual orientation (13%), and the Jewish population (13%).”
“Uttering threats (39%) was by far the most common type of hate-motivated cybercrime, followed by indecent or harassing communications (24%), public incitement of hatred (12%) and criminal harassment (11%).”
Take harmful content down
On June 10, the 10th and final session of the federal government’s Expert Advisory Group on creating a regulatory framework for online safety took place.
In February, the government promised to reintroduce a new version of Bill C-36, which would help prevent racist comments and threats by means of a peace bond that, if breached, could carry up to four years in prison, Global News reported.
However, advocates want the upcoming framework to have more teeth than that, including the power to force social media platforms to remove racist and hateful comments.
“It is not sufficient to leave the responses to platforms themselves, who have been slow and inefficient in their design and their policies, and have even exacerbated and profited from the harms to users,” Kim said.
“Research to date has shown that platforms are often inconsistent in handling abusive, violent content and are not addressing the systemic problems and causes of violence.”
Accordingly, she says the Government of Canada must regulate social media platforms and demand accountability and transparency in their content moderation practices.
However, most important to survivors is “immediate technical safety support, like getting the harmful content taken down, as well as emotional support and information from people with subject matter expertise,” she added.
Near the end of the Tribunal, Independent Senator Kim Pate, who served as one of three citizen judges, cited the Charter of Rights and Freedoms’ Section 2(b) to stress the fact that “simply put, one right, freedom of expression, does not trump the rest.”
A report from the Tribunal can be accessed here.