Empowering Change Through Technology: How Algorithm for Change, Backed by the Canadian Race Relations Foundation, Tackles Hate Speech and Algorithmic Bias

November 13, 2024

In a world where digital spaces increasingly influence our perceptions and interactions, technology remains an immense force for connection and progress. While we applaud advances in technology, this seemingly boundless power also has the potential to amplify hate speech and discrimination. As a result, our team is committed to ensuring that innovation advances without harming the communities it is meant to serve.

CILAR, in partnership with the Canadian Race Relations Foundation – National Anti-Racism Fund, is taking a stand with the Algorithm for Change program, a two-part hackathon designed to tackle hate speech and address algorithmic bias. This initiative invites underrepresented Canadians to collaborate, innovate, and lead the way towards safer, more inclusive digital communities.

Why This Program Matters
Hate speech has devastating impacts on mental health, safety, and social inclusion. Underrepresented Canadians are disproportionately affected:

In 2024, Statistics Canada reported that online hate was more common among young Canadians with a disability: young people aged 15 to 24 with a disability (29%) were over 2.5 times as likely as young people without a disability (11%) to have encountered online hate (Statistics Canada, 2024). In 2021, the CRRF found that racialized Canadians were almost three times more likely than non-racialized Canadians to have experienced racist behaviour online (14% vs. 5%) (CRRF, 2021).

According to YWCA Canada, 44% of women and gender-diverse Canadians aged 16 to 30 report having been personally targeted by hate speech online, with the most affected groups including people with disabilities, 2SLGBTQIA+ individuals, Indigenous Peoples, and Black communities (YWCA Canada, 2024).

Nearly half (48%) of BIPOC Canadians report experiencing online hate or racism. Hate speech finds its way onto our screens through social media algorithms, which dictate what we click on, view, and share. Engagement-driven algorithms can prioritize and amplify harmful content, leaving communities exposed to constant discrimination.

Beyond hate speech, AI systems tend to perpetuate harmful stereotypes. For instance, facial recognition technologies have been found to misidentify Black and Indigenous individuals at disproportionately high rates, reinforcing harmful biases (Amnesty International, 2023). By bringing together voices from affected communities, our Algorithm for Change hackathon and AI workshop aim to turn the tables, enabling underrepresented Canadians to design AI solutions that prioritize racial equity and fair representation.

The Program’s Goals and Impact

1) Increasing Awareness: Through immersive workshops and collaborative sessions, participants will deepen their understanding of algorithmic bias and hate speech, learning how these issues uniquely affect underrepresented communities.

2) Creating AI Solutions: The hackathons focus on developing real-world AI solutions to detect and counter hate speech. Participants work together to prototype tools and technologies, driving innovation in ways that support inclusive digital spaces.
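To make the prototyping work concrete, here is a minimal, hypothetical sketch of the kind of tool a hackathon team might start from: a keyword-based filter that flags posts for human review. The function name, the placeholder term list, and the threshold are all illustrative assumptions, not part of the program; real solutions would use trained models and community-informed definitions of hate speech.

```python
import re

# Hypothetical placeholder terms; a real list would be developed
# with affected communities and domain experts.
FLAGGED_TERMS = {"slur_a", "slur_b", "threat_phrase"}

def flag_for_review(post: str, threshold: int = 1) -> bool:
    """Return True if the post contains enough flagged terms to warrant human review."""
    # Lowercase and tokenize into word-like chunks.
    tokens = re.findall(r"[a-z_]+", post.lower())
    # Count how many tokens match the flagged list.
    hits = sum(1 for t in tokens if t in FLAGGED_TERMS)
    return hits >= threshold

# Example: filter a small batch of posts down to those needing review.
posts = ["hello world", "this contains slur_a"]
needs_review = [p for p in posts if flag_for_review(p)]
```

A keyword filter like this is only a starting point; hackathon teams would typically iterate toward classifiers that account for context, so that reclaimed language or counter-speech is not wrongly flagged.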

3) Corporate Impact and Social Influence: Solutions created during the hackathons are presented to corporate partners, empowering companies to adjust their algorithms and adopt practices that reduce hate speech and bias. Our goal is to establish long-term influence in AI development, building toward a future where technology actively supports racial equity.

Statistics that Highlight the Urgency of Action

1) Mental Health: Persistent exposure to hate speech online has been shown to worsen mental health outcomes, with Statistics Canada reporting that 61% of Black Canadians experience discrimination, often exacerbated by online interactions (Statistics Canada, 2024).

2) Algorithmic Bias: Studies have shown that platforms like YouTube and Facebook can inadvertently push radical content that frequently targets BIPOC individuals. Without intervention, these algorithms can deepen societal divides, impacting individuals and communities on a large scale (Royal United Services Institute for Defence and Security Studies, 2019).

Why You Should Join Us
By joining Algorithm for Change, participants gain the unique opportunity to collaborate with industry leaders and social justice advocates, learning how AI can become a tool for positive change. Participants develop valuable skills in AI, work closely with mentors from the tech and social justice sectors, and contribute to creating a safer, more inclusive digital world. Together, we can shape the future of AI to reflect the principles of fairness and justice that are essential to a thriving society.

Author: Tristan Morris
Tristan Morris is a program development professional at CILAR, sharing tools and resources to support diversity and inclusion in tech and innovation. Follow him on LinkedIn: https://www.linkedin.com/in/tristan-a-morris/