The AI + Society Initiative is excited to continue amplifying the voices of emerging scholars from around the world through our bilingual blog series Global Emerging Voices in AI and Society, stemming from the Global AI & Regulation Emerging Scholars Workshops and the Shaping AI for Just Futures conference. This series showcases the innovative research and insightful discoveries presented during these events and beyond, facilitating broad knowledge sharing and engaging a global network in meaningful interdisciplinary dialogue.
The following blog is authored by Habibe Deniz Seval and Renan Gadoni Canaan, who were selected by an international committee to present their poster "Reforming AI for Gender Equality: An Examination of Canadian Algorithmic Impact Assessment" as part of the poster session at the Shaping AI for Just Futures conference. An in-depth discussion is forthcoming in "Reforming AI for Gender Equality: An Examination of Canadian Algorithmic Impact Assessment," in In(Visible) Signs of Gender-Based Violence, edited by Anne Wagner and Angela Condello, Springer Cham, 2025.
Addressing gender bias in AI: Insights from Canada's algorithmic impact assessments and beyond
Artificial Intelligence (AI) is revolutionizing critical sectors, offering the promise of reducing human biases, including those related to gender discrimination. However, AI also poses significant risks by perpetuating and amplifying these very biases. To mitigate such risks, Canada introduced the Directive on Automated Decision-Making in 2019, which includes a key tool: the Algorithmic Impact Assessment (AIA). While AIAs represent a positive step toward addressing bias in AI systems, they are not sufficient on their own. A holistic approach that includes feminist methodology is necessary to truly tackle the embedded gender biases within these technologies.
The Promise and Perils of AI
AI technologies are increasingly integrated into core functions like optimization, recommendation, and prediction. This rapid evolution promises greater objectivity, with the potential to eliminate entrenched human biases that often fuel gender discrimination. Yet, paradoxically, AI systems themselves are prone to reinforcing these very prejudices, a phenomenon often referred to as "automating inequality."
Biases in AI can stem from multiple sources: poor-quality training data, algorithmic design, and human influence. One striking example is facial recognition technology, where datasets composed predominantly of male images lead to lower recognition accuracy for women (Hill, 2020). In sensitive contexts like law enforcement, these errors can have dire consequences. Beyond data, biases can be embedded in the algorithms themselves, as seen in job search engines whose algorithms disproportionately favored men over women in job recommendations (Drage & Mackereth, 2022). These examples reflect broader societal patterns that manifest within AI systems, perpetuating inequality, especially for marginalized groups such as women of color.
Addressing these biases is not only essential for improving the accuracy of AI technologies but also for upholding women’s rights and dignity in an increasingly digital world.
Canada’s Algorithmic Impact Assessment (AIA)
In response to concerns about algorithmic discrimination, Canada introduced the Directive on Automated Decision-Making, which covers automated decision systems developed or acquired by the federal government after April 1, 2020. The directive aims to enhance transparency and minimize the risks associated with automated decision-making, particularly in critical domains like healthcare, credit, and public policy.
Central to this initiative is the Algorithmic Impact Assessment (AIA), a tool for evaluating the potential direct and indirect effects of AI technologies on stakeholders and users. The tool is a questionnaire that assigns a risk score to AI systems to determine whether they pose any significant threats. AIAs typically focus on four key impact areas: individual rights, health, economic interests, and sustainability (Ada Lovelace Institute, 2021).
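To make the questionnaire-to-score step concrete, here is a minimal Python sketch of how yes/no answers might be weighted into a normalized risk score and an impact level. The questions, weights, and level thresholds are invented for illustration and do not reproduce the Government of Canada's actual questionnaire or scoring rules.

```python
# Minimal sketch of an AIA-style scoring flow. All questions, weights, and
# thresholds below are hypothetical, not the official Canadian AIA logic.

IMPACT_QUESTIONS = {
    "affects_legal_rights": 3,    # weight applied if answered "yes"
    "uses_personal_data": 2,
    "decision_is_fully_automated": 3,
    "outcome_is_reversible": -1,  # reversibility lowers the risk score
}

LEVEL_THRESHOLDS = [(0.25, "Level I"), (0.50, "Level II"),
                    (0.75, "Level III"), (1.00, "Level IV")]

def risk_level(answers: dict) -> tuple:
    """Map yes/no questionnaire answers to a normalized score and impact level."""
    max_score = sum(w for w in IMPACT_QUESTIONS.values() if w > 0)
    score = sum(w for q, w in IMPACT_QUESTIONS.items() if answers.get(q))
    normalized = max(score, 0) / max_score
    for threshold, level in LEVEL_THRESHOLDS:
        if normalized <= threshold:
            return normalized, level
    return normalized, "Level IV"

# A system touching legal rights and personal data lands in a higher tier.
print(risk_level({"affects_legal_rights": True, "uses_personal_data": True}))
# (0.625, 'Level III')
```

The design point the sketch captures is that a higher level triggers heavier obligations, so the mapping from answers to level is where the assessment's teeth lie.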
In the healthcare sector, for instance, AIAs have been used to uncover gender biases in AI algorithms. A study conducted by researchers in the United States found that an AI algorithm used to predict the likelihood of readmission to a hospital was biased against women (Grant, 2019). The algorithm was trained on data that reflected historical healthcare practices and patient characteristics, which showed a pattern of unequal treatment of women. By identifying the sources of bias using AIAs, adjustments were made to improve the algorithm's fairness.
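The kind of audit described above can be pictured as a simple disparity check on a model's predictions. The sketch below compares false negative rates across gender groups on made-up records; the data and the choice of metric are illustrative assumptions, not details from the cited study.

```python
# Hedged sketch of the post-hoc audit an AIA can prompt: compare a readmission
# model's miss rate across gender groups. The records here are invented; a
# real audit would use the deployed system's own predictions and outcomes.

from collections import defaultdict

def false_negative_rate_by_group(records):
    """records: (group, actually_readmitted, predicted_readmitted) triples."""
    misses, positives = defaultdict(int), defaultdict(int)
    for group, actual, predicted in records:
        if actual:
            positives[group] += 1
            if not predicted:
                misses[group] += 1  # a patient the model failed to flag
    return {g: misses[g] / positives[g] for g in positives}

audit = [("women", True, False), ("women", True, False), ("women", True, True),
         ("men", True, True), ("men", True, True), ("men", True, False)]
print(false_negative_rate_by_group(audit))
# {'women': 0.67, 'men': 0.33} -- a large gap flags the bias for investigation
```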
Although AIAs represent a significant step toward combating gender bias in AI, they are not a complete solution. These assessments are static by design, often conducted only once during system development, which fails to account for the evolving nature of AI algorithms. AI systems continuously learn and adapt post-deployment, so assessments must be ongoing and iterative to ensure that biases do not creep in over time.
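One way to picture that iterative approach: re-run the same disparity check on each post-deployment batch and trigger a fresh assessment when the gap drifts beyond the level recorded at the initial AIA. The baseline, margin, and monthly figures below are all hypothetical.

```python
# Sketch of making an assessment iterative rather than one-off. The disparity
# values and thresholds are invented for illustration.

BASELINE_GAP = 0.05   # group disparity measured during the initial assessment
ALERT_MARGIN = 0.02   # widening tolerated before a re-assessment is triggered

def needs_reassessment(current_gap: float) -> bool:
    """Flag the system for a fresh impact assessment if disparity has grown."""
    return current_gap > BASELINE_GAP + ALERT_MARGIN

for month, gap in [("June", 0.04), ("July", 0.06), ("August", 0.09)]:
    if needs_reassessment(gap):
        print(f"{month}: disparity {gap:.2f} exceeds baseline; trigger review")
```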
Feminist Methodologies in AI
To move beyond static impact assessments, AI development needs to incorporate feminist methodologies, which offer a critical lens for examining how AI systems affect women and marginalized groups. One such approach is the "woman question," which interrogates how legal standards and technological systems may inadvertently harm or exclude women (Bartlett & Kennedy, 2018). Posing the "woman question" can reveal disparities in AI systems: this critical reflection helps pinpoint where women's experiences diverge from men's and identifies biased aspects of AI functioning.
Ensuring diversity in AI development teams is one way to put the "woman question" into practice and address gender bias. Teams should include individuals of diverse genders, cultural backgrounds, and socio-economic statuses to foster a wider range of perspectives; more inclusive voices in design decisions reduce the risk of embedding biases in AI systems. Another practical step is curating diverse and representative training datasets, which helps prevent AI outputs from reproducing gender biases.
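As a concrete illustration of the dataset-curation point, the sketch below checks group representation and oversamples the underrepresented group before training. The group labels and the oversampling strategy are illustrative assumptions; real curation involves far more than class balance, such as auditing how the data was collected and labeled.

```python
# Hedged sketch of one curation step: balance group representation by
# oversampling with replacement. Labels and ratios are invented.

import random
from collections import Counter

def rebalance(samples, group_key, seed=0):
    """Oversample smaller groups until each matches the largest group."""
    random.seed(seed)
    by_group = {}
    for s in samples:
        by_group.setdefault(s[group_key], []).append(s)
    target = max(len(items) for items in by_group.values())
    balanced = []
    for items in by_group.values():
        balanced.extend(items)
        balanced.extend(random.choices(items, k=target - len(items)))
    return balanced

data = [{"gender": "man"}] * 80 + [{"gender": "woman"}] * 20
print(Counter(s["gender"] for s in rebalance(data, "gender")))
# Counter({'man': 80, 'woman': 80})
```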
Furthermore, incorporating feminist perspectives into AI system development can help ensure that AI systems uphold the values of the Canadian Charter of Rights and Freedoms, which emphasizes gender equality and individual rights.
Toward a More Inclusive AI Future
Canada's Algorithmic Impact Assessments represent an important tool in this effort, but they must be part of a larger interdisciplinary strategy that includes feminist methodologies, as well as regulatory oversight and continuous public engagement. By embracing these methods, we can work toward a future where AI systems not only enhance efficiency and objectivity but also uphold fairness, equality, and dignity for all. This vision requires the commitment of policymakers, developers, and researchers alike to ensure that AI serves as a force for inclusion, rather than exclusion, in the digital age.
Key resources to learn more
Hill, Kashmir (2020). "Wrongfully Accused by an Algorithm." The New York Times.
Drage, Eleanor & Mackereth, Kerry (2022). "Does AI Debias Recruitment? Race, Gender, and AI's 'Eradication of Difference'." Philosophy & Technology 35(4): 89.
Government of Canada. "Algorithmic Impact Assessment."
Ada Lovelace Institute (2021). "Algorithmic Impact Assessment: A Case Study in Healthcare."
Grant, Crystal (2019). "Algorithms Are Making Decisions About Health Care, Which May Only Worsen Medical Racism." American Civil Liberties Union.
Bartlett, Katharine T. & Kennedy, Rosanne Terese (2018). Feminist Legal Theory: Readings in Law and Gender. Boulder: Taylor and Francis.
About the authors
Habibe Deniz Seval is a Ph.D. Candidate at the University of Ottawa, Faculty of Law, and a Scotiabank Student Fellow with the AI + Society Initiative. Her research focuses on the intersection of law, Artificial Intelligence, and feminism, bridging complex philosophical and legal themes to think deeply about the evolving dynamics between Artificial Intelligence and gender. Deniz Seval completed an LLM in International Commercial Law at Bournemouth University in the United Kingdom in 2019, where she focused on blockchain technology as a solution to current challenges in international trade practices. She then worked at an information technology company as a lawyer focusing on privacy compliance projects under European and Turkish law, and as legal counsel for its software development projects.
Renan Gadoni Canaan is a Ph.D. Candidate at the University of Ottawa, Faculty of Law, and a Scotiabank Student Fellow with the AI + Society Initiative. He was named an Emerging Leader in the Americas by the Canadian government through the ELAP program and is a former MITACS fellow at the Centre for Law, Technology and Society. He is currently pursuing his doctorate in Data Governance at the University of Ottawa. Renan brings a multidisciplinary background to research at the intersection of Law, Innovation, and Data Governance: he first studied Science and Economics at the Federal University of Minas Gerais in Brazil, then earned a Chancellor's International Scholarship to join the Science and Technology Policy Program at the University of Sussex. His research contributes to the ongoing global discussion on two main topics: (i) the Governance of Health Data for AI Innovation and (ii) the post-colonial influence of Western Digital Regulations on the Global South.
This content is provided by the AI + Society Initiative to help amplify the conversation and research around the ethical, legal and societal implications of artificial intelligence. Opinions and errors are those of the author(s), and not of the AI + Society Initiative, the Centre for Law, Technology and Society, or the University of Ottawa.