The AI + Inclusion stream promotes a research agenda around developing effective, inclusive, and participatory ethical design and engineering frameworks for AI systems, notably to avoid the risk that AI amplifies global digital injustices for women, youth, seniors, Indigenous Peoples, LGBTQIA2S+ people, racialized people, people with disabilities, and linguistic minorities (such as speakers of French and Indigenous languages), as well as those at the intersection of these identities. The research also considers specific concerns of people in Northern and remote communities, and in developing nations.
While promising important benefits, unchecked AI development introduces significant challenges, from uncertainty about the future of work to shifts of power toward new structures beyond the reach of existing, well-understood governance and accountability frameworks. These challenges are particularly problematic because they can have disproportionate impacts on marginalized populations and thereby amplify existing injustices.
This stream aims to pioneer best practices while proposing tools and frameworks for the ethical engineering of AI systems. In particular, we investigate strategies to define effective ethical requirements for AI, implement those requirements, and critically verify that they have been implemented effectively.