New report looks at AI use in Canadian politics

As machine learning and artificial intelligence advance and become more accessible, political actors are starting to experiment with AI-enabled tools. Whether creating synthetic content, detecting disinformation, tracking harassment or predicting voting outcomes, AI-powered technologies are playing an increasing role in democratic election processes.

“The Political Uses of AI in Canada,” a new report from uOttawa’s Pol Comm Tech Lab, looks at key cases highlighting the use of AI in Canadian politics. These include synthetic images in campaign documents, AI-generated video “spokesbots,” conversational agents that answer election-related questions, a Twitter bot that detects toxic tweets during Canadian elections, and more. The list is not exhaustive: many applications of AI take place behind closed doors in corporations or in election campaign “war rooms,” or are intentionally obscured to make them harder to identify.

How AI affects political processes and decisions

The report, by Michelle Bartleman, a uOttawa PhD candidate studying generative AI in Canadian journalism, and Elizabeth Dubois, associate professor in communication and University Research Chair in Politics, Communication and Technology, aims to stimulate discussion on the ways AI has been and could be integrated into different phases of the election cycle. Setting aside how the Canadian government is using AI to regulate and govern, which is already being studied in several contexts, the report seeks to better understand how AI is being applied in political processes, how Canadians’ political lives are being shaped by AI-enabled tools and how these tools are affecting the way Canadians make political decisions.

As Dubois says, innovative uses of AI in political communication and campaigns highlight the role humans play in creating and using these tools. “Sometimes we’re tempted to think of AI as independent entities with agency. While these tools have some decision-making ability, they are designed by humans, built by humans and trained by humans. So, it follows that, as humans, we can also choose how we want to use these tools, what guardrails to put up and how to make these systems transparent and equitable.”  

The new report builds on the insights of five contributing scholars (Samantha Bradshaw, assistant professor, School of International Service, American University; Wendy Hui Kyong Chun, professor, School of Communication and Canada 150 Research Chair in New Media, Simon Fraser University; Suzie Dunn, assistant law professor, Schulich School of Law, Dalhousie University; Fenwick McKelvey, associate professor, communication studies, Concordia University; and Wendy H. Wong, professor, political science and Principal’s Research Chair, University of British Columbia), who also outline challenges and next steps for finding ethical ways to integrate new technologies into political practice.

Download the “Political Uses of AI” report.

Author: Michelle Bartleman, MJ, PhD candidate (media studies), Department of Communication, Faculty of Arts