In 2019, in response to an anticipated increase in the use of AI decision-making tools across the public service, the Treasury Board Secretariat of Canada (TBS) released the Directive on Automated Decision-Making. This regulatory policy aims to assess and mitigate risks associated with AI tools by requiring departments that wish to implement them to follow a two-step process: first, complete a risk assessment rating the tool's risk on a scale of one to four; second, if the risk is rated two or higher, complete a peer review before using the AI tool.
With the Directive in place but no previous experience in conducting this type of peer review, the Government engaged University of Ottawa researchers Kelly Bronson and Jason Millar to design a model peer review process and recommend best practices.
To call the resulting report a success is an understatement. The Guide on Peer Review has just been released, drawing heavily from Professor Bronson and Professor Millar's work. "It's not often that you can directly trace the impact that academic research has on policy, so it's very exciting," says Millar, associate professor at uOttawa's Faculty of Engineering and director of the Canadian Robotics and AI Ethical Design Lab.
“Since we published our report, I’ve had many requests from people referring to our work, asking for information and looking for support. It has been really rewarding to know that our report is useful and to continue to work collaboratively with a broader knowledge community around the Directive,” adds Bronson, associate professor at uOttawa's Faculty of Social Sciences.
The professors shared with us their interactive and interdisciplinary approach to this project and the interesting gaps they discovered along the way.
A collaborative approach to building a groundbreaking body of knowledge
Professor Kelly Bronson, who holds the Canada Research Chair in Science and Society, highlights the unique perspectives that she and Professor Jason Millar, the Canada Research Chair in Ethical Engineering of Robotics and Artificial Intelligence, brought to the table.
“We both have interdisciplinary backgrounds. Jason’s research looks at the technical and ethical considerations of AI, while my research focuses on the societal dimensions of emergent technologies,” says Bronson. “I think that bringing these complementary domains together turned this project into the success that it was.”
The researchers began with a literature review of existing algorithmic impact assessments to understand current practices. “We quickly found the Canadian government was leading on this. Nobody else in the world was doing this kind of peer review at the time,” says Millar. “So instead, we looked at comparable processes, like privacy and environmental impact assessments and research ethics review boards, to understand how others conducted assessments that aim to manage the ethical implications of implementing a technology.”
Drawing on these comparable practices, the researchers developed a set of guiding principles, around which they built a model process for leading a peer review of the automated decision-making tools that the Directive covers.
“To me, the most important aspect of our methodology is that we didn’t stop at the literature review and initial modelling,” notes Professor Millar. “We were very focused on turning this into an interactive design activity where the ultimate uptake of the work would be driven by the ownership that the report’s audience sees in the final report and resulting toolkit.”
Bronson and Millar worked with numerous teams across government to workshop their model. This included a collaboration with the Canada School of Public Service and the Treasury Board Secretariat, who were in the process of reviewing an AI tool, which provided a practical opportunity to test Bronson and Millar’s model. They also engaged other departments working on AI-related projects, including the Canada Border Services Agency, Employment and Social Development Canada, Innovation, Science and Economic Development Canada and Agriculture and Agri-Food Canada.
“The feedback we got through this interactive, inclusive process allowed us to refine our principles and make our model process more usable,” highlights Bronson.
Going above and beyond to develop a useful toolkit
Underpinning the model, and the recommendations and tools that make up the final report, is a set of guiding principles for peer review. These principles include vigilance, independence, accountability, transparency, proportionality, accuracy, freedom from bias, consistency, inclusion, robustness and legibility.
“It was important to me to go beyond legal compliance under the Directive. I wanted to think about peer review through a broader social justice lens and consider how to mitigate harm, especially to vulnerable groups,” says Bronson. “We worked with a diverse team of researchers and collaborators across different domains — AI, ethics, social justice, law, public service — to develop comprehensive recommendations.”
While developing their process, Bronson and Millar identified significant gaps related to procurement, accountability and transparency within the Directive’s overall framework for reviewing and implementing AI decision-making tools.
“One of the things we realized when workshopping the process was that if a tool were provided by an external vendor, then that vendor would necessarily be involved in the peer review process,” notes Millar. “There can be significant costs associated with the time a vendor dedicates to providing the information and documentation required for the peer review. It’s critical to anticipate this during the procurement stage to avoid unexpected budgetary issues.”
Bronson also emphasizes the need for clearly defined roles to execute the requirements of the Directive. “Our workshops revealed that responsibility was assigned ad hoc, and that accountability needed to be more clearly defined,” she says. “Our report assigns ownership for every step of the process and recommends the development of a team of AI ethics experts across government.”
Along with proper documentation, these steps create a paper trail and ease the transition when individuals with key responsibilities change positions, a common occurrence in government. Together, they promote the transparency and consistency that the researchers stress are critical to a proper peer review process.
“In the end, our report went beyond the scope of the peer review itself,” says Bronson, laughing. “But it really ensured that our research outputs were as useful as possible,” adds Millar.