The AI + Society Initiative announces the winners of the 2022 Prix Chercheur(e) émergent(e) Banque Scotia en IA et régulation

The AI + Society Initiative is pleased to announce that the first Prix Chercheur(e) émergent(e) Banque Scotia en IA et régulation has been awarded to Henry Fraser and José-Miguel Bello y Villarino, and the second prize to Aileen Nielsen.

Following an international call, the AI + Society Initiative invited emerging scholars in the field of artificial intelligence (AI) and regulation to take part in the Global AI + Regulation Emerging Scholars Workshop and present their draft papers to leading researchers in AI and law. An international selection committee chose eight draft papers, which received feedback from established scholars and fellow participants during a two-day virtual workshop convened by Professor Florian Martin-Bariteau, Director of the Initiative, and Karni Chagal-Feferkorn, Scotiabank Postdoctoral Fellow.

Beyond offering a space to exchange ideas and receive constructive feedback, the AI + Society Initiative was able to support emerging scholars thanks to the Scotiabank Fund for AI and Society at the University of Ottawa, which funded a first prize of $1,500 and a second prize of $500 to recognize the best papers presented.

The Prix Chercheur(e) émergent(e) Banque Scotia en IA et régulation for the best paper was awarded to Henry Fraser (Queensland University of Technology) and José-Miguel Bello y Villarino (University of Sydney) for their paper “Where Residual Risks Reside: Lessons for Europe’s Risk-Based AI Regulation From Other Domains”.

Abstract: This paper explores the question of how to judge the acceptability of “residual risks” in the European Union’s Proposal for a Regulation laying down harmonized rules on Artificial Intelligence (the Proposal). The Proposal is a risk-based regulation that prohibits certain uses of AI and imposes several layers of risk controls upon “high-risk” AI systems. Much of the commentary on the Proposal has focused on the issue of which AI systems should be prohibited, and which should be classified as high risk. This paper bypasses this threshold question, engaging instead with a key issue of implementation.

The Proposal imposes a wide range of requirements on providers of high-risk AI systems (among others) but acknowledges that certain AI systems would still carry a level of “residual risk” to health, safety, and fundamental rights. Art 9(4) provides that, in order for high-risk systems to be put into use, risk management measures must be such that residual risks are judged “acceptable”. Participants in the AI supply chain need certainty about what degree of care and precaution in AI development, and in risk management specifically, will satisfy the requirements of Art 9(4). 

This paper advocates for a cost-benefit approach to art 9(4). It argues that art 9(4), read in context, calls for proportionality between precautions against risks posed by high-risk AI systems and the risks themselves, but leaves those responsible for implementing art 9(4) in the dark about how to achieve such proportionality. This paper identifies potentially applicable mid-level principles both in European laws (such as medical devices regulation) and in laws about the acceptability of precaution in relation to risky activities from common law countries (particularly negligence and workplace health and safety). It demonstrates how these principles would apply to different kinds of systems with different risk and benefit profiles, with hypothetical and real-world examples. And it sets out some difficult questions that arise in weighting the costs and benefits of precautions, calling on European policy-makers to provide more clarity to stakeholders on how they should answer those questions.

The second prize was awarded to Aileen Nielsen (ETH Zurich) for her paper entitled “Can an Algorithm be too Accurate?”.

Abstract: Much research on social and legal concerns about the increasing use of algorithms has focused on ways to detect or prevent algorithmic misbehavior or mistake. However, there are also harms that result when algorithms perform too well rather than too poorly. This paper makes the case that significant harms can occur because algorithms are too accurate.

This paper proposes a novel conceptual tool and associated regulatory practice for reining in resulting harms: accuracy bounding. Accuracy bounding would limit the performance of algorithms with respect to their accuracy. This technique could provide an intuitive and flexible tool to address concerns arising from undesirably accurate algorithms. Importantly, accuracy bounding could be complementary to many existing proposed governance and accountability tools for algorithms, such as fairness audits and cyber-security best practices.

To date, legislators and legal scholars alike have largely ignored the risks that come from overly accurate algorithms. However, there is a rich history in law and regulation, as well as in the scientific and engineering disciplines, of analogous forms of performance bounding. Thus, accuracy bounding represents an expansion of existing regulatory and technical techniques. Such techniques offer a path forward to address many as-yet-unresolved concerns regarding the rise of the algorithmic society.

Congratulations to all the winners!

The call for applications for the 2022 edition of the Global AI + Regulation Emerging Scholars Workshop and the Scotiabank Award will be published in spring 2022. The workshop is expected to take place in Europe in fall 2022.