The AI + Society Initiative is excited to continue amplifying the voices of emerging scholars from around the world through our bilingual blog series Global Emerging Voices in AI and Society, stemming from the Global AI & Regulation Emerging Scholars Workshops and the Shaping AI for Just Futures conference. This series showcases the innovative research and insightful discoveries presented during these events, and beyond, to facilitate broad knowledge sharing and to engage a global network in meaningful interdisciplinary dialogue.
The following blog is authored by Oana Ichim, who was selected by an international committee to present her paper “The Politics of Lost Expertise: Carving a Place for (Legal) Experts and (Legal) Expertise in the AI Dynamics” at the 2nd Global AI & Regulation Emerging Scholars Workshop.
The Politics of Lost Expertise: Carving a Place for (Legal) Experts and (Legal) Expertise in the AI Dynamics
Human expertise cannot be ‘translated’ into ‘computer language’, and I explore ways to make that expertise ‘computable’ by coordinating ‘technical methods’ and benchmarks with legal knowledge and practice. My research highlights possible avenues for regulating constitutive features of AI design, production and use on the basis of human expertise.
The lost meanings
The motivation for this research stems from questioning the perceived objectivity of scientific claims. Like other cultural objects, codes and algorithms gain meaning as they are created and circulated. Computing power and large datasets cannot account for how data becomes knowledge, because it is never made clear through what process the data was transformed into meaningful explanations.
Legal decision-making is all about choosing which norm best corresponds to the facts. In the current state of the art, machine learning and, more recently, generative AI tools and methods cannot account for human choices; generative AI does not ‘understand’ the process of creation, it merely operates on the data it is given. Our enchantment with Large Language Models (LLMs) like ChatGPT obscures the fact that they are essentially next-word predictors whose success is mainly a result of massive computing power and large datasets, not proof of an ability to produce explanations.
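To make the next-word-predictor point concrete, consider a deliberately crude sketch (my own illustrative toy, nothing like the scale or architecture of a production LLM): a model that simply counts which word follows which will continue a legal phrase by frequency alone, with no grasp of the norms involved.

```python
from collections import Counter, defaultdict

# A toy next-word predictor: count which word follows which in a tiny corpus.
corpus = (
    "the court dismissed the appeal . "
    "the court allowed the appeal . "
    "the court dismissed the claim ."
).split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation -- statistics, not legal reasoning."""
    return follows[word].most_common(1)[0][0]

print(predict_next("court"))  # 'dismissed' -- chosen because it occurs twice, not because of any norm
```

Scaled up by many orders of magnitude and refined with neural networks, this frequency-driven logic is still what underwrites an LLM’s fluency: it continues text, it does not explain it.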
AI methods, no matter their specific name, aim to represent legal decision-making, but they do so within the confines of current AI capabilities. The representation of legal decision-making is placed in the hands of computer scientists whose choice of method remains unaccounted for. At the heart of legal decision-making lies what Jerome Frank once labeled the unknowability of the way in which the judge found the facts.
Legal meanings are lost in negotiations about datasets and benchmarks. Computer scientists may not manipulate data, but they do manipulate its representation, and any representation implies a choice. Legal expertise consists of confronting abstract representations of rules with possible contextual scenarios. Yet AI methods are not able to create abstract models; they are mainly focused on pattern matching and prediction, and pattern matching and prediction do not exactly qualify as legal argumentation. Moreover, even with the latest advancements in LLMs, knowledge-retrieval solutions fail to encapsulate the sequence of granular transformations undergone by a norm in its process of ‘occurring’ as a (normative) model. Confronting mental models with facts creates patterns of decisions. Transforming words into numerical representations risks further ‘absconding’ legal meanings and the richness of how these meanings were found in particular situations.
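A minimal sketch can illustrate this flattening. Assume a simple bag-of-words representation (the sentences and the code below are purely illustrative): two holdings with opposite normative meanings end up as nearly identical vectors.

```python
from collections import Counter
import math

def bag_of_words(text):
    """A crude numerical representation: map a sentence to its word counts."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b))

# Two normatively opposite holdings that share almost all their words.
s1 = "the court found the restriction was proportionate"
s2 = "the court found the restriction was not proportionate"

print(round(cosine_similarity(bag_of_words(s1), bag_of_words(s2)), 2))  # 0.95
```

Richer embeddings soften this particular failure, but the underlying worry stands: whatever the numerical representation, some choice about what counts as similarity has already been made, and that choice is rarely a legal one.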
AI cannot account for rule-fact transformations
Legal tech’s advance has promised to decode the hidden steps of the reasoning process, transforming judges into “optimal standard setters” and “blinking computerized Herculi”. AI’s credentials draw from its claim that it will turn the invisibility of legal reasoning into a computational engine that transforms opaque human intentions and values into ‘legible directives’. However, legal reasoning is a bit more complicated than that.
When retracing patterns in judicial decisions, it is necessary to retrace the various transformations of the norm and the transformations of the facts-data relation. Such a transformation is not ‘directly readable’ from the data because it is not linked to a specific text. One needs more than large amounts of text to account for the choices that have led to a particular normative outcome. For “Legal Tech” to become meaningful and relevant, it needs to take into account the transformations that such choices operate. Legal expertise consists of the reasoning abilities needed to justify norm-fact transformations: the more experienced a decision-maker, the better the reasoning. Imagination and intuition largely support the creation of abstract normative models and their adaptation to the facts.
AI tools cannot ‘read’ recurrence and transformation at such a granular level, not only because state-of-the-art methods cannot break that barrier of meaning, but also because such purposes are not built into them.
With a view to carving a place for legal experts in the AI landscape, I borrow the concept of “holistic discrimination” from Hubert Dreyfus and define expertise as the ability to aptly discriminate between situations. It is on the basis of such an ability that one can adapt abstract mental models to concrete facts.
I set out to build a framework that would help coordinate technical rules and benchmarks by focusing on possible avenues to make up for lost meanings and unaccounted-for expertise. I propose to develop specific methods to introduce ‘vanishing semantics’ back into formal representations and to build methods for retrieving personal experiences in order to confront biases.
Specialized datasets and techniques should be developed to reflect the cognitive capabilities that come with human expertise, and they should be ‘tuned’ on diverse objective functions. As law moves away from agency toward compression into large datasets, the risk of lost meanings increases. Regulation should turn its focus from AI data, networks and the like (as in the GDPR saga) toward ‘regulation’ of the agency and expertise that go into AI design. Nonetheless, and perhaps more importantly, regulation needs to create a framework for preventing the loss of expertise. The goal is not to impose expert standards that cannot be implemented by current AI models, but to render these standards visible and to build the kind of community once labeled “collaborative sense-making”.
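What ‘tuning on diverse objective functions’ could mean in practice can be sketched schematically (the scoring functions, weights and numbers below are entirely hypothetical): instead of optimizing for outcome prediction alone, a composite objective also rewards fidelity to expert-annotated reasoning.

```python
# Schematic multi-objective tuning: balance predictive accuracy against
# agreement with expert annotations. All functions are hypothetical stand-ins.

def outcome_accuracy(theta):
    """Hypothetical: how well a model parameter theta predicts case outcomes."""
    return 1.0 - abs(theta - 0.8)  # peaks at theta = 0.8

def expert_agreement(theta):
    """Hypothetical: how closely theta's reasoning tracks expert annotations."""
    return 1.0 - abs(theta - 0.3)  # peaks at theta = 0.3

def combined_objective(theta, weight=0.6):
    """A 'diverse' objective: reward accuracy AND fidelity to expertise."""
    return (1 - weight) * outcome_accuracy(theta) + weight * expert_agreement(theta)

candidates = [i / 100 for i in range(101)]
best = max(candidates, key=combined_objective)
print(best)  # 0.3 -- the heavier expert-agreement weight pulls the optimum toward expertise
```

The point is not the toy arithmetic but the design choice it makes visible: whoever sets the weight decides how much expertise counts, and that decision is precisely where regulation could intervene.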
The Shaping AI for Just Futures conference was the perfect opportunity to explore all possible hypotheses for improving the AI scenario. In particular, the Global AI & Regulation Emerging Scholars Workshop provided a tremendous opportunity to discuss the ‘craziest’ ideas around that scenario and to raise the challenge of finding arguments to justify why such scenarios may lead to a just future.
Key resources to learn more
Casey, Anthony & Niblett, Anthony (2017) ‘The Death of Rules and Standards’ 92 Indiana Law Journal.
Dreyfus, Hubert & Dreyfus, Stuart (1986) Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer New York: The Free Press.
Engstrom, David Freeman & Gelbach, Jonah B. (2020) ‘Legal Tech, Civil Procedure, and the Future of Adversarialism’ 169(4) University of Pennsylvania Law Review.
Hildebrandt, Mireille (2016) Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology Edward Elgar Publishing.
Marino, Mark C. (2020) Critical Code Studies MIT Press, at 120.
Mitchell, Melanie (2019) Artificial Intelligence: A Guide for Thinking Humans Farrar, Straus and Giroux.
About the author
Dr. Oana Ichim is an international lawyer by training, passionate about the interface between law and artificial intelligence and their relationship within the frameworks of logic, cognition, linguistics and aesthetics. She is a stubborn optimist who works with interdisciplinary teams with the goal of turning machine learning and the social sciences into communicating domains.
This content is provided by the AI + Society Initiative to help amplify the conversation and research around the ethical, legal and societal implications of artificial intelligence. Opinions and errors are those of the author(s), and not of the AI + Society Initiative, the Centre for Law, Technology and Society, or the University of Ottawa.