The AI + Society Initiative is delighted to continue amplifying the voices of emerging researchers from around the world through our bilingual blog series « Voix émergentes mondiales en IA et société » (Global Emerging Voices in AI and Society), which follows the global young researchers workshops on AI and regulation and the Shaping AI for Just Futures conference. This series highlights innovative research presented at these events – and beyond – to facilitate knowledge sharing and engage a global network in constructive interdisciplinary dialogue.
The following blog is written by Lucas Costa dos Anjos, who was selected by an international committee to present his paper "The 'Brussels Effect' and beyond: Pursuing smart, strategic, and balanced AI regulation" at the second global young researchers workshop on AI and regulation. A Portuguese version is available on SSRN.
This scholarly contribution is available in English only.
The “Brussels Effect” and beyond: Pursuing smart, strategic, and balanced AI regulation
This blog explores the need for jurisdiction-specific AI regulatory frameworks, examining the dynamics between the Global North and South. It discusses the impacts of the European Union's AI Act, the false dichotomy between innovation and regulation, and the factors that shape effective AI governance, highlighting the importance of a balanced approach to AI innovation and oversight.
The dynamic nature of artificial intelligence calls for a finely tuned regulatory approach that recognizes unique development scenarios across the Global North and the Global South (or Global Majority). Instead of focusing on the quantity of regulation, or its potential hindrance to innovation, the key lies in shaping jurisdiction-specific frameworks that promote innovation while addressing regional needs and goals.
The analysis supports the need for such frameworks, considering factors such as technological infrastructure, research and development (R&D) capabilities, socio-economic priorities, the role of national economic champions in foreign trade policies, and data sovereignty, ownership, and protection. It also recognizes the need for some level of jurisdictional interoperability, especially in ever-growing digital economies and the globalized supply chains of the tech sector.
The recent emergence of AI regulations, such as the AI Act in the European Union, has sparked legislative initiatives worldwide, including in Canada, Brazil, China, and the US. However, this rush has led to hasty decisions. The AI Act's potential to set a global regulatory standard, mirroring the impact of the GDPR (a trend known as the "Brussels Effect"), adds gravity to this issue.
Transposing the AI Act requires thoughtful consideration of each jurisdiction's unique traits. While debates continue about the EU’s AI strategy potentially stifling innovation due to over-regulation, Global South countries, predominantly AI technology importers, need frameworks fostering innovation despite their distinct economic challenges.
The need for jurisdiction-specific AI regulatory frameworks
Technological advancements, economic development, and socio-political landscapes often divide the world into the Global North and Global South. The Global North, mainly developed countries, boasts advanced technological infrastructures and robust economies. In contrast, the Global South, made up of developing nations, grapples with challenges stemming from economic disparities and technological lag, often a consequence of colonial and neo-colonial exploitation by Global North countries. This is, of course, an oversimplified generalization that lacks the nuance this debate deserves. However, for the purposes of this short analysis, the two "poles" will be considered according to these parameters.
The digital sphere reflects these divides. The Global North designs and implements most digital economy infrastructures, architectural logic, protocols, applications, means of monetization, and standards. They lead AI research, development, and implementation, integrating innovations into various sectors. Conversely, the Global South generally relies more on importing AI technologies and solutions, reflecting economic constraints, lack of access to cutting-edge research, and policy limitations.
AI's ripple effects differ in the Global North and South due to their distinct socio-economic landscapes. In the Global North, AI enhances business processes, drives efficiency, and creates new market opportunities. In the Global South, AI is often portrayed as a tool to address fundamental societal challenges, such as improving healthcare in remote areas or optimizing agriculture. However, cultural and ethical considerations are crucial. AI systems not trained on diverse datasets might misinterpret local cultural nuances, underscoring the importance of inclusive AI development. Moreover, the Global South often serves as a test bed for new technologies.
Legislative attempts to bridge such gaps are emerging, but a universal "one-size-fits-all" approach is impractical and counterproductive. Given the disparities in technological adoption, economic development, and cultural nuances, a blanket regulatory approach might overlook the unique challenges and opportunities of AI in each region. Recognizing these disparities, there's a growing need for jurisdiction-specific AI regulatory frameworks.
The AI Act's global impact
As AI continues to shape the future of industries and societies, the AI Act emerges as a central piece of legislation with the potential to influence global AI governance. Its implications extend far beyond its origin, setting the stage for a new era of AI regulation. The AI Act, with its comprehensive approach to AI governance, has the undeniable potential to become a benchmark for AI regulations. Many countries look to the AI Act as a template, recognizing its approach to promoting responsible innovation, even though it was designed with the EU’s jurisdictional realities in mind.
The “Brussels Effect” refers to the EU's ability to set global standards due to its large market size and regulatory power. This effect has been evident in digital economy regulations, with the General Data Protection Regulation (GDPR) as a prime example. In force since 2018, the GDPR set stringent data protection standards, influencing countries outside the EU to adopt similar regulations. The EU has also introduced the Digital Services Act and the Digital Markets Act, which could spark new global standards for platform accountability and competition in the digital sector.
However, while the AI Act serves as an exemplary framework, it's essential to recognize that a direct transplantation of this law might not suit every jurisdiction, even as many legislatures attempt precisely that. Each region has its unique socio-economic, technological, and cultural landscape. Cultural nuances, legal and political systems, and socio-economic priorities influence how AI technologies are developed, adopted, and integrated into various sectors of society, and these unique traits must be properly taken into account.
The false dichotomy between innovation and regulation
The EU's AI strategy, underpinned by the AI Act, has sparked debates regarding its consequences. While many laud its commitment to ethical AI, concerns arise about its potential to stifle innovation. The common narrative that innovation and regulation are at odds presents a false dichotomy. In reality, innovation and regulation can and should coexist harmoniously.
Well-crafted regulations can act as catalysts for innovation by setting clear guidelines and standards. They provide a roadmap for innovators, ensuring that their efforts align with societal values, ethical considerations, and safety standards. This clarity reduces uncertainties, making it easier for startups to attract investments and for established companies to venture into new AI-driven domains. Public trust is crucial for AI adoption, and regulations play an essential role in building this trust by ensuring that AI technologies are developed and deployed ethically and responsibly.
In the absence of regulations, companies might inadvertently develop AI systems that harbor biases, infringe on data protection, or pose other risks. Addressing these issues post-deployment can be costly. Regulations provide an ex ante approach, guiding companies to consider potential pitfalls from the outset and develop AI solutions that are both innovative and responsible. As AI technologies become increasingly global, a robust regulatory framework enhances a region's competitiveness on the world stage.
Beyond practical benefits, there's an ethical imperative to balance innovation with regulation. AI technologies impact every facet of society, from healthcare and education to finance and governance. Ensuring that they do so in ways that uphold human rights, equity, and justice is paramount. When approached thoughtfully, regulation not only coexists with innovation but actively enhances it, guiding the trajectory of AI to benefit society at large.
Balancing innovation and regulation
As AI continues its transformative journey, reshaping industries and societal norms, balancing innovation with regulation becomes a central concern. A regulatory framework for AI is about creating an environment where innovation thrives while ensuring public safety, ethical considerations, and societal well-being. Unchecked innovation can lead to unintended consequences, such as data protection breaches and the amplification of societal biases.
Lax regulations can lead to unethical practices like data misuse, biased algorithms, and a lack of transparency in AI decision-making processes. Such a landscape can erode public trust in AI technologies, leading to resistance in adoption. A proper framework provides clear guidelines for developers, ensuring that AI technologies are developed with ethical considerations at the forefront. It also offers avenues for redress, ensuring that those affected by AI decisions have a voice and a means to seek justice.
Furthermore, an adequate approach encourages collaboration between regulators, industry players, and the public, fostering a collective effort to ensure that AI technologies are developed and deployed responsibly. Given AI's global nature, technology imports and data sharing are vital for competitiveness. However, while economic champions may advocate for relaxed regulations to achieve global dominance, it's essential that such regulations don't jeopardize national security or citizen data protection.
The Global North often leads in AI innovation and regulation, with the Global South grappling with unique socio-economic, political, and cultural realities. The historical disparities, often a result of colonial and neo-colonial influences, further complicate the direct transplantation of regulations. As nations worldwide look to the AI Act as a template, it's crucial to approach its adoption with sensitivity to regional nuances, aiming for global standards that resonate with local realities.
The insights and perspectives exchanged at the Shaping AI for Just Futures Conference made it evident that global AI governance is both complex and multifaceted. Regionally diverse, multistakeholder discussions highlighted the critical importance of jurisdiction-specific regulatory frameworks, showcasing the challenges and opportunities that different regions face. These insights reinforced the necessity of a balanced approach that promotes innovation while ensuring ethical and responsible AI deployment tailored to regional needs.
Key resources to learn more
Bradford, Anu (2012). The Brussels Effect, Northwestern University Law Review.
Cohen, Julie E. (2019). Between Truth and Power: The Legal Constructions of Informational Capitalism, Oxford University Press.
Mazzucato, Mariana (2015). The Entrepreneurial State: Debunking Public vs. Private Sector Myths, Science and Public Policy.
Svantesson, Dan (2022). The European Union Artificial Intelligence Act: Potential Implications for Australia, Alternative Law Journal.
Wall, P. J., Saxena, Deepak, & Brown, Suzana (2021). Artificial Intelligence in the Global South (AI4D): Potential and Risks, Heliyon.
About the author
Dr. Lucas Anjos is a postdoctoral researcher at the Sciences Po School of Law, focusing on algorithmic transparency, trade secrets, and the challenges posed by artificial intelligence. His work at Sciences Po is supported by a Postdoctoral Fellowship from Project Liberty. Lucas also serves as a researcher at the Brazilian Data Protection Authority and as a professor at the Universidade Federal de Juiz de Fora. He holds doctoral degrees from the Université libre de Bruxelles and the Universidade Federal de Minas Gerais. He founded, and served as a scientific advisor to, the Institute for Research on Internet and Society (IRIS).
This content is provided by the AI + Society Initiative to help amplify the conversation and research around the ethical, legal and societal implications of artificial intelligence. Opinions and errors are those of the author(s), and not of the AI + Society Initiative, the Centre for Law, Technology and Society, or the University of Ottawa.