Abstract
The advent of artificial intelligence capable of interacting in natural language, marked by the launch of ChatGPT 3.5, highlights global disparities, which are also reflected in access to new technologies. The need to regulate this technology presents challenges that must be addressed. This article examines possible regulatory models.
The AI Control Dilemma
By the time ChatGPT 3.5 was launched at the end of 2022, some regions, such as the European Union, had been holding public debates since 2021 on promoting ethical AI regulation, given the rapid advances in this technology. In contrast, two years after generative AI became widely accessible to the public, many parts of the world still show limited understanding of, and limited effort to regulate, this technology.
The reasons for this may be multiple. To grasp this phenomenon, the reflections of David Collingridge are quite useful. In the 1980s, Collingridge introduced the control dilemma, which states that “attempting to control a technology is difficult, and not rarely impossible, because during its early stages, when it can be controlled, not enough can be known about its harmful social consequences to warrant controlling its development; but by the time these consequences are apparent, control has become costly and slow.”
In this regard, Collingridge highlights that, to prevent unintended consequences of a technology, two conditions must be met: first, the harmful effects of the technology must be known, and second, it must be possible to modify the technology to avoid those effects. In the context of AI, both conditions seem nearly impossible to satisfy in their full scope, which may jeopardize our ability to manage its effects effectively and in a timely manner.
Why Regulate AI?
Long before ChatGPT 3.5 became a landmark in technological advancements in 2022, AI had already begun to be part of our daily lives. AI is not something of the future; it greets us every morning when we glance at the screen of our phone or other electronic devices (see Figure 1). From the moment we unlock our smartphones, we receive personalized suggestions about the weather, the perfect music to start the day, the fastest route to work, or even potential responses to a WhatsApp message. All of this is thanks to a branch of AI known as machine learning. As the day goes on, we may use Face ID to unlock our phones or Google Lens to translate a sign in another language or search for information from an image, interacting with another branch of AI: computer vision (also known as artificial vision, machine vision, or technical vision). This interaction may be less noticeable but still significant; for instance, when we watch videos on YouTube, another piece of this technology is activated to detect inappropriate content or copyright infringements in videos uploaded to the platform.
AI is not only present in our personal lives but also in our professional environments. Tools like ChatGPT, Gemini, and Canva have transformed the way we work. These platforms, based on generative AI and natural language processing, allow us to simplify complex tasks. From asking Alexa or Siri for help to writing an email in another language with Google Translate or proofreading texts with Grammarly, AI-powered programs have become almost imperceptible yet integral parts of everyday life for many people.
Figure 1. Specialized Branches of AI

Source: Own examples and diagram. Adapted from A common understanding: simplified AI definitions from leading standards (NSW Government, 2024).
Although the primary limitation of AI lies in the need for internet access, this access is becoming increasingly widespread globally. According to the United Nations, by 2023 more than 65% of the global population was connected to the internet, and over 75% owned a mobile phone, a figure projected to rise to 78% within the next decade. With three out of four people around the world starting their day, in one way or another, by interacting with AI even before the arrival of ChatGPT 3.5 in 2022, it is logical to ask: What has changed, and why has the discussion on AI regulation intensified?
The control dilemma, explained earlier, helps guide answers to these questions. In the early stages of AI development, not enough was known about its consequences. Today, the effects of AI —especially generative AI— are becoming increasingly apparent, which makes regulation more necessary, even if it may eventually come too late and with high societal costs.
Although AI represents a significant scientific advancement, with the potential to close gaps in key sectors like education and healthcare and stimulate the economy through innovation, it also poses serious challenges to individual rights. AI models, trained on information provided by humans, reflect the flaws and biases of our society. AI can amplify these biases, reinforcing discrimination against certain population groups, facilitating the misuse of personal data, infringing on freedom of expression, and spreading misinformation, among many other negative effects. Therefore, balancing the maximization of AI’s benefits and the mitigation of its risks requires an ethical approach to its regulation that carefully weighs these potential impacts.
Whether consciously or unconsciously, people share various data about their preferences. However, the omnipresence of AI, combined with limited technological literacy in AI, means that users have only partial control over the use and privacy of their data. Data collected by private companies, such as purchase histories, online searches, or social media interactions, as well as data obtained by employers or even governments, can be used for different purposes. In authoritarian contexts, this capacity for surveillance and control can have worrying implications, exacerbating risks to individual rights and fundamental freedoms.
Given the inevitable use of people's information, it is crucial for governments to address key issues regarding privacy, security, and the ethical use of both the data fed into AI systems and the results those systems produce. Without strong regulatory frameworks, the risk of irresponsible use of AI increases significantly. Such regulation should include mechanisms that ensure the responsible and sustainable use of the technology, as well as promote AI literacy to mitigate the risks.
AI in the EU (2021-2030)
With the aim of ensuring that AI systems used in the European Union are “safe, transparent, traceable, non-discriminatory, and environmentally friendly,” the European Parliament formally adopted the first law regulating AI in March 2024 with 523 votes in favor out of 705 seats, following three years of debates. The legislation, originally introduced by the European Commission in April 2021, proposed establishing the first regulatory framework for AI in the region.
Currently, there is no global consensus on a definition of AI, which means that each country or region regulating it will weigh different elements or categories when creating legislation for its use. In the case of the EU (see Figure 2), the AI Act focuses on General Purpose AI (GPAI). These so-called foundation models possess generative capabilities and are designed to perform a wide range of intelligent tasks. As part of Artificial General Intelligence (AGI), the existence of GPAI is made possible by Large Language Models (LLMs) and their generative abilities. In contrast, Artificial Narrow Intelligence (ANI) is only capable of performing specific, predefined tasks. These three elements—GPAI, AGI, and ANI—are critical for understanding the EU's AI risk classification and the respective implementation schedule of these regulations.
Figure 2. Categories of AI Technologies

Source: Own elaboration. Adapted from the European Parliament (2023).
The law classifies AI systems into four categories: unacceptable risk, high risk, limited risk, and minimal risk, based on their specific uses, while allowing flexibility to reclassify uses that evolve and come to present greater risks (a schematic sketch follows below):
— Unacceptable risk: Systems designed to manipulate human behavior or social classification are completely banned. This includes those that use manipulative techniques, exploit the vulnerabilities of disadvantaged individuals, or implement social scoring.
— High risk: Systems that significantly impact safety or fundamental rights—such as those used in education, employment, critical infrastructure, and the administration of justice—are subject to strict safety, transparency, and human oversight rules.
— Limited risk: Systems such as chatbots and deepfakes face lighter obligations, mainly transparency requirements ensuring users are aware they are interacting with AI.
— Minimal risk: AI-enabled video games and spam filters are exempt from more rigorous regulations. However, this could change as generative AI advances.
The law restricts the use of real-time facial recognition in public spaces, with exceptions for cases such as searching for missing persons or preventing terrorist threats.
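To make the tiering concrete, the following minimal Python sketch represents the four categories as a simple data structure. The example use cases, their mapping to tiers, and the one-line obligation summaries are illustrative assumptions, not wording from the Act, which assigns categories through detailed legal criteria.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers mirroring the EU AI Act's four risk categories."""
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g., social scoring)
    HIGH = "high"                  # strict safety, transparency, and oversight duties
    LIMITED = "limited"            # mainly transparency obligations (e.g., chatbots)
    MINIMAL = "minimal"            # largely exempt (e.g., spam filters, video games)

# Hypothetical mapping of example use cases to tiers, for illustration only;
# the Act itself assigns categories through detailed legal criteria.
EXAMPLE_CLASSIFICATION = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "exam_grading": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "email_spam_filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """Return a one-line, simplified summary of the duties attached to each tier."""
    return {
        RiskTier.UNACCEPTABLE: "Prohibited: may not be placed on the EU market.",
        RiskTier.HIGH: "Allowed subject to safety, transparency, and human oversight rules.",
        RiskTier.LIMITED: "Allowed with transparency duties (users must know they face AI).",
        RiskTier.MINIMAL: "Allowed without additional obligations under the Act.",
    }[tier]

for use_case, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{use_case}: {tier.value} -> {obligations(tier)}")
```

Laying the tiers out this way makes the logic of the Act easier to see: obligations grow with risk, and the unacceptable tier is not a compliance regime but an outright ban.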
This regulation applies to the 27 Member States. Given the complexity of the European Union system, the AI Act, published in July 2024 and in force as of August of the same year, outlines a long implementation process with a phased schedule for enforcing the different obligations (see Figure 3). While all Member States are required to report by November 2024 on the authorities responsible for implementing the legislation, the bans on certain unacceptable-risk AI systems come into force six months after the law's entry into force, starting in February 2025. Similarly, the rules for GPAI models apply after 12 months, and high-risk systems begin to be regulated after 24 and 36 months. During this period, other key rules on governance, confidentiality, and sanctions will progressively come into effect.
Figure 3. Timeline for the Implementation of the EU AI Act
July 12, 2024: The EU AI Act is published in the Official Journal of the EU.
August 1, 2024: The EU AI Act enters into force (Article 113).
February 2, 2025: The rules on purpose, scope, definitions, AI literacy, and prohibitions take effect (Article 113(a)).
August 2, 2025: The rules on notifications, GPAI models, certain enforcement matters, and sanctions take effect (Article 113(b)).
August 2, 2026: The EU AI Act becomes fully applicable, the general grace period for high-risk AI systems ends, and most operational provisions take effect (Articles 111(2) and 113).
August 2, 2027: The rules on high-risk AI systems under Article 6(1) take effect (Article 113(c)).
August 2, 2030: The grace period for high-risk AI systems intended for use by public authorities ends (Article 111(2)).
Source: Excerpt from the EU AI Act implementation timeline (2024) and White & Case (2024); extended version in English.
From a temporal perspective, the EU AI Act follows a gradual approach that begins with AI literacy and prohibitions, followed by the introduction of rules on notifications, AI models, and sanctions. As it progresses, the grace period for high-risk AI systems concludes, operational provisions are applied, and specific regulations for high-risk AI are implemented. To ensure the effective implementation of the regulation, the establishment of the EU AI Office within the European Commission has been planned.
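The phased schedule described above can be reproduced by adding the month offsets to the entry-into-force date. The following minimal sketch assumes August 1, 2024 as the starting point and uses simplified milestone labels that paraphrase, rather than quote, the Act:

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # the AI Act entered into force on August 1, 2024

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day clamped to 28 to stay valid)."""
    total = (d.month - 1) + months
    return date(d.year + total // 12, total % 12 + 1, min(d.day, 28))

# Month offsets after entry into force for the main waves of obligations
# (labels are simplified paraphrases, not the Act's official wording).
MILESTONES = {
    "Prohibitions and AI literacy rules apply": 6,              # February 2025
    "GPAI model rules and sanctions apply": 12,                 # August 2025
    "Most provisions apply; high-risk grace period ends": 24,   # August 2026
    "Article 6(1) high-risk rules apply": 36,                   # August 2027
}

for label, offset in MILESTONES.items():
    print(f"{add_months(ENTRY_INTO_FORCE, offset):%B %Y}: {label}")
```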
Developing AI Regulation Models
In contrast to the approach adopted by the European Union, other regions of the world with a high degree of AI development prioritize different components in their regulatory and technological adoption models. In the book Digital Empires: The Global Battle to Regulate Technology, Anu Bradford reflects on the contrasts between the European model, which focuses on establishing global regulatory standards, the US model that encourages the private sector, and the Chinese model driven by the use of state resources.
She explains that the US model is market-centric: the role of government is limited, and large tech companies are left to lead governance. The United States fosters a favorable environment through incentives for innovation and exports its influence through services and technologies, consolidating private power in the global economy.
Regarding AI regulation, the United States lacks comprehensive federal legislation, and its approach relies on laws and guidelines of limited scope. Key regulations include the National AI Initiative Act of 2020, which focuses on promoting research and development in this field, along with the establishment of the National Artificial Intelligence Initiative Office, tasked with implementing the national strategy. In October 2023, the White House issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which sets guidelines for the development of federal standards, including elements of transparency in the results of safety testing. In the past year, various states have led initiatives concerning the regulation of high-risk systems, algorithmic discrimination, and automated decision-making. The general trend is toward increased sectoral regulation at the state level; however, public discussions are expected to continue regarding the adoption of cohesive AI regulation and the establishment of a federal authority.
Meanwhile, the Chinese model, driven by the state, seeks to establish the country as a technological superpower through the utilization of state resources. This model manifests in surveillance, censorship, and propaganda, along with actions aimed at preserving political control. In turn, China exports its infrastructural power by developing 5G networks, data centers, and smart cities.
On the regulatory front, the first specific administrative regulation on generative AI was published in 2023, the Provisional Measures for the Management of Generative AI Services. This regulation does not categorize risks, but certain services, such as those with "public opinion attributes or social mobilization capacity," are subject to stricter scrutiny, including security assessments and general application requirements like content moderation and labeling. Among these requirements are upholding socialist values and not generating prohibited content that incites subversion of state power or the overthrow of the socialist system, or that endangers national security and interests. The responsibility for regulating generative AI falls primarily on the Cyberspace Administration of China.
This discussion includes other giants on the global stage. According to the World Economic Forum (WEF), the five largest economies in the world have made significant progress in developing AI ecosystems. In addition to the United States, China, and Germany (a member of the EU), Japan and India have also joined this list. Although both countries lack specific AI laws, they have adopted distinct approaches to address their regulation.
Japan has played a key global role in launching the Code of Conduct for Organizations Developing Advanced AI Systems in the context of the G7 in 2024, an instrument that compiles 11 recommendations with a risk-based approach. Domestically, the country follows a soft-law strategy, promoting AI governance through guidelines aimed at minimizing risks while prioritizing innovation. In 2024, Japan also published the AI Guidelines for Business Version 1.0, a non-binding document that seeks to promote voluntary efforts following a risk-based approach. However, a recent draft AI bill could redirect the current strategy toward a hard-law approach, including oversight of developers and the imposition of fines and penalties in cases of non-compliance.
India, for its part, has established sector-specific frameworks, such as in finance and health, and its approach is guided by the National AI Strategy from 2018 and the Operationalizing Principles for Responsible AI from 2021, which prioritize training and incentives for the ethical design of AI. Although regulations are quite limited, the forthcoming India Digital Bill is expected to delineate and regulate high-risk AI systems.
Latin America: Is a Unified Regulation Possible?
International experience in AI regulation offers valuable lessons for Latin America, both in terms of promoting innovation and protecting individuals from the associated risks of this technology. However, the idea of a unified regulatory framework for the region may not be realistic or effective, as each country is progressing at its own pace in creating ecosystems for AI development.
Recent recommendations, such as the 2024 United Nations resolution Seizing the Opportunities of Safe, Secure and Trustworthy AI Systems for Sustainable Development and the OECD's Recommendation on AI, originally adopted in 2019 and updated in 2021, highlight the need for clear governance, investment in technological infrastructure, and education in digital skills. For Latin American countries, however, these challenges will need to be approached from a more flexible perspective, considering the different realities and capabilities of each nation. According to the Latin American Artificial Intelligence Index (ILIA, Índice Latinoamericano de Inteligencia Artificial) 2024, Chile, Brazil, and Uruguay are leading in AI research and development, but advancements are not uniform across the region.
In such a complex and rapidly evolving context, Latin America must balance the promotion of AI with the protection of fundamental rights, leveraging this technology for inclusive and sustainable development. The key will be to design regulatory frameworks that allow for responsible implementation while respecting the diversity and evolutionary pace of each country, without compromising individuals’ rights or, ultimately, democracy.
References
Collingridge, D. (1980). The Social Control of Technology. New York: St. Martin’s Press.
EU Artificial Intelligence Act. (2024).
European Parliament. (2023). General-purpose artificial intelligence.
ILIA, Índice Latinoamericano de Inteligencia Artificial. (2024).
United Nations, Economic Affairs. (2023). More than 75% of the world's population owns a mobile phone and more than 65% uses the internet.
Netflix. (2024). What's Next? The Future with Bill Gates.
European Parliament. (2024). EU AI Act: first regulation on artificial intelligence.
White & Case. (2024). The global dash to regulate AI.
World Economic Forum. (2024, June 2). This is how venture capitalists are investing in AI in five countries.