Anticipatory governance with the help of technology


By: Lydia Garrido Luzardo, 4 Feb 2025
Reading time: 20 min.
Original article in Spanish. Translation produced by artificial intelligence.

Abstract

The transformative power and disruptive potential of AI requires ethical governance. Responsible anticipation and literate use of the future are key elements in policymaking. Parliamentary committees of the future offer recommendations for inclusive governance based on cooperation, transparency and collective intelligence.

The evolution of artificial intelligence (AI) has reached a turning point where its transformative and disruptive capabilities demand a profound assessment of how to govern it ethically and responsibly. Artificial general intelligence (AGI) refers to a type of AI capable of performing any human intellectual task. AGI poses challenges that would appear to call for different epistemological and methodological frameworks for its governance. These frameworks must not simply react to technological transformations once they have occurred, but rather explore the nature of the relationships that give rise to the invention and application of tools, taking into account the way dominant systems define and explore opportunities. To that end, a broader understanding of the attributes and relational complexity of anticipatory systems is needed, one that integrates the epistemologies of collective intelligence with ethical values and anticipatory capacities, all rooted in a theory of anticipation that helps to clarify both why and how we imagine the future. Such a theory, the ‘discipline of anticipation,’ is what underpins efforts to enhance anticipatory capabilities and alter the conditions within which governance systems and practices function.

This article argues that effective anticipatory governance requires an approach based on anticipatory capabilities and ethical principles. Responsible anticipation is thus presented both as a capability and as an essential quality of future-oriented, proactive, responsive decision-making, creating the conditions for the ethical development of AI for the common good of society. It is crucial to establish clear principles to orient AI evolution toward a safe and ethical AGI, and to adopt an approach that reorients the generative side of human agency toward an integration of imagined futures within an ethical frame that steers away from oppression, extractivism, and exploitation.

This text integrates elements of ethics, complexity, and use of the future into decision-making, applying them to the context of anticipatory governance for AI. It also draws on the experience of the Special Futures Committee of the Uruguay Parliament and the recent contributions of the Second World Summit of the Committees of the Future in Parliaments that took place in Montevideo, Uruguay, in September 2023. 

The article is structured into three main sections. First, it analyzes the evolutionary nature of AI and its disruptive potential. Second, it examines the challenges that the use of the future poses in anticipatory governance practices. Finally, it discusses the practical considerations of responsible anticipatory governance for AI, emphasizing the crucial role of Parliaments and other institutions in designing flexible, anticipatory, and adaptive governance frameworks.

The Evolutionary Nature of AI and its Disruptive Scope

There are numerous definitions of artificial intelligence. Some debates focus on how AI differs from human intelligence, but this article does not center on that aspect. AI’s disruptive potential goes beyond such similarities or differences with humans; in other words, beyond an anthropomorphic and anthropocentric perspective. Here, we focus on AI as a powerful tool that, in its current and potential capabilities, is a source of opportunities for the good of humanity and, at the same time, may present serious threats.

The other crucial aspect of our approach is to consider the evolutionary nature of AI (precisely what so often generates differences and lack of consensus on a single definition), since one of its inherent characteristics is its permanent state of change. In December 2023, the OECD revised its definition of artificial intelligence systems:

An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.

Simply put, AI is not static; it is constantly evolving, expanding its capabilities and transforming its relationship with the surrounding technological and social dynamics. This evolutionary process, characterized by increasing autonomy and disruptive potential, poses complex challenges for decision-makers and once again draws attention to the inadequacy of familiar governance systems premised on the proposition that the future is technocratically knowable. AI is intertwined with other emergent technologies such as the Internet of Things (IoT), autonomous systems, robotics, biotechnology, and nanotechnology, as well as the cognitive and neurocognitive sciences, which might further amplify its impact. An example of such intertwining can be found in precision medicine, where AI, along with biotechnology and nanotechnology, enables personalized treatments that, in turn, raise ethical and privacy concerns requiring new forms of governance.

From its origin, AI proved to be a powerful tool for solving specific problems. However, we are now on the brink of leaping into artificial general intelligence, a kind of AI capable of performing general tasks like humans and even solving complex problems without specific pre-programming. This transition prompts essential questions about how to govern these capabilities and what kind of futures we wish to build with them.

Understanding the evolutionary nature of AI is crucial for designing anticipatory frameworks that might provide a more practical foundation for the exercise of human agency and futureproofing governance.

Artificial Narrow, General, and Superintelligence

We draw on the distinction made by the Millennium Project (Global Futures Studies and Research) to define the different AI types.

AI governance must start with a fundamental distinction between three types of AI: artificial narrow intelligence (ANI), artificial general intelligence (AGI), and artificial superintelligence (ASI). This distinction is critical because each type of AI entails different risks, opportunities, and challenges.

ANI (Artificial Narrow Intelligence) is the AI we have today, designed to carry out specific tasks such as facial recognition, autonomous driving, or recommendation systems in streaming platforms. ANI comprises tools with limited capacities that cannot function outside the parameters for which they were designed. The progress of generative artificial intelligence (such as ChatGPT and similar AI systems) already suggests a transition into the next type or stage of AI.

AGI (Artificial General Intelligence) marks a qualitative leap in AI evolution. Unlike ANI, AGI might have the capacity to solve non-specific problems, function in a wide range of contexts without constant human intervention, adapt to new situations, and learn autonomously. This would turn AGI into an autonomous agent capable of acting in ways not anticipated by the humans who deployed it. AGI could therefore improve its own code and evolve rapidly, raising concerns about its control and the unforeseen consequences of its development. Current advancements suggest AGI is near (experts estimate it could be five to twenty years before AI reaches this stage), and its impact will be profound, since it will enable AI systems to act as autonomous agents with abilities comparable to, or above, those of humans.

ASI (Artificial Superintelligence) refers to AI evolution beyond AGI: an intelligence so advanced that it could establish its own objectives and act entirely independently of humans. Even though ASI remains speculative, the possibility of its emergence from AGI requires anticipatory governance that contemplates not only the current stages of AI development but also its possible future ramifications.

In short, while current AI presents challenges concerning algorithm transparency, fairness, and personal privacy, AGI and ASI would amplify these challenges, introducing existential questions about control, autonomy, and potential influence of AI on the structure of society and humanity (Glenn & Garrido, 2023).

Relational Complexity and the Need for a Paradigmatic Shift

The evolutionary nature of AI, particularly its transition toward AGI and ASI, requires a paradigmatic shift in how we approach its governance. It is imperative to move beyond a linear, simplistic approach and adopt a framework based on relational complexity, one that recognizes the multiple interdependencies between technology and other social and natural systems (economic, cultural, environmental, etc.). Governance cannot be limited to reactive regulation of current issues or to ex-post intervention. Instead, it must be anticipatory and offer alternatives through generative and preventive actions in the present, thereby creating socially consensual conditions that consider the qualitative changes in society and the role that evolving technologies might play. Therefore, participatory, multi-, and transdisciplinary models are needed—for instance, reticular models based on collective intelligence and social participation.

This anticipatory governance approach not only addresses risks—it also explores how to integrate novelty (stemming from uncertainty) into perceptions and choices in the present. This integration covers not only the imagined costs and benefits, opportunities and threats posed by AI but also ways for decision-makers to navigate complexity and live with the certainty of surprises. Anticipatory governance is at once modest—it does not pretend to be able to know the future—and resourceful—leveraging constant experimentation and emergence. On the one hand, it stresses respecting the creativity of the world around us; on the other hand, it promotes experiments that, very importantly, test our hopes of shaping transformation in advance (whether by avoiding or fostering desired circumstances).

The Special Futures Committee established by the Parliament of Uruguay is a clear example of this shift. By adopting a more complex and interconnected approach, the committee works to integrate various perspectives and introduce aspects of the future, thus driving actions toward an innovative anticipatory governance ecosystem.

Anticipatory Governance Challenges and Responsible Anticipation

This document reflects on anticipatory governance for AI through the lens of responsible anticipation practices. The future itself does not exist (Miller, 2018), which raises an ontological predicament that must be considered within the epistemic dimension of the problem. Given AI’s evolutionary nature, it is vital for governance strategies not to focus solely on current issues, but to encompass possible stages of AI development. However, a critical question arises: How can we assume responsibility for futures that do not yet exist? The following section considers this issue and proposes an approach to address it.

Ethics and the Responsibility Imperative

Responsible anticipation seeks to establish an ethical stance beyond purely philosophical approaches and traditional foresight or futurology. In this context, ethical practice is situated in a conscious and reflective conjunction that allows bringing the future into the present for both perception and choice. From this point of departure, the role of imagined futures is not reduced to choices or bets about tomorrow; rather, it includes a crucial reflective ethical practice that brings the future into the present through the anticipatory assumptions used in the different stages of the decision-making process. Certainly, this approach involves considering the role of different futures in the initial problem formulation, leveraging our capacity to reframe through collective and deliberative reflections, detecting and inventing alternatives, and thus reaching the selection of options for actions (Garrido, 2024). A futures literate approach is fundamental, as the next section will explain.

Hans Jonas’s pioneering work in the 1970s, still quite goal-oriented, as was so prevalent in the twentieth century, underscored the importance of incorporating future implications into contemporary ethical decisions. Jonas challenged prevailing ethical standards by introducing a future-oriented framework that considers the impact of present actions on future generations and the environment. His concept of extended responsibility calls for an ethical reflection capable of addressing not only immediate consequences but also long-term effects. In his work, Jonas articulated the following responsibility imperative: “Act so that the effects of your action are compatible with the permanence of genuine human life on Earth.” This imperative, deeply meaningful for our technological era, emphasizes the moral obligation to safeguard the dignity, autonomy, and integrity of present and future life.

Jonas not only revised the traditional ethical approach—he revitalized anticipatory thought in line with the Aristotelian concept of final cause as a practical and ethical guide for action. This perspective calls for a return to the fundamental principles of foresight and responsibility, both of which are crucial for preserving human life in a context where natural and artificial systems are increasingly intertwined.

The notion of responsible anticipation proposed here encompasses an ethical mode of action considering the future consequences of our actions (consequentialist ethics), their moral obligations (deontological ethics), and the adoption of a proactive stance that promotes responsible care (virtue ethics). Each of these perspectives can be applied to both goal- and capability-based understandings of how humans anticipate, or ‘use the future.’ When the ends justify the means, there are clear ethical challenges, but so, too, when the means are the ends. Such a holistic approach is essential in fields such as health care, education, and, by extension, AI governance, where both the construction and vulnerability of the common good require an awareness of ethical imperatives.

The moral structure of human beings displays our unique capability as reflective agents bearing the responsibility to make decisions that carry ethical implications. As Adela Cortina (2013) notes, human existence is inherently dramatic because of the constant need to make decisions and justify our actions. These dynamics of freedom, decision, and responsibility constitute the ethical axis of our actions, rendering us responsible not only for the immediate reach of our decisions but also for the futures we help to create.

Responsible anticipation answers the critical question of decision-makers: “What should I do now?” This query, central to Robert Rosen’s (1985) approach, leads us to consider a paradigmatically different standpoint for decision-making. Rosen’s conceptual framework of anticipatory systems provides new insights into how biological and social systems can make future-informed decisions, unlike reactive systems, which merely respond to past stimuli.

At the cognitive level, anticipatory systems and assumptions generate information and knowledge to support decision-making. Explicit anticipation (Poli, 2010) is the conscious ability to incorporate or generate information about a subsequent moment with the intention of acting accordingly. Anticipatory assumptions are the concrete operative elements of this process. Paying attention to such assumptions allows for the exploration of the synergy between ethics, intention, and potential futures. These assumptions enrich both our theoretical understanding and the practical implementation of ethics. They provide a bridge between ethical theory and practice, offering a nuanced approach to responsible anticipation that integrates ethical deliberation with future-oriented thinking.
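The contrast between reactive and anticipatory systems can be made concrete with a deliberately simple toy sketch. All names here are illustrative inventions, not taken from Rosen's work: a reactive controller responds only to the current state, while an anticipatory controller, in Rosen's sense, consults an internal predictive model of a subsequent moment and acts on that prediction before the event occurs.

```python
# Toy illustration (hypothetical names): a Rosen-style anticipatory
# system contains a model of its environment's future state and acts
# on the prediction; a reactive system responds only to the present.

def reactive_control(temp: float, threshold: float = 25.0) -> str:
    """React only to the present: cool once the threshold is crossed."""
    return "cool" if temp > threshold else "idle"

def predictive_model(temp: float, trend: float) -> float:
    """Crude internal model: linear extrapolation one step ahead."""
    return temp + trend

def anticipatory_control(temp: float, trend: float,
                         threshold: float = 25.0) -> str:
    """Act on the modelled future state, before the threshold is crossed."""
    expected = predictive_model(temp, trend)
    return "cool" if expected > threshold else "idle"

# At 24.5 degrees and rising one degree per step, the reactive system
# still idles, while the anticipatory one already acts on its model
# of the future.
print(reactive_control(24.5))           # idle
print(anticipatory_control(24.5, 1.0))  # cool
```

The substantive point survives the simplicity: the anticipatory system's present behavior is determined by its model of the future, which is exactly why the quality of that model, the anticipatory assumptions, matters for decision-making.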

The Process of Using the Future in Decision-Making

Although the future itself does not yet exist, we use it every time we engage in anticipation. This is the abbreviated meaning of using the future, a concept encompassing the many purposes and forms of anticipation, including preparation, planning, and the exploration and creation of alternatives in the present (Miller, 2018, p. 10).

A literate use of the future requires an understanding of how we use that which does not yet exist (the future) to generate knowledge and inform decision-making. In this process, recognizing anticipatory systems and assumptions, as well as how they influence our perception of the present, becomes essential. At the same time, recognizing the contingent nature of the future—constantly shaped by a multitude of possible events and decisions—helps us better comprehend and relate to uncertainty and complexity.

This process involves not only imagining a range of potential futures—goals or scenarios ahead in time, i.e., substantivized futures—but also evaluating them in terms of desirability and viability. Moreover, it requires the recognition and selection of different anticipatory systems and assumptions (subjacent models) that take part in this perception and shape it. The models, therefore, must align with the purpose and the nature of the phenomena and problems.

In other words, this process is about understanding and dexterously using the systems and models that allow us to incorporate the not-yet-existent into our thinking. This is achieved through reflexivity about the epistemic modes we employ, which can ultimately reshape how we see the present and the opportunities and challenges we perceive (which may be biased, incomplete, or mistaken). As a result, decision-making becomes increasingly informed, nuanced, and aligned with ethical considerations of value, as there is greater dexterity in incorporating the future into the analysis.

All of this may seem very abstract because it actually is: we are referring to higher-order cognitive processes that enable anticipation (which is itself an action with practical implications). Furthermore, anticipation is inherently counterfactual, since it acts beforehand and can alter what is yet to happen, highlighting once more the need for new logics, methodologies, and skills.

Futures literacy is a crucial skill in this context, comparable to any other type of literacy (alphanumeric, computational, or emotional). Futures literacy enables individuals and organizations to engage the future beyond the mere word “future” or the projections and extrapolations of the past (which is what is usually done). Instead, they can foster the reflexivity and creativity needed to navigate uncertainty skillfully and responsibly, ensuring their present actions are better informed.

In practice, this approach transforms policy design, allowing for the use of the future to become a powerful tool for anticipatory governance and responsible anticipation.

Following the work of Sripada (2016), we can distinguish two stages of decision-making processes: construction and selection. The construction stage involves creating meaningful options based on imagination and the exploration of future possibilities. This process is crucial to expand the set of possible alternatives, thus enriching decision-making. The selection stage, on the other hand, involves evaluating and appraising the options generated during the construction stage, ensuring that final decisions align with ethical principles and desirable futures.
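The two stages can be sketched as a minimal illustration. This is a hypothetical toy, assuming invented option names and invented desirability/viability scores (nothing here comes from Sripada's text): construction imaginatively expands the option set, and selection then filters by viability and ranks by desirability before a choice is made.

```python
# Hypothetical sketch of a two-stage decision process (all names and
# scores are illustrative assumptions): construction generates a
# diverse option set; selection evaluates it against desirability
# (alignment with values) and viability (practical feasibility).

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    desirability: float  # alignment with ethical principles, 0..1
    viability: float     # practical feasibility, 0..1

def construct_options() -> list[Option]:
    """Construction stage: imagine a set of meaningful alternatives."""
    return [
        Option("reactive regulation only", desirability=0.3, viability=0.9),
        Option("regulatory sandbox programme", desirability=0.8, viability=0.7),
        Option("moratorium on deployment", desirability=0.5, viability=0.2),
    ]

def select_option(options: list[Option], min_viability: float = 0.5) -> Option:
    """Selection stage: discard non-viable options, pick the most desirable."""
    viable = [o for o in options if o.viability >= min_viability]
    return max(viable, key=lambda o: o.desirability)

chosen = select_option(construct_options())
print(chosen.name)  # regulatory sandbox programme
```

The point of the sketch is structural rather than numerical: a richer construction stage enlarges what selection can even consider, which is why the article treats imagination of alternatives as integral to, not separate from, responsible decision-making.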

Thus, responsible anticipation is not limited to mere prediction or responsibility for specific tasks. Rather, it implies a careful and reflective attitude throughout the whole decision-making process. From stating the problem and laying it out again, to achieving a deeper understanding—expressed in the ability to diversify alternatives and select the best options to transform beforehand—responsible anticipation enables decision-makers to act with a deep sense of ethical responsibility. This approach ensures that present decisions contribute to creating desirable futures while minimizing the inherent risks of uncertainty.

Applied Considerations for Anticipatory Governance for AI

Parliaments are perhaps the institutions that use the future most intensively and, therefore, with the utmost responsibility to society and humanity overall. Consequently, they play a critical role in ensuring a safe AI evolution: responsible AI, AI for the common good.

Through key functions such as accountability, supervision, representation, and legislation, parliaments have direct and concrete influence on the guidelines for AI development. That is why the Parliament of Uruguay created a Special Futures Committee, an innovative initiative that enables the government to traverse a learning curve for anticipatory governance. As a pluralistic setting for engaging with citizenry and other spheres and levels of government, it represents a great opportunity to spark an anticipatory governance ecosystem.

Regarding governance for AI, the imperative of a responsible anticipation practice is a condition sine qua non.

Recommendations for Anticipatory Governance for AI

The following recommendations were issued in the context of the Second World Summit of the Committees of the Future, held in Montevideo, Uruguay, in 2023.

1. Devising an anticipatory governance framework for AI. It is crucial to establish a global regulatory framework, coupled with international and regional guidelines. This framework should promote international cooperation, ensure the ethical and responsible use of AI, and regulate its evolution to mitigate risks and maximize benefits.

2. Promoting transparency and algorithmic explainability. Frameworks that require transparency in algorithm development are key to guaranteeing that decisions made by AI systems are comprehensible and auditable. Explainability is essential to avoid biases and foster public trust. Continuous auditing systems should be implemented to monitor AI behavior. Regarding the advancement of algorithmic transparency and explainability, the European Union has established clear guidelines to ensure the accountability and auditability of AI systems, setting a global standard in the field of technological regulation.

3. Promoting inclusive and participatory governance. The design of governance for AI should promote the inclusion of diverse stakeholders (governments, the private sector, civil society, and academia) and ensure that technology benefits society overall. Policies that guarantee fair access to emergent technologies should be prioritized to prevent technological gaps perpetuating inequality.

4. Enhancing anticipatory capabilities. Parliaments should develop anticipatory capabilities to manage AI evolution and prepare for disruptive changes. To this end, they should establish use-of-the-future specialized units and develop training programs for legislators regarding futures, AI, and complexity issues. Expanding such measures in a structured manner to other spheres of government and society is desirable, as this will create an anticipatory governance ecosystem.

5. Using regulatory sandboxes. The implementation of controlled experimentation environments (sandboxes) allows for iterative testing and adjustment of AI regulations. Sandboxes can enable a flexible adaptation to technological change, ensuring regulations evolve alongside technology.

6. Adopting fundamental ethical principles. Ethical principles such as transparency, fairness, privacy, and security, should be included in AI governance. These principles must be incorporated throughout the entire life cycle of AI systems, from design to implementation and use.

7. Promoting AI education and literacy. Developing educational and training programs on AI for legislators, citizens, and professionals of various sectors will promote a greater understanding of AI’s risks and opportunities, preparing society to participate in governance processes.

8. Fostering international cooperation on technological governance. International cooperation and the exchange of best practices between countries are key to addressing global AI challenges and promoting shared solutions applicable at both the local and global levels.

These recommendations seek to enhance the capabilities of parliaments and other government institutions, academia, developers, and civil society regarding anticipatory AI management. They also seek to promote anticipatory, participatory governance based on ethical principles that ensure the safe and beneficial use of this technology for the common good, encompassing both the social and the environmental spheres.

Toward a Responsible Anticipatory Governance

Anticipatory governance for AI is fundamental in an era of rapid and profound technological change. The development of AGI and the potential advent of ASI pose existential challenges that cannot be addressed with traditional governance approaches. Responsible anticipation, grounded in ethics and reflexivity, must guide the design of flexible and collaborative regulatory frameworks and allow for the management of the risks and opportunities presented by AI’s evolution.

The future of AI is yet to be written, and it hinges on the decisions we make today. Futures literacy, an ethics of anticipation, and the development of anticipatory capabilities in decision-makers are key to ensuring that AI evolves for the common good, with its benefits reaching society overall, in a manner that is fair for all of humanity and does not compromise safety or dignity.

References

Arendt, H. (2008 [1958]). La condición humana. Barcelona: Paidós.

Cortina, A. (2013). ¿Para qué sirve realmente la ética? Madrid: Paidós.

Garrido, L. (2024). Responsible Anticipation: Futures literacy capacities to enhance ethical stance in anticipatory governance decision-making. Learnings and applications in Parliaments. In T. Fuller et al. (Eds.), Towards Principles for Responsible Futures. Lincoln University, Taylor and Francis (in press).

Glenn, J., & Garrido, L. (2023). Parliaments and Artificial General Intelligence (AGI). An Anticipatory Governance Challenge. IDEA Internacional.

Jonas, H. (2014 [1984]). The Imperative of Responsibility: In Search of an Ethics for the Technological Age. Chicago: University of Chicago Press.

Miller, R. (2018). Transforming the Future: Anticipation in the 21st Century. Paris: UNESCO; New York: Routledge.

Miller, R., & Poli, R. (2010). Anticipatory Systems and the Philosophical Foundations of Futures Studies. Foresight, 12(3), 3-6.

Poli, R. (2010). An Introduction to the Ontology of Anticipation. Futures, 42(7), 769-776.

Rosen, R. (1985). Anticipatory Systems: Philosophical, Mathematical, and Methodological Foundations. Oxford: Pergamon Press.

Russell, S., Perset, K., & Grobelnik, M. (2023). Updates to the OECD’s definition of an AI system explained. OECD.AI Policy Observatory.

Sripada, Ch. (2016). Free Will and the Construction of Options. In M. Seligman, P. Railton, R. Baumeister & Ch. Sripada (eds.), Homo Prospectus. New York: Oxford University Press.

Lydia Garrido Luzardo

Anthropologist and futurist. PhD in complex thinking, with a master’s degree in integrative research. Director of the UNESCO Chair on Sociocultural Anticipation and Resilience at the South American Institute for Resilience and Sustainability Studies (SARAS). Advisor to the Special Futures Committee of the Parliament of Uruguay.
