Freedom and Truth Captured by Artificial Intelligence

By: Miguel Pastorino | 4 Feb 2025
Reading time: 16 min.
Original article in Spanish. Translation produced by artificial intelligence.

Abstract

Artificial intelligence is a new reality with which we coexist, and it is also transforming who we are, how we perceive ourselves, and how we live—much like a new environment. Its impact is radically reshaping everything from education to medicine, from the economy and politics to work and interpersonal relationships.

So-called artificial intelligence (AI) is not merely a technological leap; it has produced an anthropological shift, redefining human life: our ways of thinking and living, of knowing, learning, and connecting, as well as forms of power and the limits of rights. It reshapes our conceptions of freedom and truth, and it stands as one of the most significant philosophical issues of our time. AI is not just a tool: it is beginning to merge with our environment and with who we are, transforming our daily lives and leaving no area untouched, from education to medicine, from the economy and politics to work and human connections.

As we have shaped machines, those same machines have shaped us. The world built by humans—the world of machines or sociotechnical systems—has, in turn, redesigned us, influencing our abilities and shaping our own values and beliefs (Savulescu and Lara, 2021).

AI is neither morally nor philosophically neutral. It is not an instrument, but a new reality with which we coexist, one that also transforms who we are, how we perceive ourselves, and how we live—a new environment. We find ourselves in a dynamic of mutual interaction, of co-participation in operations.

The technological environment has its own dynamics and logic. It has not replaced the natural environment but has completely transformed and reconfigured it. “Technology is no longer a means but the Medium, the environment in which we live. It is a Medium because, through its self-regulation and automation, it behaves as something independent and constantly evolving, capable of surrounding us and continuously creating new contexts for human existence… New technologies interact with the environment around them, responding to stimuli and altering their behavior independently…” (Varela, 2022).

The fact that technology opens new spaces for action necessarily calls for a discussion about those actions, which are never neutral. This compels us to reflect on their social, economic, and political consequences, but first, we must ask ourselves what technology truly is and its impact on the human condition.

What Kind of Intelligence Are We Talking About?

Human intelligence cannot be reduced to functions that are metaphorically compared to those we attribute to AI. Confusing functions with the uniqueness of human intelligence is a common form of reductionism. We have grown accustomed to the borrowed use of concepts from cognitive sciences being applied to computer systems, often without the necessary conceptual clarifications. The terms we use shape our mental representations: we hear about synaptic chips, artificial neural networks, neural processors, etc.

“The principle of computational intelligence modeled after our own is flawed because the two hardly share any meaningful similarities” (Sadin, 2020). The resemblance holds only if we fall into the trap of reducing intelligence to a set of functions and reality to binary code, thus excluding the countless dimensions our human subjectivity can experience, dimensions that cannot be captured by mathematical models. “What we are faced with is a truncated, restricted, and biased understanding of the intelligence process, which is inseparable from its tension with a multisensory, unsystematizable grasp of the environment” (Sadin, 2020).

Big data emerged as a form of absolute knowledge, where hidden correlations between things are revealed, but we have neglected to ask ourselves about the meaning behind things, the ‘why,’ the ultimate reason behind events, and the purpose of life.

“Everything becomes calculable, predictable, and controllable. A whole new era of knowledge is proclaimed. In reality, it is a rather primitive form of knowledge. Data mining uncovers correlations. According to Hegel’s logic, correlation represents the lowest form of knowledge” (Han, 2021), because with correlations we do not know why things happen, we simply know that they happen.

Despite the impressive advances in generative artificial intelligence and the new transformations in science and technology, we are not talking about intelligence in the human sense. Through machine learning, AI can perform a range of functions that humans do, such as calculation, mathematical procedures, information selection, pattern recognition, and reproducing what it has learned, at a speed and with an amount of information impossible for any human being; but this does not mean it thinks in a human way. AI lacks consciousness and subjectivity, even though it can simulate emotions and interact with humans by learning from and reacting to the information it receives. The problem arises when we reduce intelligence to the ability to calculate and process information. Machines do not produce wisdom because they lack subjectivity and self-awareness, even though they may simulate them, impress us, and lead us to believe otherwise.

AI is neither artificial nor intelligent. Rather, it exists in a tangible form, made up of natural resources, fuel, labor, infrastructure, logistics, histories, and classifications. AI systems are not autonomous, rational, or capable of discerning anything without extensive and computationally intensive training, relying on massive datasets or predefined rules and rewards. In fact, AI as we know it is entirely dependent on a much broader set of political and social structures. And because of the capital required to build AI on a large scale, and the ways of seeing that it optimizes, AI systems are ultimately designed to serve existing dominant interests (Crawford, 2022).

Submission to the Artificial Oracle

Under the pretext of making the best decisions in all areas of life—finance, transportation, healthcare, sports, justice, and more—human affairs are increasingly being resolved from the lofty heights of artificial superintelligence, where larger quantities of data are processed. We are witnessing a growing reliance on artificial oracles, acting as gurus or spiritual directors, imposing daily routines as if they possessed superior and unquestionable knowledge. This process often starts at a basic level, such as coaching, where an app guides emotional life, nutrition, or relationships, prescribing how to think and act. But it can escalate to more prescriptive levels, where AI decides one’s career future or determines eligibility for a bank loan. Sadin points to an even more radical stage we’ve reached—a coercive level where AI will ultimately decide on expenditures, cutbacks, and even administer justice.

Humanity is rapidly equipping itself with an apparatus that renders it increasingly dispensable—surrendering its right to make decisions with full awareness and responsibility over matters that directly concern it. A new anthropological and ontological framework is taking shape, in which the human figure submits to the equations of its own artifacts, with the primary objective of serving private interests and establishing a societal order based on predominantly utilitarian criteria (Sadin, 2020).

It is essential to ask ourselves questions and engage in critical reflection on these matters. What challenges does AI pose to political philosophy? How do we address biases in AI programming when hiring, evaluating employees, or pursuing criminal justice, knowing that AI can hallucinate, make mistakes, and also discriminate? What will be the political effects of robotics in terms of justice and equality? What impacts does AI have on democracy, particularly concerning voter manipulation? How is it transforming journalism and news generation? How does it affect human relationships, learning, and mental health? What should be the degree of citizen participation in the regulation of AI? What implications does AI have for animals and agricultural production? What effects could it have on climate and the environment? What would digital rights look like for data protection and ensuring respect for human dignity?

Moreover, we often confuse predictions with the future, as if a new form of superstition gives us certainty about a controllable or knowable future.

Artificial intelligence learns from the past. The future it calculates is not a future in the true sense of the word; it is blind to events. However, thought possesses an event-like quality. It brings something entirely different into the world… AI merely selects from pre-existing options, ultimately between one and zero. It does not venture beyond what is already given into uncharted territory (Han, 2021).

Cognitive Sedentarism

What would happen if we asked someone to exercise for us, relieving us of the effort involved in such activities? The obvious answer: we would lose the opportunity to improve our physical condition and health, becoming physically atrophied, with all the consequences that need no further explanation. Even in the rhetoric of the gym, no one finds it excessive to speak of sacrifice, effort, dedication, and pushing oneself until it hurts; the more time and effort we invest, the better the results: No pain, no gain. Cultivating oneself as a person in all possible dimensions is an imperative present in every time and culture. Generally, everyone wants to be better than they are and to develop in various aspects of their lives. None of this feels strange to us. However, we live with a paradox regarding the care and development of our capabilities because the same does not apply to intellectual cultivation. What if the criteria we use for physical training were applied to intellectual life? Can you imagine a teacher today discussing sacrifice, effort, dedication…? Parents and colleagues would look at them with bewilderment, as if they were a dinosaur. Why is that? 

It seems we are witnessing an atrophy of thought, the promotion of a culture that favors shortcuts and minimal intellectual effort. If someone can save us time in thinking, reading, writing, comparing, calculating, synthesizing, or analyzing, we thank them as if they were doing us a great favor. And now, thanks to generative artificial intelligence (GAI), we can avoid the academic work that develops essential intellectual skills, leaving our fundamental capacities for clear thinking to atrophy. It is not that using GAI collaboratively for study and work is without merit; the real issue lies in how much of our freedom we are willing to surrender and which skills we are prepared to forfeit for convenience. The substantial risk is that we stop teaching the value of effort and concentration: the ability to sit focused on something challenging for hours with the purpose of solving it. How can we develop tenacity and resilience if we instantly abandon tasks for someone or something else to resolve, sparing us the effort?

Losing the ability to calculate, to maintain attention, or to engage in sustained, deliberate effort to solve a difficult problem is part of a phenomenon we refer to as cognitive sedentarism (Sigman and Bilinkis, 2023).

The best way to combat cognitive sedentarism is to convey our own passion for knowledge and the benefits of developing intellectual skills that enable us to think for ourselves with greater depth, without renouncing our freedom to choose who we want to be and where we want to go.

What Are We Willing to Lose?

In today’s automated systems, computers often take on intellectual tasks (observing, perceiving, analyzing, evaluating, and even making decisions) that until recently were considered strictly human domains. The person operating the computer plays the role of a technology employee who inputs data, monitors responses, and looks for errors. Instead of opening new frontiers of thought and action for human collaborators, the software narrows our perspective. We trade subtle, specialized talents for more routine and less distinctive ones (Carr, 2014).

Day after day, we risk becoming unable to write an email, create a shopping list, navigate our own city, devise a business strategy, or compose a message, speech, or essay. With great enthusiasm and comfort, we surrender to the ever-helpful invitations: “What can I do for you?” We feel simultaneously pampered and served by technology, while we elevate it to a superior instance that will do almost everything for us and will know how to do it better.

Can we imagine the effects on individual and collective psyches of being in a position where we expect everything, as if we were lounging on our sofa, from systems that resemble infinitely superior butlers? This environment fosters the atrophy of both our impulse toward outward engagement and our intellectual faculties… (Sadin, 2024).

According to Sadin, we are in an era where everything seeks to satisfy well-defined objectives in real time, leaving no room for spontaneity or activities deemed useless or inefficient.

In the workplace, often the challenge matters more than the final result; the process and meaning of what we do provide us with a sense of fulfillment. Thus, in the professional world, the key to self-worth lies in the significance of our work and the knowledge that we are making a meaningful impact.

We appreciate the things we have created—our own works—simply because they are ours and we understand the effort they required. Perhaps in a few years, only a small minority will have access to those challenges that give life meaning. If that is the case, it could represent one of the greatest impacts of AI on the workforce (Sigman and Bilinkis, 2023).

Truth Reduced to Data

AI performs data management functions that far exceed our capacity and speed, but it does not replace other human abilities related to how we connect with one another or the meaning of life—issues that cannot be resolved through data, statistics, or patterns. Reducing knowledge to mere information fosters a naive optimism about AI’s various possibilities regarding human life. 

Regardless of the direction AI development takes, we cannot delegate responsibility or wisdom to it. There remains a certain naivety in believing that everything can be solved with ever more data, as if the answers to human dramas depended solely on information management rather than on deep reflection about who we are and what we truly want to achieve for future generations. It is evident that we cannot evade technoscientific progress, and it is desirable that we think responsibly about how to accompany these processes. It would be irresponsible to fall into a determinism that suggests we should simply ride the wave without reflection, as if nothing depended on us other than accepting a future already programmed by uncontrollable forces.

The future is shaped by our present decisions, and it is commendable that political actors are thinking ahead responsibly while listening to experts from various disciplines. The governance of technology will increasingly become an unavoidable issue on the political agenda. The capture of truth through its reduction to mere data transforms AI into a sacred power, a reliable source for judging reality.

“Digital technology stands as an authority capable of determining reality more reliably than ourselves, as well as revealing dimensions hidden from our consciousness” (Sadin, 2020). Machines are anthropomorphized as if they possess the best discernment, leaving us with nothing to do but obey and relieve ourselves of the burden of thinking. We save time and mental effort while surrendering our freedom without resistance and accepting this new truth without question.

While we can work collaboratively and leverage the possibilities of technology, the greatest challenge lies in thoughtfully considering what we are willing to renounce of our human condition for convenience and what our non-negotiable minimums are.

Human thought is more than calculation and problem-solving. It clarifies and illuminates the world, bringing forth an entirely different reality. The intelligence of machines poses the primary danger that human thought may begin to resemble it and become mechanical itself (Han, 2021).

Cyber Leviathan and Technocratic Power

In his work Ciberleviatán (2019), José María Lasalle presents the crossroads facing humanity: the choice between losing freedom for greater security or, through responsible political action, establishing a genuine pact that ensures citizens’ freedom, protects data, and sets new digital rights.

We find ourselves submerged in a swarm of humans “lacking critical capacity and devoted to consuming technological applications within an overwhelming flow of information that grows exponentially” (Lasalle, 2019).

According to this Spanish philosopher, humanist liberalism primarily aims to limit power, and it now confronts the seductive allure of technological power that seeks to be omnipresent and omniscient, without resistance. We are witnessing a new reconfiguration of power:

Today, the data generated by the internet and the mathematical algorithms that discriminate and organize it for our consumption form a binary of control and domination that technology imposes on humanity. To the extent that humans are acquiring the characteristics of digitally assisted beings, largely due to their inability to decide for themselves (Lasalle, 2019).

The fascination with the unlimited power of technology, viewed as inevitable and unavoidable, which promises greater control and certainty in decision-making, gradually erodes trust in the fragility and spontaneity of the human factor. Thus, the freedom that is so valued and defended begins to be seen as a problem for progress, leading humans to accept that their freedom should be assisted by a superior, almost divine intelligence: artificial intelligence. Some authors are beginning to see in this technocratic sociocultural shift a promise to protect humans from their dangerous spontaneity, suggesting that it might be better to program ourselves according to what is deemed best by utilitarian criteria.

We are losing freedoms under the illusion that we are gaining new possibilities for development, as if becoming ostensibly more free required us to renounce fundamental liberties. Moreover, we are doing so passively, with a certain naturalness and even fascination.

Thus, we encounter a convergence of the technical, economic, and political realms, where power becomes disproportionately centralized over a growing number of activities, including health, education, and labor. According to Lasalle, “algorithmic despotism is returning humanity to a new minority status that unravels the liberal tradition of knowledge that fostered the Enlightenment.”

Bibliography for further reading

Carr, N. (2015). The Glass Cage: How Our Computers Are Changing Us. London: Bodley Head.

Coeckelbergh, M. (2020). AI Ethics. Cambridge, MA: MIT Press.

Coeckelbergh, M. (2022). The Political Philosophy of AI. Cambridge, UK: Polity.

Crawford, K. (2022). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven, CT: Yale University Press.

Han, B.-Ch. (2022). Non-things: Upheaval in the Lifeworld. Cambridge, UK: Polity.

Lasalle, J. M. (2019). Ciberleviatán: El colapso de la democracia liberal frente a la revolución digital. Madrid: Arpa.

Sadin, E. (2020). La inteligencia artificial o el desafío del siglo. Anatomía de un antihumanismo radical. Buenos Aires: Caja Negra.

Sadin, E. (2022). La era del individuo tirano. El fin del mundo común. Buenos Aires: Caja Negra.

Sadin, E. (2024). La vida espectral. Pensar la era del metaverso y las inteligencias artificiales generativas. Buenos Aires: Caja Negra.

Savulescu, J., & Lara, F. (2021). Más que humanos. Biotecnología, inteligencia artificial y ética de la mejora. Madrid: Tecnos.

Sigman, M., & Bilinkis, S. (2023). Artificial. La nueva inteligencia y el contorno de lo humano. Barcelona: Penguin Random House.

Varela, L. (2022). Espejos: filosofía y nuevas tecnologías. Barcelona: Herder.

Miguel Pastorino

PhD in Philosophy. Master’s in Communication Management. Professor in the Department of Humanities and Communication at the Universidad Católica del Uruguay.
