Governance of Algorithms


By: Daniel Innerarity, 4 Feb 2025
Reading time: 13 min.
Original article in Spanish; translation produced by artificial intelligence.

Abstract

One issue is how to govern algorithms; another is whether algorithms will eventually govern us, to what extent, and with what legitimacy. To address this second issue, we need to examine the expectations placed on algorithmic governance and its limitations. The conclusion is that algorithms seem unlikely to take over the entire political process with the efficiency they sometimes promise and with the legitimacy needed to justify such a new regime.

From bureaucracy to algorithmic governance

Once a political community reaches a certain level of complexity, the need to objectify and automate collective decisions naturally arises. When the number of actors and factors involved exceeds individual and centralized capacities, decision-making becomes more procedural and less charismatic.

When standardized decisions are said to be incompatible with humanistic considerations, it is worth remembering that these procedures were designed precisely to minimize human intervention in decision-making. Porter described the culture of quantification as a cult of impersonality in which the human element is reduced as far as possible: formalizable principles rather than subjective interpretations, unified standards instead of methodological chaos, and the rule of law over human power. In this new realm of objectivity, mechanical objectivity and disinterested science would reign, leaving out anything personal, idiosyncratic, or perspective-based; trust is no longer rooted in the integrity of truth-tellers or the prestige of exemplary institutions, but in highly standardized procedures (Porter, 1995). The most radical formulation might be this: “Instead of freedom of will, machines would offer freedom from will” (Daston and Galison, 2010, p. 49). This hope for data and objectivity grows in a political and social culture marked by distrust, crises, and uncertainty; turning to this form of objectivity benefits both governors and governed, protecting decision-makers and fostering confidence among those affected by their decisions.

The digital era has intensified this long-standing trend. Governing is already largely, and will increasingly be, an algorithmic act; a significant portion of government decisions is made by automated systems. This method of governance has been defined in various ways: “power is increasingly in the algorithms” (Lash, 2007, p. 71); “authority is increasingly expressed algorithmically” (Pasquale, 2015, p. 1).

The use of algorithms and automated decisions addresses the need to manage various forms of complexity, such as identifying the different perspectives and interests within an increasingly pluralistic society, as well as efficiently delivering public services. Algorithmic governance significantly improves management capabilities when handling large volumes of data and addressing complex problems.  Thus, not only does the world appear to have become more understandable, but new possibilities for political intervention, increased efficiency, smarter regulation, and earlier anticipation of certain problems have also emerged. This promises a form of governance that would simplify the complexity of social phenomena to an acceptable level.

The rise of decision systems driven by algorithms and data means that machines not only support humans in their decision-making but can also replace them, partially or entirely. The question all this raises is to what extent, and in what way, the use of automated decision systems (ADS) is compatible with what we consider a political system of decision-making. Democracy is expected both to honor the belief that it is a genuine form of self-governance by the people and to address effectively the problems society faces.

The democratic expectations of algorithmic governance

Algorithms make a dual promise of objectivity and subjectivity, offering both ideological neutrality and, simultaneously, complete respect for our preferences. These two promises have very beneficial effects on democratic politics, as they enable a more objective assessment of public policies and a better understanding of social preferences. However, they also come with their limits and drawbacks.

The promise of objectivity

The promise of algorithmic decision-making is highly seductive; it is not merely about saving time and money, but about promoting objectivity. Algorithms are often seen as objective, with their evaluations considered fair, accurate, and free from subjectivity, errors, and power dynamics. Furthermore, this perceived objectivity lends them legitimacy as mediators of relevant knowledge. They are not only tools for decision-making but also stabilizers of trust, ensuring that “assessments are accurate and fair, without flaws, subjectivity, or distortions” (Gillespie, 2014, p. 79). The implementation of automated decision systems (ADS) is justified because they not only make decisions more efficiently but also reduce partisanship and enhance fairness. We would have tools that appear to fulfill the hope of bringing greater rationality to the decision-making process, counteracting the subjectivity, ideological biases, or other prejudices that often drive many human decisions.

This claim is not entirely new, nor is its criticism. Weber’s account of bureaucratic authority had already praised the values of efficiency and objectivity, but he also warned of their limits and of the other types of authority that could arise precisely from the ideal of objectivity. In principle, all the pathological tendencies of traditional bureaucracies also apply to automated decisions. Ever since claims of objectivity were first formulated, in bureaucratic settings as in the digital era, it has been consistently observed that such procedures fail to deliver on that promise: they generate other types of distortion, are far from free of arbitrariness, and often reflect, and even amplify, deeply rooted societal prejudices.

The promise of subjectivity

The second vector of democratization would stem from understanding the true will of the people, which a democratic government must serve. The chain of legitimization would thereby be strengthened, as it would allow the real decisions of the people to serve as the foundation upon which the popular will is formed. In a world filled with sensors, algorithms, data, and intelligent objects, a kind of social sensorium is being shaped to personalize health, transportation, and energy. Thanks to data engineering, we are moving towards an increasingly granular understanding of individual interactions and systems that are more responsive to individual needs.  By using micro-segmentation and granularity, we can shape a society finely tuned by algorithms, enabling us to understand citizens’ desires with remarkable accuracy based on their everyday behaviors. The objectivity of algorithmic governance methods would be accompanied by greater subjectivity in its recipients, who would thereby see their individuality more thoroughly understood, respected, and fulfilled.
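
How granular this understanding can get is easy to illustrate. The sketch below is a minimal example with synthetic data and invented behavioral features; the clustering method, plain k-means, is chosen here for illustration and is not one the text specifies. It shows how everyday signals can be grouped into fine-grained segments whose centers read as preference profiles.

```python
# A minimal sketch of micro-segmentation: cluster citizens by everyday
# behavioral signals, then read each cluster center as a preference profile.
# Data, features, and the choice of k-means are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
# Columns: evening energy use (kWh), transit trips/week, health-app min/day.
citizens = np.vstack([
    rng.normal([2.0, 12.0, 5.0], 0.5, (50, 3)),   # a transit-heavy profile
    rng.normal([8.0, 1.0, 40.0], 0.5, (50, 3)),   # a home-centered profile
])

def kmeans(X, k, iters=50):
    """Plain k-means: alternate nearest-center assignment and recentering."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.stack([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

labels, centers = kmeans(citizens, k=2)
# Each center is the "typical citizen" of a segment; finer segmentation
# (larger k, more features) is what the text calls granularity.
print(centers.round(1))
```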

The comfortable paternalism of algorithmic societies lies in giving people what they want, governing with proportionate incentives, and proceeding by inviting, suggesting, and guiding. Transferring this model to politics would raise no major issues were it not that the price of these benefits is often the sacrifice of some aspect of personal freedom. Given the discrepancy between the self-determination we claim to demand and the self-determination we are actually willing to exercise when comforts and benefits are at stake, the outcome is that the satisfaction of needs often comes at the cost of sacrificing spaces of freedom. It is true that many of our desires are satisfied in this way, but at the cost of a certain renunciation of reflecting on them; what we want takes precedence over what we want to want, and the minimal, implicit will of the consumer replaces the explicit political will.

The democratic limitations of algorithmic governance

Algorithmic governance is well suited to enhance certain aspects of the policy process, but it is of little use for others; it can correct human deficiencies and biases, identify preferences, and measure impacts. However, it is inadequate for dimensions of the political process that are not easily subject to computation and optimization—areas that are difficult to quantify and measure. This includes the genuinely democratic moments when the criteria and objectives that technology can later optimize are determined. The reason algorithms are politically limited stems from their instrumental nature. Algorithms are designed to achieve predetermined objectives, but they contribute little to determining those objectives, which is the responsibility of political will, democratic reflection, and deliberation. The role of politics is to determine the design of algorithmic optimization strategies and to consistently preserve the option to alter them, particularly in dynamic environments. In a democracy, everything must be open to moments of re-politicization, meaning there must be the possibility to question established objectives, priorities, and means. This is the purpose of politics and not of algorithms. Algorithmically optimized governance lacks the capacity to resolve genuine political conflicts or address the political dimensions of those conflicts, particularly when frameworks, ends, or values are involved. As Lucy Suchman noted in another context, robots perform very well when the world has been organized as it was intended to be (Suchman, 2007).

This duality of ends and means, of political goals and algorithmic optimization strategies, can be illustrated by the student distribution system implemented for New York City schools and the ensuing debate regarding which values should be prioritized in that distribution (Krüger and Lischka, 2018). The system can prioritize the maximum satisfaction of individual preferences or a balanced social mix within schools. Both objectives have valid reasons supporting them; one option emphasizes individual desires, while the other promotes social cohesion. It’s also up for debate what level of compromise or balance between the two values is most desirable and achievable if they’re to be respected at the same time. To determine this, a political debate about values and the involvement of those affected is necessary—a discussion from which an algorithm cannot absolve us.
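
The trade-off can be made concrete. The following toy sketch (invented students, schools, and capacities; not the actual New York system) brute-forces assignments under a single weight that balances first-choice satisfaction against group mix. The algorithm optimizes faithfully at either setting, but choosing the weight is precisely the value judgment that the political debate must settle.

```python
# A toy model (invented students, schools, and capacities; not the actual
# NYC system) showing where the value judgment lives inside the algorithm.
from itertools import permutations

STUDENTS = {
    "ana":   {"prefs": ["north", "south"], "group": "A"},
    "ben":   {"prefs": ["north", "south"], "group": "A"},
    "carla": {"prefs": ["north", "south"], "group": "B"},
    "dani":  {"prefs": ["south", "north"], "group": "B"},
}
CAPACITY = {"north": 2, "south": 2}

def preference_score(assignment):
    """Fraction of students placed in their first-choice school."""
    hits = sum(STUDENTS[s]["prefs"][0] == school
               for s, school in assignment.items())
    return hits / len(assignment)

def mix_score(assignment):
    """1 minus the average group imbalance across schools (1.0 = balanced)."""
    imbalance = 0.0
    for school in CAPACITY:
        enrolled = [s for s, sc in assignment.items() if sc == school]
        a = sum(STUDENTS[s]["group"] == "A" for s in enrolled)
        imbalance += abs(2 * a - len(enrolled)) / max(len(enrolled), 1)
    return 1 - imbalance / len(CAPACITY)

def best_assignment(weight):
    """`weight` encodes the political choice: 1.0 optimizes individual
    preferences, 0.0 optimizes social mix. The algorithm cannot set it."""
    slots = [sc for sc, cap in CAPACITY.items() for _ in range(cap)]
    best, best_value = None, -1.0
    for order in permutations(STUDENTS):          # brute force: 4! options
        assignment = dict(zip(order, slots))
        value = (weight * preference_score(assignment)
                 + (1 - weight) * mix_score(assignment))
        if value > best_value:
            best, best_value = assignment, value
    return best, best_value

print(best_assignment(1.0))  # favors first choices (three of four satisfied)
print(best_assignment(0.0))  # favors a balanced A/B mix in every school
```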

In this and similar cases, the issue is not merely about the implementation or transparency of the algorithms used, but about the value judgments involved in defining the objectives of education, which are diverse and sometimes conflicting, as one would expect in a pluralistic society. Political negotiation processes take priority over technical solutions, and technical solutions cannot replace the need for political negotiation. We are, therefore, addressing what we refer to as political issues.

Strictly speaking, political issues are those that can only be resolved through value judgments, while technical issues involve deciding how to implement intended objectives based on available knowledge. At times it is also politically controversial what kind of optimization counts as satisfactory and which kinds of knowledge are deemed relevant. It could even be argued that, however desirable optimization may be as a principle, the ideology of optimization, the belief that the effective implementation of certain objectives renders political discussion of those objectives unnecessary, may serve as a strategy of depoliticization.

Algorithmic governance seeks to achieve objectives that have not been debated, and which it neither establishes nor questions. Democratic politics, however, is not merely about processing information but about interpreting it within a framework of guaranteed pluralism. It is not just a matter of how best to achieve certain objectives but of how to decide upon them. Politics begins where debate arises about what algorithms should satisfy, which values they should uphold, and what conception of fairness they should serve. This idea can be expressed by recalling John von Neumann’s statement: we can build an instrument capable of doing everything that can be done, but we cannot build an instrument that tells us whether something can be done (Neumann, 1966, p. 51). In other words: the decision about what is computable cannot itself be computed.
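
Von Neumann’s remark has a precise counterpart in computability theory. The sketch below is an illustrative reconstruction of Turing’s diagonal argument, not drawn from the article: any candidate program claiming to decide whether programs halt can be turned into a program that contradicts its own verdict.

```python
# Illustrative sketch of Turing's diagonal argument: any claimed halting
# decider `halts(f)` can be fed a program built to contradict its verdict.

def make_contrarian(halts):
    """Build a program that does the opposite of whatever `halts` predicts."""
    def contrarian():
        if halts(contrarian):
            while True:      # predicted to halt -> loop forever
                pass
        # predicted to loop -> halt immediately
    return contrarian

# Any concrete decider we try is refuted by its own contrarian:
def naive_halts(f):
    return True              # claims every program halts

c = make_contrarian(naive_halts)
print(naive_halts(c))        # True -- yet calling c() would never return
```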

As in politics in general, when we talk about algorithmic governance, the notion of producing better decisions with the help of machines still requires a prior criterion for what constitutes a good decision. The tools responsible for optimizing decisions do not eliminate the need to discuss what constitutes a good decision. It is true that artificial intelligence aids in informing decisions and optimizing outcomes, but while some economists have tried to quantify and measure aggregate welfare, there is no predefined or uncontested notion of what constitutes a successful political outcome. 

The great promise of algorithmic governance is that optimal results will make us forget about the procedures we would otherwise demand. It is a type of governance that appears to prioritize effectiveness, even at the cost of excluding us from decision-making or reducing our role to a minimal, implicit, and individual presence, reflected in the requirements and preferences found in our digital footprints. If citizens are unable to oversee or influence algorithmic decisions, we cannot truly call it self-government.

Conclusion: the inevitability of deciding

The great challenge of the digital era is to resist the allure of depoliticizing our societies and overcome the inertia of traditional governance methods. We must avoid being seduced by falsely apolitical or post-ideological rhetoric while also moving away from practices that no longer align with new social realities. We are facing an attempt to conceptualize society in a depoliticized manner.

Contemporary societies require significant cognitive mobilization to address the problems they face, but the ultimate argument in favor of democracy is not epistemic but decisional. Everything possible must be done so that societies make the best decisions, and democracy does tend to produce better decisions than alternative models; its ultimate legitimacy, however, stems not from the correctness or quality of those decisions but from the decision-making power of citizens and the popular authorization behind it, however well or badly that power is used. The need to make decisions is the core justification for democracy, a form of government in which ordinary people have the final say over experts. There appears to be no technological device today that can free us entirely from the need to decide.


Artificial intelligence procedures cannot absolve us of that decision. Politics exists where, despite all the sophistication of our calculations, we are ultimately driven to make decisions that are not backed by overwhelming reasons or guided by infallible technologies. A humane world must be a negotiable world.

References

Daston, L., & Galison, P. (2010). Objectivity. Zone Books.

Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167-193). The MIT Press.

Krüger, J., & Lischka, K. (2018). Was zu tun ist, damit Maschinen den Menschen dienen. In R. Mohabbat Kar, B. Thapa, & P. Parycek (Eds.), (Un)berechenbar? Algorithmen und Automatisierung in Staat und Gesellschaft (pp. 440-470). Fraunhofer-Institut für Offene Kommunikationssysteme FOKUS.

Lash, S. (2007). Power after hegemony. Theory, Culture & Society, 24(3), 55-78.

Neumann, J. von (1966). Theory of self-reproducing automata. University of Illinois Press.

Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.

Porter, T. (1995). Trust in numbers: The pursuit of objectivity in science and public life. Princeton University Press.

Suchman, L. (2007). Human-machine reconfigurations: Plans and situated actions. Cambridge University Press.

Daniel Innerarity

PhD in Philosophy. Professor of Political and Social Philosophy, Ikerbasque researcher at the University of the Basque Country, and director of the Institute for Democratic Governance. Professor at the European University Institute in Florence. Regular opinion contributor in the press.
