AIdeology
Contesting Computational Spatiality
I draw here on the work of Federico Cugurullo, whose recent article in Antipode offers a rigorous account of artificial intelligence as an ideological formation. Cugurullo analyzes how AI operates as a framework for organizing perception, space, and political expectation. Artificial intelligence circulates today as a set of assumptions about the organization and function of social order. It is associated with efficiency, objectivity, sustainability, and progress. These associations shape institutional behavior, policy frameworks, and public discourse. They structure what counts as a reasonable response to social problems and what appears inefficient, unrealistic, or obsolete.
AI discourse does not primarily persuade through argument. It works through normalization at scale. Political decisions are framed as technical necessities. Administrative authority is translated into metrics and models. Social inequality is rendered as a problem of insufficient data or imperfect optimization. Within this horizon, disagreement appears as a misunderstanding of how systems work rather than as a conflict over values or power.
Cugurullo’s insight is that ideology is not external to technology but immanent to it and to technological practice. AIdeology does not sit outside technology as a distorting layer of propaganda that could simply be stripped away. It operates through discourse, institutional routines, and spatial arrangements. It shapes how intelligence is defined, how responsibility is distributed, and how future possibilities are imagined. Language is central. Terms such as intelligence, learning, autonomy, and objectivity are used as if their meanings were settled. Their apparent clarity conceals the fact that AI systems depend on specific economic arrangements, labor processes, and infrastructures. The vocabulary of neutrality masks continuity with existing power relations. Decisions encoded in models appear as outcomes of computation rather than as political choices.
One of the strengths of Cugurullo’s intervention lies in its attention to space. AI is never abstract. It is embedded in concrete environments: cities, borders, logistics networks, platforms, and data centers. These spaces are reorganized, or “coded,” around prediction, monitoring, and control. Urban governance is reframed as technical management. Populations are rendered legible through continuous data extraction and classification. The “smart city” provides a clear example. It is presented as an upgrade in efficiency and sustainability. But it also involves a reconfiguration of authority. Decision-making shifts from public deliberation toward algorithmic systems whose operations are difficult to contest. Accountability is displaced. Political responsibility is absorbed into technical infrastructure.
This spatial dimension is constitutive, not incidental. AIdeology gains traction by attaching itself to concrete projects, built environments, and policy initiatives. It presents itself as practical, inevitable, and future-oriented. Resistance appears impractical by comparison. At the same time, the material conditions that sustain these systems tend to disappear from view. Ghost work, the hidden human labor of data annotation and content moderation, is rarely acknowledged. Environmental costs are deferred or reframed as temporary inefficiencies. The extraction of minerals, energy consumption, and infrastructural violence are treated as externalities rather than structural features.
This displacement is ideological in a precise sense. Attention is directed toward outputs and promises while inputs and conditions recede into the background. The system appears intelligent on the surface because the work that makes it function remains invisible. Cugurullo identifies several recurring myths within AIdeology. The first is the belief that intelligence can be separated from embodied social practice. The second is the assumption that decision-making improves as politics recedes. The third is the expectation that automation will dissolve rather than reorganize capitalist relations. These myths circulate across corporate marketing, policy documents, academic research, and popular culture. They form a shared horizon of expectation rather than a single coherent doctrine.
The danger lies not only in the fact that these faulty claims are widely believed. More importantly, they structure the debate itself, excluding from discourse any position that refuses to share their assumptions. Even critical discussions of AI often remain within this frame. Calls for ethical AI, responsible AI, or inclusive AI frequently accept the underlying assumptions that intelligence must be computational, that optimization is the appropriate response to social complexity, and that technical refinement can substitute for political confrontation.
The ideological force of AI discourse lies in how it distributes responsibility, and still more in the responsibility gaps it opens. Harm becomes a design flaw. Injustice becomes hidden bias. Structural inequality becomes a problem of data representation. These translations do not eliminate harm; they displace it into technical domains where political accountability is diluted. The result is a form of depoliticization that does not eliminate governance but reorganizes and effectively hides it, embedding power in infrastructure. Power persists, but its mechanisms become harder to discern. Responsibility for decisions is thereby diffused across systems, models, and infrastructures.
This has consequences for the production of subjectivity. Individuals are increasingly objectified as data points, risk profiles, and behavioral patterns. Participation morphs into feedback. Agency becomes compliance with system design. The social world is approached as a space to be optimized rather than contested.

Cugurullo’s intervention does not rest on nostalgia for a pre-digital past. It does not deny the utility of computation or automation. Its critical force lies elsewhere. It insists that intelligence is not a neutral category. It is a political designation that carries assumptions about value, authority, and legitimacy. To define intelligence in computational terms is to privilege certain forms of reasoning over others. It elevates prediction over judgment, efficiency over deliberation, and optimization over conflict. These priorities are not inevitable. They reflect specific historical and economic conditions.
The spatial reorganization associated with AI makes this clear. Borders become automated. Urban life is monitored and managed through sensors and platforms. These transformations are presented as technical upgrades, but they redistribute power and vulnerability in increasingly uneven ways. The question, then, is not whether AI systems function well. It is rather what kind of social order is presupposed when intelligence is defined as computation, and which forms of authority are legitimated and stabilized by that definition.
Our aim is not to predict technological futures or to evaluate individual systems. It is to examine the assumptions embedded in contemporary AI discourse and the forms of life those assumptions support. Ideology rarely announces itself as such. It operates through repetition, normalization, and the quiet removal of certain questions from public debate. AI discourse functions in this way. It defines the limits of what can be asked and what is treated as already resolved. What remains unresolved, however, is the relation between intelligence and power. Who defines what counts as intelligence? Who benefits from its application? Who bears its costs? These questions do not disappear when systems improve. They become more urgent as systems become more pervasive.
To engage critically with AI requires more than better models or ethical guidelines. It requires attention to space, labor, and political responsibility. It requires treating intelligence as a contested concept rather than a technical achievement. The task is not to imagine a world without technology. It is to refuse the reduction of social life to optimization problems. It is to insist that decisions affecting collective life remain open to contestation, rather than delegated to systems that present themselves as impartial.
The guiding question, then, is not where AI is going. It is what kind of world is already implied when intelligence is framed as computation, and which forms of power become easier to exercise under that framing. That question remains open, and we must fight for our right to keep it open.