COMPUTING HUMAN OVERS[A]IGHT: LAW/APPARATUS/VISION/AGENCY
DOI: https://doi.org/10.1145/3744169.3744187
AAR 2025: The sixth decennial Aarhus conference: Computing X Crisis, Aarhus N, Denmark, August 2025
Computing Human Overs(a)ight is a conceptual exploration of human responsibility for oversight in high-risk AI systems, as introduced in the European Artificial Intelligence Act (Art. 14). Investigating the legal and computational framework of high-risk systems, and human responsibility within it, we amplify the possible enactment and embodiment of this article: we extract it as a phenomenon from the law in order to understand the political and ideological notion of agency in automated systems. The accelerated dissemination of the tools, models and products grouped under the umbrella term AI over the last decade has led to their legitimation in the eyes of the law, even in high-risk operations. Article 14 sets the scene for problematizing current intertwinements between computational and legal acts and abstractions. Such an approach can help us understand the soci[et]al consequences of human-machine operations, addressing the dispositions between computation and human agency, transparency, responsibility, and dignity. Relying on critical media studies that address computational processes through a distinction between operational and representational notions of a computational image, we question the [f]actuality of what is seen and what can be overlooked in the human oversight of high-risk AI systems.
ACM Reference Format:
Kristina Tica and Joaquin Santuber. 2025. COMPUTING HUMAN OVERS[A]IGHT: LAW/APPARATUS/VISION/AGENCY. In The sixth decennial Aarhus conference: Computing X Crisis (AAR 2025), August 18-22, 2025, Aarhus N, Denmark. ACM, New York, NY, USA, 12 pages. https://doi.org/10.1145/3744169.3744187
Article 14: Human oversight
1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use.
[…]
4. [...] e) to intervene in the operation of the high-risk AI system or interrupt the system through a ‘stop’ button or a similar procedure that allows the system to come to a halt in a safe state.
EU Artificial Intelligence Act, Article 14(1) and (4)(e)
1 Introduction
The point of departure of the Computing Human Overs(a)ight critique is the notion of human oversight enacted by the EU law regulating AI, the European Artificial Intelligence Act, Article 14. We argue that computer vision-based systems, defined by automated real-time processing, render the human oversight of high-risk contexts declared by the EU AI Act useless. This legal provision refers to “human-machine interfaces”, “real-time control”, a “natural person” and even a “stop button” to bring the system to a halt. Despite invoking the human, these mechanisms displace people's agency and political action by making oversight subject to the AI systems' infrastructures, protocols, and automation. Human oversight is the currency that the legislator asks society to pay in exchange for the possibility of having high-risk AI systems. As such, it lies at the heart of the question of control, of the subjugation of machines to humans, or, put differently: what is left for us, humans, now that AI is there and everywhere?
In the promise of a world computed by machinic perception, Computing Human Overs(a)ight investigates the tensions and paradoxes of human oversight in applications of high-risk AI-based systems as declared in the EU AI Act, Article 14, which has already been set into action. In the expansion of the global infrastructure, as Tech accelerates from Big to Bigger, the decade behind us has been marked by the growing pressure that looms over our societal capacity to adopt and adapt to all emergent technological hopes and hypes. Guided by the law as a framework for our critique, we argue that law reveals the skeleton of a society: its commitments, preferences and predilections and, also, its vices. The law is formulated and placed into action once a certain set of properties, behaviours and regulations is enacted in the functioning of society; it can therefore help us navigate the current understanding of human-machine operations, mainly in the scope of computer vision, not only on a technical but also on a social, ethical and legal level, questioning human agency in the automated apparatus.
From a legal perspective, we take inspiration from Robert Cover's opening of Nomos and Narrative, “We inhabit a nomos — a normative universe. We constantly create and maintain a world of right and wrong, of lawful and unlawful, of valid and void” [6], in conjunction with Henri Bergson's “this aggregate [ensemble] of images is what I call the universe” [3]. As a result, our view of legality is summarized in the following sentence: we construct a normative universe, which is an aggregate of images. This is in line with the idea that the legal system is first a system of images and then a system of rules, put forward by Peter Goodrich [12]. Thus, the question is: how do we construct a legality when there is no image to see? How do we exercise human oversight, and create and maintain a world of lawful and unlawful, in AI systems whose computational images do not let themselves be seen by human eyes?
An image in this critique is placed on a threshold between the observational and the operational: what the oversighter sees on the representational layer of an image is a surface beneath which many computational processes emerge. While the image and vision the legislator refers to are representations of the rich, concrete phenomena that make up our social life, the computational images that AI systems operate on are abstractions that no longer refer to the phenomena themselves, but to an aggregate of data layers. A computational image is not a photographic proof; it is a data image. In the processing of such an image, the data is rendered through layers of machinic calculations, becoming an operative substance for automated decision-making processes rather than serving as a legal testimonial upon which a human operator delivers judgement. By the time the human gets to decide and understand what is wrong or right in the content projected on a visual interface, an irreversible information-encoding protocol [11] has already taken place in computational processing, creating a web of values and meanings [that are by their nature not compatible with the human logic of causality].
Building on a tradition of image studies from media theory and the arts, and on critical legal theory [12, 32, 36], we work with the concepts of the technical image [10], invisuality [20], and the operative image [28]. Through this critique of human oversight we address the imaginaries of the EU legislator, exposing the [im]possibility of human oversight. At the bottom of the issue lies the collision between the will of the law and the operations of AI systems, the constitutive tension between their ideologies and materialities. More broadly, we address the social impact of computation under the notion of AI, marking the current state of human agency in the automation of decision-making, labour, law, and governance, and providing insight into the computational, legal and artistic examination of human oversight in high-risk AI systems. After introducing the frameworks of the law-in-action, we discuss the ontology and teleology of an apparatus for human oversight. The formation of a fixed place is an illusion of control; its function and aesthetic are mainly symbolic rather than operative.
This introduction is followed by four sections that critique human oversight of AI systems and the possibility of constructing legality from the perspective of the law, the apparatus, vision, and agency. The critical analysis of the legal provisions reveals the tension between the de jure subject of oversight, the natural person, and the de facto subject of oversight, the platform, as provider and deployer of AI systems. The apparatus section shows how the mechanism of oversight came to matter by removing the human excess from machine-based systems, while furnishing this role with concepts from cybernetics, as shown in the Cybersyn project and the concept of an operations room. The vision section highlights that the computational images to be seen are not produced for human sight but for the operations of the AI systems, the machinic vision. As such, oversight, when the image is invisual to the human eye, is a form of machinic/platform seeing over itself. The closing section questions the agency humans have in constructing a meaningful present and future when oversight is limited by the predictive and preemptive politics of AI systems, forming a techno-solutionist self-fulfilling prophecy through the re-engineering of society at a global scale.
Through this framework, we decode and deconstruct human oversight under notions of contemporary algorithmic culture [29,30] and relate it to psychological, perceptual and cognitive shifts in visual culture [2], artistic practices of representation [8,35], and socio-demographic concerns and consequences of automation [23] that have accelerated over the last decade due to the global distribution and adoption of the tools, models and products under the umbrella term of Artificial Intelligence. Against a background of public branding, the mystification or commodification of these tools, set against actual technological developments and systemic implementation on a global scale, confusion and dissonance arise, and the understanding of how, why, and for whom these systems work becomes an urgent question.
Political determination and social challenges are conjoined with the technological environment, and in such a constellation it is necessary to disambiguate the distribution of responsibilities and the notion of agency between the human [cognition] and the automated [systems]. In public discourse, automated systems and algorithmic data processing are frequently praised as a form of intelligence, which obfuscates the purpose and the limits of implementing these algorithms and tools across different systems and industries. This perspective is one of the key motifs of this critique: to differentiate between the operations of AI as a technological cluster of tools and models, hardware and software components, and AI as a discourse, a concept and, not least, an ideology. Such an understanding of AI is central for us, so that we can [critically] compute human oversight in all its [statistical] probabilities.
2 THE LAW
2.1 Human Oversight by the European Artificial Intelligence Act: Article 14
“Article 14: Human oversight
1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use. […]
4. [...] e) to intervene in the operation of the high-risk AI system or interrupt the system through a ‘stop’ button or a similar procedure that allows the system to come to a halt in a safe state.”
With the increase in the availability of AI-based systems, the EU took the lead in regulating the implementation of these systems through the EU AI Act, prohibiting certain practices and creating new control mechanisms. The EU AI Act proposes new concepts that leave many questions open, for example those related to the human oversight of high-risk AI systems. Instead of leaving these questions to the experts in computer science, software engineering, data science or statistics, the idea is to find embodied and situated ways of social participation in the definition of AI legalities (what is legal-illegal, lawful-unlawful, right or wrong [5]), exploring how EU legislators, through the AI Act, address the polyvalent blend of scientific, commercial, and political interests that AI entails.
As an emergent socio-technical phenomenon, AI is posing new questions to our society; as such, AI legislation resorts to futurist imaginaries, grounded in science fiction films and literature as well as nuclear war scenarios [14]. This is especially telling in how the law, the EU AI Act, imagines human oversight of high-risk AI systems: via human-machine interfaces that control the systems in real time, including a stop button [37].
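What such a ‘stop’ procedure can amount to in software is strikingly thin. The following is a minimal, hypothetical sketch in Python (the names, timings and threading-based design are our illustrative assumptions, not anything prescribed by the Act or implemented by any provider): a flag checked between automated processing steps.

```python
# Hypothetical sketch of a "stop button": a flag the automated loop consults
# between steps. Nothing here is specified by the EU AI Act; it only
# illustrates how small such a mechanism can be relative to the automation.
import threading
import time

stop_requested = threading.Event()  # the "button" the natural person presses

def high_risk_loop() -> None:
    steps = 0
    while not stop_requested.is_set():
        steps += 1                  # stand-in for processing the next frame/record
        time.sleep(0.01)
    print(f"halted in a 'safe state' after {steps} automated steps")

worker = threading.Thread(target=high_risk_loop)
worker.start()
time.sleep(0.1)        # oversight happens at human speed...
stop_requested.set()   # ...while the system keeps processing until asked to stop
worker.join()
```

Whatever interface wraps it, the halt arrives only between steps and at human speed; everything computed before the flag was checked has already taken effect.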
These abstract placeholders of human oversight are left by the law to be filled by someone. These are not mere technical decisions of computational efficiency, stability and capacity, but rather of a politico-ethical character. The proposal being put forward is that people, situated in their communities, are the ones who should give meaning to the questions of oversight — by whom? What content is in? What is out? When does it become censorship? What are the limits? How do we envision participatory forms of decision-making regarding human oversight? As such, human oversight should be a mechanism that makes room for people to construct a legality on the issues and challenges brought in by AI, and most importantly to contest AI commitments, values, and predilections.
From that point, this critique establishes a dialogue on feasibility, accessibility and flexibility in the development and understanding of both systems: the law and the automated decision-making processes grouped under the term AI. Both systems function on abstract rules that aim to be universally applicable, yet change specifically and fragmentarily as they live and transform with society. New rules must be invented, while old ones must be revisited or readjusted at the pace of soci[et]al change.
2.2 Human Oversight for high-risk AI systems
The European Artificial Intelligence Act introduces the notion of human oversight for the purpose of allowing AI-based applications that may affect fundamental rights, security and health, categorized as high-risk AI systems. In a way, human oversight is there as a condition, a license to take the risk.
In the EU AI Act, Annex III: High-Risk AI Systems Referred to in Article 6(2), there is a list of proposed high-risk AI systems, which in summary includes the following:
“These use cases include AI used in biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice. However, some exceptions apply for AI used for biometric identity verification, detecting financial fraud, or organising political campaigns.” [38]
This Annex of high-risk AI systems brings us to a question: who is at risk in this view, the system or the civil body? Is human oversight there to keep the system functional by guarding its lawful borders, or to protect human dignity and safety at the margin of it? Or is it enabling governance of certain geopolitical flows, providing sufficient information about the object of governance “to identify and enforce the broad outlines of any [given] plan”? [4]
In Computing Human Overs(a)ight we explore the manipulative capacities of a data image, and the limitations of human decision-making on what is seen, what is considered a risk, and what content is being regulated under the guise of a safety protocol for post-human property and agency [17]. To understand the framework for human oversight, it is important to understand the properties of the systems-in-the-making, as well as the priorities of their makers/owners, or rather, how these affect the understanding of the external world.
To operate on an image, the image must already be set in a frame and reduced to a fixed spot, from which data is translated and mass-absorbed into the system. Such image-data is an object of computation; it is inscribed in a larger modality of governance characterized by pure functionality without meaning and by the automation of thought and will [1].
The law-in-text envisions human oversight of AI systems with the human at the centre, embodied in the form of a natural person. However, when we look at the materiality of these mechanisms, the law-in-action, they tell a different story. The human subject starts to fade away from the centre, pushed aside by an apparatus of interfaces and buttons, designed and developed by providers and decided upon by deployers.
3 APPARATUS
The apparatuses of oversight created by the law, including human-machine interfaces and a stop button, recycle older forms of State power now operating via corporations, and more specifically platforms. The EU AI Act refers to them as the provider and the deployer, new techno-legal concepts that create a security dispositif (apparatus) made of institutions, agents and artefacts to protect corporations and their innovations from undesirable external influences: people and their ideas [14]. This ensures the stability of the corporation/platform's profitability and secures the promises of progress that AI has sold to the EU legislator. “Taking contemporary technical images as a starting point, we find two divergent trends. One moves toward a centrally programmed, totalitarian society of image receivers and image administrators, the other toward a dialogic, telematic society of image producers and image collectors.” [10]
A byproduct of the apparatus is the creation of a subject: human oversight creates a subject called the natural person, who is awarded the capacity of overseeing but cannot see. This is explained by the tension between the de jure subject and the de facto subject [36]. The trick here is that while the natural person as de jure subject oversees an object, an AI system, the natural person is, in turn, being seen by the AI system, provider, and deployer, the de facto subject of oversight, the one who really sees. This also raises the question of boundary making, first of the subject as such, and then within the de jure subject, the natural person: who is granted the capacity to oversee? As it stands, loyal to its commitments and predilections towards corporations, the law lets the provider design and develop the “technical” part of the apparatus, while the deployer gets to decide who can see. This is a duty the law gives to the deployer, who “shall assign human oversight to natural persons who have the necessary competence, training and authority” (Article 26). This is just another form of corporate bureaucracy [13], transferring oversight to the ways the platform sees [20], to its infrastructures and procedures for image production and collection.
The European legislator gives the power to decide who can see to the deployer, be it a self-interested, profit-driven corporation or a public organization at risk of clientelism and authoritarianism. The EU AI Act leaves us with the question: can the individual citizen see? And if so, from where? Does the individual citizen still over/see, or see from below, or from within? The way the apparatus is constructed by the law, it is a fictional game between fictional players called provider and deployer, leaving the human out: the undesirable influence that exceeds the secured space for AI to function. For an apparatus to effectively manage and order, “those qualities resistant to systematization must be stricken from the record. This subtraction process, this shaving off the “excess,” is necessary for the apparatus to function.” [16]
The first excess that the EU AI Act removes is the human excess, by defining AI systems as machine-based, making them discrete, manageable, and governable [39]. By removing this excess, the apparatus not only displaces the human with the provider-deployer corporations, but also turns oversight from a matter of law, ethics, and politics into a matter of infrastructures, procedures, and automation.
There have been extensive efforts to make the human entanglement with AI systems visible—through physical and intellectual labour [7], consumption, and most importantly through being the [crowd]source of the collective intelligence that AI attempts to mimic [30]. However, for the apparatus to work, that excess needs to be left out, excluded, and indiscernible. This is not trivial: at the centre of the question of human oversight is the answer to what is left to us now that AI is there. The role assigned to people, now that AI is there, is overseeing; and while we know AI is not there, the ideological blindness of the legislator assures us that it is.
At the centre of this apparatus lies the constitutive tension between what is revealed and what is concealed, and this same tension holds for the law. Human oversight can be understood as “an act of concealment in which, once again, a performance mode is called upon. [...] a ‘counter-performance within performance’.” [16] While the governance of AI systems, designed and developed by the provider and the deployer, is in the hands of these corporation-platforms, the law produces a symbolic organ for human oversight that, in practice, is corporate oversight of themselves. As such, the apparatus is born out of the tension between ideology and materiality. The trick of the apparatus is then to disown its ideological kinship, the contestability of its values, and to relate itself to technique, the factual and machinic operations of science. Far from God, close to the server.
3.1 Project Cybersyn
Moving away from the Silicon Valley discourses that place it as the origin of everything, we ground our exploration of an apparatus for automated, computational systems and human oversight in a different —rather unexpected— socialist genealogy. The concept of a synchronized, networked administrative system, envisioned on the premises of cybernetics, had already been developed by the mid-20th century. In Chile, during the presidency of Salvador Allende, Cybersyn was a nationally established project of a cybernetic revolution that almost happened in the 1970s.
This project can be understood as a national-level AI for a social-democratic economy, one that operated effectively in real time on real-time data. Cybersyn's telex information system proved its use in October 1972: the telex network “enabled communication across regions and the maintenance of distribution of essential goods across the country” [25].
The cybernetic premises of interconnectedness, of a networked system of human and non-human agents, promised to enable an equilibrium: optimal balances between inter-species agents and human-machine communication, through real-time information processing and operation across a distributed network of different actors. Unfortunately, as the destabilisation of the government took place, the Cybersyn project suffered a heavily negative media campaign, portrayed as a totalitarian project of mass surveillance and control [25]. When the government was overthrown in a coup d'état, funded and supported by the CIA as part of the United States' Cold War economic warfare, the Cybersyn project was dismantled and destroyed. In this state-controlled, national-level AI, the networks and systems were created for the benefit of the economy; had the technological system not been destroyed along with the political one, it would have been possible to see how the social negotiations regarding the system's development progressed over time. As that chance was taken away, we can only speculate. The entrepreneurial and libertarian stream found a way to rebrand a technocratic image into a techno-evangelistic or techno-solutionist one, promising solutions and services of proprietary technologies for [private] profit.
Aside from the fear of dismal futures, those who fear less found fortune in these ideas: that same ‘totalitarian’ Cybersyn, in the words of Evgeny Morozov, helped pave the way for big data and anticipated how Big Tech would operate, taking as an example “Uber's use of data and algorithms to monitor supply and demand for their services in real time” [26]. U.S. imperialism did not only fund and sponsor the coup d'état that ended Allende's presidency and the Cybersyn project; it also stayed faithful to its colonial ambitions, extracting and reappropriating the ideas so as to reform and reframe them into a new ideological and practical agenda.
From a current standpoint, it is hard to imagine a sovereign techno-social system on a national or international level that would benefit the economy, the regulative system and society, as another idea has been historically proven: that the alternative is not allowed. The global-scale framework of resource extractivism [7] and data accumulation holds its competences and competitiveness within a persisting Cold War narrative. What was once the space race is nowadays Big Tech's race for the biggest, fastest AI model. An accelerationist pressure runs at all costs [physical, material, cognitive, intellectual], and instead of solving or integrating systems to directly improve the functioning of society as an organism, it further destabilises economies and politics and amplifies social divisions on a global scale.
3.2 The Operations Room
The Operations Room is a regulatory chamber, a fixed space for surveillance and control. It provides a gaze from many sides, but the view from above does not belong to the human agent. The human agent that observes is also being observed. The question of human oversight is a question of human and computational perception of legality in digital environments, of how they make sense of it and how their perception shapes their possibilities for action. Taking a (post)phenomenological approach to law [15], we question how people could perceive and make sense of the legal provisions of the EU AI Act, Art. 14, and its obligation to design human-machine interfaces to perform human oversight of high-risk AI systems. “The people enrolled in this apparatus risk an abstraction of accountability and the production of ‘thoughtlessness’.” [23]
The legal dispositions of the EU AI Act, quoted in the previous section, give us the opportunity to explore different scenarios of how this human oversight obligation becomes a reality, and to imagine decentralized alternatives. These scenarios take effect not only in a technical reality, but in a social one too. The conceptualisation of the Operations Room for Human Oversight comes with an inquiry that deconstructs the premise that an automated computational system can be assigned to run a high-risk decision-making process, and asks whether a human operator-oversighter has adequate access to understand the internal processes of such a system. It also challenges the ideas and concepts on which AI-based systems are developed. Exposing the human condition and responsibility in the oversight process, its guidelines or ethical concerns, can also expose the logic on which these systems are built: the mistakes and biases, the possibilities to subjectify the data or the desired outcomes [35].
This apparatus for oversight enacts the world as visualised or framed by legal regulations, law, and computational statistical operations. Before the task of understanding what we see, we need to understand from where we look. The space for oversight is fixed: it is an ocular-centrist control room. It is an embodied experience of the duty and responsibility of control. Such a fixed place is a symbolic centre: the operations room is a representation of control, a theatrical place for filling in a bureaucratic role of observation, not a place from which a decision can have an immediate effect or impact on the real-life situation under negotiation, beyond the intervention of reporting a system malfunction.
The operations room itself holds a cinematic power, a fixed —yet distributed— gaze, the visual exaggeration of oversight, stacked up with a multitude of sources, information, data-images. In project Cybersyn, the Operations Room was a social environment. Characteristically macho for the cultural moment in which it was made, it was designed for the people in charge to debate or negotiate the processed information, holding the power to make decisions based on discussion and expertise. They all met their demise, as their power over the system was visible, exposed, and therefore traceable and vulnerable.
That is why, by current [progressive-]proxy standards, the Operations Room for human oversight is a role-play control centre, with a symbolic human worker holding responsibility for the system's calculated risks, enduring a mundane job procedurally akin to that of a warehouse security guard, while the actual owners observe from afar, outside of any government's territory. While the human-oversighter pursues the work of empty surveillance over a data-reenactment of reality, the machine deterritorializes the action at a distance, translates it into data patterns, into a web of probabilities unseen by human senses, and yet delivers a political action.
The human oversight apparatus materializes the practices of oversight by removing the excess, the human element that can disturb the purity of the technical. By doing so, it places the human outside the boundaries of the system, as an external controller who, as such, holds no operative power over it. The Cybersyn project and its operations room offer historical examples of how that role of apparent control is furnished with the conceptual framing of cybernetics and with a physical place from which to exercise its symbolic control, the operations room—a symbolic representation of power. The choice of the operations room as the command centre is not incidental; it reveals the common genealogy of both symbolic oversight and the operational image.
4 VISION
“Yet what if it is not a human eye, but the inhuman, digital and rhizomatic eye of the web that contemplates images?” [31]
4.1 The Invisual
The place comes first, then the apparatus, and then the human. Next comes the object of the machinic gaze, and the question of what the display of oversight is for the human worker. The invisual, in the work of MacKenzie and Munster, exists as a nonrepresentational observation that operates in and through the image and, as such, achieves its new modality within AI-based [or computational] layers of image processing [20]. We provide an insight into machinic vision, mainly computer vision, as the set of tools and models where the main political action takes place. By doing so, we analyse the problems of reduction, translation and expansion of an image, from input to output. What makes the computational image invisual is:
“[...]the formatting of operations, as various visual processes and materials pass transversally through platforms, cuts off the ability to see across, look at, or step back and observe the vast array of contemporary distributed imaging operations. The platform itself clears visuality of such ‘oversight’.” [20]
Computed images go far beyond cinematic or photographic ‘framing’. Following Galloway's remark on digital information as “nothing but an undifferentiated soup of ones and zeros, data objects are nothing but the arbitrary drawing of boundaries that appear at the threshold of two articulated protocols” [11], they transpose real-life, real-time events or subject information into the algorithmic processing system, where a pixel matrix of values, predictions and metadata forms the new set of values of the image file. When we talk about computer vision, we talk about dataset training, models, human labour, and intention, the goals set for the algorithmic task in the code [9]. We do not talk about representation; we talk about calculation: the detection of data [numeric] patterns, approximations, and recurrences as mathematical correlations and probabilities. The system predicts the result, and we decide what we make of it. The display of oversight is the digital, computational image, a re-rendered and re-enacted reality on a screen, an interface of [apparent] control.
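To make this shift from representation to calculation concrete, the following is a minimal sketch in Python; all values, class names and the toy linear “model” are hypothetical assumptions, not any deployed system. To the machine, an image is only an array of numbers mapped onto class probabilities fixed in advance by whoever trained it.

```python
# Minimal, hypothetical sketch: the "image" the system operates on is a matrix
# of numbers, and its "seeing" is a calculation of class probabilities.
import numpy as np

rng = np.random.default_rng(0)

image = rng.random((8, 8))  # an 8x8 greyscale frame: nothing but pixel values

# A toy linear "model": learned weights map flattened pixels to class scores.
# The classes ("usual", "suspicious") are fixed before any event ever occurs.
weights = rng.normal(size=(image.size, 2))
logits = image.flatten() @ weights

# Softmax turns scores into probabilities: a correlation with training data,
# not a representation of the event itself.
probabilities = np.exp(logits) / np.exp(logits).sum()
print(dict(zip(["usual", "suspicious"], probabilities.round(3))))
```

The output is a pair of probabilities, not a picture of anything; whatever is rendered on the oversighter's screen is added afterwards, for human comfort.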
An all-seeing, top-down view has blind spots for the plateau of possibilities, for horizontally distributed social relations and frictions. The presumption is that decision-making is more objective or neutral the more distant it is from the actual place of action/observation. Computational imaging and the scope of its mediations are still attuned to the human notion of representation, producing images that are not necessary for the operation of the system but are pleasing to the human senses and reassuring of humans' self-conception. However, these “meat-eye” comforting images [27] are the product of scientific and technological abstractions, such as statistical and mathematical analysis and modelling, big data, and neuropsychology, among others [19].
An AI system, an algorithmic model based on (in this particular critique) computer vision, can be programmed and produced to trace a certain set of values, arbitrarily chosen by the objectives of the provider-deployer (platforms), whether biometric data collection, emotion detection, civil behaviour evaluation, traffic control, law enforcement, or border control [38]. What the system is instructed to see is a specific set of values, controllable behaviour, while anything outside the predicted scope registers as irregular or even goes undetected. Is the system there to process the ‘normal’, predicted behaviour, or to warn about anomalies? How much work can be imposed upon a human-oversighter to discriminate all possible data-behaviour anomalies that the system can recognise? And how many will stay overlooked?
The binary classification of behaviour is essentially divided into the categories of: (1) usual behaviour inside the system [or normal, as per the model of an AI system in use]; and (2) unusual behaviour, either (a) inside the system [suspicious or alarming, as the system's model is trained] or (b) outside the training data and parameters of the system, which stands for any new event or circumstance producing a possible false positive or false negative. On the notion of false positives: as computer and human bias are embedded in the system's infrastructures and protocols, another responsibility in human oversight is to make sure the system does not misinterpret a subject [civilian/person] as a criminal or a threat. Anything that does not fit in the frame of normal, usual behaviour is turned into a suspicious action, as sketched below. In the eye of the algorithm, we are all possible suspects.
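A minimal sketch of this binary logic follows, with thresholds and scores chosen purely for illustration (they are our assumptions, not those of any real system): whatever falls outside the learned ‘normal’ range is flagged, whether or not it is a genuine threat.

```python
# Hypothetical anomaly rule: "normal" is whatever the training data contained;
# anything beyond k standard deviations from its mean is flagged as suspicious.
import numpy as np

training_scores = np.array([0.90, 1.10, 1.00, 0.95, 1.05])  # behaviour seen in training
mean, std = training_scores.mean(), training_scores.std()

def classify(score: float, k: float = 2.0) -> str:
    """Binary verdict: inside the learned range -> usual, outside -> suspicious."""
    return "usual" if abs(score - mean) <= k * std else "suspicious"

# A genuinely new but harmless event outside the training distribution is still
# flagged: a false positive by construction, not by malfunction.
for event, score in [("routine behaviour", 1.02), ("novel but harmless", 1.80)]:
    print(f"{event}: {classify(score)}")
```

The verdict says nothing about the event itself, only about its distance from what the training data happened to contain; the human-oversighter inherits the task of sorting such flags after the fact.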
4.2 The Algorithm
The algorithmic categorisation of behaviour is mere pattern detection, unable to detect or understand the nuances of the law. In the journey from a real-life event to a data visualisation or a computer vision-processed image, AI systems recreate an event and its evidence, assessing the truthfulness or [f]actuality of the event. The transposition of the witness and the decision maker into a system and an automated algorithmic protocol amplifies the risks of misinterpretation and biased understanding of the event/case under observation. A corollary effect of the digitalization of society and organizations is the need to create new regulations and extend existing ones, but recently also the displacement of traditional sources of regulation and coordination by technological regimes [34]. This institutional displacement by technological regimes can be articulated under two logics: digital omniscience, in which all aspects of reality can be captured in the form of digital data, and digital omnipotence, in which all activity is controlled by information systems [33].
In the transmission/translation of the input, the algorithm does not mediate the event; it can manipulate the [f]act by reduction, visual amplification and extrapolation of certain patterns. Therefore, there is always something left unseen in human oversight. Reliance on computer vision in high-risk and vulnerable cases carries a further risk: the invisuality of both the social behaviours and the pattern tracing involved in the recreation of the ‘evidence’ of an event. The apparatus for oversight enacts the world as visualised or framed by legal regulations, law, and computational [statistical] operations, an algorithmic probability of legally approved behaviour. The focus is on the process: by deploying oversight, an insight into the system is enabled, into its materiality [and fragility], seeing beyond what was seen before on an operative scale, but also into the scope of social negotiations that such a system demands.
Law and its regulations are a framework, a system for navigating context-specific environments. If the law operates on a binary, deterministic categorisation, it is a projection of the totality of normativity/normalisation, a systemic punishment for everyone who falls outside the literal protocol of predicted behaviour. The legal understanding of high-risk AI systems overlooks the need for social negotiations, while humanising the algorithmic protocol by placing a human-oversighter as a mere witness of a computational process, and not of the event itself.
The invisuality of the operational images on which AI systems rely to operate leaves the human-oversighter as a mere notary who testifies that a system operates according to its infrastructure, protocols, and algorithms as set by the provider and deployer. The visuality that algorithms make possible does not afford access to the abstraction that makes up the AI system's outputs. As such, control is reduced to whether the system works as mandated by the provider-deployer, a matter of technical deployment and execution and no longer a juridical and political-ethical matter. The role of the human-oversighter is that of a designated witness of operations that cannot be seen. A control agent without agency.
5 AGENCY
“Algorithmic productive force avoids causality, evades accountability, and restricts agency to participation and adaptation.” [22]
In The Eye of the Master, Matteo Pasquinelli [30] reminds us that, similar to the theory of the division of labour in the process of industrialization, AI is a hypermimicry of collective intelligence, such as labour: robots in factories do not reinvent the arts of the chain, they just become metallic versions of the arms of the labourers. The idea of human oversight is the last death rattle of reason on its way out, a last attempt to keep the show going, the reasonable man at the cusp of an order falling apart into automated thoughtlessness (as per McQuillan).
It is not only labour, but any formation of algorithmic protocol that involves all sorts of practices and refined ways of doing things, achieving something, transforming the environment, relating to each other, which have been under a process of sophistication for thousands of years [29]. Not only that, but it also removes that collective knowledge from the public domain, privatizing it by turning it into algorithms, scripts, and codes that can be controlled, while they control the human co-labourer, those who remain part of the contemporary division of labour.
While the human/over/sight apparatus points at AI systems, specifically high-risk AI systems, following Pasquinelli, what we see at the end of the chain is an entangled human-machine labour. In a way, it works as a remediation of the control of labour, but from a distance and at scale. In these terms, seeing from afar can be understood as the fantasy of remote-sensing labour, of remote-sensing the other. This gives the owner of capital the possibility to continue to extract the surplus without having to be close to those producing it—perhaps they can do it from Mars (but that is another fetishism). In understanding such attempts, Yarden Katz states that the misunderstanding of what AI does reshuffles power and accountability from democratic institutions to AI systems and models, from politicians to engineers and entrepreneurs, presenting the latter as the big architects of society [18].
Furthermore, the algorithmic protocol, as a seemingly decentralised, zero-agent, depersonalized power structure, imposes an extractivist method that renders global-scale data for profit, whereas intelligence, a collective effort, is reclaimed and appropriated by the AI entrepreneurs; human labour and agency are not erased but displaced. Meanwhile, the agent and the power are allocated to the algorithm and its proprietors.
In the context of AI ideology and politics, we create the future by statistical determinism. McQuillan argues that a ‘no alternative futures’ discourse is embedded in such a system, where “AI's solutionism selects some futures while making others impossible to even imagine” [24]. As such, we have left the realm of history driven by political will and trapped ourselves in a statistical evolution, attuning to the techno-material conditions of the digital [2]. This also reinforces the statistical evaluation of possible risks: an AI system does not only predict its foreseeable future, it in fact overdetermines it [24]. It is the perfect calculative oracle for preemption. Having its deterministic functions in hyper-accelerated data processing, it leaves little space for negotiation of its social consequences, if any.
The intertwinement of technology with the political agenda becomes a totality of technocratic rule under a neoliberal disguise. It reshapes social relations, governance and surveillance, dismantles institutional power, and obfuscates human agency. In the preemptive politics of AI, a predictive system maintains its political legitimacy by promising to predict the future while at the same time creating it, while data science serves to find the patterns where we want to see them. In reference to McQuillan's understanding of the technology of anticipation and preemption, we also note Massumi's remarks on the logical disjunction and regress of preemptive politics, where “a logical gap opens in the present through which the reality of threat slips to rejoin its deferral to the future.” [21] Every AI system, as an apparatus, allows its proprietors to reshape the meaning of our reality, to prepare the message for the future and to put forward a “threat [that] makes a runaround through the present back toward its self-causing futurity.” [21]
In the techno-solutionist worldview, in the promise of the future, we are recycling the past. The promise of understanding the future, and therefore preempting it, holds the political power that allows it to affect and conceive the very future it is trying to preempt [21]. The human factor is there to pick up the mess left behind (an accident of human consequence), while stumbling after the progressive, accelerated calculus. The human individual, as the worker, is left to hold the responsibility for systemic overlooking. One of the most notable spins on accountability and agency in the displacement of power is the obfuscation of responsibility in the predictive, preemptive engineering of social dynamics and automated, platform-based geopolitical control.
In this last section on agency, we argue that while the ideal provision of the law is that the human remains in control of AI systems, its capacity to affect the course of action of these systems is limited. The preemptive quality of automated systems excludes the human agent from understanding, participating and therefore shaping their social or political future. As such, the possibility of constructing a meaningful legality has to do with the formation of platform operations that can involve social negotiations, and reduce automated frictions by forming space for underrepresented and invisible nuances of social interactions. Behind the thirst for accuracy in their predictions, AI systems override any possibility of external disturbances, the human excess in the path set ahead, in fulfilling their own prophecies.
6 CONCLUSION
Algorithmic computational systems have no space for social negotiations: they predict, classify, categorise, approve or ban. They also impose their self-optimisation over civil society as an aggregate for automation. Paradoxically, human oversight is a mere addendum to automated infrastructures, protocols, and algorithms. True power is embedded in the operative system. However, this bureaucratic role in the apparatus differs from the rest of the computational chain because it is the only role recognised by the law. The law does not recognise an apparatus or a system as an agent of decision-making, so under the EU AI Act there must be a human agent to be held responsible in the eyes of the law. The Act pretends that the human agent ensures control over the system, yet the human fills a symbolic role whose form and purpose are to legitimize the functioning of the apparatus of a high-risk AI system.
An apparatus of oversight, such as a control or operations room, grants access to observation but denies the power to act on actual situations. It is not a view from above; it is a displaced centre of physical surveillance. It is a procedural ritual, a routine, the notion of a fixed [human] agent in a specific place and time, part of the system as a ceremonial placeholder with a specific duty in the data processing protocol. Even though these systems have no cognitive capacity to replace humans in the loop, the transposition of technologies in public discourse, from their purpose as a tool to their image as a solution, licenses the promise of a life-improving asset, while private capital generates all substantial power and impact on global societies by endorsing a politics of preemption.
Nor is the human-oversighter a negotiator on behalf of humanity. Their role comes with a set of rules and regulations that they must comply with, so their agency is limited to a specific set of instrumentalised decisions, protocolised actions, and programmed interactions. They see the display surface of unseen, invisual computational operations, automated decision-making processes fuelled by statistical determinism and inaccessible to social negotiation. Their agency within the system is reduced to a set of instructions, but their responsibility is extended into the legal realm. The human worker is also prompted and solicited by the system; their agency is their duty to preserve the image of a safe system rather than to provide any notion of safety to the subjects whose data is being processed in that same system. Overall, their actual duty, or the effect of their labour, might be to serve the servers.
References
- Franco Berardi. 2012. The uprising: on poetry and finance. Semiotext(e), Los Angeles, Calif.
- Franco Berardi. 2015. And: phenomenology of the end: sensibility and connective mutation. Semiotext(e), South Pasadena, CA.
- Henri Bergson. 1991. Matter and memory. Zone Books, New York.
- Benjamin H. Bratton. 2019. The Terraforming. Retrieved May 21, 2025 from https://www.academia.edu/49771602/The_Terraforming_Strelka_Press_
- Julie Cohen. 2012. Configuring the Networked Self: Law, Code, and the Play of Everyday Practice. Georgetown Law Faculty Publications and Other Works. Retrieved from https://scholarship.law.georgetown.edu/facpub/804
- Robert Cover. 1983. The Supreme Court, 1982 Term – Foreword: Nomos and Narrative. Faculty Scholarship Series. Retrieved May 21, 2025 from https://openyls.law.yale.edu/handle/20.500.13051/2047
- Kate Crawford. 2021. The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press. https://doi.org/10.2307/j.ctv1ghv45t
- Kate Crawford and Vladan Joler. 2018. Anatomy of an AI System: The Amazon Echo As An Anatomical Map of Human Labor, Data and Planetary Resources. Retrieved May 21, 2025 from http://www.anatomyof.ai
- Kate Crawford and Trevor Paglen. 2019. Excavating AI: The Politics of Training Sets for Machine Learning. Retrieved May 27, 2025 from https://excavating.ai
- Vilém Flusser, Nancy Ann Roth, and Mark Poster. 2011. Into the Universe of Technical Images. University of Minnesota Press, Minneapolis. https://doi.org/10.5749/minnesota/9780816670208.001.0001
- Alexander R. Galloway. 2004. Protocol: How Control Exists after Decentralization. The MIT Press. https://doi.org/10.7551/mitpress/5658.001.0001
- Peter Goodrich. 1991. Specula laws: Image, aesthetic and common law. Law and Critique 2, 2: 233–254. https://doi.org/10.1007/BF01128679
- David Graeber. 2012. Dead zones of the imagination: On violence, bureaucracy, and interpretive labor: The Malinowski Memorial Lecture, 2006. HAU: Journal of Ethnographic Theory 2, 2: 105–128. https://doi.org/10.14318/hau2.2.007
- Alex Green, Mitchell Travis, and Kieran Tranter (eds.). 2025. Science fiction as legal imaginary. Routledge, Abingdon, Oxon [UK] New York, NY. https://doi.org/10.4324/9781003412274
- Mireille Hildebrandt. 2015. Smart Technologies and the End(s) of Law. Retrieved May 21, 2025 from https://www.elgaronline.com/monobook/9781849808767.xml
- Sharon Kahanoff. 2009. The Will and the Way of The Apparatus. in Judy Radul: World Rehearsal Court. Retrieved May 21, 2025 from https://worldrehearsalcourt.com/essays/the-will-and-the-way-of-the-apparatus
- Jannice Käll. 2023. Posthuman property and law: commodification and control through information, smart spaces and artificial intelligence. Routledge, Taylor & Francis Group, London New York, NY. https://doi.org/10.4324/9781003139096
- Yarden Katz. 2017. Manufacturing an Artificial Intelligence Revolution. https://doi.org/10.2139/ssrn.3078224
- Robin Mackay, Luke Pendrell, and James Trafford (eds.). 2014. Speculative aesthetics. Urbanomic, Falmouth, UK.
- Adrian MacKenzie and Anna Munster. 2019. Platform Seeing: Image Ensembles and Their Invisualities. Theory, Culture & Society 36, 5: 3–22. https://doi.org/10.1177/0263276419847508
- Brian Massumi. 2015. Ontopower: War, Powers, and the State of Perception. Duke University Press. https://doi.org/10.1215/9780822375197
- Dan McQuillan. 2015. Ghosts in the Algorithmic Resilience Machine. danmcquillan.io. Retrieved May 21, 2025 from http://danmcquillan.doc.gold.ac.uk/./resilience-smartcity-democracy.html
- Dan McQuillan. 2018. Data Science as Machinic Neoplatonism. Philosophy & Technology 31, 2: 253–272. https://doi.org/10.1007/s13347-017-0273-3
- Dan McQuillan. 2022. Resisting AI: an anti-fascist approach to artificial intelligence. Bristol University Press, Bristol.
- Eden Medina. 2014. Cybernetic revolutionaries: technology and politics in Allende's Chile. MIT Press, Cambridge, MA.
- Evgeny Morozov. 2014. The Planning Machine. The New Yorker. Retrieved May 21, 2025 from https://www.newyorker.com/magazine/2014/10/13/planning-machine
- Trevor Paglen. 2014. Operational Images. e-flux Journal 59. Retrieved April 2, 2025 from https://www.e-flux.com/journal/59/61130/operational-images/
- Jussi Parikka. 2023. Operational images: from the visual to the invisual. University of Minnesota Press, Minneapolis.
- Matteo Pasquinelli. 2019. Three Thousand Years of Algorithmic Rituals: The Emergence of AI from the Computation of Space. e-flux.
- Matteo Pasquinelli. 2023. The eye of the master: a social history of artificial intelligence. Verso, London New York.
- Andrea Pavoni, Danilo Mandic, Caterina Nirta, and Andreas Philippopoulos-Mihalopoulos. 2018. SEE. University of Westminster Press. https://doi.org/10.16997/book12
- Andreas Philippopoulos-Mihalopoulos. 2015. Spatial justice: body, lawscape, atmosphere. Routledge, Abingdon, Oxon New York. https://doi.org/10.4324/9781315780528
- Henri Schildt. 2022. The Institutional Logic of Digitalization. In Research in the Sociology of Organizations, Thomas Gegenhuber, Danielle Logue, C.R. (Bob) Hinings and Michael Barrett (eds.). Emerald Publishing Limited, 235–251. https://doi.org/10.1108/S0733-558X20220000083010
- Susan Scott and Wanda Orlikowski. 2022. The Digital Undertow: How the Corollary Effects of Digital Transformation Affect Industry Standards. Information Systems Research 33, 1: 311–336. https://doi.org/10.1287/isre.2021.1056
- Kristina Tica. 2023. COMPUTATIONAL AESTH-ETHICS - Understanding visual computation processes between the image and its context. https://doi.org/10.57697/8WK2-B258
- Cornelia Vismann. 2013. Cultural Techniques and Sovereignty. Theory, Culture & Society 30, 6: 83–93. https://doi.org/10.1177/0263276413496851
- 2024. Article 14: Human Oversight | EU Artificial Intelligence Act. Retrieved May 21, 2025 from https://artificialintelligenceact.eu/article/14/
- 2024. Annex III: High-Risk AI Systems Referred to in Article 6(2) | EU Artificial Intelligence Act. Retrieved May 21, 2025 from https://artificialintelligenceact.eu/annex/3/
- 2024. Article 3: Definitions | EU Artificial Intelligence Act. Retrieved May 21, 2025 from https://artificialintelligenceact.eu/article/3/
This work is licensed under a Creative Commons Attribution International 4.0 License.
AAR 2025, Aarhus N, Denmark
© 2025 Copyright held by the owner/author(s).
ACM ISBN 979-8-4007-2003-1/2025/08
DOI: https://doi.org/10.1145/3744169.3744187