
The ironies of autonomy

Abstract

Taking a social, cultural and philosophical approach, Maya Indira Ganesh, a researcher specialising in digital technologies and cultures, critically examines how autonomous vehicles (AVs) are reshaping human subjectivity and human-machine relations. These new forms of automobile, at once AI and robot, appear in the article as kinds of distributed architectures (in which the available resources are not all located in the same place or on the same machine) equipped with big data infrastructures. The author ironically unpacks the new tensions at play: design that seeks to compensate for human limits, the moulding of the human into the machine, and her erasure from technical infrastructures. Following in the phenomenological tradition, she mobilises the concepts of embodiment and instrumentation (in which scientific models allow phenomena to be conveniently conceived and predicted), together with excerpts from fieldwork, in support of the following hypothesis: the emergence of AVs in society invites a rethinking of the multiple kinds of relations that constitute humanity through machines.

Maya Indira GANESH. “The ironies of autonomy”, Humanities and Social Sciences Communications, Volume 7, Article no. 157, 2020.

Current research on autonomous vehicles tends to focus on making them safer through policies to manage innovation, and integration into existing urban and mobility systems. This article takes social, cultural and philosophical approaches instead, critically appraising how human subjectivity, and human-machine relations, are shifting and changing through the application of big data and algorithmic techniques to the automation of driving. 20th century approaches to safety engineering and automation—be it in an airplane or automobile—have sought either to erase the human because she is error-prone and inefficient; to have design compensate for the limits of the human; or at least to mould the human into the machine through an assessment of the complementary competencies of each. The ‘irony of automation’ is an observation of the tensions emerging therein; for example, that the computationally superior and efficient machine actually needs human operators to ensure that it is working effectively; and that the human is inevitably held accountable for errors, even if the machine is more efficient or accurate. With the emergence of the autonomous vehicle (AV) as simultaneously AI/‘robot’, automobile, and distributed, big data infrastructural platform, these beliefs about human and machine are dissolving into what I refer to as the ironies of autonomy. For example, recent AV crashes suggest that human operators cannot intervene in the statistical operations underlying automated decision-making in machine learning, but are expected to. And while AVs promise ‘freedom’, human time, work, and bodies are threaded into, and surveilled by, data infrastructures, and re-shaped by their information flows. The shift that occurs is that human subjectivity carries socio-economic and legal implications; it is not about fixed attributes of human and machine fitting into each other.
Drawing on postphenomenological concepts of embodiment and instrumentation, and excerpts from fieldwork, this article argues that the emergence of AVs in society prompts a rethinking of the multiple relationalities that constitute humanity through machines.

The crashes

[…]

This article addresses two aspects of the role of the human in the emerging autonomous vehicle. The first is the dominant perception that autonomous driving entails the replacement of the human driver with computation and automation, and thus is sometimes colloquially referred to as ‘robot driving’. However, automation does not replace the human but displaces her to take on different tasks.1 I will show how humans are distributed across the Internet as paid and unpaid micro-workers routinely supporting computer vision systems; and as drivers who must oversee the AV in auto-pilot.2 Aside from online tasks, humans are encouraged to ‘empathise’ with the emergent machine that struggles to learn how to navigate the world. These are cases of heteromation, also seen across contemporary online platforms and services: a “new economic arrangement in which humans are put on the margins of machines and algorithms, providing labour in unrewarded or minimally rewarded ways”.3 Distinct from automation where “the machine takes centre stage”; or augmentation where “the machine comes to the rescue”, heteromation is defined as “the machine calls for help”,4 and in which the human becomes legible as a “computational component”.5

Second, and related, is that the discursive construction of the AV rests on the transition from human to robot driving; precisely because the AV is not just a car or a robot but is also a distributed data infrastructure running AI technologies, there are subject positions the human may find herself in that she cannot necessarily predict or control, given the nature of big data infrastructures. It is one thing to expect the human to be alert enough to take over; but we are in different territory with the AV perceiving the world through machine learning and making decisions on this basis. […]

The conditions of optimisation and standardisation of the data in the statistical relationships that underlie computer vision have the power to produce multiple, conflicting subjectivities within the AV: that of an accident victim on a dark night or poorly lit street; that of the operator in the hot seat, expected to take over at a moment’s notice but without any control over the contingencies set in motion by the computational infrastructures she is embedded in; and that of a ‘heteromated’ worker-cog propping up these material infrastructures, including by annotating and labelling visual images for computer vision systems. And yet, despite the limited human control in such systems, accountability and liability still fall on the human operator, coupled with surveillance and monitoring systems that discipline the human to remain alert and vigilant in her role as driver-overseer, as I will discuss. […]

The ‘irony of autonomy’ is a riff on Lisanne Bainbridge’s “irony of automation”: “the automatic control system has been put in because it can do the job better than the operator, but yet the operator is being asked to monitor that it is working effectively.”6 ‘Irony’ also refers to other contradictions and gaps that emerge from the AV as simultaneously AI/robot, big data infrastructure, and automated automobile. Human-machine relations forged through automation and aviation engineering of the 20th century are having to be reconsidered as humans become distributed and displaced through the system. Significant legal and accountability loopholes have emerged from these earlier conceptions and require new approaches to protecting human values, which current automation law fails to provide even as it proposes to.7 […]

The measures of robot driving

To ask the seemingly straightforward question, “what is an autonomous vehicle?”, is to undertake a mapping of material practices of knowledge-making, metaphors, institutions, and infrastructures that constitute it. By identifying the discursive interplay of language and measurement in constructing autonomy, I want to bring political valence to this term; if not, it remains both opaque and fantastical.

The language of autonomous driving echoes the robot trope, a computational brain inside a vehicular body or automated machinery sans humans: driverless car; robot taxi; unmanned vehicle system (which includes drones and robots). ‘Self’-driving suggests the vehicle might have a sense of self; or that humans see it as having a self because it can navigate itself. The AV is imagined as an artefact that is separated from the human but is still humanoid in its processing capabilities, referred to as ‘intelligence’ in the same way that AI/artificial intelligence is: an ‘awesome thinking machine’ that will make decisions for itself, automatically or ‘autonomously’.8 AVs in advertising, cinema, literature, TV programs and industry literature are resolutely anthropomorphic.9 […]

A vehicle is autonomous if it navigates the road like a human driver does, that is, on its own and without a human needing to pay attention to it. […]

A human driver is able to patch together sensing, perception, memory and the body to generate the appropriate response. […]

It is precisely this inability of the AV to adapt and respond on the fly that the human has to step in to help with. This handover has become a measure called ‘disengagements’, applied in California: the number of miles driven in ‘autonomous’ mode (with a human required by law to be present) before the human driver has to take over.10 […]

Automation legacies

The transition from automation to autonomy will occur through machine learning-based decision-making that is subject to the ‘cascading logics of automation’, meaning that one instance of automation necessitates another; a large-scale automated data collection can only be analysed through a similarly large-scale automated process, and not manually.11 Automation begets more automation; anything other than this is friction that will slow down the process. It is typically this kind of situation that makes it appear that the human is erased. But in fact what is erased is not so much the human, but how humans make judgements and decisions; and driving is a case where humans are generally shown to make poor decisions. […] Machine decision-making is not just fast, but is also efficient and correct precisely because of the ‘god trick’12 of seeing everything from nowhere, or ‘objectively’, as big data technologies are thought to. […] The development of computation and automation through 20th century aviation safety design has been influential in shaping human-machine relations in terms of what exactly machines do better than humans, and vice versa, and what is best pursued collaboratively. These longstanding concerns are now transported to AVs. […] So skills-based tasks such as landing a plane or parallel-parking a car are well-suited to automation because they entail a routine, specific set of steps. But landing a plane under adverse conditions requires expertise plus intuition and judgement sharpened through a variety of experiences; and thus is notoriously hard to formalise as requirements of an automated system.13 The more formalised, specific, and certain an environment and task are, the easier they are to automate.
[…] Similarly, Bainbridge presents a detailed discussion of the various conditions under which different kinds of ironies emerge from the automation of tasks;14 one that specifically applies to the AV’s auto-pilot is the gradual degradation of human skills through the introduction of automation. As investigations of AV accidents show, the human, notorious for not paying attention, is freed from paying attention by automation that never loses attention; yet she pays a tragic price for inattention when something in the automated system, such as computer vision, fails to respond to a sudden change in the environment and requires her attention to manage. […]

Another aviation-engineering import, the ‘human in the loop’,15 has shaped legal accountability in robots and autonomous technologies; both an evocative metaphor and a practical guideline, ‘the human in the loop’ is a safety mechanism. Jones, however, identifies problems with this conception, arguing that the human has always been part of the loop and cannot be erased or shifted out. She identifies the irony that US automation law builds on this notion of humans and machines as separate entities joined by a loop, thus failing to acknowledge the inherently socio-technical nature of automation; even as it proposes to protect human values, it actually results in less protection because it understands the two as separate.16 Jones proposes that the law—and I would say, accountability regimes more broadly—must break the loop and tie a “policy knot” instead with the contexts of design, implementation and social relations. […]

Embodiment

Recent business, AV engineering, and HCI17 narratives suggest that the language of human-machine relations is changing, from ‘looping’ to the affective registers of ‘teaming’, trust, and empathy.18 […]

Cultural theorists of automobility persuasively show how the automobile is an extension of the human. A “complex hybridisation of the biological body and the machinic body”19 in which “new forms of kinship are elaborated ‘linking animate qualities to the machine’”, “not only do we feel the car but we feel through the car and with the car”.20 […] Such phenomenological interactions of humans with technologies constitute a shared lifeworld that shapes knowledge, politics, aesthetics, and normativity, among others. […] Of these, embodiment is particularly resonant in the case of the AV; it refers to the ‘taking in’ of a technology device into human bodily experience, and the extension of the human back into the device, such that the technology ‘disappears’ and becomes notionally transparent.21 […]

Postphenomenologists typically refer to benign examples of embodiment such as reading glasses and walking sticks, which come to work by being ‘embedded’ in the human body, one expanding into the other to work effectively; in the case of the AV, however, there is a serious edge to the ‘hybridisation’ of car and driver that goes beyond the body to include psycho-affective and emotional states as well, for it can spell the difference between life and death, as crashes have shown. I discuss another set of relations of the entwining of human bodies and minds with AVs before turning to a discussion of the social implications of embodiment. […]

Computer vision in AVs is not advanced enough for driving and has emerged as a weak link in all fatal crashes so far. It is not that the AV, fitted with multiple sensors, cameras, Lidar and radar to document the environment, cannot visually sense, but that it cannot make sense of what it senses. Humans must annotate images so that computer vision algorithms can learn to distinguish one object from another, and then apply this when encountering new and unfamiliar images. […] The departure that heteromation makes from automation is not just the shift from control being transferred from the human to the machine, to the machine handing control back to the human. It is that ‘heteromated’ humans are generating significant value for software and ‘AI’ companies. The notion of fixed roles of humans and machines fitting into each other also dissolves. […] The decision about what a thing is—a pedestrian, a road divider, or the sky—is made by a machine, or a human, or a human overseeing a machine, and becomes a statistical relationship between data points that make up the world around the autonomous vehicle. In domains such as online content moderation, specific guidelines are drawn up for how human moderators must adjudicate on content, not just because it is a matter of speech, but because of how misrecognition or decontextualised annotation can change history itself. […] Human and machine decision-making for AVs to navigate the world is also re-making it. […] Thus, surveillance and monitoring of human drivers has become a part of the AV-driving experience. AV testing requires that a driver-facing camera be fitted to record and monitor driver behaviour, physiological states, and affect. This is affective computing in action, a booming interdisciplinary field that analyses individual human facial expressions, gait, and stance to map out emotional states through machine learning techniques.
[…] No doubt this surveillance data will protect car and ride-sharing companies against future liability if drivers are found to be distracted. This monitoring is literal bodily control because it is used to make determinations about people. […] Similar kinds of quantification exist within ubiquitous, networked technologies, and we use them to monitor and optimise our own health, wellbeing, and personal success.22 Measurement of human activity, bodies, and affect in work contexts becomes the basis for sorting, classification and analysis, resulting in the production of social categories with far-reaching consequences; for example, categories such as criminality and creditworthiness are now determined algorithmically from individual data profiles run through analytics; these control large swathes of already-disadvantaged communities.23 […] Information technologies were born as measuring devices in 18th-19th century contexts of colonialism and slavery; these produced unique categories such as race, mental illness, and criminality among others, which eventually served to discipline and control entire populations. […]

The operations of automated data science to classify and rate communities of people are what postphenomenology refers to as ‘instrumentation’: measurement practices that create transformations in human experience and knowledge of the world.24 Such practices mirror the two meanings of the word ‘apparatus’: as a Foucauldian ensemble of institutions and discursive practices that shape knowledge, and as a literal measuring device. Apparatuses as measuring devices are neither inert, objective, nor universal; they are productive of the phenomena they purportedly measure, and betray their origins if we study them.25 […]

And a phenomenon like ‘autonomy’ can be measured by a ‘device’ like a disengagement report; and the ‘ethics’ of autonomous driving can be based on crowdsourced values held by people playing an online game about how an imaginary AV should react in the case of an unexpected accident.26 Thus measuring devices do not just observe and record, but actively create categories and realities like ‘trustworthy’, ‘efficient’ or ‘autonomous’. However, the big data technologies underlying these devices are not ‘objective’, and only replay and amplify pre-existing racial, gendered, and socio-economic biases and disadvantages.27 Thus we cannot be certain that all humans will be assessed in quite the same way despite the presumed ‘objectivity’ of measurement.

Conclusions

[…]

Humans are ultimately held responsible for failures of the more advanced software that is supposed to replace them.28 This irony might be compounded by the Autonomy-Safety paradox: “as the level of robot autonomy grows, the risk of accidents will increase, and it will become more and more difficult to identify who is responsible for any damage.”29 […]

At minimum, we might begin by recognising the displacements humans inhabit as workers, managers, overseers, drivers, consumers and other publics. Peeling back the layers of the practices that validate ‘autonomy’, as I have attempted here, and identifying the role of the human in it, is a key part of this. The history of science and technology is replete with examples of disadvantaged people being even further marginalised; so in our breathless enthusiasm to roll out a new technology, we must acknowledge that inequities in human society will play into this emergence too. […]

There are automated systems that humans cannot intervene in and cannot be held accountable for, and to, like the logics of computer vision. In contexts where testing takes place unregulated and in public, how are local communities assured that they will not be misrecognised, or altogether erased, by machine vision? What might it mean to have solidarity with movements of scholars and activists resisting being subjected to algorithmic classification? Further, if AVs are indeed more than just cars, and are commercial data platforms, then questions of labour, data protection, and data use must become central as well. Just as gig and platform workers have organised, as have other tech workers in Silicon Valley, what kinds of protections exist for people engaged in developing AV capabilities? The autonomous vehicle community of practice and research has not seriously addressed these concerns, which reach across different domains of automobility regulation, data protection, and AI governance. Innovation and policy research and advocacy could become more attentive to how multiple new publics and stakeholders are emerging in the shaping of this technology, in addition to traditional institutional actors and investors. All these networks and connections matter and must muddy the discursive construction and emergence of the AV. The irony of autonomy must be emphasised: that autonomy is not about separation or isolation, but is a matter of consistent connection and relations of mutual influence.

Data availability

The datasets generated during and/or analysed during the current study are not publicly available owing to reasons of interviewee privacy but are available from the corresponding author on reasonable request.