October 23, 2019
The image, a face, is at once a surface of display and of capture. A screen or a net. That is, it can be generative or interpretive. The camera knows the contours of the face intimately. The white walls and black holes are the illuminated screen of the brow and the etchings of shaded eye sockets.1 But while the camera can identify, track, trace and capture faces, it is confined to a curious position of flatness. The algorithmic face, not a face itself but an image system, turns the visual semiotics of images into semantics: a logic of relative lightness and darkness, of coloured regions coursing along planes. The face, in an algorithmic sense, is an abstraction: more of a set of geometric relations between parts than an entity in and of itself. A face may function as a biometric calling-card, but one with nothing behind it, and which is composed of mathematical signifiers.
In the banality and omnipresence of facial recognition technology, compositions of facial features cease merely to comprise noses and eyes and mouths, human countenances. They are instrumentalised as measurements between arrangements of pixels, making functional maps or nets for the ensnarement of faces. With the world reduced to a vector, the difference between a face and any other category is a matter of calculation. The digital camera may “read” and differentiate a smile from a grimace in a practical sense, yet we know that it lacks an understanding of significance beyond computation. The machine is illiterate, though it feeds on the manipulation of signs. Algorithmic images refer more to databases than to objects in the world, causing a breakdown in the relation between image and referent, due to the fuzziness of algorithmic representation. The image can no longer be taken at face value, or be considered to declare its contents transparently. This kind of image need not bear a visual resemblance to the thing that it “reads” as, returning us to the conundrum of photography seeking to capture or represent something beyond mere appearance. Yet although this is a departure from established norms of pictorial representation, images, and art more generally, have traditionally been prized for their irreducibility.2 While counterintuitive, it is worth considering whether the senselessness of errors in machine learning reveals some of the senselessness in our own attempts to make sense of the world through logic, language and culture.
The production, the circulation, and the ontological status of images are in flux, caught between new modalities brought about through digital imaging and machine learning. While aspects of the digital image remain deeply tied to historical artistic conventions, traditional notions of the image have undergone a transformation in light of the current proliferation of algorithmic images. This extends from a deeper history of incorporating algorithmic processes into the image plane. Over a century ago, the human-camera conglomerate incorporated the photographic program into seeing, converging light rays through layers of ground glass at the push of a button.3 The photographic algorithm turned the image into a procedure to be performed upon a face, a plane, as opposed to the end product of that performance. The eye-machine, thus composed, blended mechanical, optical, chemical and biological processes, acting as a prosthetic to human vision, but also imposing its operations upon it. In this sense, the image has become a machine. It no longer exists as the outcome of human intentionality impressed upon a surface, but takes on a life of its own to become a producer in its own right.
In similar fashion to the ancient idea that cast-off skins of objects flow into the eyes like veils, on the face of it, an image is a visual plane: a surface through which optical processes pass. The image no longer rests comfortably on the surface of reality but is enacted by and acts upon it. Due to the multitude of extra-visual aspects it takes on, it may seem that the image dissolves into invisibility. Rather, it spreads out beyond the visible, ceasing to reside comfortably within surfaces. What is visible in an algorithmic image is merely a surface displaying the outcome of the performance of spatial procedures. This changes the ontological status of the image from being confined to the results of visual procedures to that which conditions the production of such end products. Yet contrary to what one might expect from a turn toward a performative notion of the image, digital images are neither immaterial nor atemporal. A compelling example of the physicality and temporality of the digital image is the latency of digital image sensors, the time required to transform gathered photons into electrical signals.4 There is also a notion of analog photography being in fact digital, as the grains of silver in photo paper are either activated or not, the silver salts which are not activated being washed away in processing and those which remain composing the resulting photograph.
Much like the latency of digital images, encoded for their performance by computers, conceptual and avant-garde art movements have experimented with the creation of instructions for the performance of artworks. Artists including Sol LeWitt, Lawrence Weiner, Yoko Ono and John Cage have explored the potential for instructions for the execution of works of art to be considered as critical components of the work itself. In these cases, it is often an idea or a method which drives the work, rather than concern for the visible end result. It is relevant that “script” can be read to have several meanings: 1. the text itself, 2. the transcription of text, handwriting, 3. a plan of action for the performance of a text, by humans or by computers. The image, then, can be seen as existing in flux. It is at once the performance of visual processes and the instructions for that performance.
In the exhibition project image machine / machine image, I explored these ideas with a focus on the procedural, structural and latent aspects of images. One outcome of this investigation, which I describe here, is a series of hand-written arrays of numbers representing the pixel values of images, which may be produced by entering those values into a computer. The series plays upon the idea of transcribing a computational image as a form of writing, rather than drawing, and the works shown are akin to instructions for the production of images. Through this series, I sought to cede a degree of agency to the performance of algorithmic procedures, from simple operations determined by chance to more complex computational processes.
To create the first image in the series, I used the flip of a coin to determine a binary outcome for each pixel: heads being 0 (black), tails being 1 (white). In the next image, I rolled a die to obtain values between 1 and 6, producing a grey scale from the resulting numbers. For image number three, I found time constraints would not allow me to manually generate the numbers for an RGB image, so I used a computer to generate pseudo-random numbers, with three values per pixel. The fourth image is an average of a dataset of other images. The final work is a transcription of an image obtained by performing Google's reverse image search on the image produced from the previous array of numbers.
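The first three chance procedures can be sketched in a few lines of Python. This is an illustrative reconstruction rather than the method as performed by hand: the 32 by 32 grid comes from the text, while the mapping of die rolls onto six evenly spaced grey levels and the 0–255 value range are assumptions.

```python
import random

SIZE = 32  # each image in the series is a 32 x 32 pixel grid
random.seed(0)  # fixed seed so the sketch is reproducible

# Image 1: a coin flip per pixel -- heads 0 (black), tails 1 (white).
binary = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]

# Image 2: a die roll per pixel, here mapped onto six grey levels (0-255).
greys = [[round((random.randint(1, 6) - 1) * 255 / 5) for _ in range(SIZE)]
         for _ in range(SIZE)]

# Image 3: computer-generated pseudo-random RGB, three values per pixel.
rgb = [[(random.randint(0, 255), random.randint(0, 255), random.randint(0, 255))
        for _ in range(SIZE)] for _ in range(SIZE)]
```

Entering such arrays into any image program as pixel values performs the written instructions and yields the visible image.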
I made a number of observations during the process of producing the series, some of which were surprising. The first of these was a contradiction of the anticipated outcome. I found that although the progression from very simple procedures toward more complicated ones tends to provide more interesting results, it also reveals the level of influence that a given method has upon the image or images it produces. Another realisation from conducting this investigation is how quickly human ability is overwhelmed by the sheer volume of information which goes into even simple image processes. I was working with grids of 32 by 32 pixels, meaning that to generate the values for the simplest image, I had to flip a coin 1024 times. But producing an RGB image requires three values per pixel, tripling the number of values necessary to 3072.
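The scale of the transcription task follows from simple arithmetic; the figure for a larger, hypothetical resolution is included only to illustrate how quickly the volume grows beyond what a hand could write.

```python
def value_count(width, height, channels=1):
    """Number of values a transcriber must write out for one image."""
    return width * height * channels

print(value_count(32, 32))        # 1024 coin flips for the binary image
print(value_count(32, 32, 3))     # 3072 values for the RGB image
print(value_count(1024, 768, 3))  # 2359296 values for a modest photograph
```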
I see these instructional images as representative of a kind of algorithmic literacy. One aspect of algorithmic literacy is commonplace: learning to adapt one’s behaviour to that of the algorithmic media encountered on a frequent basis. A deeper aspect of this phenomenon is learning to “read” the processes which have gone into an image, or to interpret an image from its pixel values. As we become ever more immersed in algorithmically determined media, such forms of literacy are emerging and becoming consequential. Not only does this reshape the nature of the image, but it also introduces new modalities of sharing agency and interacting with the agency of machinic processes.
Much like other kinds of tools, the selection or design of an algorithmic methodology depends on the intended outcome, rendering the process fairly closed from the outset. In this case, choosing what kind of algorithmic process to employ plays a role in determining what outcomes may result. So while algorithmic methodologies may introduce elements of serendipity into the creative process, they also impose a particular modality of exploration and interaction upon it. This is apparent in much artistic work employing machine learning, as the algorithms and datasets used tend to be recycled from other contexts. While appropriation of algorithmic methodologies may provide openings for critique, it has yet to manifest a significant response to the flood of algorithmic behaviour most of us are familiar with at this point.
The image, the face, is a battlefield. It is a site upon which algorithmic control is exerted and eluded. In response, I advocate a turn toward critical investigation of the procedures performed by algorithmic media and of the conditions from which those algorithms emerge. This paper is an exploration of how such a critical approach to algorithmic practices may take shape. In opposition to treating images as though they may still be taken at face value, it must be acknowledged that the nature of the image has radically changed in light of algorithmic media. Algorithmic literacy enables degrees of functional proficiency in engaging with that change.
1 Deleuze and Guattari build up the concept of a facial machine as an arrangement of features with certain relations to one another, but which should not be taken literally as a face in the human sense. Here a face is a way of approaching semiotic relations, white walls serving as ground and black holes being signifiers in contrast to that substrate. Deleuze, G., & Guattari, F. (2005). Year Zero: Faciality. In A Thousand Plateaus (pp. 167–191). Minneapolis, London: University of Minnesota Press. ↩
2 See Elkins ↩
3 Flusser, V. (2011). Into the Universe of Technical Images. (N. A. Roth, Trans.). Minneapolis, London: University of Minnesota Press. ↩
4 Cubitt, S., Palmer, D., & Tkacz, N. (Eds.). (2015). Introduction: Materiality and Invisibility. In Digital Light (p. 16). London: Open Humanities Press. ↩