October 23, 2019
There’s an Andy Warhol anecdote that goes: in the early 1970s, it emerged that several fake print portfolios of his Flowers (1970) were circulating, made in a slightly different size and colour from what had been specified, with ‘fill in your signature’ stamped on the back. Warhol eventually got his hands on a few of them, which he then signed: ‘This is not by me. Andy Warhol.’1 They effectively became ‘his’ again, with a little twist.
Warhol famously desired to ‘be a machine’, and much of his work was produced with an industrial rhythm: prints made from photographs, silkscreens applied to paper and cardboard, actions repeatedly carried out in a certain order, with endless potential editions. Who took the picture or pushed the ink across the silkscreen wasn’t so important. Warhol’s was a 20th-century vision of automation, a certain kind of machine: a Fordist assembly line with approved products emerging at the end, objects that could then be attributed to an individual artistic (and commercial) endeavour.
What interests me about the ‘not by me’ story isn’t the art that resulted – it’s a fairly conservative set of two-dimensional images of flowers. But there’s a bristling set of factors here, starting with the obvious one that he didn’t make the work (which applies to most of his other work, too). He didn’t even oversee it, hence the original disavowal; though having overseen the originating circumstances of the silkscreens seems to be what enabled him to re-approach the work and claim it as his own. Then the enigmatic closing gesture: the signature seems to mark a finality – that the work is definitely finished, done, that it fully exists now that he has signed it.
The anecdote came to mind while visiting a few recent exhibitions last year in London. In the Martine Syms show ‘Grand Calme’, a digital avatar of the artist stared out at the audience from a large screen. ‘Text me,’ it demanded, providing a phone number; at the other end, an Artificial Intelligence (AI), perhaps modelled in some form on Syms herself, was meant to respond. My ‘Hello’ text went unanswered. (Perhaps not interesting enough to merit a response, admittedly.) Over at the Serpentine gallery, on a similar set of screens a tractor morphed into an oil tanker, and a frog seemed to mutate into a beetle, then a finch, then something like a caterpillar. These flickering, unsettled images in Pierre Huyghe’s ‘UUmwelt’ installation were created by a neural network, based on information taken from brain scans made while someone was prompted to imagine a specific thing: a machine imaging from a human imagination. It was just as I’d left the show, glancing at my phone, that I saw a few headlines excitedly announcing the sale of what was billed as the ‘first ever AI-generated portrait sold at auction’.2 The artwork sold was a square print of a smudgy portrait of a man, titled Portrait of Edmond de Belamy (2018), by the French collective Obvious. The group, making use of an existing process, had supplied 15,000 images of portrait paintings to an algorithmic system, which then had to ‘judge’ whether the images produced were human- or computer-made, resulting in a series of blurry, Francis Bacon-like pseudo-portraits.
These concise exposures to the Syms, Huyghe and Obvious works were just the more high-profile examples of a recent glut of artworks proclaimed to be made through algorithms, machine learning and deep neural networks. Look around, and I’m sure you’ll notice no shortage of Augmented Reality, Virtual Reality and AI artworks currently on display; or the unveiling of the gendered robot Ai-Da as the ‘world’s first ultra-realistic humanoid AI artist’ in June of this year. Even if it was last year – a long time in the digital world – that day of seeing AI-generated art provides a useful summary of the way artwork produced via such methods is being presented and discussed. On one hand, the Obvious work, like the Google DeepDream software released in 2015, provides us with a computer’s vision of an image, the outcome of feeding the algorithm art and seeing how it manipulates it: the result is a two-dimensional, semi-abstract image, modelled after a painting, print or photograph. On the other, as in the Syms piece, AI is presented as a partially formed learning system, a gesture of interactivity but also a way of embedding the audience in the process. I was reminded of Ian Cheng’s BOB AIs at the Serpentine earlier in the year, bug-like entities that we could see on screen and were meant to interact with on our phones using facial-recognition software, the entity learning from all the visitors who did so. Though, as with Syms’s work, for most of the time I was there BOB apparently didn’t feel like interacting. The Huyghe installation was somewhere between the two, supposedly allowing access to the images in formation, a glimpse of the process frozen in time. But whether a CGI talking head or a framed 2D image, they all share a rhetoric in which the learning algorithm helping to produce the work is cast as a sort of ghost in the shell, an other which mysteriously enables this thing to pop out at the other end.
It all has a similar coy faux-distancing to Warhol’s signed assertion – I didn’t make this, the artist admits. But, sure, I’ll put my name on it.
In a world where much of our design, architecture and communications is mediated by adaptive learning systems and algorithm-led programs and platforms, the type of art that is produced – and our sense of creativity – has long been shaped by these forces. But only more recently has the computer been foregrounded as both medium and potential collaborator in the artistic process. When the term ‘AI art’ is used, it conjures up art made without humans. We will get to that point, but we might not know when it arrives – AI art proper is going to be entirely unintelligible to the human mind. At the moment, though, where we’re at is more like the ‘I forced a bot’ meme (e.g. ‘I forced a bot to watch 1,000 hours of Friends, and then asked it to write an episode of its own’, followed by a faked script offering a cracked, absurd version of that show): we spoon-feed a program a certain type of artistic content, see in what relatively minor ways it manipulates that content, and then marvel, or smirk, at the results. Which is to say, we’re still at a fairly basic stage of what AI art could be; the bar is so low, and the projected imagination of it so high.
As a cultural phenomenon, what machine learning art seems to indicate is a desire for an idealised collaborator. We want someone to show us something new, to show us a way out of the contemporary hall of mirrors – or maybe we just want to be machines. AI art is presented as an encounter with an impossibly foreign subjectivity that might still reflect some truth back on us. Though in that ache for the encounter, the focus has remained on the process more than on the art we have to deal with at the other end. There’s a fascination with the process, which is meant to act as some sort of excuse for the result. At this point, over 50 years since the likes of Brion Gysin’s computer cut-up Permutated Poems and Lillian Schwartz’s digital drawings at Bell Labs, we might begin to formulate some criteria for the reception of digital art, not just its production. In part, it’s a need for contextualisation – if we start with the understanding of an algorithm as simply a set of instructions, we might view much of, say, Warhol’s work as algorithmic. It’s been over 100 years since Elsa von Freytag-Loringhoven’s algorithm for urinal art was used by a wily French chess fanatic. This isn’t to start an anachronistic reassessment of found, instructional, assistant-produced and outsourced art, but to ground some of the boundless claims made for digitally produced art. In the making of art – from pigment sources for paint, to models (human or otherwise), to rocks, foundries and coders – other entities have always been involved.
Too Much Arousal

In a paper published by Rutgers University’s Art & AI Lab in 2017, the authors ‘propose a new system for generating art’,3 a system similar to the one later used by Obvious to get the auction houses’ attention. In setting out the terms of their experiment in getting AI to produce art, the study cites the British psychologist D.E. Berlyne’s work on arousal as the source of its criteria for understanding aesthetic reception, enlisting ‘novelty, surprisingness, complexity, ambiguity and puzzlingness’ as the most significant arousal-raising properties. In describing the adverse reactions to the DeepDream images, the authors are almost comically perfunctory:
‘The Guardian commented on the images generated by Google DeepDream by “Most, however, look like dorm-room mandalas, or the kind of digital psychedelia you might expect to find on the cover of a Terrence McKenna book”. Others commented on it as being “dazzling, druggy, and creepy”. This negative reaction might be explained as a result of too much arousal, which results in negative hedonics.’4
Noting that new art, and audience pleasure, has historically come from breaks in stylistic norms, their system would learn from the styles it was fed and attempt to deviate from them, while also trying not to arouse too much: it would aim to be novel, but only a little novel, and aim for more stylistic ambiguity. What they actually fed into the system was a set of American Abstract Expressionist works and a selection of abstract paintings shown at the 2016 Art Basel fair (a selection that included Andy Warhol and Heimo Zobernig; of the 25 paintings chosen, 14 were by Chinese artists). The result was a set of mediocre, distorted variations on landscapes, mostly in muted colours, many still retaining the sense of being a photograph run through a series of Photoshop effects: solarize, distort, repeat.
The results themselves aren’t surprising, given that the system is only reacting to what it was given; if all it is shown is fey abstractions, then fey abstractions we will get. ‘This is nothing to do with art,’5 as writer Georgina Adam remarked on Twitter the day of Obvious’s auction sale. But of course it is art: it was the result of a system explicitly designed to produce ‘art’, and it has been presented to us as ‘art’. It might even be considered ‘good’ art, by very specific, Western, early 20th-century standards. By any other contemporary parameter, it’s the tokenistic gesture of a tool still in development, making art that isn’t so much ‘bad’ as just pointless; and perhaps it could only ever be pointless art, because it’s trying too hard to produce an expected result, and was based on outdated models of ‘art’ and ‘creativity’ anyway.
What might be useful to remember is that most of those ‘adverse’ art turns of the past century that created ‘stylistic breaks’ were movements and moments that were actively anti-art, or that attempted to erase the distinctions between art and life. It’s also useful to remember that one of the most insightful questions you can ask an artist is ‘how do you know when it’s done?’ These initial steps in training potential creative intelligences might be necessary, but at what point do we ask for more, and update the definition of art to something beyond pretty pictures, or at least something after Conceptualism? Ai-Da, on some token level at least, represents a movement towards a farmed-out practice similar to Warhol’s: after ‘she’ devises her paintings, the schemes are handed over to a human artist, Suzie Emery, to execute; the results are still fey abstractions. These systems, as they’re currently used, are only ever going to produce reactionary, limited art, because their aim is to produce an explicit version of visual art – rather than, say, to consider the state of reproductive rights in Northern Ireland and construct a creative response, in any form.
The final part of the published Rutgers experiment was to show both the initial prompting artworks provided to the system and the resulting images to a group of people, asking which images they preferred and whether they thought a computer had produced the work. It turns out the works from Art Basel were taken to be computer-generated more often than the actual computer-generated works. Such studies often seem to rely on this as a marker of success of some form, like a sort of Picasso-Turing Test: if a human thinks it was made by another human, then it must be decent, or worthy. All that proves, in the end, is a capacity for imitation, which feels irrelevant to the actual question at hand: what actually is creativity? This ‘passing’ as human, coupled with the notion of arousal, focuses the question solely on individual aesthetic judgement; yes, art is subjective, but we also need to acknowledge the social and institutional aspects of how art is seen, shown and discussed. To adapt an art school saying: can we teach them not to make art, but to be artists?
The Sublime Fiction

There are visions of a more holistic AI art; but for the moment these are projections, fictions. In Lawrence Lek’s video Geomancer (2017), an AI military satellite in 2065 decides to fulfil its ambition: to become the first AI artist. A grimmer, and perhaps more realistic, plan is presented in João Enxuto and Erica Love’s Institute for Southern Contemporary Art (2016), where a voice-over actor with a bad Southern US accent pitches a proposed educational institute where algorithms will devise ideas for highly marketable contemporary art, to be put into action and given a finishing human touch by art students; after all, ‘authorship and individuality are key factors for quantifying artistic value’.6 Places in the art school are free; all students need to do is provide their data. Enxuto and Love’s institute positions the algorithm as the creative centre of the operation and the humans more as fabricators, which initially appears as a reversal of roles. So far, the machine learning used in art has been presented as a semi-autonomous assistant: the way Trevor Paglen, in an interview earlier this year, described the AI used to create his Study of Invisible Images photographs as akin to the people who carry out Sol LeWitt’s wall drawing instructions.7 In an article in Art Monthly last year, artist and writer Dave Beech described this ascription of work to AI and the desire to automate as a reprisal of 18th-century utopian ideals, in which AI would become the new working class. All that would achieve, he argues, is to ‘universalise the activities of the capitalist: one day we will all live off labour that we ourselves do not perform.’8 He notes that the ‘robot and the genius are partners’,9 but he doesn’t acknowledge the implication of his statement, which Enxuto and Love ironically extol: that AI has effectively come to occupy the role of the genius in contemporary society.
As in, it holds a position that invites the aspirations and reverence accorded to the genius, rather than the actual qualities of genius (whatever those might be). In a text from several years ago on the semi-automated printed paintings of Wade Guyton and Gerhard Richter, the writer Barry Schwabsky used the term ‘technological sublime’.10 The harking back to Edmund Burke is telling: the ‘sublime’ was a quasi-mystical notion used to justify things like monarchy. What it seems like now is that the discourse around AI has become a back door for the re-introduction of such absolutist terminology, where the aura of the genius is a means of blinding us with technology, denying context and specific debate.
Contemporary art can be understood as a system – one that includes conception, process, transmutation, presentation, context of encounter, sharing and transmission, and any individual or communal hauntings afterwards. Any appraisal of art invokes all of those at once, at varying momentary instances and pressure points. AI art, so far, presents the programming as both the process and the context for the work, a digital sublime void to which we are meant to submit. It feels necessary to assert that even if an audience doesn’t completely understand the coding and the pathways of the neural networks, it’s ok to have expectations. There are plenty of affective, critical artworks that use machine learning to explore what it means to live at this particular moment in time. But it’s also ok to admit that the unresponsive AI just didn’t work, rendering the piece an ineffectual video work; or that the resulting AI portrait was just unimaginative and a bit meh; or that perhaps, more often, what’s on offer is the idea of interactivity rather than interaction itself. As contemporary creativity evolves, so will new forms of arousal, dispersal, even disappearance. Creativity is something restless, never-ending, elusive – the future of AI art will not be by us.
1 See Andrew Wilson, ‘This is not by me: Andy Warhol and the Question of Authorship’, in Dear Images: Art, Copyright and Culture, eds. Karsten Schubert and Daniel McClean (London: Ridinghouse, 2002), pp. 375–385. ↩
2 Eileen Kinsella, ‘The First AI Portrait Ever Sold at Auction Shatters Expectations’, Artnet.com, 25 October 2018, https://news.artnet.com/market/first-ever-artificial-intelligence-portrait-painting-sells-at-christies-1379902 ↩
3 Ahmed Elgammal, Mohamed Elhoseiny, Bingchen Liu and Marian Mazzone, ‘CAN: Creative Adversarial Networks, Generating “Art” by Learning About Styles and Deviating from Style Norms’, Rutgers University, 2017, https://arxiv.org/pdf/1706.07068.pdf, p. 1. ↩
4 Ibid., p. 5. ↩
5 @georginaadam, Twitter.com, 26 October 2018, https://twitter.com/georginaadam/status/1055744755458547713 ↩
7 Brian Boucher, ‘This is the Project of a More Just World’, Artnet.com, 11 June 2018, https://news.artnet.com/art-world/trevor-paglen-interview-1299836 ↩
8 Dave Beech, ‘I, Genius. I, Robot’, Art Monthly, No. 415, April 2018, p. 12. ↩
9 Ibid., p. 11. ↩
10 Barry Schwabsky, ‘Generation X: On Wade Guyton and Gerhard Richter’, The Nation, 14 November 2012, https://www.thenation.com/article/generation-x-wade-guyton-and-gerhard-richter/ ↩