Artists have been working with artificial intelligence (AI) since the 1970s. They comprised a minuscule enclave within the early computer art community throughout the 1970s and 1980s and were largely unnoticed by the mainstream artworld and broader public. In the 1990s and 2000s, more artists got involved with AI and produced installations that question the meaning of agency, creativity, and expression. Since the 2000s, AI art has diversified into generative and interactive approaches that involve statistical methods, natural language processing, pattern recognition, and computer vision algorithms. The increasing accessibility of multilayered machine learning architectures, as well as the rising socio-political impact of AI, has facilitated the further expansion of AI art in the second half of the 2010s. The topics, methodologies, presentational formats, and implications of contemporary AI art are closely related to, and affected by, AI science research, development, and commercial application. The poetic scope of AI art is primarily informed by the various phenomenological aspects of sub-symbolic machine learning. It comprises creative strategies that explore the epistemological boundaries and artifacts of AI architectures; sample the latent space of neural networks; aestheticize the AI data renderings; and critique the conceptual, existential, or socio-political consequences of corporate AI; a few works criticize AI art itself. These strategies unfold in disparate practices ranging from popular and spectacular to tactical and experimental. The existing taxonomies or categorizations of AI art should be considered provisional because of the creative dynamics and transdisciplinary character of the field. Similar to other computational art disciplines, AI art has had an ambivalent relationship with the mainstream artworld, marked by selective marginalization and occasional exploitation.
1. Introduction
Artists have been working with AI since the 1970s. AI art pioneers, such as Harold Cohen, Arthur Elsenaar and Remko Scha, David Cope, Peter Beyls, and Naoko Tosa, comprised a minuscule enclave within the computer art community, primarily due to the complexity and scarcity of AI systems throughout the 1970s and 1980s, but also because this period was one of the AI “winters”, with reduced research funding and receding interest in the field [1]. AI research in the 1990s and 2000s provided more accessible tools for artists to confront and compare human and machinic behavior. With uncanny robotic works that question the meaning of agency, creativity, and expression, artists such as Ken Feingold, Ken Rinaldo, Louis-Philippe Demers, Patrick Tresset, Naoko Tosa, and others articulated some of contemporary AI art’s topics.
Since the 2000s, artists such as Luke DuBois, Sam Lavigne, Sven König, Parag Kumar Mital, Kyle McDonald, Golan Levin, Julian Palacz, and others have been creating generative and interactive works based on logical systems or statistical techniques which conceptually and technically overlap with, or belong to, AI technologies. They used natural language processing (NLP), pattern recognition, and computer vision (CV) algorithms to address various features of human perception reflected in AI, and to explore higher-level cognitive traits by interfacing human experiential learning with machine learning (ML) [2][3].
The increasing accessibility of multilayered sub-symbolic ML architectures such as Deep Learning (DL), as well as the rising socio-political impact of AI in the second half of the 2010s, has facilitated the expansion of AI art [4] (pp. 2–3). Its production has been gaining momentum with the support of AI companies such as Google or OpenAI, and of academic programs which have been facilitating art residencies, workshops, and conferences, while its exhibition has expanded from online venues to mainstream galleries and museums.
Contemporary AI art includes practices based on diverse creative approaches to, and various degrees of technical involvement with, ML [5] (p. 39). Its topics, methodologies, presentational formats, and implications are closely related to a range of disciplines engaged in AI research, development, and application. AI art is affected by the epistemic uncertainties, conceptual challenges, conflicted paradigms, discursive issues, and ethical and socio-political problems of AI science and industry [6]. Similar to other new media art disciplines, AI art has had an ambivalent relationship with the mainstream contemporary artworld (MCA); it is marked by selective marginalization and occasional exploitation, which entice artists to compromise some of their key poetic values in order to accommodate the MCA’s conservative requirements for scarcity, commercial viability, and ownership [6] (pp. 252–254).
Its interdependence with AI technologies and socio-economic trends exposes AI art to critical consideration within a broader cultural context. The existing literature comprises several studies of AI art and implicitly relevant works. For example, Melanie Mitchell in Artificial Intelligence (2019) [7], as well as Gary Marcus and Ernest Davis in Rebooting AI (2019) [8], provide a conceptual, technological, and socio-cultural critique of AI research. Michael Kearns and Aaron Roth in The Ethical Algorithm (2019) [9], and Matteo Pasquinelli in How a Machine Learns and Fails (2019) [10], address the ethical, socio-political, and cultural consequences of AI’s conceptual issues, technical imperfections, and biases. With The Artist in the Machine (2019) [11], Arthur I. Miller includes AI art in the examination of creativity that spans his other books [12][13]. In AI Art (2020) [14], Joanna Żylińska opens a multifaceted discussion of AI focusing on its influences on visual arts and culture. In the Artnodes journal special issue AI, Arts & Design: Questioning Learning Machines (2020) [4], edited by Andrés Burbano and Ruth West, the contributors address the issues of authorship and creative patiency (Galanter) [15], the creative modes of AI art practices (Forbes) [16], public AI art (Mendelowitz) [17], the use of ML in visual arts (Caldas Vianna) [18], and the relationship between AI art and AI research (Tromble) [19]. In the Atlas of AI (2021) [20], Kate Crawford maps the less desirable reflections of human nature in the AI business, hidden behind marketing, media hype, and application interfaces. In Understanding and Creating Art with AI (2021) [21], Eva Cetinić and James She provide an overview of AI research that takes art as a subject matter, outline the practical and theoretical aspects of AI art, and anthologize the related publications. In Tactical Entanglements (2021) [22], Martin Zeilinger investigates the tactical and posthumanist values of AI art. In Alpha Version, Delta Signature (2020) [3], I explore the cognitive aspects of AI art practices; in Brittle Opacity (2021) [6], I address the ambiguities that AI art shares with AI-related creative disciplines; in Immaterial Desires (2021) [23], I focus on AI art’s entanglements and cultural integration; and in The Creative Perspectives of AI Art (2021) [24], I discuss the creative dynamics of contemporary AI art.
2. Poetics
This framework requires an inclusive view of the poetic features of prominent AI art practices, ranging from popular/mainstream to tactical and experimental. The discussion addresses the thematic, conceptual, methodological, technical, and presentational aspects of exemplar artworks that belong to disparate formal, thematic, or procedural categories.
The poetic scope of AI art derives from computer art and generative art and is primarily informed by the various phenomenological aspects of sub-symbolic ML systems. It comprises strategies that explore the epistemological boundaries and artifacts of ML architectures; sample the latent space of DL networks; aestheticize or spectacularize the renderings of ML data; and critique the conceptual, existential, or socio-political consequences of applied AI; a few works criticize AI art itself. The existing taxonomies or categorizations of AI art are useful [16][17] but should be considered provisional due to the creative dynamics of the field, particularly with respect to AI research.
2.1. Creative Agency and Authorship
Themes such as creative agency, authorship, originality, and intellectual property are widely attractive to AI artists, popular with the media, and fascinating to the audience. The malleability of these notions was central to modernism and postmodernism, and artists have been addressing them with computational tools since the 1960s, so the recent surge of interest is probably due to a combination of the novelty of DL, its processual opacity, and its specific informational or formal effects. However, artistic exploration of this territory has been challenged by AI’s most pervasive ambiguity—anthropomorphism.
Anthropomorphism manifests in various forms. One is a tendency to assign human cognitive or behavioral features to non-human entities or phenomena, which often proves difficult to identify and sometimes has undesired consequences. In AI research, the anticipation of emergent intelligence is based upon a belief that software will attain intelligence and develop emotions given enough training data and computational power. Besides being technically dubious, this is an anthropocentric position, another version of the tenet that humanity is the sine qua non of the universe [19] (p. 5). It is complicated by the corporate AI’s crowdsourcing of cheap, invisible, and underrecognized human labor for tasks such as dataset interpretation, classification, or annotation, whose outcomes affect ML training models or algorithms [10] (p. 7) [14] (pp. 119–127).
Anthropomorphism is broadly sensationalized in AI art discourse, for example by authors such as Arthur I. Miller, who argues for the (intrinsic) creativity of AI systems [11][25]. His narratives often rely on anthropocentric expressions, such as “what neural network sees”, and identify creativity in AI as generating new information [25] (pp. 247, 249). A converse form of anthropomorphic fallacy is to conflate the artists’ creative agency with the cumulative human creativity embedded in their tools (computers and software), which simultaneously deprives artists of their own inventiveness and lifts the responsibility off their creative acts. Shared by some researchers, many artists, and the media, it often exploits the trope of the ever “blurring line between artist [ghost] and machine” [26][27], and involves experiments which are supposed to establish “who is the artist” or “what is real/better art” by manipulating the preferential conditions of human subjects tasked with evaluating human- and machine-produced artefacts [25] (p. 248) [27][28]. Such experiments are often naïve or manipulative because they presume—and instruct the subjects—that their test material is art, while omitting two fundamental distinctions: who considers something an artwork, and why [29] (p. 102). They disregard that art is artificial by definition, and ignore well-informed notions about the complex relationship between creative agency, authorship, and technology [3] (pp. 75–77) [5] (pp. 42–43, 47) [30][31][32].
2.1.1. The Elusive Artist
In his long-term project AARON (1973–2016), the pioneering AI artist Harold Cohen experimented with translating and extrapolating some components of human visual decision-making into a robotic drawing/painting system [33]. He had an ambiguous relationship with machinic creative agency and flirted with anthropomorphic rhetoric [34]. Not surprisingly, a highly popular segment of contemporary AI art belongs to the saccharine reiterations of Cohen’s approach, in which artists “teach” their robots how to paint, such as Pindar Van Arman’s Painting Robots (since 2006) [35] or Joanne Hastie’s Abstractions (Tech Art Paintings) (since 2017) [36]. Driven by weekend painters’ enthusiasm, these projects “serendipitously” merge technically competent execution with dilettante aesthetics, conceptual ineptness, and ignorance of art-historical context. The meaning of the word “art” collapses into banal, camera-driven visualizations, rendered and presented with amateurish self-confidence in a series of “progressively improved” ML systems. Anthropomorphism is also advocated within the art-academic domain, for example in Simon Colton’s discussion of his project The Painting Fool (2012), which he hopes “will one day be taken seriously as a creative artist in its own right.” Its aim is to dramatically expand the artistic range of Cohen’s AARON by introducing a software interface that can be trained by different human artists, can critically appraise its own work, and (in future versions) will appraise the work of other artists [21] (pp. 8–9) [37] (pp. 5–6).
Fewer artists address the subtlety of this topical range. One of them is Adam Basanta. In his installation All We’d Ever Need Is One Another (2018) [38], custom software randomizes the settings of two mutually facing flatbed scanners so that in every scanning cycle, each captures a slightly altered mix of the facing scanner’s light and its own unfocused scanning light reflected off the opposite scanner’s glass plate. Perceptual hashing algorithms then compare each scan to the images in a large database assembled by scraping images and image metadata from freely accessible online artwork repositories. If the comparison value between the scan and its most similar database image exceeds 83%, based on parameters such as aspect ratio, composition, shape, and color distribution, the software declares a “match”, selects the scan for printing, and labels it according to the matching image metadata. When it selected and labeled one of the scans as 85.81%_match: Amel Chamandy ‘Your World Without Paper’, 2009, Canadian artist Amel Chamandy initiated legal action against Basanta over intellectual property rights because of the reference to her photograph, although 85.81%_match… is not for sale and Basanta apparently does not use it for direct commercial gains by any other means. All We’d Ever Need Is One Another disturbs the concepts of authorship, originality, and intellectual property by legitimately and consistently applying the functional logic of ML, while the intricacies of the lawsuit it triggered exemplify the intellectual and ethical issues of our tendency to crystallize the commercial rights of human creativity [22] (pp. 94–108).
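A minimal sketch of this kind of perceptual-hash matching follows. The 8×8 average hash, the similarity score, and the database layout are illustrative assumptions, not Basanta’s actual pipeline, which weighs parameters such as aspect ratio, composition, shape, and color distribution.

```python
# Sketch of perceptual-hash image matching (illustrative, not Basanta's system).
from PIL import Image

HASH_SIZE = 8  # yields a 64-bit hash

def average_hash(path: str) -> int:
    """Reduce an image to an 8x8 grayscale grid and threshold on its mean."""
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def similarity(hash_a: int, hash_b: int) -> float:
    """1.0 minus the normalized Hamming distance between two 64-bit hashes."""
    distance = bin(hash_a ^ hash_b).count("1")
    return 1.0 - distance / (HASH_SIZE * HASH_SIZE)

def best_match(scan_path: str, database: dict[str, int]) -> tuple[str, float]:
    """Return the database entry most similar to the scan."""
    scan_hash = average_hash(scan_path)
    return max(
        ((label, similarity(scan_hash, h)) for label, h in database.items()),
        key=lambda pair: pair[1],
    )

# Hypothetical usage:
# database = {"Amel Chamandy 'Your World Without Paper', 2009": 0x0F0F0F0F0F0F0F0F}
# label, score = best_match("scan_0042.png", database)
# if score > 0.83:  # declare a "match" and title the print after the matched work
#     print(f"{score:.2%}_match: {label}")
```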
Basanta and other exemplar artists such as Nao Tokui (discussed in Section 2.2.2 and Section 2.4.2) or Anna Ridler (discussed in Section 2.1.4, Section 2.4.2, and Section 2.4.3) approach AI both as a criticizable technology and a socio-political complex, and recognize the variable abstraction of technologically entangled authorship. They demonstrate that crucial aesthetic factors such as decision-making, assessment, and selection are human-driven and socially embedded regardless of the complexity or counter-intuitiveness of the tools we use for effectuating these factors. They remind us that our notion of art is a dynamic, evolving, bio-influenced, and socio-politically contextualized relational property which needs continuous cultivation.
2.1.2. Performative Aestheticizations
Performance artists who enjoy the sponsorship of corporate AI tend to emphasize dubious human-centered notions of creative agency through sleekly aestheticized mutations of earlier avant-garde practices. For example, Sougwen Chung’s homo-robotic projects, such as Drawing Operations Unit: Generation 2 (2017, supported by Bell Labs) [39], invite a comparison with Roman Verostko’s algorist compositions from the 1980s and 1990s [40]. Whereas Verostko encapsulates his coding experiments with pure form into a discreet relationship between his pen-plotter and its material circumstances, Chung uses the theatricality of her collaboration with robots as a “spiritualizing force” to mystify the manual drawing process—which is by nature highly improvisational and technologically interactive.
Similarly, Huang Yi’s robotic choreography HUANG YI & KUKA (since 2015, sponsored by KUKA) [41] spectacularizes the metaphors of harmonious human-machine interaction and mediates them safely to comfortable spectators, while Stelarc’s referential performances since 1976, such as Ping Body (1996), emphasize the existential angst and uncertainty of shared participatory responsibilities between the artist, technology, and the audience, who all have a certain degree of manipulative influence on each other [42] (pp. 185–190). Also sponsored by KUKA, Nigel John Stanford’s musical performance Automatica: Robots vs. Music (2017) [43] can be viewed as an encore of Einstürzende Neubauten’s concerts from the 1980s, “spiced up” for tech-savvy cultural amnesiacs [44]. Rehearsed beyond the point of self-refutation, Stanford’s “improvisations” stand in as formally polished but experientially attenuated echoes of Einstürzende’s rugged guilty pleasures in sonic disruption.
With high production values and aesthetics palatable to the contemporary audience, these AI-driven acts largely evade unfavorable comparisons with their precursors and serve as marketing instruments for their corporate sponsors by promoting vague notions of a robotically-enhanced consumerist lifestyle. Their persuasiveness relies on our innate anthropocentrism, myopic retrospection, and susceptibility to spectacles.
2.1.3. The Uncanny Landscapes
The exploration of anthropomorphism in AI art often involves the uncanny appearance of artificial entities. Uncanniness is the occasional experience of perceiving a familiar object or event as unsettling, eerie, or taboo, and it can be triggered in close interaction with AI-driven imitations of human physique or behavioral patterns [45] (pp. 36–37).
Some artists approach it implicitly, for example by extracting human-like meaningfulness from machinic textual conversation in Jonas Eltes’ Lost in Computation (2017) [46], with reference to Ken Feingold’s installations such as If, Then, What If, and Sinking Feeling (all 2001) [47]. In these works, NLP systems provide semantically plausible but ultimately senseless continuations of narrative episodes, which allude to the flimsiness of the Turing test and serve as vocalized metaphors of our lives. They extend the experience of uncanny awkwardness into the absurdity of miscommunication and accentuate the overall superficiality of systems tasked to emulate human exchange.
Ross Goodwin and Oscar Sharp used this type of slippage to disrupt cinematic stereotypes in their short film Sunspring (2016) [48]. Trained on 1980s and 1990s sci-fi movie screenplays found on the Internet, Goodwin’s ML software generated the screenplay and the directions for Sharp to produce Sunspring. It brims with awkward lines and plot inconsistencies but qualified among the top ten entries in the Sci-Fi London film festival’s 48-Hour Film Challenge. Sunspring reverses the corporate movie search algorithms and playfully mimics contemporary Hollywood’s screenwriting strategies, largely based on regurgitating successful themes and narratives from earlier films [2] (pp. 390–392). By regurgitating Sunspring’s concept and methodology two years later, Alexander Reben produced “the world’s first TED talk written by an A.I. and presented by a cyborg”, titled Five Dollars Can Save the Planet (2018) [49]. A YouTube comment by MTiffany fairly deems it “Just as coherent, relevant, and informative as any other TED talk.” [50].
Another approach to implicit uncanniness is to allude to the intimate familiarity of the human body, for example in Scott Eaton’s Entangled II (2019) [51], which is comparable to earlier, structurally more sophisticated video works such as Gina Czarnecki’s Nascent and Spine (both 2006) [52], or Kurt Hentschläger’s CLUSTER (2009–2010) and HIVE (2011) [53]. Ironically, projects that combine uncanniness with our apophenic perception in order to “humanize” AI often contribute to diverting attention from pertinent socio-political issues. For example, with JFK Unsilenced: The Greatest Speech Never Made (2018, commissioned by the Times) [54], the Rothco agency aimed at contemplative uncanniness by exploiting the emotional impact of sound to reference the romanticized image of John F. Kennedy. Based upon the analysis of 831 recorded speeches and interviews, Kennedy’s voice was deepfaked in a delivery of his address planned for the Dallas Trade Mart on 22 November 1963. The voice sounds familiar at the level of individual words and short phrases, but its overall tone is uneven, so the uncanniness relies mainly on the context of the speech that the young president never had a chance to give. However, even with perfect emulation of accent and vocalization, this exercise could never come close to matching the eeriness and deeply disturbing political context of Kennedy’s televised speech of 22 October 1962 about the Cuban missile crisis, in which sheer good luck prevented the multilateral confusion, incompetence, ignorance, and insanity of the principal human actors from pushing the world into nuclear disaster [55].
Visual deepfakes, such as Mario Klingemann’s Alternative Face (2017) [56] or Libby Heaney’s Resurrection (TOTB) (2019, discussed in Section 2.4.1), approach the psycho-perceptive mechanism of uncanniness explicitly, by simultaneously emphasizing and betraying the visual persuasiveness of statistically rendered human-like forms. This strategy was prefigured conceptually and procedurally by Sven König’s sCrAmBlEd?HaCkZ! (2006) [57], which facilitated continuous audiovisual synthesis from an arbitrary sample pool. It used psychoacoustic techniques to calculate the spectral signatures of the audio subsamples from stored video material and saved them in a multidimensional database; the database was searchable in real-time to mimic any sound input by playing the matching audio subsamples synchronized with their corresponding video snippets. Perhaps this innovative project has been largely forgotten because König pitched it to the VJ scene rather than using it to develop artworks that establish meaningful relations between their stored videos and input audio (both selectable by the artist). Along with the sophistication of his technique, König’s expressive mismatch may have anticipated analogous issues in contemporary AI art.
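The sketch below illustrates the general principle of such spectral-signature matching under stated assumptions: fixed non-overlapping windows, plain log-magnitude FFT fingerprints, and a brute-force nearest-neighbor search. König’s psychoacoustic model was considerably more refined.

```python
# Sketch of spectral-signature audio matching (illustrative assumptions).
import numpy as np

WINDOW = 2048  # samples per subsample (~46 ms at 44.1 kHz)

def spectral_signature(window: np.ndarray) -> np.ndarray:
    """Log-magnitude spectrum of one audio window, normalized to unit length."""
    spectrum = np.log1p(np.abs(np.fft.rfft(window * np.hanning(len(window)))))
    return spectrum / (np.linalg.norm(spectrum) + 1e-9)

def build_database(samples: np.ndarray) -> np.ndarray:
    """Slice stored audio into windows and fingerprint each one."""
    windows = [samples[i:i + WINDOW] for i in range(0, len(samples) - WINDOW, WINDOW)]
    return np.stack([spectral_signature(w) for w in windows])

def closest_subsample(live_window: np.ndarray, database: np.ndarray) -> int:
    """Index of the stored subsample whose spectrum best mimics the live input."""
    query = spectral_signature(live_window)
    return int(np.argmin(np.linalg.norm(database - query, axis=1)))

# Each returned index would trigger playback of the matching audio subsample
# together with its corresponding video snippet.
```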
2.1.4. The Mechanical Turkness
The socio-political aspects of anthropomorphism can be effectively addressed by exposing the deep social embeddedness of complex technologies such as AI, and the human roles and forms of labor behind the “agency” or performative efficacy of corporate AI.
For example, Derek Curry and Jennifer Gradecki’s project Crowd-Sourced Intelligence Agency (CSIA) (since 2015) [58] offers a vivid educational journey through the problems, assumptions, and oversights inherent in ML-powered dataveillance practices. It centers around an online app that partially replicates an Open Source Intelligence (OSINT) system and allows visitors to assume the role of data security analysts by monitoring and analyzing their friends’ Twitter messages, or by testing the “delicacy” of their own messages before posting them. The app features an automated Bayesian classifier designed by the artists and a crowdsourced classifier trained on participant-labeled data from over 14,000 tweets, which improves its accuracy through the visitors’ feedback on its previous outputs. CSIA also includes a library of public resources about the analytic and decision-making processes of intelligence agencies: tech manuals, research reports, academic papers, leaked documents, and Freedom of Information Act files [59]. This multilayered relational architecture offers an active learning experience enhanced by the transgressive affects of playful “policing” in order to see how the decontextualization of metadata and the inherent ML inaccuracies can distort our judgment.
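A minimal sketch of the kind of Bayesian text classification the app’s description implies is given below; the labels, tokenization, and feedback loop are illustrative assumptions rather than the artists’ actual implementation.

```python
# Sketch of a naive Bayes tweet classifier with crowdsourced training
# (illustrative; labels and smoothing are assumptions, not CSIA internals).
from collections import Counter
import math

class NaiveBayesTweetClassifier:
    def __init__(self):
        self.word_counts = {"flagged": Counter(), "benign": Counter()}
        self.label_counts = Counter()

    def train(self, tweet: str, label: str) -> None:
        """Add one labeled tweet; participant feedback calls this repeatedly."""
        self.label_counts[label] += 1
        self.word_counts[label].update(tweet.lower().split())

    def classify(self, tweet: str) -> str:
        """Return the label with the higher log-posterior probability."""
        total = sum(self.label_counts.values())
        scores = {}
        for label, counts in self.word_counts.items():
            n = sum(counts.values())
            vocab = len(counts) + 1
            score = math.log((self.label_counts[label] + 1) / (total + 2))
            for word in tweet.lower().split():
                # Laplace smoothing keeps unseen words from zeroing the score.
                score += math.log((counts[word] + 1) / (n + vocab))
            scores[label] = score
        return max(scores, key=scores.get)

clf = NaiveBayesTweetClassifier()
clf.train("meeting at the usual place tonight", "flagged")
clf.train("lovely weather for a walk today", "benign")
print(clf.classify("see you at the usual place"))
```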
Similarly, RyBN and Marie Lechner’s project Human Computers (2016–2019) [60] provides revelatory, counter-intuitive insights into the use of human beings as micro-components of large computational architectures. It is based upon a multi-layered media archaeology of human labor in computation since the 18th century. It shows that many AI applications have in fact been simulacra, mostly operated by echelons of underpaid workers, which corporate AI euphemistically calls “artificial Artificial Intelligence” (AAI) or “pseudo-AI”. This foundational cynicism indicates that corporate AI development imposes an exploitative framework of cybernetic labor-management [20][61], which significantly diverges from Norbert Wiener’s cybernetic humanism in The Human Use of Human Beings (1988) [62].
A sub-project of Human Computers, titled AAI Chess (2018), was an online chess app with three all-human playing modes: human vs. human, human vs. Amazon MTurker, and MTurker vs. MTurker. In 2020, Jeff Thompson “replayed” AAI Chess with his performance Human Computers [63], in which visitors were tasked to manually resolve a digital image file (a Google StreetView screenshot of the gallery) from its binary form into a grid of pixels. With 67 calculations per pixel, the complete human-powered image assembly takes approximately eight hours. Here, the visitors’ unmediated enactment of automated operations asserts how the combination of complexity and speed in pervasive technologies makes them difficult for an individual to understand and manage.
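A minimal sketch of the operation the visitors performed by hand is shown below, assuming a raw 8-bit grayscale format and hypothetical dimensions; Thompson’s performance worked from an actual screenshot file and took roughly eight hours.

```python
# Sketch of resolving a binary image dump into a pixel grid (illustrative;
# the format and dimensions are assumptions, not Thompson's actual file).
WIDTH, HEIGHT = 32, 24  # hypothetical image dimensions

def decode_image(raw: bytes, width: int, height: int) -> list[list[int]]:
    """Turn a flat byte sequence into rows of grayscale pixel values."""
    assert len(raw) >= width * height, "not enough data for the given grid"
    return [
        [raw[row * width + col] for col in range(width)]
        for row in range(height)
    ]

# Hypothetical usage:
# with open("streetview_screenshot.raw", "rb") as f:
#     grid = decode_image(f.read(), WIDTH, HEIGHT)
# A computer performs this instantly; done by hand, value by value, the same
# assembly makes the speed and opacity of automated operations tangible.
```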
These projects were conceptually and methodologically anticipated by several earlier works, particularly by Kyle McDonald and Matt Mets’ Blind Self Portrait and Matt Richardson’s Descriptive Camera (both 2012) [64][65]. In Blind Self Portrait, a laptop-based face recognition setup draws linear portraits of the visitors, but in order for the setup to work, the sitter has to keep their eyes closed while resting their hand on a horizontally moving platform and holding a pen on paper. Unlike Van Arman’s, Hastie’s, or Patrick Tresset’s drawing robots (Human Studies series, since 2011) [66], which put their sitters in a traditionally passive role, Blind Self Portrait makes a reference (intended or not) to William Anastasi’s Subway Drawings from the 1960s [67] and playfully turns visitors into the “mechanical parts” of a drawing system, self-conscious of their slight unreliability. Richardson’s Descriptive Camera has a lens but no display; it sends the photographed image directly to an Amazon MTurker tasked to write and upload a brief description of it, which the device then prints out.

By exploiting human labor to emulate the features of AI systems or AI-enabled devices, these projects remind us that the “Turk” in AI is still not mechanical or artificial enough, that he resists “emancipation”, and that it is not easy to make him more “autonomous”. Their self-referential critique also points to the ethically questionable use of non-transparent crowdsourcing in art practices, exemplified by Aaron Koblin’s earlier projects The Sheep Market (2006), 10,000 Cents (2008), and Bicycle Built for Two Thousand (2009, with Daniel Massey) [14] (pp. 117–120) [68].
It is noteworthy, however, that artistic attempts to approach computational creativity through active open-sourced participation can be equally undermined by a muddled relationship with anthropomorphic notions. Seeing ML as a tool that “captures our shared cognitive endowments”, “collective unconscious”, or “collective imagination” [69], Gene Kogan initiated the crowdsourced ML project Abraham in 2019 with the goal of redefining agency, autonomy, authenticity, and originality in computational art. The opening two parts of Kogan’s introductory essay describe Abraham as “an open project to create an autonomous artificial artist, a decentralized AI who generates art”, and elaborate on the idea in a semantically correct but conceptually derisive discussion, raising suspicion that the author is unaware of Jaron Lanier’s prescient critique of online collective creativity and the subsequent relevant work [70][71][72]. The missing parts 3 and 4 of the essay were announced for publication by the end of 2019 [73].
2.2. Epistemological Space
Art methodologies that address the epistemological character and boundaries of ML often involve sampling multi-dimensional datasets in the inner (hidden) layers of network architectures and rendering their representations compressed into two or three dimensions. Artists treat these datasets as a latent space, a realm between “reality” and “imagination”, replete with suggestions that emerge from a complex interplay between the various levels of statistical abstraction or determination [21] (p. 9).
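As a minimal sketch of this workflow, the snippet below interpolates between two points in an assumed Gaussian latent space (a common way to animate between generated forms) and compresses the resulting path to two dimensions with PCA; actual works substitute their own trained models and reducers.

```python
# Sketch of latent-space sampling and 2D compression (generic assumptions).
import numpy as np

LATENT_DIM = 512
rng = np.random.default_rng(0)

# "Walk" the latent space by interpolating between two sampled points,
# as artists often do to animate between generated forms.
z_a, z_b = rng.standard_normal(LATENT_DIM), rng.standard_normal(LATENT_DIM)
steps = np.linspace(0.0, 1.0, 64)[:, None]
walk = (1 - steps) * z_a + steps * z_b          # (64, 512) interpolated vectors

# Compress the path to 2D with PCA for plotting or spatial mapping.
centered = walk - walk.mean(axis=0)
_, _, components = np.linalg.svd(centered, full_matrices=False)
projection = centered @ components[:2].T        # (64, 2) screen coordinates
print(projection.shape)
```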
2.2.1. Inceptionism
The exploration of latent space started with adaptations of the CV software package DeepDream to produce imagery and animations in a quasi-style called Inceptionism, characterized by delirious fractal transformations of pareidolic chimeras [74]. Examples include Mike Tyka’s DeepDream (2015–2016); Gene Kogan’s DeepDream Prototypes (2015); James Roberts’ Grocery Trip (2015); Samim Winiger’s ForrestGumpSlug (2015); Josh Nimoy’s Fractal Mountains, Hills, and Wall of Faces (all 2015); Memo Akten’s All Watched Over and Journey Through the Layers of the Mind (both 2015); and Johan Nordberg’s Inside an Artificial Brain (2015) and Inside Another Artificial Brain (2016). Inceptionist works struggled to become more than decorative interventions, and the trend dried up relatively quickly. Besides the inherent structural uniformity and apparent formal similarities between Inceptionist works, the main reason is that the arbitrary generation of mimetic imagery or animations tends to become oversaturating and boring if it unfolds unbounded. To engage the viewer, it requires prudently defined conceptual, narrative, and formal constraints, which seemed difficult to implement with DeepDream.
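At its core, DeepDream is gradient ascent on an inner layer’s activations. The sketch below shows that principle under stated assumptions (an off-the-shelf VGG16, an arbitrary layer index, and a noise image); the original technique adds multi-scale octaves and jitter, and typically starts from photographs.

```python
# Sketch of DeepDream-style gradient ascent (layer choice and step size
# are illustrative, not the original DeepDream configuration).
import torch
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
layer = 20  # an arbitrary mid-level convolutional layer

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)

for _ in range(50):
    optimizer.zero_grad()
    activations = image
    for i, module in enumerate(model):
        activations = module(activations)
        if i == layer:
            break
    loss = -activations.norm()  # minimize the negative = maximize the response
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0, 1)      # keep pixel values in a displayable range
```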
2.2.2. Sampling the Latent Space
Further experimentation prompted artists to transcend mere representation by exploiting ML with meaningful premises, and by finding more flexible aesthetics to mediate the latent space. To metaphorize the statistically structured epistemological scope of neural network architectures, artists often accentuate the tensions between their processual effectiveness and interpretative limitations.
Timo Arnall’s Robot Readable World (2012) [75] is an early example of this approach. It comprises found online footage of various CV and video analytics systems (vehicle and crowd tracking, counting and classification, eye-tracking, face detection/tracking, etc.), composited with layers that visualize their data in real-time. However, Arnall’s attempt to reveal the “machinic perspectives” uses a human-readable (anthropocentric) approximation of the actual software data processing, tracing back to the funny but technically groundless translation of the terminator android’s CV data into English in Terminator 2 (1991, directed by James Cameron). In Computers Watching Movies (2013) [76], Ben Grosser handled this topic more appropriately. The work illustrates the CV processing of six popular film sequences in a series of temporal sketches in which the points and vectors of the CV’s focal interest are animated as simple dots and lines on a blank background (the processed film footage is not visible), synchronized with the original film sound. This semi-abstraction invites viewers to make sonically guided comparisons between their culturally developed ways of looking and the “attention” logic of CV software, which has no narrative or historical patterns.
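A minimal sketch of Grosser’s presentational idea, with generic OpenCV feature detection standing in for his actual system, might look like this: the footage drives the detector, but only the detector’s points of interest are drawn.

```python
# Sketch of rendering only a CV system's "attention" as dots on a blank
# canvas (illustrative OpenCV defaults, not Grosser's actual settings).
import cv2
import numpy as np

capture = cv2.VideoCapture("film_sequence.mp4")  # hypothetical input file
ok, frame = capture.read()
while ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    points = cv2.goodFeaturesToTrack(gray, maxCorners=50,
                                     qualityLevel=0.01, minDistance=10)
    canvas = np.zeros_like(frame)                # blank background; footage withheld
    if points is not None:
        for x, y in points.reshape(-1, 2):
            cv2.circle(canvas, (int(x), int(y)), 3, (255, 255, 255), -1)
    cv2.imshow("watching", canvas)
    if cv2.waitKey(40) == 27:                    # Esc to quit
        break
    ok, frame = capture.read()
capture.release()
cv2.destroyAllWindows()
```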
In contextually different settings, the semantic power of written text can provide strong generative experiences. For example, Nao Tokui and Shoya Dozono’s The Latent Future (2017) [77] is an ambient installation based on the interaction between an ML semantic model trained on a collection of past news and real-time human- or machine-generated news. It continuously captures Twitter newsfeeds and uses their discerned meanings to create fictional news. The generated news is presented in a virtual 3D space that maps each sentence’s latent feature vectors, with the distances between sentences corresponding to their relative semantic differences. This work is informed in real-time by the largely unpredictable dynamics of the Twitter galaxy, but also by Twitter’s filtering algorithm, which reflects many important aspects of current socio-political trends.
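The sketch below illustrates the underlying mapping under stated assumptions: an off-the-shelf sentence-transformers model stands in for Tokui and Dozono’s own trained model, and PCA reduces the latent feature vectors to 3D coordinates so that proximity tracks semantic similarity.

```python
# Sketch of placing sentences in 3D by their latent feature vectors
# (off-the-shelf embedding model as a stand-in for the artists' own).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
headlines = [
    "Markets rally after surprise rate cut",
    "Central bank lowers interest rates",
    "Wildfires force thousands to evacuate",
]
vectors = model.encode(headlines)          # (3, 384) latent feature vectors

# Reduce to 3D coordinates for placement in the virtual space.
centered = vectors - vectors.mean(axis=0)
_, _, components = np.linalg.svd(centered, full_matrices=False)
coords = centered @ components[:3].T

# Semantically close sentences land near each other.
for text, xyz in zip(headlines, coords):
    print(np.round(xyz, 2), text)
```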
The approach to interpretation in these and many other AI artworks calls for comparison with earlier generative works which reveal the tropes of various media in aesthetically elegant and intellectually engaging ways [3]. For example, in Memo Akten’s Learning to See (since 2017) [78], visitors are invited to arrange various household objects on a table for a camera feed that is processed in real-time by a convolutional conditional generative adversarial network (GAN) autoencoder, which mimics the input shapes and surface patterns as compositions of clouds, waves, fire bursts, or flowers, depending on the chosen training model. By revealing its narrowness and arbitrariness, the ambiguous interpretative efficacy of this interaction suggests a similarity between GAN and human vision in their reliance on memory and experience. However, the actual experience of Learning to See quickly becomes tedious and erodes into a mildly amusing demo because, regardless of the object arrangements or the selection of the interpretative image dataset, the results are always homogenously unsurprising.
The relational flexibility of human visual interpretation that Learning to See fails to address was brilliantly utilized by Perry Bard in a conceptually and formally comparable non-AI generative project, Man with a Movie Camera: The Global Remake (2007–2014) [79]. It is an online platform that allows visitors to select any shot from Dziga Vertov’s seminal film Man with a Movie Camera (1929) and upload their video interpretations. Bard’s server-side software replays a two-channel setup comprising Vertov’s original synchronized with a remake continuously assembled from the participants’ shots (randomly selected when there are multiple uploaded interpretations of the original shot). By leveraging the creative breadth of human perception and cognition, this relatively simple technical setup engrosses both uploaders and viewers in an intriguing and surprising experience.
2.2.3. GANism
In order to explore and mediate the latent space, artists have been developing various techniques that exploit the increasingly versatile GAN architectures. They reveal the artifactual character of GANs by treating autoencoder networks as compression algorithms, for example in Terence Broad’s Blade Runner—Autoencoded (2016) [80], or by allowing “unpolished” representations of GAN data, for example in Elle O’Brien’s Generative Adversarial Network Self-Portrait (2019) [81], or Jukka Hautamäki’s New Parliament (2019) and Restituo I and II (2021) [82][83].
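A minimal sketch of the compression reading of autoencoders follows; the fully connected architecture, frame size, and bottleneck width are illustrative assumptions, not Broad’s actual network, and the model would need training on the film’s frames before its characteristic artifacts emerge.

```python
# Sketch of an autoencoder treated as a lossy compression algorithm:
# every frame is forced through a narrow latent bottleneck and reconstructed.
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    def __init__(self, frame_dim: int = 64 * 64 * 3, latent_dim: int = 200):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(frame_dim, 1024), nn.ReLU(),
            nn.Linear(1024, latent_dim),            # the compression bottleneck
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, frame_dim), nn.Sigmoid(),
        )

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(frame))

model = FrameAutoencoder()
frame = torch.rand(1, 64 * 64 * 3)        # one flattened, normalized video frame
reconstruction = model(frame)             # "autoencoded" frame, artifacts included
```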
Other examples of manipulating the latent space include Anil Bawa-Cavia’s Long Short Term Memory (2017); Gene Kogan’s WikiArt GAN and BigGAN Imitation (both 2018); several of Mario Klingemann’s portrait synthesis works, such as Face Feedback, Freeda Beast (both 2017), or the Neural Glitch series (2018) [56]; AI Told Me’s What I Saw Before the Darkness (2019); Hector Rodriguez’s Errant: The Kinetic Propensity of Images (2019); Sukanya Aneja’s The Third AI (2019); Tasos Asonitis’ Latent Spaces (2021), and others. Within this range of works, Weidi Zhang’s LAVIN (2018) [84] is notable for its sampling/rendering methodology and representational strategy. It provides a responsive virtual reality (VR) experience of a GAN which maps real-world objects from a video camera feed to semantic interpretations limited to a set of fewer than a hundred everyday objects; the photogrammetric reconstructions of these objects navigate the audience through a virtual world.
Recent GAN techniques allow complex formal remixing by modifying generator or discriminator networks, for example in Golan Levin and Lingdong Huang’s Ambigrammatic Figures (2020) [85], or by inserting filters and manipulating activation maps in the higher network layers to disrupt the image formation process, for example in Terence Broad’s Teratome (2020) [86].
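A minimal sketch of the second technique is shown below, using a PyTorch forward hook to perturb an activation map inside a stand-in generator; the model, layer choice, and noise scale are illustrative assumptions rather than Broad’s actual method.

```python
# Sketch of disrupting image formation by perturbing an activation map
# (stand-in generator; layer and noise scale are illustrative assumptions).
import torch
import torch.nn as nn

def make_disruptor(noise_scale: float):
    def hook(module, inputs, output):
        # Returning a value from a forward hook replaces the layer's output,
        # injecting noise into the activation map before it propagates onward.
        return output + noise_scale * torch.randn_like(output)
    return hook

generator = nn.Sequential(                     # stand-in for a GAN generator
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
)
handle = generator[0].register_forward_hook(make_disruptor(noise_scale=0.5))

z = torch.randn(1, 64, 16, 16)                 # latent input
glitched_image = generator(z)                  # image formed through the disruption
handle.remove()
```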
Due to their limited autonomy in choosing the training datasets or the statistical models that represent the latent space, GANs prove to be primarily tools for processual mimicry rather than intelligent creative engines [21] (p. 9). GAN manipulation therefore renders a ubiquitous visual character in disparate works produced with similar techniques. Although some projects go beyond purely technical/formal exploration or perceptual study, aesthetically (and often conceptually) many do not diverge significantly from earlier glitch art, in which the error is an aestheticized frontline layer [87]. This expressive issue reaffirms the importance of the artist’s decision-making and overall poetic articulation. In contrast to the tech community’s quick approval [88] and self-conceited assertions that “GAN artists have successfully cultivated their moderately abstract, dream-like aesthetic and promoted the process of serendipitous, often random usage of generative processes” [89], the poetic identity of GAN artworks is dominated by Dali-esque or Tanguy-esque formal fusion (morphing), often visually oversaturated but conceptually bland. The superficiality and banal consumerist notions of perception in GAN art extend into socio-political and ethical dimensions by pointing to the artists’ technocratic strategies, which Żylińska critically labels “platform art” [14] (pp. 77–85).
The popularity of GANs has also escalated the misuse of the expression “generative art” to describe only those computational practices that involve randomness, complexity, or ML architectures. This disregard for the methodological diversity and long history of generative art [90][91] impoverishes the broader contextual milieu of experimental art and facilitates the uncritical appreciation of AI art practices.
2.3. Spectacularization
AI art with the highest public visibility comprises derivative projects by mainstream artists and big-budget AI art spectacles.
2.3.1. Derivative
AI’s growing ideological authority and socio-economic power, and the practical accessibility of ML software, induced the MCA’s involvement with AI art in the mid-2010s. The recent adoption of blockchain crypto products such as non-fungible tokens (NFTs) for securing the marketability of digital entities has further increased gallery/museum and auction house interest [92], prompting mainstream artists to assimilate ML into their repertoire and to update their poetic rhetoric accordingly. Similar to the post-digital artists a decade earlier [93], they approach digital technologies as affective markers of contemporary culture and act chiefly in collaboration with hands-on personnel to produce AI-derived works in conventional media (installation, sculpture, video, and photography) with a lower degree of technological entanglement than most experimental AI artworks.
This strategy affords them cultural recognition, institutional support, and commercial success, but sacrifices the intricate tension between the artworks’ conceptual, expressive, or narrative layers and the contextual logic of the technologies in which they appear. Examples include Gillian Wearing’s Wearing Gillian (2018); Lucy McRae’s Biometric Mirror (2018); Hito Steyerl’s Power Plants and This is the Future (both 2019); Pierre Huyghe’s Of Ideal (since 2019) and UUmwelt (2019); Kate Crawford and Trevor Paglen’s Excavating AI (2018) and Training Humans (2019–2020), and others. The presentational authority and decorative appeal of this production tend to seduce the general audience into superficial aesthetic consumption or complacency, even when the projects are created with critical intentions.
For example, Trevor Paglen’s AI-related production has been praised as a critique of the biases, flaws, and misconceptions of AI technologies, in line with his established interest in visualizing the covert systems of power and control in military, intelligence, state, or corporate institutions. However, it is also criticizable as an exploitation of the activist perspective toward opto-centric epistemology, which mystifies high-end visual technologies and abuses the affective perception of institutional power through stylized gallery setups accompanied by highfalutin explanatory statements. Paglen’s collaborative project with Kate Crawford, ImageNet Roulette (2019) [94], is represented as being critical of classification biases in CV, but it is hard to see in it anything more than an overwhelming illustration of the issue. As an analytical, research-based revelatory critique of classification biases and AI technologies in general, it is neither new nor original, for example when contrasted with Curry and Gradecki’s CSIA (since 2015, discussed in Section 2.1.4) or with RyBN’s systematic critical analysis in a number of projects, such as Antidatamining (since 2007) [95] and Human Computers (2016–2019, discussed in Section 2.1.4) [60]. Similarly, its socio-cultural commentary fades in comparison with Taryn Simon and Aaron Swartz’s project Image Atlas (since 2012) [96], which addresses the same issues but discards loftiness for a more meaningful impact by obtaining simple imagery in complex ways and coupling it with concise, unpretentious narratives.
2.3.2. Large Scale
The substantially market-driven operational criteria and depoliticized discourses of the MCA [97][98] have been epitomized by spectacular, large-scale AI art installations in various forms: static, generative, reactive, interactive, or self-modifying “intelligent environments” [17]. This high-profile/high-visibility approach was ushered in by corporate enterprises such as The Next Rembrandt (2016) [99], collaboratively produced by ING bank, Microsoft, the Technical University in Delft, and the Mauritshuis art collection. They used DL for a complex multi-feature analysis of Rembrandt’s paintings in order to generate and 3D print a “most representative” painting of his style. The project’s promo language is typical of corporate AI’s patronizing anthropomorphism, claiming that it “brought the great master back to life”.
Examples of large-scale AI art installations include Sosolimited, Plebian Design, and Hypersonic’s Diffusion Choir (2016); Marco Brambilla’s Nude Descending a Staircase No. 3 (2019) [100]; Refik Anadol studio’s projects such as Melting Memories (2017), Machine Hallucination (2019 and 2020), and Quantum Memories (2020) [101]; CDV Lab’s Portraits of No One (2020) [102]; projects by Ouchhh studio; projects by Metacreation Lab, and others.
Along with many GAN works discussed in Section 2.2, these practices willingly or unwillingly contribute to platform aesthetics—a mildly amusing algorithmic generation of sonic, visual, spatial, or kinetic variations which teases the visitors with the promise of novelty and insight but effectively entrances them into cultural conformity and political deference [14] (pp. 72–83, 132–133). Dependent on the latest AI research and elaborately team-produced with significant budgets or commissions, hyper-aestheticized AI art installations also warn how effectively manipulative intents, unimpressive concepts, or trivial topics can be concealed behind skillful rendering, aggrandized by high production values, and popularized through flamboyant exhibition.
The issues of platform aesthetics are exemplified by the AI installations produced in Refik Anadol’s studio [101], which flirt with sophisticated production techniques, formal oversaturation, and inflated presentation. Their dubious motivations are clumsily veiled by inane flowery premises and infantile anthropomorphic metaphors such as “transcoding the processes of how buildings think”, or how AI systems “dream” or “hallucinate”. Anadol has frequently cited his childhood fascination with the spectacular advertising in Blade Runner (1982, directed by Ridley Scott) as one of the uplifting inspirations for his art career, without any self-critical reevaluation of the political background of the visuals and architecture in that film. Only in 2021 was he induced to acknowledge his misreading of the dystopian essence of Blade Runner’s aesthetics [103]. Consequently, despite the formal abundance and copious explanatory data (which usually do the opposite of demystifying the production), Anadol’s spectacles have been virtually devoid of critical views on mass surveillance, immaterial labor, environmental damage, and other problematic aspects of the big data capture and processing they rely upon. For comparison, we can take some of the monumental art practices of the 1980s that roughly coincided with the release of Blade Runner, such as Krzysztof Wodiczko’s projections [104], Barbara Kruger’s immersive setups [105], or Anselm Kiefer’s heavy confrontational installations [106]. They employed grand scale, formal saturation, and overidentification to critically appropriate and reflect the inherent use of overwhelming presentational strategies in gender-biased advertising, power structures, and totalitarian regimes. While the tactical values of these practices have since been attenuated or recuperated in an inevitable process of cultural assimilation, they redefined the landscape of critical art with lasting historical impact and relevance.
Another telling parallel can be drawn between Marco Brambilla’s Nude Descending a Staircase No. 3 (2019) [100] and Vladimir Todorović’s The Running Nude (2018) [107], both of which relate to Marcel Duchamp’s painting Nude Descending a Staircase No. 2 (1912). Brambilla relies on compositional abundance and installation size to sustain the GAN animation that refers to the influence of early cinema on cubism and futurism. Todorović’s generative VR work unobtrusively leverages the problem of data interpretation in AI to reference the polyvalent interpretation in the western fine arts tradition. It provides a formally subdued but experientially intensive interactive experience in which the ASMR-whispered descriptions of select classical nude paintings are generated by an ML program trained on pulp love stories.
To extend the exploration of this sweeping comparative range, the reader is invited to relate the spectacular generative portrait synthesis in CDV Lab’s installation Portraits of No One (2020) [102] with formally compact and technologically discreet works such as Jason Salavon’s The Class of 1967 and 1988 (1998) [108]; Golan Levin and Zachary Lieberman’s Reface (Portrait Sequencer) (2007–2010) [109]; or Shinseungback Kimyonghun’s Portrait (2013) [110].
The crass complacency exerted by spectacular AI art suggests that its creators have skipped some of the required reading of Modern Art History 101, most notably Guy Debord’s The Society of the Spectacle (1994) [111]. It discredits the self-serving claims of some cultural agents that spectacular AI art opens up opaque ML technologies, making them more accessible to the public and thus more exposed to critical assessment [112]. As evident in the performative AI art practices discussed in Section 2.1.2, and from the long history of religious art, totalitarian art, and advertising, aesthetic and presentational exuberance undermines the exploratory and epistemological impact, or conceals the lack thereof. The cultural momentum of uncritical or manipulative AI art spectacles and AI-derived mainstream art is particularly detrimental to the field because it obscures experimental and avant-garde practices and tempts emerging AI artists to soften their critical edge in favor of career-friendly strategies [6] (pp. 252–254) [23].
2.4. Tactical Exploration
The recurrence of tactical AI art exemplars throughout this study indicates their potential to direct the field toward a socially responsible and epistemologically relevant expressive stratum. Tactical AI art extends the heterogeneous line of critical practices in new media art, which have energized art and culture in the 20th and 21st centuries by subverting and exposing exploitative corporate strategies based on quantization, statistical reductionism, data-mining, behavioral tracking, prediction, and the manipulation of decision-making [3] (pp. 71–73). Artists uncover the undesirable aspects and consequences of corporate AI and denounce the biases, prejudices, economic inequalities, and political agendas encoded in mainstream ML architectures. In some works, they also engage in an exploratory critique of the nature of ML as an artistic medium; the value of this critique is proportional to the artists’ understanding of the political subtleties and ethical facets which are often dispersed across conceptually abstract, technically convoluted, and functionally opaque ML systems.
To incite active critical scrutiny, artists sometimes combine humor and provocation by intentionally taking seemingly ambivalent positions toward the issues they address; they emulate corporate AI’s operative models but recontextualize them or repurpose their objectives for ironic revelatory effects. One common repurposing methodology involves taking an existing ML pipeline, training it with a nonstandard dataset, and employing it for novel tasks. Many successful tactical works refrain from dramatic interventions and didactic explanations in order to let the audience actively identify the interests, animosities, struggles, inequalities, and injustices of corporate AI.
2.4.1. Socio-Cultural
Artists often work with NLP to critique various cultural manifestations of applied AI. Examples include Matt Richardson’s Descriptive Camera (2012, discussed in Section 2.1.4); Ross Goodwin’s Text Clock (2014) and word.camera (2015); Michel Erler’s Deep Learning Kubrick (2016); Ross Goodwin and Oscar Sharp’s Sunspring (2016, discussed in Section 2.1.3); Jonas Eltes’ Lost in Computation (2017, discussed in Section 2.1.3); Jonas Lund’s Talk to Me (2017–2019); Joel Swanson’s Codependent Algorithms (2018), and others.
A number of related projects use NLP and language hacking to probe the intersection of AI technologies and the MCA. For example, Disnovation.org’s Predictive Art Bot (since 2017) [113] questions the discursive authorities and aesthetic paradigms of AI art; Sofian Audry and Monty Cantsin’s The Sense of Neoism! (2018) critiques the cogency of artists’ manifestos and proclamations; Philipp Schmitt’s Computed Curation Generator (2017) and Alexander Reben’s AI Am I (The New Aesthetic) (2020) problematize art-historical models and narratives; and Nirav Beni’s AI Spy (2020) and Egor Kraft’s Museum of Synthetic History (2021) address culturally entrenched aesthetic paradigms.
For interventions that relate to the socio-cultural issues of AI, artists use and modify GAN architectures to make deepfakes. For example, Libby Heaney’s Resurrection (TOTB) (2019) [114] thematizes both star power in music and the memetic power of deepfakes. Visitors to this installation are invited to perform karaoke in which the original musician of the chosen song is video-deepfaked to mimic the visitor’s singing and gesturing/dancing. Additionally, between songs the host Sammy James Britten involves the audience in a discussion of power, desire, and control—an extension that seems as imposing and redundant as the artist’s explanatory section for this work. Heaney’s Euro(re)vision (2019) [115] addresses the transmission of power and politics in popular media more effectively. In this video deepfake, Angela Merkel and Theresa May sing absurd songs in the style of Dadaist Cabaret Voltaire performances within the setting of the Eurovision song contest. Their stuttering algorithmic poetry eerily resembles the nonsensicality of actual Brexit discourse and implies the broader semantic reality of political life.
With two iterations of Big Dada: Public Faces (2019–2021) [116], Bill Posters and Daniel Howe confused the visitors of Instagram by inserting deepfaked fictional video statements by Marcel Duchamp (about the ashes of Dada), Marina Abramović (about mimetic evolution), Mark Zuckerberg (about the second Enlightenment), Kim Kardashian (about psycho-politics), Morgan Freeman (about smart power), and Donald Trump (about truth).
In several works, Jake Elwes critically engages the cultural implications of training dataset annotation and algorithm design in mainstream AI. His ongoing multipart Zizi Project (since 2019) [117] interfaces deepfake techniques with the world of LGBTQ+. Zizi-Queering the Dataset (2019) is a video installation continuously morphing through gender-fluid (androgynous) portraits and abstract forms. The online work Zizi Show (2020) critiques both the anthropomorphism and the error-prone gender inclusiveness of AI. This virtual drag cabaret features deepfakes generated from training datasets of original films of London drag artists’ performances. The Zizi Project clearly indicates that the training model datasets and the statistical nature of data processing in GANs inevitably impose formal constraints on the possible outputs (such as realistic human-like images), regardless of the common rhetoric about the “unpredictability” or “originality” of such systems; however, this is an already known and well-documented issue [10] (pp. 9–10). The project fails to show how exactly race, gender, and class inequalities and stereotypes transfer into ML to harm underrepresented social, ethnic, or gender identity groups. The Zizi Project’s playful, technically sophisticated remediation within an AI-influenced cultural context may be beneficial for the celebration, affirmation, and inclusion of LGBTQ+, but its publicity narratives, high production values, and focus on glamour and spectacle in lieu of less picturesque but perhaps more important existential aspects of LGBTQ+ can easily be perceived as artistic exploitation by means of ML. Moreover, if taken seriously by corporate AI, this critique could backfire by contributing to the refined normalization, instead of the correction, of socio-political biases toward the LGBTQ+ community, because these biases have a broader, deeper, and darker evolutionary background.
In contrast, Derek Curry and Jennifer Gradecki’s Infodemic (2020) [118] and Going Viral (2020–2021) [119] exemplify a consistently more effective critique, recontextualization, and transformation of ML as a socio-technical realm [59]. Both projects target celebrities, influencers, politicians, and tech moguls who have “contributed” to the COVID-19 pandemic by sharing misinformation and conspiracy theories about the coronavirus, which themselves went “viral”, often spreading faster than real news. Infodemic features a cGAN-deepfaked talking-head video in which some of these high-profile misinformers deliver public service announcements that correct false narratives about the pandemic; their statements are taken from and voiced by academics, medical experts, and journalists. In Going Viral, visitors to the project website are invited to help intervene in the infodemic by sharing on social media the “corrective” videos delivered by the deepfaked speakers in Infodemic. By playing with deepfakes within their native context of fake news, these projects also probe the broader phenomenology of mediated narratives. Together with CSIA (discussed in Section 2.1.4), they testify to the effectiveness of Curry and Gradecki’s tactics, based on thorough research and a self-referential methodology attuned to computational media affordances. A specific quality of their poetics is that playful participation is simultaneously a gateway to transgressive affects, an interface to learning resources, and a friendly implication of our complicity in the politically problematic aspects of applied AI through conformity, lack of involvement, or non-action.
2.4.2. Physical and Existential
AI technologies affect socio-cultural life and politics both directly and indirectly, through material/physical, ecological, and existential changes. Artists sometimes metaphorize this influence by using geospatial contents (landscapes, terrains, maps) for training datasets and by positioning the machine-learned output in contexts with various political connotations.
For example, Ryo Ikeshiro’s
bug (2021)
[120] is a sophisticated geospatial ambient work that addresses the uses of ML-powered sound event recognition and spatial/directional audio technologies in entertainment, advertising, surveillance, law enforcement, and the military. Similarly, Nao Tokui’s
Imaginary Landscape and
Imaginary Soundwalk (both 2018)
[77] are formally economical interactive installations. In
Imaginary Landscape, the ML software continuously analyzes Google StreetView photographs, selects three that look similar, and joins them horizontally in a three-wall projection. Another ML program, trained on landscape videos, generates soundscapes that correspond with the stitched triptych landscapes. In
Imaginary Soundwalk, viewers freely navigate Google StreetView while the ML system, using a cross-modal technique for image-to-audio information retrieval, generates an “appropriate” soundscape. It is instructive to compare the meditative effectiveness of these projects with Anna Ridler and Caroline Sinders’ interactive online work
Mechanized Cacophonies (2021)
[121].
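The sources cited here do not spell out Tokui’s implementation, but the similarity-selection step of
Imaginary Landscape can be sketched minimally: map each photograph to a feature embedding with a pretrained CNN, then keep the two StreetView images closest to an anchor image by cosine similarity. The model choice, preprocessing, and similarity measure below are illustrative assumptions, not the project’s documented pipeline.

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained CNN used as a generic image-embedding function (an assumption;
# any scene-level feature extractor would play the same illustrative role).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Identity()  # drop the classifier head, keep 512-d features
model.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path):
    # One 512-dimensional embedding per image
    with torch.no_grad():
        return model(preprocess(Image.open(path).convert("RGB")).unsqueeze(0))[0]

def pick_triptych(anchor_path, candidate_paths):
    # Rank candidates by cosine similarity to the anchor and keep the two
    # closest, yielding three look-alike views to stitch side by side.
    a = embed(anchor_path)
    ranked = sorted(
        candidate_paths,
        key=lambda p: float(torch.nn.functional.cosine_similarity(a, embed(p), dim=0)),
        reverse=True,
    )
    return [anchor_path] + ranked[:2]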
Other examples include Mike Tyka’s
EONS (2019); Liliana Farber’s
Terram in Aspectu (2019); Weili Shi’s
Martian Earth and
Terra Mars (both 2019); Martin Disley’s
the dataset is not the map is not the territory (2020); and Daniel Shanken’s
Machine Visions (2022).
Some works explore the physicality of AI through haptics (touch), for example Jeff Thompson’s
I Touch You and You Touch Me (2016–2017)
[122], or through kinetics, for example Stephen Kelly’s lumino-sonic installation
Open-Ended Ensemble (Competitive Coevolution) (2016)
[123]. François Quévillon’s
Algorithmic Drive (2018–2019)
[124] also uses kinetics to play out the tension between robotics and the unpredictable nature of the world. For this work, several months’ worth of front-facing video capture was synchronized with information from the car’s onboard computer, such as geolocation, orientation, speed, engine RPM, stability, and temperatures from various sensors. The captured videos and data feed a sampling system that sorts the content statistically and assembles a video that alternates between calm and agitated states by modifying parameters of sound, image, the car’s activity, and the environment. An interactive controller displays data for each scene and allows visitor intervention.
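The description suggests a fairly legible statistical sequencer, which can be illustrated in a few lines. In this sketch, each captured scene receives an “agitation” score derived from the variability of its telemetry, and the edit alternates between the calmest and the most agitated remaining scenes; the scoring formula and field names are hypothetical rather than taken from Quévillon’s documentation.

from statistics import pstdev

def agitation(scene):
    # Hypothetical score: variability of speed and engine RPM within one scene
    return pstdev(scene["speed"]) + 0.001 * pstdev(scene["rpm"])

def alternate_calm_agitated(scenes):
    ranked = sorted(scenes, key=agitation)  # calmest scenes first
    sequence = []
    while ranked:
        sequence.append(ranked.pop(0))      # take the calmest remaining scene
        if ranked:
            sequence.append(ranked.pop())   # then the most agitated remaining one
    return sequence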
Continuing the line of earlier statistically founded eco-conscious tactical media art, such as Chris Jordan’s
Running the Numbers (since 2006)
[125], artists combine a speculative approach with ML to generate visuals and narratives that address the environmental challenges and ecological aspects of large-scale, computation-intensive research, technologies, and industries such as AI. Examples include Tivon Rice’s
Models for Environmental Literacy (2020)
[126], and Tega Brain, Julian Oliver, and Bengt Sjölén’s
Asunder (2021)
[127]. Maja Petrić’s
Lost Skies (2017)
[128] illustrates how much easier it is for projects in this range to aestheticize ecological data than to articulate it into meaningful and perhaps actionable narratives. Ben Snell’s
Inheritance (2020)
[129] elegantly and somewhat provocatively compresses the material and ecological aspects of AI. It is a series of AI-generated sculptures cast in a composite medium produced by pulverizing the computers that were used to generate the sculptures’ 3D models. This project also addresses the issues of agency and creative expression by referencing radical auto-recursive art experiments such as Jean Tinguely’s self-destructive machines. Expectedly, regardless of their poetic values, it is not easy to calculate how much the systemic technological entanglements of such projects (and of AI art in general) contribute to overall environmental damage and to the legacy of the Anthropocene.
A full spectrum of applied AI’s existential consequences is boldly integrated into Max Hawkins’
Randomized Living (2015–2017)
[130]. In this two-year experiment, Hawkins organized his life according to the dictates of recommendation algorithms. He designed a series of apps that shaped his life through randomized suggestions based on online data: a city where he would live for about a month and, once there, the places to go, people to meet, and things to do.
Randomized Living is a strong exemplar of cybernetic-existentialism—the art of conceiving a responsive and evolving cybernetic system in order to express deep existential concerns
[42].
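Stripped of its service integrations, the computational core of the experiment is disarmingly simple, which is part of its point. The sketch below illustrates the gesture of delegating a decision to uniform random choice; the candidate lists are invented, and Hawkins’ actual apps drew on live online data sources.

import random

def randomized_decision(candidates):
    # Uniform random choice: no taste, preference, or history involved
    return random.SystemRandom().choice(candidates)

next_city = randomized_decision(["Mumbai", "Taipei", "Cleveland", "Ljubljana"])
tonight = randomized_decision(["park", "karaoke bar", "public lecture", "gym"])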
2.4.3. Political
The uneasy positioning of the individual toward or within computational systems of control has been exposed, through reverse-engineering, in a number of works by new media artists and activists such as Bureau d’Etudes, Joana Moll, Adam Harvey, and Vladan Joler. In several collaborative projects, Joler has effectively applied analytical tools and mapmaking to render diagrams of AI power from various perspectives. With SHARE Lab and Kate Crawford, he released
Exploitation Forensics (2017)
[131], which captures, in a series of intricate diagrams, the functional logic of Internet infrastructure: from network topologies and the architecture of social media (Facebook) to the production, consumption, and revenue-generation complex of Amazon.com. Similarly, Crawford and Joler’s project
Anatomy of an AI System (2018)
[132] deconstructs the Amazon Echo device’s black box by mapping its components onto the frameworks of global ecology and economy. With Matteo Pasquinelli, Joler issued
The Nooscope Manifested (2020)
[133], a visual essay about the conceptual, structural, and functional logic of sub-symbolic ML, and its broader epistemological and political implications. It leverages the notions of gaze and vision-enhancing instruments as metaphorical and comparative devices, although their conceptual suitability within the context of ML is somewhat unclear.
Since the introduction of the OpenCV library in 2000, artists have been using CV for various purposes in a large corpus of works. With advances in ML, this exploration has intensified and increasingly involved the critique of the (ab)use of CV for taxonomic imaging, object detection, face recognition, and emotion classification in info-capitalism. For example, Jake Elwes’ video
Machine Learning Porn (2016)
[134] indicates the human (perceptive) prejudices that influence the design of ML filters for “inappropriate” content. Elwes took open_nsfw, Yahoo’s open-source CNN for detecting “sexually explicit” or “offensive” visuals, and repurposed its recognition classifiers as parameters for generating new images. This modification outputs visually abstract video frames with a “porny” allusiveness. However, the cogency of this project depends on leaving out that
all formal image elements are abstract by default and that, in humans, the pathways of complex scene recognition and related decision-making are not precisely known
[135][136], so the grounds for critiquing biases in these pathways are also uncertain.
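The family of techniques the work builds on is commonly known as activation maximization: an input image is optimized by gradient ascent until a classifier’s score for a chosen class rises, effectively running the detector in reverse as a generator. The sketch below assumes a PyTorch port of the classifier with a two-class output; Elwes’ actual setup (open_nsfw is distributed as a Caffe model) differs in its details.

import torch

def activation_maximization(nsfw_model, steps=200, lr=0.05, size=224):
    # Turn a detector into a generator: adjust pixels to raise the "NSFW" score.
    nsfw_model.eval()
    img = torch.rand(1, 3, size, size, requires_grad=True)  # start from noise
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        score = nsfw_model(img)[0, 1]  # assumed: index 1 is the "NSFW" class
        (-score).backward()            # minimize the negative = gradient ascent
        opt.step()
        with torch.no_grad():
            img.clamp_(0.0, 1.0)       # keep pixels in a displayable range
    return img.detach()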
The issues of ML-powered biometry are particularly sensitive and pertinent in facial recognition and classification due to the convergence of evolutionarily important information in the face and its psycho-social role as the main representation of the self and identity. Various deficiencies frame the machine training/learning and “recognition” process, in which classification models ultimately always make implicit (but not objective) claims to represent their subjects.
Some critical works in this domain function as markers of the technical improvements in face recognition, for example Zach Blas’
Facial Weaponization Suite (2011–2014)
[137] and
Face Cages (2015–2016)
[138]; Heather Dewey-Hagborg’s
How do You See Me (2019)
[139]; and Avital Meshi’s
Classification Cube (2019)
[140], or provide demonstrations of expression analysis, for example Coralie Vogelaar’s two print works
Happy and
Facial Action Coding System (both 2018); Lucy McRae’s interactive data visualization setup
Biometric Mirror (2018, mentioned in
Section 2.3.1); and Lauren Lee McCarthy and Kyle McDonald’s
Vibe Check (2020). By revealing the human perceptual flaws (such as pareidolia) reflected in CV design, Driessens and Verstappen’s
Pareidolia (2019)
[141] reiterates a number of preceding works such as Shinseungback Kimyonghun’s
Cloud Face (2012) and
Portrait (2013)
[110]; Onformative’s
Google Faces (2013)
[142]; and Benedikt Groß and Joey Lee’s
Aerial Bold (since 2016)
[143].
Biases in ML design have been continuously identified by both scientists and artists. For example, a research project with artistic overtones titled