Chance must be recognised as a new stimulus to artistic creation.
What role, if any, does chance play in artistic practices? It is easy to assume that the arbitrary logic that underpins it is the antithesis of the voluntaristic nature of the artistic process. Chance is never seen as a by-product of human agency, which is exactly the first place we usually try to locate the cradle of creativity: art is born from intentionality. But to view creative processes as faithfully walking the narrow path of premeditation and control would be to ignore the fact that a fundamental method of producing art is experimentation – which, by its very nature, purposefully bypasses human agency. The dismissal of chance in this context is not dissimilar to the one endured by the mass-produced object when Marcel Duchamp introduced it to the vocabulary of artistic discourse, forcefully pulling the rug from underneath the previously unquestioned value of craftsmanship. But whereas the ground-breaking significance of the mass-produced object has since made itself a home in art-historical narratives, the role of chance has been comparatively neglected, even though it has played an essential part in artistic strategies since the early 20th century. Today it has become urgent to re-evaluate its importance, given the rising significance of generative art practices, in which chance often plays a pivotal strategic role.
From the highly systematic to the informal and expressionist, chance in its various guises has been widely adopted by artists for well over a century. There’s spontaneous chance, as evidenced in the work of the Surrealist artists (such as Joan Miró), who used psychoanalytic techniques like automatism to produce works that were physical manifestations of the unconscious mind. A similar logic is at play in the drops of paint that Jackson Pollock flung haphazardly from his brush onto the canvas: Pollock’s heightened yet deliberate gestures ultimately resulted in an unpredictable configuration on the pictorial plane. There’s also accidental chance. The famous work The Bride Stripped Bare by Her Bachelors, Even (1915–1923) – which Marcel Duchamp worked on for eight long years – was finally declared finished by the artist only after it was accidentally shattered in transit.
Then there is the subject of this essay: experimental chance. This type of chance is not gestural, expressionist or accidental, but is rather rooted in experimentation and observation. It involves setting down a particular set of instructions and parameters and observing the results. In other words, instructions dictate the initial conditions, but the outcome is liberated from the artist’s control. This strategy characterises many of the works produced by generative artists employing algorithmic procedures. It is therefore critical to unpack it and situate it historically if we are to effectively evaluate the work that has mushroomed as a result of today’s increasingly productive relationship between art and technology.
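The logic of experimental chance lends itself to being sketched in code: the artist fixes the instructions and parameters, while the system supplies the unpredictable outcome. The following is a minimal, purely illustrative sketch; the function name, the ‘canvas’ and the rules are invented for this essay and are not drawn from any artist’s actual system.

```python
import random

def experimental_chance(n_marks, seed=None):
    """Fixed instructions: scatter n_marks points across a unit canvas.

    The instructions are fully deterministic; the outcome of any
    single run is left to chance (unless a seed is supplied).
    """
    rng = random.Random(seed)
    return [(rng.random(), rng.random()) for _ in range(n_marks)]

# The same instructions, executed twice, yield different configurations --
# both legitimate products of the same set of rules.
first = experimental_chance(5)
second = experimental_chance(5)
```

The instructions dictate the initial conditions; what appears on the ‘canvas’ is liberated from the author’s control.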
Among the earliest examples of experimental chance are the collages made by Jean Arp, a key figure of Dadaism. In 1916, frustrated with his work, he famously tore up one of his drawings and let the pieces fall randomly onto a sheet of paper on the floor. Delighted by the result, Arp preserved the remnants in their random configuration, gluing them to the paper. Discussing this work, fellow artist Hans Richter later claimed that ‘[…] chance must be recognised as a new stimulus to artistic creation. This may well be regarded as the central experience of Dada, that which marks it off from all preceding artistic movements.’
Another key exponent of experimental chance was Marcel Duchamp, one of the most ground-breaking artists of the 20th century (if not the most), who redefined both the art object and the role of the artist. For his 3 Standard Stoppages (1913–1914), he set up an experiment accompanied by an elaborate set of instructions. From a height of one metre, he dropped three metre-long pieces of thread onto three dark surfaces. The different patterns that formed were then fixed onto the surfaces. He later created wooden templates based on the various formations of these threads and displayed them in a used croquet case. Of this work, Duchamp later claimed:
I don’t think the public is prepared to accept it … my canned chance. This depending on coincidence is too difficult for them. They think everything has to be done on purpose by complete deliberation and so forth. In time they will come to accept chance as a possibility to produce things.
Artists in the second half of the 20th century certainly continued this Duchampian legacy of ‘canned chance.’ It piqued the interest of conceptual artists: for his Blasted Allegories (1978), John Baldessari created clusters of photographs of television scenes, taken with the use of an intervalometer (and therefore entirely aleatory). In fact, during this period, the view of the artist as the sole custodian and determining force behind the meaning of their work began to change, helping to legitimise other factors – like chance and randomness – as valid components of artistic strategies. This was partly thanks to the watershed essay ‘The Death of the Author’ (1967) by French theorist Roland Barthes, in which he broadly advocated for severing the meaning of a work from the control of its creator. A year later, in 1968, Robert Morris likewise deposed authorship as the ultimate locus of meaning in his seminal essay ‘Anti Form’, in which he expressed his interest in refraining from intervening in the materiality of a sculptural work in order to examine how:
The focus on matter and gravity as means results in forms that were not projected in advance […]. Random piling, loose stacking, hanging, give passing form to the material. Chance is accepted and indeterminacy is implied. Disengagement with preconceived enduring forms and orders for things is a positive assertion.
This ethos was the driving force behind Morris’s works, including Untitled (1967–1968), in which he purposefully let thick, flat felt spill chaotically onto the floor according to the laws of gravity. Although he meticulously oversaw the process of making the work, Morris deliberately relinquished control of the outcome, letting the material determine the final form. Artists working under the aegis of Fluxus – such as George Brecht – were similarly keen to mobilise chance in their work. In his Chance Paintings (1957), Brecht poured water and ink onto crumpled sheets of paper and simply observed the unpredictable patterns that they created.
Artists working with sophisticated digital systems composed of code often embrace the medium’s capacity to produce unforeseen effects. This is partly because the tools used to make such images exist in a realm that is entirely separate from the visual. Manfred Mohr, one of the pioneers of digital art, has explained that one of the most exciting aspects of artists working with code is that ‘[…] a non-visual logic will create a visual entity.’ His work is paradigmatic of experimental chance in that his practice often involved setting down a detailed set of instructions that deliberately opened the door to chance. For his Inversion Logique (1970), he began by laying out a variety of symbols in an extensive grid. He then created an algorithm instructed to choose one of these symbols at random and re-insert it in a predetermined order. Introducing another variable, Mohr asked the algorithm to vary the length and thickness of the lines in an aleatory fashion.
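The procedure described above can be loosely paraphrased in code. The sketch below is emphatically not Mohr’s program: the symbol set, the grid dimensions, the ‘predetermined order’ and the ranges for line length and thickness are all invented for illustration; only the overall shape of the procedure – random selection re-inserted into a fixed ordering, with aleatory line dimensions – follows the description.

```python
import random

# Invented symbol set -- a stand-in for Mohr's vocabulary of signs.
SYMBOLS = ["|", "-", "/", "\\", "+", "x"]

def inversion_logique_sketch(rows, cols, rng):
    """Fill a grid with randomly chosen symbols, re-inserted in a
    predetermined (row-major) order, with aleatory line dimensions."""
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            row.append({
                "symbol": rng.choice(SYMBOLS),    # chosen at random...
                "order": r * cols + c,            # ...placed in a fixed order
                "length": rng.uniform(0.5, 2.0),  # aleatory line length
                "width": rng.uniform(0.1, 1.0),   # aleatory line thickness
            })
        grid.append(row)
    return grid
```

The instructions are exhaustive, yet no two runs produce the same grid: a non-visual logic creating a visual entity.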
This modus operandi of experimental chance still characterises the work of many generative artists working today. Tyler Hobbs creates elaborate algorithmic systems capable of producing mesmerising works that have by now become iconic. Discussing his generative algorithm Fidenza, he has claimed that its parameters are sufficiently flexible to allow for ‘enough variety to produce continuously surprising results.’ Discussing the numerous strategies that generative artists employ, Hobbs has maintained:
These styles of artwork are all quite a bit different, but I think the common thread that ties them together is that they do an excellent job of blending randomness and structure. They’re able to keep this balance between the two to where it’s still surprising, and unpredictable […].
A particularly famous example of this kind of strategy, which exists productively suspended somewhere between randomness and control, is generative artist Dmitri Cherniak’s 1,000-piece collection Ringers (2021). For this series, he created an algorithm that explored his fascination with the limitless possibilities opened up by the act of wrapping strings around an arrangement of pegs, adjusting for variations such as colour, peg count, orientation and layout. One of these works, Ringers #879, affectionately known as ‘The Goose’, instantly gained fame precisely because of the improbability of its existence. Unlike the other abstract works in the series – and against all odds, given the algorithm that engendered it – it is figurative: examined closely, a goose emerges, proudly raising its beak in a restrained and elegant palette of white, black and yellow.
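The anatomy of such a system can be suggested with a toy example. This is in no way Cherniak’s algorithm – his is far more sophisticated, and the parameter names, ranges and palette below are invented purely for illustration – but it shows the structural point: a single seed fully determines one work, while the parameter space as a whole remains vast.

```python
import random

# A toy parameterised generative system in the spirit of Ringers.
# All parameters and ranges here are invented, not Cherniak's.
def draw_ringer(seed):
    rng = random.Random(seed)  # one seed fully determines one output
    peg_count = rng.randint(4, 36)
    return {
        "peg_count": peg_count,
        "layout": rng.choice(["grid", "circle", "scattered"]),
        "orientation": rng.uniform(0, 360),
        "colours": [rng.choice(["white", "black", "yellow", "blue"])
                    for _ in range(3)],
        # the "string": a random wrapping order over the pegs
        "wrap_order": rng.sample(range(peg_count), k=peg_count),
    }

# Re-running with the same seed reproduces the same piece exactly;
# a new seed is a new work in the collection.
```

Read through this lens, the improbability of ‘The Goose’ is that of one draw, out of an enormous space of possible configurations, happening to land on a figurative image.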
We often fall into the trap of relegating experimentation to the realm of science, but it is a fundamental strategy that has pushed the boundaries of artistic production and the definition of the art object since the early 20th century. Generative art practices are building on this long legacy. Many in the art world hypocritically hold works made from code to a different standard, myopically cringing at the transfer of agency from the artist to algorithmic processes. But as Duchamp’s work has shown, controlling the outcome of an artwork should not be a precondition for innovation – and artistic agency and integrity are therefore not in danger of dwindling when placed in the hands of algorithms. Like Hobbs and Cherniak, artists working with code are creating innovative visual paradigms that would have been impossible without the code that summons them into existence. It is time for us to embrace Duchamp’s ‘canned chance’ with open arms, so that art can freely inhabit a place where chance and experimentation – as well as technological innovation – are recognised as fundamental forces in artistic production.
Paula Brailovsky is a London-based art historian who finished her master’s degree at the Courtauld Institute of Art and then went on to complete her PhD at University College London in 2014. Entitled Geographies of Violence: Site-Oriented Art and Politics at the Mexico-U.S. Border From the 1980s to the Present, her dissertation centred on the intersection between art and politics in relation to nationhood, identity and the conditions of globalisation. As an academic, she has published her work in the journals Object and MIRAJ, has taught modern and contemporary art courses at both graduate and postgraduate level and has lectured nationally and internationally. In recent years, she has worked as a researcher and writer in an art consultancy, where she built an academic programme and helped individuals and institutions navigate the complex waters of contemporary art and its markets. Using her academic background as a springboard, she is interested in the practical possibilities and theoretical potentials that exist at the crossroads of contemporary art, business and technology.