Bringing together an innovative group of visual artists and musicians who have embraced the power of technology, Sonic Alchemy calls into question what it means to experience music in the digital age.
A new artist’s work will be revealed each week, alongside accompanying critical essays by programmers and professors at the forefront of computer music research today. By focusing on each artist in turn and incorporating insights from industry experts, Sonic Alchemy seeks to deepen our understanding of what is possible with sound in the age of the algorithm.
RELEASE SCHEDULE
Alida Sun
Wednesday 14 June, 6PM BST
Boreta
Wednesday 21 June, 6PM BST
Joëlle Snaith
Wednesday 28 June, 6PM BST
Elias Jarzombek
Wednesday 5 July, 6PM BST
Joëlle Snaith, Sound Totem 01, 2022, Unique Original NFT
At its core, generative sound art employs algorithms, systems and chance operations to generate soundscapes that are emergent and unpredictable by nature.
Whilst the artist sets the parameters and guides the algorithm, there is always a degree of freedom for the computer to generate things that the artist might not expect. What emerges is an eloquent blend of artistic intent and computational randomness, thereby transcending the predetermined or fixed nature of conventional musical notation.
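This interplay between artist-defined constraints and machine freedom can be sketched in a few lines of code. The example below is purely illustrative (it is not the system of any artist in the exhibition): the scale, phrase length and step probabilities are the "parameters the artist sets", while the random walk supplies the unpredictability.

```python
import random

# A minimal, hypothetical sketch of a chance-operation process: the artist
# fixes the constraints (a scale, a length, step probabilities) and the
# algorithm is free to wander within them.

C_MINOR_PENTATONIC = [60, 63, 65, 67, 70]  # MIDI note numbers (illustrative palette)

def generate_phrase(length=8, rng=None):
    """Return a list of (midi_note, duration_beats) chosen by chance
    operations within the artist-defined palette."""
    rng = rng or random.Random()
    phrase = []
    note = rng.choice(C_MINOR_PENTATONIC)
    for _ in range(length):
        # Random walk: usually step to a neighbouring scale degree,
        # occasionally leap anywhere in the palette.
        if rng.random() < 0.8:
            idx = C_MINOR_PENTATONIC.index(note)
            idx = max(0, min(len(C_MINOR_PENTATONIC) - 1, idx + rng.choice([-1, 1])))
            note = C_MINOR_PENTATONIC[idx]
        else:
            note = rng.choice(C_MINOR_PENTATONIC)
        duration = rng.choice([0.5, 1.0, 2.0])  # artist-chosen rhythmic values
        phrase.append((note, duration))
    return phrase

phrase = generate_phrase(rng=random.Random(42))
```

Every run with a different random source yields a different phrase, yet every phrase remains inside the constraints the artist chose, which is exactly the balance of intent and chance described above.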
The term was introduced in 1996 by the composer and producer Brian Eno, who described generative music as “ever-different music that is created by a system specifically designed for that purpose”. His embrace of randomness and spontaneity in sonic art is still reverberating today. It birthed a new generation of algorithmic composers who recognised the potential of autonomous systems to create intricate and endlessly variable sonic patterns.
Whilst computer music started to become mainstream in the 1970s, its origins can be traced back to the 1940s and 1950s, when researchers and composers began exploring the potential of computational processes for musical composition. In 1957 Lejaren Hiller employed a computer to generate compositional material for his string quartet, the Illiac Suite. This is widely regarded as the first computer-composed score, and it was soon followed by software that enabled musicians to generate sounds directly from a computer; Max Mathews is celebrated as the “father of computer music” for his work in this area. The subsequent development and dissemination of sonic software allowed for greater experimentation with electronic sounds, paving the way for the algorithmic techniques in generative music that serve as the foundation for this exhibition.
From Lejaren Hiller’s experimental compositions to Brian Eno’s generative sound, computer music continues to be a compelling medium for discourse, a radical way to reflect upon our technology-driven society through the very fabric of sound itself.
Sonic Alchemy deepens our understanding of what is possible with sound by offering insight into the unique practices of four contemporary artists.
Whilst they all incorporate some form of generative process in the making of their work, each interprets sound in a distinctly nuanced way. Therefore, rather than a set discipline, Sonic Alchemy positions generative sound within a more conceptual, experimental and critical framework, expanding the parameters of this compelling practice.
Critical Essay
WHY DO WE MAKE GENERATIVE MUSIC?
Long-term Brian Eno collaborator Peter Chilvers offers his take on why artists employ computers and algorithms to make music.
Alida Sun
Alida Sun demonstrates how colour, light and music can commingle in novel and dynamic ways within a single composition. As these elements develop in synchronicity by virtue of a generative algorithm, the poetic and creative nature of computer music is brought to the fore.
Alida Sun
is a multidisciplinary artist based in Berlin. Her practice integrates presence, resistance, and the nature of adaptation in the age of algorithms. Every day for over 1,500 days and counting she has coded and built new generative artworks encompassing installation, sound, architecture, choreography, drawing and light. Sun is the creator of Art Blocks Curated project glitch crystal monsters. Her work has been exhibited at the Venice Biennale Decentralised Pavilion, Ars Electronica, UCCA Center for Contemporary Art, Seattle NFT Museum, and experimental audiovisual festivals around the world.
Interview with Sun
UL: To an audience who has little knowledge of generative systems, can you explain the relationship between your ideation, dataset (input), the tools used, and the final results (output)?
AS: I don’t try to force generative systems to fit preconceived notions or traditional media conventions. My ideation is a collaborative process with my algorithms, the errors or glitches we make along the way, and our surrounding environment – I often combine code with sound, light, movement, and architecture/space. I’m influenced by the Bauhaus principle of letting the material guide you, which may sound weird given the “material” in question here is digital programming as opposed to metal or clay, but as you create an open-ended process that welcomes deviation, you get the sense code wants to be free. On that note I don’t think generative art can ever be reduced to “what program do you use”, but the values and transparency in free and open source tools and communities are key to maintaining strong, dynamic practice and results.
UL: The visual and sonic elements in your work unfold and evolve simultaneously. What do you want viewers to take away from this multi-sensory, durational experience?
AS: I hope people leave with a heightened sense of wonder, where any moment of your life can become an artistic or meditative experience. I hope their awareness of the environment and our connection to it is refreshed and they start to question where noise ends and signals and music begin. I want to brighten the everyday rhythms that get overlooked as mundane. Or if people just see that art that questions and challenges can also be playful and accessible, that’s something I dream of too when so many folks are often made to feel that art appreciation is exclusive to standing around stiffly with white wine in white cubes. No offence, but that’s not the most welcoming atmosphere, especially for Web3.
UL: What advancements would you like to see in computerised composing?
AS: We definitely need more transparency and open source models that allow for archiving, future cultural preservation, and shared learning. I think the greatest advancements will be strongly community-based and social in tandem with technical prowess. I’d also like to see an embrace of the art form as an ever-changing system in which we co-create transformative, maybe even joyful somatic experiences. It’s time to go beyond some very specific established conventions. Why should computerised art of all things be confined to a canvas? Or even a screen?
Sun’s Musical Influences
The Stone (Back In No Time) by Milford Graves & Bill Laswell
Contaminations by Metaraph
Playing with Pink Noise by Kaki King
Cathedral of Dreams (Full Mix) by Miro Berlin
Radio Romance by Mashrou’ Leila
Tresor by Gwenno
Side Ways by Dasha Rush
Case Study B by Jenny O
Devil In Your Eyes by Wil Akogu
Home Is Where the Hatred Is by Gil Scott-Heron
Cha Cha by Mulatu Astatke & The Heliocentrics
One Minute Warning by Passengers
MYUNG Theme (cello version) by Yoko Kanno
Wuthering Heights by Kate Bush
20220304 by Ryuichi Sakamoto
Twitter Space
In celebration of Alida Sun’s release, we hosted a Twitter Space where we spoke to Sun about her musical influences and the somatic nature of her audiovisual practice.
Bringing together artist and audience, we offered a collective moment to listen to the songs that have influenced her most and welcomed Atau Tanaka, Professor of Media Computing at Goldsmiths, University of London, into the conversation.
Critical Essay
GENERATIONS OF GENERATIVE MUSIC
Professor of Media Computing at Goldsmiths, Atau Tanaka, presents the rich, and at times contested, history of computer music, with a special focus on Alida Sun’s Synesthesia.
Boreta
For Boreta, sound serves as a starting point. Visuals are then constructed by, and interwoven into, his sonic composition by way of a custom interface. As relations are drawn out between the internal workings of the algorithms and the outside world through the materiality of sound, Boreta unveils the human touch in generative systems.
Boreta
is a Los Angeles-based musician and artist whose audio works explore themes of spirituality, connection and the materiality of sound. In 2020, his collaborative project Superposition earned a Grammy nomination for the album Form//Less. In 2021, Boreta ventured into audiovisual generative art. Working with Aaron Penne and Bright Moments, he released Rituals on Art Blocks, which received the esteemed Lumen Prize in 2022. He is also known for his role in The Glitch Mob, an influential electronic group who have performed at music festivals including Coachella, Lollapalooza and Reading & Leeds, and has a discography spanning three albums and official remixes for acclaimed artists including Daft Punk, The Prodigy and Metallica.
Interview with Boreta
UL: You used one kind of software to generate sound and another to generate visuals. You then connected the two so that the sound painted the picture in real-time. Can you elaborate on this?
B: My creative partner, Peter Sistrom, and I built a custom interface using TouchDesigner for Ableton Live. Essentially, it’s a personalised tool shaped around my music-making process. It’s fascinating to observe how technology can mimic natural processes, crafting visuals from sound in real-time. The distinction from how I typically create music lies in the simultaneous development of these visuals with the music. This results in a more profound integration and feedback loop.
UL: Is ‘human-sounding’ music a criterion you use for determining the effectiveness of generative music? Does this force us to reconsider the role of traditional composers?
B: I enjoy exploring how music can underline the beauty in our human flaws. I tend to do this with samples, electric signals, or live performance. Even when working with generative systems, there’s a human touch. But here, instead of capturing a performance, there’s space for accidental beauty to emerge.
UL: This marks your first audiovisual work fully under your name. What have you learnt?
B: Collaboration is always a great learning experience. It gives me a chance to grow, explore new ideas and make stuff that hasn’t been done before. Releasing my first solo audiovisual piece is equally thrilling – it’s deeply personal because music is my first love. To fully take charge of the visual aspect along with the sound was a new, enlightening experience. It’s a different way of playing with the rhythms found in nature.
Boreta’s Musical Influences
Ikebukuro by Brian Eno
Emanating Dreamscape by Boreta & Aaron Penne
Receiving (feat. Laraaji) by ANNA & Laraaji
Xtal by Aphex Twin
Tides V by Kaitlyn Aurelia Smith
Utility by Barker
Echo//Radiate by Superposition
Tilting On Windmills by Malibu
Arterial by Lusine
Tendency by Jan Jelinek
1 /2 Singing Bowl (Ascension) – Excerpt by Jon Hopkins
Camel by Flying Lotus
Foil by Autechre
Bubbles by Yosi Horikawa
Music for Mallet Instruments, Voices and Organ by Steve Reich
Listen by Alan Watts & Boreta
Twitter Space
In honour of Boreta’s first solo audiovisual piece, we hosted a Twitter Space where we spoke to the artist about his influences, past projects and journey through the innovative world of generative sound.
Critical Essay
UNIQUE TOGETHER: ON GENERATIVE MUSIC AND WORLDBUILDING
Tero Parviainen, a musical software developer who has worked closely with Boreta, considers generative sound in relation to virtual worldbuilding, offering a fresh perspective on how to experience computer music.
Joëlle Snaith
Joëlle Snaith’s visual compositions are meticulously crafted by sound. The frequency data of the audio is used to sculpt the initial sketch. Snaith then builds upon this to create compositions that are deeply personal, an echo of the emotions, sensations and memories she has when she listens to music.
Joëlle Snaith
is a designer and audiovisual artist living and working in London, UK. Working in the realm of live visuals and performance, her work focuses on exploring the relationship between sound and form. She is interested in visually communicating the feelings she has when she listens to music, and exploring the connection between what she hears, feels and sees. Using frequencies and minimal structures, she creates compositions that are largely sculpted by sound. She’s the visual artist for electronic musician and DJ Richie Hawtin and a resident at FOLD, London. Her work has been exhibited at Digital Decade (London), Elixir Poetry Festival (Terrassa), Athens Digital Arts Festival and Sónar Hong Kong.
Joëlle
She Nº 1
2023
NFT Video
Edition of 5
0.1 ETH
Joëlle
She Nº 2
2023
NFT Video
Edition of 5
0.1 ETH
Joëlle
She Nº 3
2023
NFT Video
Edition of 5
0.1 ETH
Joëlle
She Nº 4
2023
NFT Video
Edition of 5
0.1 ETH
Joëlle
She Nº 5
2023
NFT Video
Edition of 5
0.1 ETH
Joëlle
She Nº 6
2023
NFT Video
Edition of 5
0.1 ETH
Joëlle
She Nº 7
2023
NFT Video
Edition of 5
0.1 ETH
Interview with Snaith
UL: In your audiovisual compositions, are both the sound and visual elements generative? Which comes first? And how do you create a harmonious match between the two?
JS: The audio always comes first. My partner Eaton Crous designed the sound using generative systems developed in Ableton Live and Max for Live devices. My creative process is very experimental, and I’ll explore various techniques and outputs while immersing myself in the sound until I arrive at a place that feels right with the music. I use Vuo, a node-based visual programming environment which enables this kind of iterative workflow, where you uncover and discover the path as you go along.
The connection between sound and visual happens both through the audio-reactivity – as there is a tangible connection between the two – and through my emotional response. Aesthetically, my work tends to be quite minimal, with moments of chaos and intensity but always fundamentally rooted in restraint. I try to amplify the details within the frequencies in a very precise way and create an atmosphere with sound and visual where it feels like they belong together.
UL: How do you find a balance between control over the algorithm and the algorithm’s own logic? What degree of freedom do you give the system to deviate from the parameters set?
JS: The Vuo compositions receive frequency data via a live audio input, which drives the behaviours and animation. I create rules where I tell the software to do something depending on the frequency band I’m targeting and also the loudness of that specific frequency. The audio-reactive elements exist within the constraints and thresholds I’ve defined but I also program additional parameters which I manipulate via a MIDI controller in real-time, adding a performative layer to the process.
When working with frequencies, you can get some really unexpected results depending on the audio input. This is especially true in a live performance setting where I have no control over what the DJ is playing. Live performance brings an element of spontaneity and improvisation and allows me to guide the visuals in response to what’s being played, and those visuals have behaviours and motion that respond to the frequencies within the defined constraints. So it really is a collaboration between the sound, the machine and myself.
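The rule Snaith describes, targeting a frequency band, gating it by loudness, and confining the result within defined constraints, can be sketched outside Vuo in a few lines. The function below is an illustrative assumption, not her actual patch: the threshold acts as the gate and `out_min`/`out_max` are the constraints the output can never leave.

```python
# A minimal, hypothetical sketch of an audio-reactive rule: measure the
# loudness of one frequency band and drive a visual parameter only within
# artist-defined constraints and thresholds.

def band_to_parameter(band_magnitudes, band_index,
                      threshold=0.2, out_min=0.0, out_max=1.0):
    """Map the loudness of one frequency band to a visual parameter.

    band_magnitudes: per-band loudness values in [0, 1] (e.g. from an FFT)
    band_index:      which band the rule targets
    threshold:       loudness below this produces no movement
    out_min/out_max: the constraint the output can never leave
    """
    loudness = band_magnitudes[band_index]
    if loudness < threshold:
        return out_min  # below the gate: the element stays at rest
    # Rescale the above-threshold range onto the allowed output range.
    t = (loudness - threshold) / (1.0 - threshold)
    return out_min + t * (out_max - out_min)

# Example: a loud bass band drives an element's scale; a quiet band doesn't.
bands = [0.9, 0.1, 0.4]  # one hypothetical analysis frame
scale = band_to_parameter(bands, 0, threshold=0.2, out_min=1.0, out_max=3.0)
```

In a live setting the `bands` values would arrive from the audio input many times per second, and a MIDI controller could adjust `threshold` or the output range on the fly, giving the performative layer she mentions.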
UL: You have mentioned that your work is a result of attempting to visualise the emotions experienced while listening to music. Is your intention to highlight to viewers the interconnectedness between hearing, feeling and seeing?
JS: Yes absolutely. My goal is to amplify sound with visuals and to heighten the experience of listening to music by creating a symbiosis between the two. It can be very hypnotic when the two complement one another because it places you in a space where everything is in balance. One thing is not shouting for attention more than the other. Music reflects how others are feeling and gives us a common language to express ourselves. This expression often connects to escapism and can be simultaneously a very individual but also a defining collective experience. I’m particularly sensitive to this: there is power in restraint, in not overpowering the sound, and in enabling that space for interpretation.
Snaith’s Musical Influences
Trioon I by alva noto & Ryuichi Sakamoto
Xerrox 2ndevol by alva noto
Eutow by Autechre
Pakard by Plastikman & Richie Hawtin
Peep Show by Cause4Concern
Sixtyten by Boards of Canada
Trick of the Light by Bad Company UK
Rekall by Plastikman & Richie Hawtin
Moon by alva noto & Ryuichi Sakamoto
Signals – Remastered 2005 by Brian Eno
Test Pattern #0100 by Ryoji Ikeda
Hidden Past by Forest Drive West
Film by Aphex Twin
Dark And Long – Dark Twin by Underworld
The Wretched by Nine Inch Nails
54 Cymru Beats by Aphex Twin
Cometa by Murcof
Main Titles by Vangelis
Ambre by Nils Frahm
Passage (Out) by Plastikman & Richie Hawtin
Twitter Space
In this Twitter Space we spoke to Joëlle about the collaborative nature of her practice and how she combines audio-reactive elements with live performance to create highly personal and evocative pieces.
Critical Essay
ON CORPOREAL CODE
Dr Jess Aslan, a professor and musician interested in using machine learning for musical generation, delves into the etymology behind ‘generative’ and how digital artists are engaging in embodied practices with code to find the human in generative systems.
Elias Jarzombek
Elias Jarzombek’s genesis NFT collection takes the form of four unique videos, each a capture of the artist using a web-based interface of his own design. Fascinated by the potential for visual imagery to be transformed into intricate soundscapes, Jarzombek built a system that translates pixel data from found stock images into sonic outputs, captivating viewers with ever-evolving audiovisual patterns.
Elias Jarzombek
is a programmer and multidisciplinary artist based in Brooklyn, New York. He merges sound, code and electronics to develop software for creating music in unconventional ways, with the aim of fostering sonic exploration that is accessible to individuals regardless of their musical background. In addition to experimental soundscapes, he often draws inspiration from geometry and natural patterns to create audiovisual compositions and multimedia installations. He releases music under the moniker Jarz0 as well as with the groups Obstacle and Custom Scenario. He also collaborates with NonCoreProjector to reveal the interconnectedness of data in both logical and seemingly illogical ways.
Interview with Jarzombek
UL: Your work in Sonic Alchemy represents your genesis NFT collection. What excites you most about this new venture?
EJ: I’m excited by the medium’s potential when it comes to building interactive elements into artworks. I’m also looking forward to sharing artefacts of my work, which can often be somewhat transient because of that interactivity. This piece is a bit of both – I created an interactive system and then recorded myself using it to produce the output.
UL: What appeals to you most about using computers, algorithms and generative processes to generate music? What led you to incorporate visuals into your sonic explorations?
EJ: Making fun musical experiments was why I got into programming. I wanted to build toolkits for creating music in new ways, regardless of one’s experience level. I gravitate towards interfaces that encourage experimentation with some kind of visual metaphor that represents musical creation.
I find that the best generative processes feel like a collaboration between me and the machine. Algorithms augment how we arrive at new ideas, and this can expand creative possibilities. It’s nice to not have to make all of the decisions yourself, but also important to maintain your voice. For this project I handcrafted the instruments and sonic palette that the system would then use in its composition.
This piece initially took shape as a visual composition. I wanted to see what would happen if a generative sound algorithm could mirror the principles of the visual algorithm. This resulted in a direct link between the music and imagery where the image controls the sound.
UL: Your work is often exploratory and experimental in nature; how did you come up with the idea for Traversals, can you speak about the software you used?
EJ: For this piece I built a system that turns random stock photos into landscapes and soundscapes. I wanted to see how differences in imagery could translate into differences in music. My goal was to reach a point where it seemed like the algorithm had a mind of its own, and could fully explore the sonic palette provided to it. I enjoy finding multiple ways to interpret the same data, and in this piece the image data drives all visual and sonic variations. Some of these transformations are direct while others are quite circuitous. This process of using one medium to create another results in emergent patterns that might otherwise be impossible to reach.
The visuals operate by painting strokes of colours based on the original image. Starting at their original position, they move, expand, and contract based on the properties of colour (for example brighter colours move faster) as well as the user’s mouse position.
To generate sound, some of these brush paths are selected as sonic controllers, each mapped to a certain sound or instrument based on their starting colour. As they traverse the image in two dimensions, they move over pixels from the original image. The pixel data is transformed in real time into notes, effects, and timbral adjustments.
In addition, the image’s dominant colours are interpreted as looped sequences of notes, whose presence in the mix is determined by another set of traversing brush paths.
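The core of this pipeline, pixel data becoming notes, can be sketched in miniature. The mapping below is an assumption for illustration only (the scale, the brightness-to-pitch rule and the use of the red channel for velocity are invented, not Jarzombek's actual system): each pixel a brush path visits is translated into a MIDI note and velocity.

```python
# A minimal, hypothetical sketch of translating pixels into notes: as a
# brush path crosses the image, brightness picks a degree of a handcrafted
# scale and one colour channel sets how hard the note is struck.

SCALE = [48, 50, 53, 55, 58, 60, 62, 65]  # illustrative handcrafted palette

def pixel_to_note(r, g, b):
    """Translate one RGB pixel (0-255 channels) into (midi_note, velocity)."""
    brightness = (r + g + b) / (3 * 255)                  # 0.0 .. 1.0
    degree = min(int(brightness * len(SCALE)), len(SCALE) - 1)
    velocity = 30 + int((r / 255) * 97)                   # 30 .. 127
    return SCALE[degree], velocity

def traverse(path_pixels):
    """Turn the pixels under one brush path into a note sequence."""
    return [pixel_to_note(*px) for px in path_pixels]

# Three pixels a brush path might cross: white, black, a muddy orange.
notes = traverse([(255, 255, 255), (0, 0, 0), (128, 64, 32)])
```

A real-time system would run a mapping like this continuously as the brush paths move, and route the resulting notes into synthesizers rather than returning them as a list.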
The code runs in the browser and includes a custom synthesizer that I made with RNBO, the Max/MSP tool for building patches for multiple platforms.
Jarzombek’s Musical Influences
Floating Against Time by Wata Igarashi
Bird Strike by Sky H1
Pearls Scattered by Christina Chatfield
Everydaywehustlin by Emeka Ogboh
Cryptochrome by John Tejada
Installation by Pangaea
Love Invaders by Fatima Yamaha
No Furniture/Tanagra by Time Wharp
Prayer by Tujiko Noriko
Sin dones by Juana Molina
The Hut by Waldo’s Gift
Undergrowth by Squid
Solemn by Space Afrika
Energy by Ikonika
Melbourne Bolera by Talaboman, Axel Boman & John Talabot
Destrozar el Mundo para Levantar Otro Más Verdadero by El príncipe idiota
Twitter Space
To celebrate Jarzombek’s first ever NFT collection, we hosted a Twitter Space where we spoke with the artist about his background in software development and how he combined this with his love for art to generate the works for this show. We also took a moment to listen to the songs that inspire him.