Interview
Continuing Unit Digital’s series Beyond the Code, we speak with artist Ivona Tau about transforming code into cyclical narratives and creating visual outputs so rich they feel practically tangible.
Unit Digital: What imagery went into After Us (After Nondescriptives)? What have you found yourself photographing or pulling content from? Do you have a standardised process you follow each time?
Ivona Tau: I definitely don’t have a standardised practice; it varies from project to project. For After Us (After Nondescriptives), I was particularly interested in exploring the spaces between different text-to-image model representations. I wanted to find the in-between of two different generative concepts: something very natural, like tree leaves and intertwined branches, and something more synthetic, like urban landscapes and buildings.
In 2022, I generated images using state-of-the-art text-to-image techniques available at the time. Although limited, these techniques could already capture good representations of simple subjects like urban landscapes, abandoned factories, and organic tree structures. I then trained another model on top of these generated images to explore nonexistent architectural plants and organic, abandoned landscape architectures, producing a mix of the two in each case.
This project differed from my usual work because I didn’t start with my photographs. Instead, I began with text-to-image outputs and used an AI model trained on my urban night photography. I was curious to see how AI could generate images that are not easily described by text alone, capturing those that live between easily representable concepts.
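For readers curious about the mechanics of that first step, a rough sketch of generating a synthetic training set with an off-the-shelf text-to-image model might look like this; the model ID, prompts, and folder layout are illustrative assumptions, not Tau’s actual setup:

```python
import torch
from pathlib import Path
from diffusers import StableDiffusionPipeline

# Illustrative model choice; any 2022-era text-to-image checkpoint would do.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Hypothetical prompts for the two concepts being blended later.
prompts = {
    "branches": "intertwined tree branches and leaves at night, organic structures",
    "urban": "abandoned factory interior, dark urban landscape",
}

# Generate a folder of images per concept; this synthetic set becomes the
# training data for the second model rather than any original photographs.
for name, prompt in prompts.items():
    out_dir = Path("synthetic_dataset") / name
    out_dir.mkdir(parents=True, exist_ok=True)
    for i in range(50):  # samples per concept, chosen arbitrarily here
        image = pipe(prompt).images[0]
        image.save(out_dir / f"{i:03d}.png")
```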
UD: How does After Us fit into the chronology of your work?
IT: After Us is essentially a continuation of my Nondescriptives project, which was a generative series I created for an exhibition in Berlin for Bright Moments in 2022. Initially, I produced one hundred still images using the algorithm for that show. With After Us, I aimed to bring a more immersive experience, creating a flowing, animated piece that captures the constantly changing, surreal landscape. This animation also incorporates audiovisual elements like sound, which adds another layer to the experience. In this way, After Us serves as both a continuation and a culmination of my previous work, presented in a different format.
UD: Nondescriptives is based on a text-to-image model. In the past, you’ve used your own source images to train your models – can you speak to the difference between these methods and how your creative process changes between them?
IT: Using a text-to-image model for Nondescriptives was very different from my usual approach. Typically, I use my own source images for training, but in this case, I used synthetic training data. To maintain a style similar to my previous work, I ensured the visual palette and aesthetics aligned with my photographs. I trained my own AI model on synthetic and artificial data, fine-tuning from a pre-existing checkpoint trained on my own data. This process was trickier, especially in achieving a cohesive style.
Themes such as urban landscapes and dark colour palettes are distinctive elements often found in my photos. I had to work to make sure the results fit within my practice. However, the project’s goal was to explore existing text-to-image tools, justifying their use by discovering representations that cannot be easily expressed in words.
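A similarly hedged sketch of that second step, resuming a small GAN from an existing checkpoint and fine-tuning it on the synthetic folder, might look like the following; the toy architecture, checkpoint keys, paths, and hyperparameters are stand-ins rather than details of Tau’s pipeline:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Toy DCGAN-style generator and discriminator (16x16 output) standing in for
# whatever checkpointed model is actually being reused.
netG = nn.Sequential(
    nn.ConvTranspose2d(128, 256, 4, 1, 0), nn.ReLU(),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),
    nn.ConvTranspose2d(128, 3, 4, 2, 1), nn.Tanh(),
).to(device)
netD = nn.Sequential(
    nn.Conv2d(3, 128, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(256, 1, 4, 1, 0),
).to(device)

# Resume from a checkpoint previously trained on the artist's own photographs
# (the file name and dictionary keys are placeholders).
ckpt = torch.load("checkpoints/night_photography_gan.pt", map_location=device)
netG.load_state_dict(ckpt["generator"])
netD.load_state_dict(ckpt["discriminator"])

criterion = nn.BCEWithLogitsLoss()
optG = torch.optim.Adam(netG.parameters(), lr=2e-4, betas=(0.5, 0.999))
optD = torch.optim.Adam(netD.parameters(), lr=2e-4, betas=(0.5, 0.999))

transform = transforms.Compose([
    transforms.Resize(16), transforms.CenterCrop(16), transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),
])
data = datasets.ImageFolder("synthetic_dataset", transform=transform)
loader = DataLoader(data, batch_size=16, shuffle=True)

for epoch in range(5):  # a short fine-tune; the epoch count is arbitrary
    for real, _ in loader:
        real = real.to(device)
        z = torch.randn(real.size(0), 128, 1, 1, device=device)
        fake = netG(z)

        # Discriminator step: dataset images vs. generated samples.
        d_loss = (
            criterion(netD(real).view(-1), torch.ones(real.size(0), device=device))
            + criterion(netD(fake.detach()).view(-1), torch.zeros(real.size(0), device=device))
        )
        optD.zero_grad()
        d_loss.backward()
        optD.step()

        # Generator step: push samples toward the new dataset's look.
        g_loss = criterion(netD(fake).view(-1), torch.ones(real.size(0), device=device))
        optG.zero_grad()
        g_loss.backward()
        optG.step()
```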
UD: The genesis of this video was a still image. What do you think are the merits of video or still images? How does the form change the meaning of your work?
IT: The works in the Nondescriptives series were created with still images in mind, with a focus on generative textures that could be appreciated both on screen and in print. I aimed for these textures to appear organic and tactile, making viewers want to touch them, whether digitally or physically. This rough, organic feel was a key theme in that body of work.
When I started working on After Us, the concept shifted. While I used the same visual models and training data, my focus was different. Instead of exploring the in-between spaces of text-to-image models, I delved into the cyclical theme of nature vs civilisation. This cycle is best depicted through animation, highlighting the back-and-forth between human-made and organic elements. The piece raises questions about beginnings, endings, and what might happen after our civilisation decays. Will urban architecture be reclaimed by nature? What will nature look like in thousands or millions of years?
My goal was to provide perspective, showing humanity as a small part of the universe’s long history. The COVID-19 pandemic inspired this, a time when nature seemed to begin reclaiming spaces, as in the reports of dolphins returning to Venice. It highlighted the ongoing cycle of nature taking over when humans retreat. This interplay prompts reflection on whether we will cherish and protect nature or continue to harm it, knowing that nature will likely reclaim its territory after us.
UD: You’ve mentioned that you’d like your work to be shown on a bigger scale or in a more immersive setting. How do you think the consumption of your work changes its meaning and reception?
IT: With After Us, by creating the animated piece, I wanted to achieve a more immersive setting. This involves incorporating sound and visuals to transport the viewer into a different world that might exist centuries after us, encouraging them to reflect on humanity’s relationship with nature and the environment, as well as the parallels between urban and natural landscapes.
Often, we see these as separate, but they are not so different since both are made from organic materials. The piece aims to highlight this point through animations that flow in a meditative state, transitioning fluidly from one concept to another. When viewers experience this in a black room, surrounded by the work, their perception is very different from just scrolling through social media.
Creating such scenarios without distractions allows people to truly engage with the art. I enjoy the meditative state I enter when watching visuals morph and move, and I aim to evoke the same effect in viewers. Achieving this on a smaller scale is more challenging, but installations like the one I did with Project 22 facilitate this experience.
Immersive exhibitions have been popular for a while, though they’ve faced criticism for sometimes prioritising “Instagrammability” over quality. However, I believe the medium isn’t to blame; it’s about how we use it to create meaningful art.
UD: There has been a lot of new attention on AI-generated art in the past year with the accessibility of prompt-generated works and, now, videos. Do you feel that making work now is different from when you started? And how heavily do you think technological advances in the world of AI change your process and output?
IT: Yes, absolutely. The landscape has changed dramatically. Seven years ago, when I started working with AI creatively, I was met with a lot of curiosity. Now, when I say I’m working with AI, I’m often met with criticism, even before people learn anything about my practice. There are hateful comments about stealing from artists, etc.
With all the new tools available, there are many more possibilities. However, having more tools doesn’t always mean the new ones are better. I still feel a lot of sentiment and nostalgia for older AI tools, like GANs, or earlier text-to-image models, such as Stable Diffusion 1.5 versus the newer versions. I don’t use Midjourney at all because I’m not a fan of its hyper-realistic, photographic, or game-animation style.
For artists, it’s important not to focus too much on the tools. AI models are just tools in your toolkit. It’s crucial not to chase the next best thing but to focus on what you’re trying to create and what your goal is. In my practice, I often want to extract information from larger datasets to bring new insights, whether it’s from my personal photography, family archives, or collective internet archives. For this, training your own AI is often the best option.
Stylistically and aesthetically, it depends on what you want to achieve. I like the medium I call “motion painting,” which is a cyclic animation without narrative, focusing on a few key visuals that could live as a still work, but with movement added. This allows viewers to focus and enter a meditative state. For this kind of medium, I find that GANs still provide the most natural movement for what I want to achieve. This is different from animations created with tools like Runway.
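As one possible reading of how such a cyclic, non-narrative loop can be produced mechanically, the sketch below walks a closed path through a GAN’s latent space, easing between a few key latents and wrapping back to the start so the animation loops seamlessly; the latent size, frame counts, and generator interface are assumptions rather than details of Tau’s process:

```python
import math
import torch

def looping_latent_walk(generator, latent_dim=128, num_keyframes=4,
                        frames_per_segment=60, device="cpu"):
    """Yield frames for a seamless, cyclic animation from a GAN generator."""
    # A few key latents the loop passes through, akin to still "key visuals".
    keyframes = torch.randn(num_keyframes, latent_dim, 1, 1, device=device)
    with torch.no_grad():
        for k in range(num_keyframes):
            start = keyframes[k]
            end = keyframes[(k + 1) % num_keyframes]  # wrap around to close the loop
            for t in range(frames_per_segment):
                # Cosine easing slows the drift near each keyframe,
                # giving the motion a calmer, more meditative pace.
                a = 0.5 - 0.5 * math.cos(math.pi * t / frames_per_segment)
                z = (1 - a) * start + a * end
                yield generator(z.unsqueeze(0))  # one image tensor per frame
```

Called with any compatible generator, for example frames = list(looping_latent_walk(netG)) using the fine-tuned toy model sketched earlier, it yields one image tensor per frame of a loop that ends where it began.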
UD: Finally, do you have any unrealised projects or ambitions that you are passionate about?
IT: One of the areas I’m working on involves bringing more physicality into my artworks. Since animation is my main medium, it’s difficult to achieve this beyond installations or moving screens. However, I’m curious to expand my practice into the fully three-dimensional side of things, exploring sculpture or manual printing techniques that involve a tactile element.
Additionally, I’m interested in lenticular printing techniques. Although I’ve been experimenting with these in my studio, they’re not yet at the level I want them to be. My goal is to merge both the physical and digital, creating an interesting dialogue between pieces that exist in digital format and those in a more physical form.
The challenge is to create something that maintains the immersive, meditative quality of my animations while also engaging the viewer in a tangible, physical way. This is an area I’m passionate about and eager to explore further, aiming to close the gap between the digital and the physical in a meaningful and impactful manner.