Latent Dresses in Motion, 2024
Stable Diffusion, StyleGAN2
The video above is only an excerpt. Get in touch for a link to the full work.
A view of Latent Dresses in Motion from outside the Mono Gallery in Tunis.
A frame from the video work: an AI-generated white woman in a grey dress on a grey background.
A frame from the video work: an AI-generated white woman in a mottled blue dress on a pink background.
A side view of Latent Dresses in Motion inside the Mono Gallery in Tunis.
A frame from the video work: an AI-generated white woman in a pink dress on a grey background.
A frame from the video work: an AI-generated white woman in a white dress on a background with weird artefacts.
A frame from the video work: an AI-generated white woman in a red dress on a distorted city background.
About
"Latent Dresses in Motion" is an AI generated video work, which considers algorithmic biases through the lens of AI fashion.
Through a distorted aesthetic, the work exposes how biases in AI systems come to the fore in representations of bodies and garments. Latent Dresses in Motion is created by first generating hundreds of AI images of women in dresses with Stable Diffusion. These images are then fed through StyleGAN2, another AI model, which summarizes the Stable Diffusion images in video form.
The work highlights how the statistical methods embedded in AI systems exaggerate and perpetuate stereotypical representations of bodies through the lens of fashion. In the work, we see how both the bodies and the dresses they wear are strikingly uniform. By running the images through not one but two different AI models, Latent Dresses in Motion achieves a kind of hyperfiltering in which only the most generalizable representations are allowed to exist. The video thereby speaks to and criticizes racial, bodily and gendered AI representations.
The work was initially created as a collaboration with the fashion studio Sabot called The Latent Dress Project. Through conversations about gender and AI, we conceptualized the video work together. The work was then shown alongside a dress made by Sabot at Copenhagen's Morph Studios, with exhibition design by Possible Scenarios. To accompany the exhibition, we released a lookbook that juxtaposes Sabot's pieces with AI-generated approximations. I also wrote a short essay about crafting with AI, which can be read here.
The underlying image dataset was generated in Stable Diffusion using the Epic Realism pure Evolution v5 model.
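As a rough illustration, a dataset like this could be produced with Hugging Face's diffusers library and a locally downloaded Epic Realism checkpoint. This is a minimal sketch under those assumptions; the checkpoint filename, prompt and sampling settings are placeholders, not the exact configuration used for the work.

```python
# Illustrative sketch: generating a dataset of dress images with diffusers.
# The checkpoint filename, prompt and settings below are assumptions, not
# the exact configuration used for Latent Dresses in Motion.
import os
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical local path to the Epic Realism pureEvolution v5 checkpoint.
pipe = StableDiffusionPipeline.from_single_file(
    "epicrealism_pureEvolutionV5.safetensors",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "full body photo of a woman wearing a dress, studio background"

# Generate a few hundred images to serve as a StyleGAN2 training set.
os.makedirs("dataset", exist_ok=True)
for i in range(300):
    image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
    image.save(f"dataset/dress_{i:04d}.png")
```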
"Latent Dresses In Motion" uses a modified version of Derrick Schultz' StyleGAN2 setup.
See more
Read my accompanying essay on crafting with AI

Exhibitions
- Solo exhibition. Mono Gallery, Tunis, Tunisia.
- Trio exhibition. Morph Studios, Copenhagen, Denmark.