This essay was first written for the lookbook publication accompanying the Latent Dress Project, an interdisciplinary collaboration with Sabot and Possible Scenarios.
Working with artificial intelligence means flattening the world. Putting things into boxes, categories and words. Removing nuance, and letting go. Leaving it up to the algorithm to reveal what it thinks about how the world is put together, and then working from there to add nuance back in. At best, this can be a happy collaboration with interesting surprises and serendipitous encounters, but if you're not careful, the algorithm will trick you into reinforcing harmful, boring, useless stereotypes. The AI images in this publication reveal this process to you, so you can see for yourself what it means to work with and against a statistical algorithm.
For this booklet, we have created distorted versions of Sabot's pieces using the image generator Stable Diffusion. In juxtaposition to photos of the real, handmade garments on actual people, they show us how image generators tend to reinforce and recreate stereotypes. The generator breaks images down into the terms that have been used to describe them, but it always resorts to the most common descriptor. Yet Sabot's garments do not always fit neatly into the established mainstream categories. They defy genres, and break expectations.
So how do we recreate their works in a piece of software built to think of everything as belonging to a genre? Well, we don't, because we can't and we don't want to. Instead, we want to investigate the difference between an AI-driven design process and a tactile, material, intuitive process like Sabot's. Our goal is to show how image generators distort reality by default, and that using them in interesting, ethical ways takes a lot of work and knowledge. If left on its own, the algorithm boils down what we see into what is most commonly seen in English-language media. If left on its own, it shows us that dresses only belong on feminine people, that models are slender and that people do not have dark skin.
With each image, you see the almost poem-like prompt we used to generate it. Near the end of every prompt, you will see the words "unrealistic", "deformed" and "weird hands". These are there to make the images look more explicitly like they are algorithmically constructed. By showing you the prompt and enhancing the artificiality of the pictures, we aim to show you how the sausage is made. We do this because we believe it matters where the things we consume come from. Finally, at the very end of the prompts you see the term <lora:muzzy_head_lora:1.1>, which anonymizes all the models by blurring their faces. This adds to their AI-ness, but it also makes sure we don't accidentally steal someone's identity without consent.
With all of this in mind, we hope you will read, view, peruse and study this lookbook as a reflection on the importance of craft, tools, methodology and politics.
—