
We Asked an AI to Draw a Self-Portrait

Pay no attention to the machine learning algorithm behind the curtain.
Janus Rose
New York, US
Images generated by DALL-E Mini

DALL-E, the AI system that generates images from text prompts, has captured the internet’s imagination over the past few months. Literally.

Created by OpenAI, DALL-E is the latest in a series of tools that seem to tap into the internet’s subconscious, using massive datasets of text and images to parse and reproduce human language with uncanny accuracy. The system uses a machine learning model with billions of individual parameters to illustrate whatever phrases you feed into it, resulting in bizarre and often shockingly realistic renderings—though frequently laced with predictably racist and sexist tropes.


But while access to DALL-E is currently offered only to a select list of artists and researchers, open-source AI systems that attempt to replicate OpenAI’s model have recently sprouted up, allowing anyone to try their hand at human-machine artistic collaboration.

One model in particular, called DALL-E Mini, has practically achieved meme status over the past week. Hosted on the AI repository Hugging Face, the demo has drawn such a massive volume of users that requests can take a long time to complete, as social feeds fill with images generated from all kinds of absurd prompts. (“Gender reveal 9/11” and “Aileen Wuornos on Drag Race” are among the many deranged highlights.)
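For the curious, the model’s weights are also published on the Hugging Face Hub, so you don’t strictly need the web demo to experiment. The snippet below is a minimal sketch of how one might request an image programmatically through Hugging Face’s hosted Inference API; the model ID, access token placeholder, and output filename here are illustrative assumptions, and the public demo itself runs on a separate backend rather than this endpoint.

```python
# Rough sketch: request an image from a text-to-image model on the Hugging Face Hub
# via the hosted Inference API. The model ID, token placeholder, and output filename
# are assumptions for illustration only.
import requests

API_URL = "https://api-inference.huggingface.co/models/dalle-mini/dalle-mini"  # assumed model ID
HEADERS = {"Authorization": "Bearer YOUR_HF_TOKEN"}  # replace with your own API token


def generate_image(prompt: str) -> bytes:
    """Send a text prompt to the endpoint and return the raw image bytes."""
    response = requests.post(API_URL, headers=HEADERS, json={"inputs": prompt})
    response.raise_for_status()  # surface queueing or model-loading errors from the API
    return response.content


if __name__ == "__main__":
    image_bytes = generate_image("a portrait of DALL-E")
    with open("dalle_portrait.png", "wb") as f:
        f.write(image_bytes)
```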

Given that so many humans are now collaborating with AI models to make art, I felt it was only fair to ask the AI to reveal itself in a self-portrait.

The results were… mixed. Depending on the prompt, DALL-E Mini sees itself as some sort of seabird, a goat-like creature, or a mysterious orb that resembles a microscopic organism—among many other bizarre mutations.

Images produced by DALL-E Mini showing a white goat-like creature, a seabird with multiple beaks, and a cellular green orb
Images produced by DALL-E Mini showing polygonal Nintendo 64 graphics, a dragon-like orb creature, and distorted photos of Obama shaking hands with himself.

Prompts attempting to get DALL-E Mini to draw itself. Top (L-R): "a portrait of DALL-E," "Photo of DALL-E Looking at the camera," "DALL-E reveals itself." Bottom (L-R): "DALL-E in Nintendo 64," "DALL-E's true form," "DALL-E shaking hands with Obama"


A "self-portrait" of DALL-E as a old man with a glasses and a unibeard.

It should be noted that DALL-E Mini is not the same as OpenAI’s DALL-E system, and the results are typically not as good due to significant differences in the model’s size, datasets, and training. But its authors indicate that operating on a smaller scale was a primary goal of the project. 

“We show we can achieve impressive results (albeit of a lower quality) while being limited to much smaller hardware resources,” DALL-E Mini’s authors wrote in the project’s technical description. “By simplifying the architecture and model memory requirements, as well as leveraging open-source code and pre-trained models available, we were able to satisfy a tight timeline.”

The project’s description indicates that the model is still being trained, and a more advanced version, called DALL-E Mega, is also available for download—though not as conveniently accessible as the version hosted on Hugging Face.

Still, with OpenAI’s DALL-E in closed beta testing, projects like DALL-E Mini are giving many people their first taste of human-AI artistic collaboration—and maybe a glimpse into the future of the art world as we know it.