We Asked an AI to Draw a Self-Portrait

Pay no attention to the machine learning algorithm behind the curtain.

DALL-E, the AI system that generates images from text prompts, has captured the internet’s imagination over the past few months. Literally.

Created by OpenAI, DALL-E is the latest in a series of tools that seem to tap into the internet’s subconscious, using massive datasets of text and images to parse and reproduce human language with uncanny accuracy. The system uses a machine learning model with billions of individual parameters to illustrate whatever phrase you feed into it, producing bizarre and often shockingly realistic renderings, though ones that frequently fall back on predictably racist and sexist tropes.


But while access to DALL-E is currently offered only to a select list of artists and researchers, open-source AI systems that attempt to replicate OpenAI’s model have recently sprung up, allowing anyone to try their hand at human-machine artistic collaboration.

One model in particular, called DALL-E Mini, has practically achieved meme status over the past week. Hosted on the AI model repository Hugging Face, the demo has drawn such a massive volume of users that requests can take a long time to complete, as social feeds fill with images generated from all kinds of absurd prompts. (“Gender reveal 9/11” and “Aileen Wuornos on Drag Race” are among the many deranged highlights.)

With so many humans now collaborating with AI models to make art, I felt it was only fair to ask the AI to reveal itself in a self-portrait.

The results were… mixed. Depending on the prompt, DALL-E Mini sees itself as some sort of seabird, a goat-like creature, or a mysterious orb that resembles a microscopic organism, among many other bizarre mutations.

Prompts attempting to get DALL-E Mini to draw itself. Top (L-R): "a portrait of DALL-E," "Photo of DALL-E Looking at the camera," "DALL-E reveals itself." Bottom (L-R): "DALL-E in Nintendo 64," "DALL-E's true form," "DALL-E shaking hands with Obama"

A "self-portrait" of DALL-E as a old man with a glasses and a unibeard.

It should be noted that DALL-E Mini is not the same as OpenAI’s DALL-E system, and the results are typically not as good due to significant differences in the model’s size, datasets, and training. But its authors indicate that operating on a smaller scale was a primary goal of the project. 

“We show we can achieve impressive results (albeit of a lower quality) while being limited to much smaller hardware resources,” DALL-E Mini’s authors wrote in the project’s technical description. “By simplifying the architecture and model memory requirements, as well as leveraging open-source code and pre-trained models available, we were able to satisfy a tight timeline.”

The project’s description indicates that the model is still being trained, and a more advanced version, called DALL-E Mega, is also available for download—though not as conveniently accessible as the version hosted on Hugging Face.
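For readers who want to try the downloadable route, here is a minimal sketch of one way to fetch the published checkpoint files using the huggingface_hub library. The exact repo id ("dalle-mini/dalle-mega") is an assumption to verify against the project’s model card, and generating images still requires the project’s own inference code.

```python
# Minimal sketch: downloading the DALL-E Mega checkpoint files from the Hugging Face Hub.
# Assumes `huggingface_hub` is installed (pip install huggingface_hub) and that the weights
# are published under the "dalle-mini/dalle-mega" repo id; check the project's model card
# for the current name.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="dalle-mini/dalle-mega")
print(f"Checkpoint files saved to: {local_dir}")

# Actually generating images from these files still requires the dalle-mini project's own
# inference code, which is what makes this route less convenient than the hosted demo.
```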

Still, with OpenAI’s DALL-E remaining in closed beta testing, projects like DALL-E Mini are giving many people their first taste of human-AI artistic collaboration, and maybe a glimpse into the future of the art world as we know it.
