DALL-E Is Now Generating Realistic Faces of Fake People

Things are about to get really weird in the world of image-generating AI.
Janus Rose
New York, US
Images of AI-generated faces side by side. Image credit: Patrick Clair on Twitter

OpenAI’s machine learning tool DALL-E has drawn a lot of buzz lately for its ability to generate bizarrely specific images from text prompts. Now, after a recent change to the AI model’s internal use policies, OpenAI is allowing researchers to share generated images of photorealistic human faces belonging to nonexistent people.

According to an email sent to DALL-E testers on Tuesday, users are now allowed to share realistic face photos created by the system after developers put in place safeguards designed to prevent the creation of deepfake images.

“This is due to two new safety measures designed to minimize the risk of DALL·E being used to create deceptive content,” reads the email, which was shared with Motherboard. Specifically, the system now automatically “rejects attempts to create the likeness of any public figures, including celebrities,” and also blocks users from uploading images of human faces in order to generate similar faces. Previously, the system’s safeguards only prevented users from making images of political figures.

Researchers have already begun sharing some early examples, and the results are… well, pretty weird.

DALL-E is still in closed testing, and OpenAI normally keeps a tight lid on what types of generated results testers can share publicly. Meanwhile, smaller-scale volunteer projects like DALL-E Mini have already given the general public the ability to create AI-generated images and memes from text prompts—albeit with much lower-quality results.

Needless to say, the ability to generate realistic human faces raises all kinds of ethical questions, even if those faces don’t belong to real people.

AI ethics researchers have warned that massive-scale machine learning systems like DALL-E can harm marginalized people through deeply embedded biases that can’t be easily engineered out. OpenAI’s own researchers have also admitted that DALL-E frequently reproduces racist and sexist stereotypes when certain words are included in text prompts. And other AI models from Facebook and Google haven’t fared much better.

While AI engineers say they’re doing their best to create safeguards that prevent abuse, it’s likely we’ve only just begun to see what large AI systems like DALL-E are capable of—and what types of harm they might cause.