The site reminds us of the limitations of AI: how AI-generated images can be stiff and easy to spot, but also biased in many ways.
Police forensics is already plagued by human biases. Experts say AI will make it even worse.
Image diffusion models can regurgitate their training data, raising a number of privacy and copyright risks.
Generative AI tools like DALL-E and Stable Diffusion are changing how instructors teach their students, but many instructors won't ban them outright.
Get ready to swipe right on some AI-generated profile pics.
The Stable Diffusion Explorer shows how words like "assertive" and "gentle" are mapped to sexist stereotypes.
The RIAA says AI music generators are trained on copyrighted music, much as popular image generators are trained on billions of images created by artists.
In one stark example of how sensitive images can end up powering these AI tools, a user found a medical image in the LAION dataset, which was used to train Stable Diffusion and Google’s Imagen.
AI is progressing at an astonishing speed, but we still have little understanding of the datasets that power it, and little accountability for whatever abusive images they contain.