large language models
The collections industry is pushing GPT-4 as a dystopian new way to make borrowers pay up, replicating the debt system’s long history of racial bias.
Mental health experts worry the high cost of healthcare is driving more people to confide in OpenAI's chatbot, which often reproduces harmful biases.
Poets say rumors of their obsolescence at the hands of AI have been greatly exaggerated.
Tech charlatans and U.S. Senators are now spreading misinformation about predictive AI tools, which are not sentient.
Students at Vanderbilt University called their school's use of the AI tool "disgusting" after a school shooting killed three and injured five in Michigan last week.
The inability to deal with heavy topics like death is another reminder that “AI” chatbots are just glitchy funhouse mirrors.
As generative language models become more accessible, scale more easily, and produce more persuasive text, they could be used to spread disinformation.
Generative tools like DALL-E and Stable Diffusion are changing how instructors teach their students, but many won't ban them outright.
OpenAI’s chatbot could help automate the murky business of corporate political influence, but that wouldn't necessarily be a good thing.
One of the internet’s largest coding resources has temporarily banned the AI chatbot after users answered programming questions with its responses.