Companies are using bizarre methods to create algorithms that automatically detect weapons. AI ethicists worry the systems will lead to more police violence.
Documents reveal Lockport Schools' facial recognition tech has mistaken broom handles for guns and has misidentified Black students at much higher rates.
Michael Oliver is the second Black man known to have been wrongfully arrested by Detroit police because of facial recognition technology—and his lawyers suspect there are many more.
Records obtained by Motherboard show the police department used sub-par images in almost half of its facial recognition searches, increasing the chance of misidentifying suspects.
Silicon Valley has made billions of dollars empowering the police by pitching surveillance and data analysis technology as unbiased. It’s not.
Technologists from MIT, Harvard, and Google say research claiming to predict crime based on human faces creates a "tech-to-prison pipeline" that reinforces racist policing.
Momus Analytics' predictive scoring system is using race to grade potential jurors on vague qualities like "leadership" and "personal responsibility."
After the ACLU said a community college in Michigan was violating its students’ First Amendment rights, the school partially relented.
Williams, whose long tenure at Fox News included several years at 'Fox and Friends,' will work on editorial video strategy at the social network.
A new national campaign wants to stop facial recognition from invading U.S. college campuses.
A series of studies argues that by using costs as a proxy for health, risk algorithms ignore racial inequalities in healthcare access.
To fight gender bias, researchers are training language-processing algorithms to envision a world where such bias doesn’t exist.