Facebook AI Researchers Built a ‘Fashion Map’ With Your Social Media Photos

The project collected over 7 million photos across 37 cities to predict fashion trends by neighborhood, raising fears of algorithmic discrimination.
A person in a gray coat photographed on the street during NYC Fashion Week 2022. Photo: Jeremy Moeller / Getty Images

Artificial intelligence researchers—some of whom are affiliated with Facebook’s parent company Meta and Cornell University—used more than 7 million public, geolocated social media photos from Instagram and Flickr to construct what they’re calling an “underground fashion map” that spans 37 cities. The map can reveal groupings of people within a city, including areas that are the most “trendy” or “progressive,” and builds on an Amazon-funded AI tool called GeoStyle to forecast fashion trends, according to a press release about the research. 
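
At a high level, the technique amounts to embedding the clothing attributes visible in each geotagged photo and then clustering those embeddings within a city, so that spatial groupings of similar styles emerge. The snippet below is a minimal, hypothetical sketch of that grouping step in Python; the random stand-in data, the field names, and the choice of k-means are assumptions for illustration, not the researchers’ actual code or method.

```python
# Hypothetical sketch: group geotagged photos into "style clusters" within one
# city by clustering per-photo fashion-attribute embeddings. Illustrative only;
# this is not the GeoStyle or Facebook AI Research code.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)

n_photos = 1000
# Fake geotags scattered over a lat/lon bounding box (a slice of Manhattan).
coords = rng.uniform(low=[40.70, -74.02], high=[40.80, -73.93], size=(n_photos, 2))
# Stand-in for per-photo clothing-attribute embeddings from a vision model.
style_vecs = rng.normal(size=(n_photos, 16))

# Standardize coordinates so they don't swamp the unit-variance style features,
# then cluster on style plus down-weighted location; 0.5 is an arbitrary knob.
coords_z = (coords - coords.mean(axis=0)) / coords.std(axis=0)
features = np.hstack([style_vecs, 0.5 * coords_z])
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(features)

# Summarize each cluster by its geographic centroid, a rough stand-in for
# "which neighborhood this style grouping lives in."
for k in range(5):
    members = coords[labels == k]
    lat, lon = members.mean(axis=0)
    print(f"style cluster {k}: {len(members)} photos, centroid ({lat:.3f}, {lon:.3f})")
```

The sketch covers only the grouping step; per the press release, the trend-forecasting side builds on GeoStyle’s models of how each style’s popularity changes over time.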

“A person unfamiliar with a city could find out what neighborhoods might be suitable for them to visit, e.g., to satisfy interests in outdoor activities vs. shopping vs. tourist areas,” researchers wrote in a newly published report completed as part of an internship with Facebook AI Research. They also claim anthropologists could leverage the maps to infer trends within a city across time. 

The project’s affiliation with Facebook and Amazon raises larger questions about the unexpected ways tech companies use personal data, often without explicitly notifying users. 

Tamara Berg, co-author of the report and director of Meta AI—Facebook’s artificial intelligence research center—did not respond to Motherboard’s inquiry about Facebook’s potential use of the data, or whether Instagram and Flickr users are aware that their photos are being used to construct fashion maps. 

When contacted, a spokesperson for Cornell University first stated that “the researchers are not affiliated with, nor do they work with, Facebook,” but later acknowledged that several people involved in the project are Facebook employees, including one researcher who was a Facebook intern at the start of the project.

Study co-author Kavita Bala, professor and chair of computer science at Cornell University, has previously built products used by Facebook. In 2019, Facebook acquired GrokStyle, an AI-powered product-recognition startup that she co-founded. 

The obvious use case for artificial intelligence-assisted social mapping would be for Facebook and other companies to further refine demographically and geographically targeted ads, said Jathan Sadowski, a researcher and professor who studies the social dimensions of artificial intelligence, in an email to Motherboard. 

But the potential implications related to social mapping research go beyond targeted advertising for new clothes, he said. “My immediate concern is about the use of such data analysis by finance, insurance, and real estate sectors,” Sadowski told Motherboard. “These are industries with long histories of using ‘alternative data’ to inform decision-making and justify discriminatory decisions. Often as proxies for other protected categories like race, gender, and class.”

Facebook’s ad-targeting program has repeatedly come under fire for discriminatory practices. For years, Facebook’s advertising platform allowed companies to exclude some gendered and racialized groups from learning about housing and employment opportunities, according to lawsuits filed by the ACLU, the National Fair Housing Alliance, and other groups.

“As more people turn to the internet to find jobs, apartments, and loans, there is a real risk that ad targeting will replicate and even exacerbate existing racial and gender biases in society,” the ACLU wrote in a blog post about the settlement of those lawsuits. “Whether you call it weblining, algorithmic discrimination, or automated inequality, it’s now clear that the rise of big data and the highly personalized marketing it enables has led to these new forms of discrimination.”

While Facebook claims to have stopped explicitly allowing housing, employment, and credit advertisers to target users based on racial categories, researchers with the Brookings Institution found that Facebook’s algorithms still discriminated based on race and ethnicity as recently as last year.

In September 2020, the Trump administration’s Department of Housing and Urban Development (HUD) issued a final rule that relaxed the Fair Housing Act’s discriminatory effects standard, a change critics say legally justified discriminatory algorithmic practices. In June 2021, HUD proposed restoring the standard. Insurance companies, meanwhile, have long discriminated despite such regulations by cloaking risk classifications in statistics and mathematical models.

Sadowski said he could easily imagine financial tech or insurance tech companies using fashion map data to conclude that a particular fashion trend popular among young Black people correlates with financial riskiness or falling property values. 

“But unlike with redlining, the source of data and method of analysis has been AI laundered, making the proxy discrimination a few layers removed,” he said. “Thus plausible deniability—or an easy apology—to those who need it.”