Trump Wants to Make It Basically Impossible to Sue for Algorithmic Discrimination

A new rule would make it easier for businesses to discriminate without consequence. That’s the point.
Ben Carson, Secretary of Housing and Urban Development, testifying before Congress. Image: Win McNamee / Staff

Artificial intelligence experts have long warned that algorithmic decision-making systems can be used to discriminate. Now, a leaked memo reveals that a new rule proposed by the Department of Housing and Urban Development (HUD) would make it much easier for banks, insurance companies, and landlords to use discriminatory algorithms without the threat of lawsuits.

The new rule takes aim at a 2015 Supreme Court ruling, which held that consumers could combat discriminatory housing practices by bringing "disparate-impact claims" under the Fair Housing Act of 1968. In a disparate-impact claim, if you find out that a business practice had a disproportionate effect on certain groups of people, you can hold that business liable—even if the effect was unintended. Claimants, then, could not only challenge discriminatory housing practices but also use statistical analysis of past behavior to prove that the discrimination was happening and was harming certain groups.

If, for example, you were to find out that housing loan applications from black applicants were rejected at a far higher rate than applications from white applicants in the same socioeconomic category, then a disparate-impact claim could be made. The same would apply if a group of black people in the same socioeconomic category as a group of white people were consistently charged higher interest rates, higher rents, or higher insurance premiums.
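To make the statistical reasoning concrete, here is a minimal sketch of how a disparate-impact analysis might compare outcome rates across groups. The data are invented, and the 80-percent threshold is an assumption borrowed from the "four-fifths" rule of thumb used in employment-discrimination analysis, not a standard taken from the memo or the ruling.

```python
# Illustrative sketch (invented data): comparing loan-approval rates
# across groups to flag a possible disparate impact.
from collections import Counter

# Hypothetical application outcomes: (group, approved?)
applications = [
    ("white", True), ("white", True), ("white", True), ("white", False),
    ("black", True), ("black", False), ("black", False), ("black", False),
]

totals = Counter(group for group, _ in applications)
approvals = Counter(group for group, approved in applications if approved)

rates = {group: approvals[group] / totals[group] for group in totals}
print(rates)  # e.g. {'white': 0.75, 'black': 0.25}

# "Four-fifths" rule of thumb (an assumption for this example): flag a
# practice if one group's approval rate falls below 80% of the
# most-favored group's rate.
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"Possible disparate impact against {group}: "
              f"approval rate {rate:.0%} vs. {best:.0%}")
```

The point of the analysis is that no one has to prove what the lender was thinking; the pattern of outcomes is itself the evidence.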

HUD’s new rule would throw all that out the window by introducing huge loopholes to shield businesses from liability when their algorithms are accused of bias. As Reveal News reported, “A hypothetical bank that rejected every loan application filed by African Americans and approved every one filed by white people, for example, would need to prove only that race or a proxy for it was not used directly in constructing its computer model.” But there is substantial evidence to show that racial bias is fundamentally baked into the way that these algorithms and their data sets are constructed, even if they don’t specifically take race into account.
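As a rough illustration of how that can happen, the toy example below never consults race at all, yet it still rejects every black applicant, because its decision rule keys on a ZIP code shaped by residential segregation. The applicants, ZIP codes, and decision rule are all invented for the example.

```python
# Toy illustration (invented data): a decision rule that never looks at race
# can still produce racially skewed outcomes if it keys on a correlated
# proxy, such as a ZIP code shaped by residential segregation.
applicants = [
    {"race": "black", "zip": "60644", "income": 60_000},
    {"race": "black", "zip": "60644", "income": 72_000},
    {"race": "white", "zip": "60614", "income": 61_000},
    {"race": "white", "zip": "60614", "income": 70_000},
]

PREFERRED_ZIPS = {"60614"}  # hypothetical "low-risk" area used as a proxy

def approve(applicant):
    # Race is never consulted; only the ZIP-code proxy and income are.
    return applicant["zip"] in PREFERRED_ZIPS and applicant["income"] > 50_000

for a in applicants:
    print(a["race"], a["zip"], "approved" if approve(a) else "rejected")

# Every white applicant is approved and every black applicant is rejected,
# even though incomes are comparable and race was never an input.
```

Under the proposed rule, showing that race was not "used directly" in such a model could be enough to defeat a claim, even though the proxy does the discriminatory work.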

Another loophole would allow a business to defeat a disparate-impact claim by showing that the algorithm in question was created by a third party, or that it was vetted by a “neutral third party.”

Os Keyes, a doctoral candidate at the University of Washington who studies algorithmic bias, told Motherboard that “the new HUD rule would be laughable if its consequences weren't so horrifying.”

“It sets an impossible and contradictory burden for victims of discrimination—to show not only that the outcomes of algorithms are racially discriminatory, but that the algorithm isn't predicting a ‘valid objective,’” they said. A “valid objective” here means some metric a business would need in order to make a decision about a loan, an interest rate, an insurance premium, or rent.

In the leaked memo, one example given to justify the new rule is credit ratings, because they are ostensibly based on creditworthiness rather than race, gender, sexuality, or another protected characteristic. But Keyes said the problem is that “credit ratings are racist. Because they're dependent on family credit (which is dependent on intergenerational wealth), income (which is racially biased) and location (which is racially charged thanks to the continued segregation in the United States), credit ratings look very different for African-Americans, Native Americans, and other marginalised populations.”

The proposed rule may come as a surprise given that just this March, HUD announced it was suing Facebook over algorithms that, intentionally and unintentionally, let advertisers discriminate in deciding who saw their housing advertisements.

As real estate moguls, Trump and his father were both sued by the federal government for violating the Fair Housing Act, and gutting the law would further institutionalize that kind of discrimination in the housing market. The risk is especially acute with machine learning systems, which are often black boxes created by private companies that offer little or no transparency into how their algorithms make decisions.

Painting algorithms as objective and neutral decision-making tools might suffice for now, even though they are fundamentally flawed instruments. At the end of the day, they are tools that carry the biases and prejudices of their creators and of their society.