A Proposed Trump Administration Rule Could Let Lenders Discriminate Through A.I.

The Fair Housing Act could be crippled by a new interpretation that would allow tech companies to sell biased algorithms — and get away with it

Dave Gershgorn
Published in OneZero
Aug 12, 2019


Photo illustration by Tessa Modi. Photo: HUD website

A new interpretation of a more than 50-year-old housing law by the Trump administration might encourage the use of biased algorithms in the housing industry, while shielding banks and real estate firms from the lawsuits that could result.

According to documents published last week by the investigative reporting outfit Reveal, the Department of Housing and Urban Development (HUD) is considering a new rule that would alter its interpretation of the Fair Housing Act, a 1968 law ushered through Congress in the midst of the civil rights movement that shields protected classes from discrimination. The proposal, which has not been made available to the public, reportedly includes language that would protect companies that use third-party algorithms to process housing or loan applications by providing a specific framework for how they can “defeat” bias claims.

A spokesperson for HUD says that the proposal was submitted to Congress for a required review prior to publication.

“Upon the completion of this review period (very soon), we’ll be publishing that rule in the Federal Register,” the spokesperson says. “Until then, we are limited in what we can say publicly as Congress continues its review.”

Meanwhile, experts who spoke to OneZero say the proposal misunderstands how artificial intelligence might discriminate against people in the housing market, and that the change would eliminate incentives for companies to audit their algorithms for bias.

Currently, the Fair Housing Act protects what are known as protected classes against "disparate impact," meaning you can't be denied a loan or a housing application because of your race, sex, religion, or other protected status. The proposed update would make it more difficult for people to prove disparate impact resulting from automated programs, as opposed to human judgment. Plaintiffs alleging algorithmic bias in a lawsuit would have to prove five separate elements to make a successful case against a…

