Silicon Valley Is Bringing Back Racist Redlining

Autonomous vehicles could turn neighborhoods into no-go zones

Map of San Francisco redlining. Source: Mapping Inequality

In the 1930s, the federal government surveyed 239 cities across the United States to create mortgage-lending risk maps, dividing each city into four categories: green for the best areas, blue for areas that were “still desirable,” yellow for “declining” neighborhoods, and red for those deemed most risky, where getting a mortgage was nearly impossible.

The racism of the time was built into these rankings, as neighborhoods condemned by their red designation “were predominantly made up of African Americans, as well as Catholics, Jews, and immigrants from Asia and southern Europe” — almost any groups that were not whites originating from northern Europe.

Redlining was outlawed decades ago, but the problems it created are still with us. Two-thirds of neighborhoods redlined in the 1930s are still inhabited primarily by minorities, while 91% of those living in green areas remain middle- or upper-income and 85% of their residents are white. Gentrification is also fueled by redlining’s legacy: many people in areas once deemed undesirable still struggle to access credit, while wealthier people moving into those neighborhoods are given loans to buy and renovate properties.

As cities cement their place at the center of global economic activity and financial power, tech companies have become even more interested in embedding themselves in urban infrastructure through smart-city initiatives, new transport “solutions,” and new tools to target residents. But many of these technologies are not fully thought through, and could allow a new form of redlining to emerge unless regulators are proactive.

New tools for discrimination

Tech redlining is not just a theoretical conversation. Google has been heavily criticized by communities across the United States for renaming their neighborhoods on Google Maps without their knowledge or consent. Not only does this take power away from residents, but changing the name of a place can disconnect it from its past, and is often an important step in rebranding it for a gentrified future where original residents will eventually be priced out.

Even worse, tech companies are failing to account for how their ad platforms — an industry now dominated by Google and Facebook — could be used for discriminatory ends. Early this year, the Department of Housing and Urban Development filed charges against Facebook for allowing housing ads to exclude people based on their ethnicity, meaning their choices of housing would be curtailed. In effect, this meant advertisers could choose not to have their ads shown to black and Hispanic people, groups which have already been negatively impacted as a result of redlining and other practices.

Poor neighborhoods, which minorities are more likely to call home, also tend to have fewer transport options. To try to rectify that, the permit programs for electric scooter companies in many cities specifically task them with reaching underserved communities, but there have been plenty of complaints that they aren’t living up to their commitments.

Scoot, which was recently acquired by Bird and is one of two companies with a permit to operate in San Francisco, made the surprising move of drawing red lines around the Tenderloin district and Chinatown to stop users from dropping off scooters in those areas. Not only are those two neighborhoods poorer than other parts of the city, but they’re among seven communities Scoot is supposed to serve as a condition of its permit. Instead, it has locked them out.
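Mechanically, this kind of exclusion is just a geofence: the app checks the drop-off coordinate against a set of polygons and refuses to end a ride inside them. The sketch below is hypothetical — it is not Scoot’s actual code, and the zone coordinates are invented — but it shows the standard ray-casting point-in-polygon test behind such a check:

```python
# Minimal sketch of a drop-off geofence (hypothetical; not Scoot's real
# implementation). An excluded zone is a polygon of (x, y) vertices, and a
# ray-casting test decides whether a drop-off point falls inside it.

def point_in_polygon(point, polygon):
    """Return True if `point` lies inside `polygon` (ray-casting test)."""
    x, y = point
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Count crossings of a horizontal ray from `point` with edge (j, i).
        if (yi > y) != (yj > y):
            x_cross = xi + (y - yi) * (xj - xi) / (yj - yi)
            if x < x_cross:
                inside = not inside
        j = i
    return inside

# A toy rectangular "no drop-off" zone.
excluded_zone = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]

def drop_off_allowed(point, zones):
    """A ride may end here only if the point is outside every excluded zone."""
    return not any(point_in_polygon(point, z) for z in zones)
```

The point is how little machinery is involved: a handful of polygon vertices, chosen by the operator, is all it takes to wall off a neighborhood.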

Uber has been accused of similar practices. After a major study by MIT, Stanford, and the University of Washington in 2016 showed that black customers faced longer wait times and more frequent cancellations than non-black customers, the Rideshare Guy explained that Uber had bonus areas for drivers, and that in Los Angeles the zones lined up pretty closely with the city’s racial demographics, thus incentivizing drivers to serve predominantly white areas. The company’s discriminatory practices were further confirmed during the 2017 trial over Uber’s theft of Waymo’s trade secrets when former security employee Richard Jacobs testified that Uber would identify “high threat areas where crime takes place” and refuse to operate in them, in an effort to lower operating costs.

These are just a few examples of a larger trend of discriminatory tech products that may not always be explicitly designed to exclude certain groups, but have that effect regardless. And if technology is further integrated into the urban fabric and self-driving cars become as ubiquitous as some companies have claimed they will, these problems could get worse.

The malware threat to connected vehicles

Autonomous vehicles are supposed to make personal transportation much safer by removing the risk of human error, but they need to communicate with one another to pre-plan routes and avoid accidents, sending data back and forth between vehicles and infrastructure. But what happens when a vehicle or piece of infrastructure is infected with malware that can be spread as it communicates with other pieces of the connected transport network?

In a paper published last year, Evan W. Vassallo and Kevin Manaugh describe how malware could be spread through a fleet of autonomous vehicles and how the risk of being infected with malware could be higher in certain areas if specific manufacturers or software systems are targeted. The vehicles themselves could then take the malware risk into account when planning their routes to avoid areas where the risk of getting infected is higher, turning certain regions into no-go zones.
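The dynamic they describe can be made concrete with a toy route planner. In the sketch below (all names and numbers are invented, not taken from the paper), each road segment carries a travel time and an estimated malware-exposure risk, and Dijkstra’s algorithm minimizes time plus a weighted risk penalty. Once the penalty weight is large enough, the flagged zone drops out of every route:

```python
# Hypothetical sketch of risk-averse AV routing: when each road segment has
# an estimated malware-exposure risk, a planner that penalizes risk will
# route around "risky" areas entirely once the penalty is heavy enough.

import heapq

def shortest_path(graph, start, goal, risk_weight):
    """Dijkstra over edges (neighbor, travel_time, risk).

    Effective edge cost = travel_time + risk_weight * risk.
    """
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, travel_time, risk in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(
                    frontier,
                    (cost + travel_time + risk_weight * risk,
                     neighbor, path + [neighbor]),
                )
    return None

# Toy network: the fast direct route crosses a zone flagged as high risk.
graph = {
    "A": [("risky_zone", 1.0, 0.9), ("detour", 3.0, 0.1)],
    "risky_zone": [("B", 1.0, 0.9)],
    "detour": [("B", 3.0, 0.1)],
}

# With no risk penalty the planner cuts through the flagged zone; with a
# heavy penalty, that zone becomes a de facto no-go area.
print(shortest_path(graph, "A", "B", risk_weight=0.0))   # via risky_zone
print(shortest_path(graph, "A", "B", risk_weight=10.0))  # via detour
```

Nothing in the planner is malicious; the exclusion falls out automatically from whoever sets the risk scores and the penalty weight.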

Not only is there reason to believe that wealthy areas would be able to “better protect themselves from malware by buying a more expensive AV,” but the dynamic could also reinforce existing stereotypes about minority areas of the city — many of which would likely have been redlined in the past. If low-income neighborhoods and areas with large minority populations are perceived to carry a higher risk of malware, self-driving vehicles may avoid those parts of the city. That would bolster existing stereotypes about those areas, and the data would show them receiving fewer vehicle trips, which “could render public funding for these areas less politically feasible” at a time when they’re most in need of investment, thus compounding the problem.

It’s not just Vassallo and Manaugh warning us about this; there are a growing number of voices with similar concerns. In his book Click Here to Kill Everybody, security expert Bruce Schneier argues that we’re creating a cybersecurity nightmare by connecting everything to the internet. He points to autonomous vehicles and smart cities as technologies that could have positive outcomes, but will also present major hacking risks which will be very costly to address and require government enforcement to ensure the proper cybersecurity measures are taken.

It would be easy to brush these concerns off as hysteria or a Luddite response to the onward march of technology, but as the previous examples showed, tech is already enabling discrimination — and the use of machine learning has the potential to make it even worse.

Take the case of predictive policing. These systems are being adopted by police departments to help them identify where crime might take place, but the algorithms making those predictions are only as fair as the data being fed to them. And in a society that overpolices poor people and minorities, the algorithm is going to “predict” they’re more likely to commit crime. Adam Greenfield describes how this works in his book Radical Technologies:

Predictive policing may seem to be concerned with the future, in other words, but the future in question is one oddly entangled with the past. A neighborhood in which a statistically significant spike in felony assault has taken place may find itself the focus of intensive patrolling moving forward, leading to new citations for felony assault being issued at a rate far above the citywide average, and therefore new cycles of police vigilance.
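The loop Greenfield describes can be shown with a toy simulation — purely illustrative, not any real department’s model. Two neighborhoods have identical underlying offense rates, but recorded incidents scale with patrol presence, and each round the “spiking” neighborhood gets extra patrols. A small initial imbalance snowballs:

```python
# Toy simulation of the predictive-policing feedback loop (illustrative
# only). Both neighborhoods have the SAME true offense rate; recorded
# crime depends on how hard you are looking, and patrols chase the records.

def simulate(rounds=10, true_rate=(1.0, 1.0),
             patrols=(0.55, 0.45), boost=0.1):
    patrols = list(patrols)
    for _ in range(rounds):
        # Recorded crime = true rate x patrol presence.
        recorded = [r * p for r, p in zip(true_rate, patrols)]
        # The neighborhood with the recorded "spike" gets extra patrols.
        hot = recorded.index(max(recorded))
        patrols[hot] += boost
        # Renormalize: patrol resources are a fixed total.
        total = sum(patrols)
        patrols = [p / total for p in patrols]
    return patrols

# Despite identical true offense rates, patrols concentrate in the
# neighborhood that started with slightly more attention.
print(simulate())
```

The future being “predicted” is just the past pattern of attention, fed back in — exactly the entanglement the passage above points to.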

Tech’s urban ambitions need democratic oversight

Minorities and low-income people have suffered enough from planning decisions made by an affluent, white elite that placed structural barriers in their way. As we continue to integrate new technologies into our transportation network, we must critically examine whether they will improve urban life for those who’ve so often been left out or excluded.

The discussion around smart cities and autonomous vehicles has been too quick to accept the utopian promises that they will make cities more efficient, sustainable, and convenient, without scrutinizing the claims of tech billionaires who would reap major windfalls if their systems were to be adopted en masse in major cities. Sidewalk Labs’ unwillingness to answer fundamental questions about their project in Toronto should be a lesson to other cities to approach any agreements with tech companies with caution.

The admission that autonomous vehicles are further away than companies previously claimed gives us more time to create proper democratic processes to study the full range of potential impacts of these technologies so residents can make informed decisions about what they want for their cities. Tech companies are used to getting their own way, but when it comes to the future of our cities, we can’t afford to leave the decision-making to corporations who are motivated by power and profit, rather than the well-being of regular people.

Critic of tech futures and host of Tech Won’t Save Us.
