To Build a Better Future, Designers Need to Start Saying ‘No’
Reflections on the transformative potential of refusal as a method in the design of technology
Last year at the Conference on Neural Information Processing Systems (NeurIPS), one of the most well-respected computer science conferences in the world, the opening panel discussion on A.I. for social good didn’t go quite as people might have expected.
“I’m not usually in spaces like this, and I’m not entirely convinced that I haven’t surreptitiously walked into a terrorist den,” began Sarah T. Hamid, a community organizer based in Los Angeles and one of the core members of the Carceral Tech Resistance Network. As Hamid explained, “like terrorists, technologists in spaces like this have a concept of what social good is. … Everybody thinks that they’re doing good things in the world … that’s the scary thing. I don’t know how it is that we have a disconnect between kids in cages and the work that’s happening in spaces like this.”
There was a smattering of applause as others in the room shifted uncomfortably in their seats. Most of the morning had been devoted to presentations on how computer science might help tackle some of the world’s toughest problems, ranging from online content curation to supporting the UN’s Sustainable Development Goals. Throughout those talks, the line between problem and solution had been clear and uncomplicated. Now Hamid was pushing the audience to think more deeply about the ways their efforts might contribute to the very problems they claim to be solving with technology.
The moderator asked Hamid how she thinks computer scientists should handle competing definitions of social good, citing the various mathematical formalisms that have been developed to grapple with ideas like “fairness” in A.I. Again, Hamid responded with an answer that few people might have anticipated.
“I think that’s a very interesting question,” she said. “I just don’t think it’s an important question.” Hamid went on to describe a multiyear campaign led by the Stop LAPD Spying Coalition to dismantle Operation LASER*, a predictive policing program that the Los Angeles Police Department used to deploy officers to “hot spots” for violent crime and gang-related activity. Hamid described the painstaking efforts that the coalition undertook to learn about and fight against Operation LASER. “Meanwhile, seven people have died in LASER zones,” she explained, “… while we’re trying to understand how to defend our community against these data-driven programs … people are dying in these LASER zones.”
Hamid used this example to illustrate how abstract debates about social good were out of touch with the very real harms that communities face in their daily lives as a result of data-driven technologies. If computer scientists really hoped to make a positive impact on the world, then they would need to start asking better questions.
Interactions like this can be deeply unsettling, especially for people who strive to use their time and talents to build technology for social good. But over the last few years, conversations like this have become foundational to my work. Hamid pushed her interlocutors to connect the dots and be accountable for the ways they perpetuate violence through the study and design of technology. Tech-enabled violence comes in the form of police tools like Operation LASER and PredPol (the Predictive Policing Company), but it also shows up in the way life chances are distributed through other racialized and gendered systems of meaning and control — what writer and lawyer Dean Spade calls “administrative violence.”
On its face, administrative violence appears banal, neutral, and objective. Even benevolent. Who wouldn’t want to help government officials identify children at risk of abuse or support the placement of people into public housing? But as Virginia Eubanks, Ruha Benjamin, and numerous others have documented, these technologies often function more like behavior modification programs that prioritize coercion and compliance over the provision of care.
As professor and social justice advocate Dorothy E. Roberts argues, the problem is not that these technologies produce wrong assessments — it’s that they are used to support a fundamentally wrong approach to addressing community needs. In their article “Unbecoming Claims: Pedagogies of Refusal in Qualitative Research,” Eve Tuck and K. Wayne Yang call this approach to research “inquiry as invasion” or “the proliferation of damage-centered narratives, rescue research and pain tourism” that erases systemic violence through an adept “arrangement of justifications and unhistories” that make our present social condition seem natural, inevitable, and immutable.
Unfortunately, these moral hazards are not captured in the formal fairness criteria that computer scientists develop to illustrate the trade-offs of different algorithmic design choices. While these debates give the impression of rigor, Seeta Peña Gangadharan warns that abstract problem-solving effectively disappears people and history into mathematical equations, which amounts to its own kind of violence.
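To make that warning concrete, here is a minimal sketch of what two widely cited formal fairness criteria look like in practice. The data are invented, the variable names are mine, and the code is illustrative rather than anyone’s actual methodology:

```python
# A minimal sketch of two common formal fairness criteria, computed on
# invented toy data. Every number here is a stand-in for a person.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, size=n)   # a protected attribute, A
label = rng.integers(0, 2, size=n)   # the recorded outcome, Y
pred = rng.integers(0, 2, size=n)    # a model's decision, Y-hat

# Demographic parity: are positive decisions issued at equal rates
# across groups?
gap_dp = abs(pred[group == 0].mean() - pred[group == 1].mean())

# Equal opportunity: among cases with Y = 1, are true positive rates equal?
tpr = [pred[(group == g) & (label == 1)].mean() for g in (0, 1)]
gap_eo = abs(tpr[0] - tpr[1])

print(f"demographic parity gap: {gap_dp:.3f}")
print(f"equal opportunity gap:  {gap_eo:.3f}")
```

Well-known impossibility results show that criteria like these generally cannot all be satisfied at once, which is the trade-off literature the moderator invoked. Nothing in the arithmetic, however, records who was policed, by whom, or at what cost; each person has disappeared into an array.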
Similarly, Hamid was warning her audience against investing too much time and energy in abstract debates on algorithmic fairness because doing so diverts valuable resources and brainpower from more urgent needs. She refused to engage with the default assumptions and ideas that so often frame conversations regarding A.I. for social good. She declined to answer the questions she was posed and instead redefined the questions that needed answering. Put simply, Hamid engaged in an act of refusal.
Refusal is an essential practice for anyone who hopes to design sociotechnical systems in the service of justice and the public good. As I’ve argued elsewhere, data scientists often lack the conceptual tools necessary to interrogate, resist, and reimagine the power relationships that shape their work. As a result, data science reproduces what Donna Haraway refers to as a “conquering gaze from nowhere.” The concept of refusal could offer a transformative framework for reimagining the work of data science as a liberatory practice.
What is refusal?
To refuse is to say no — to turn down requests and opportunities to build technologies that are likely to produce harm. But refusal is more than just an exit strategy. It’s an opportunity to reimagine the default categories, assumptions, and problem formulations that so often circumscribe the work of data science. Refusal is a beginning that starts with an end.
I first encountered this concept in my own work as someone on the receiving end of refusal. In 2017, I was part of an interdisciplinary team of researchers from MIT and Harvard who were interested in the ways that data were being used to promote bail reform. During the early stages of our project, our team reached out to a local bail fund to learn more about their work. During the meeting, we threw out a number of ideas on ways we might use data collected by the bail fund to better understand different pretrial outcomes. The organizers were rather stoic about the ideas we pitched.
Toward the end of the meeting, Atara Rich-Shea, the executive director of the Massachusetts Bail Fund, leaned in and shared her frank perspective. She said the bail fund was frequently contacted by academics who were only interested in asking their own questions and that for the most part, those questions were harmful to the people she served. She went on to explain the ways that academics undermine the work of movements for liberation by asking questions that either siphoned people into categories of “deserving and undeserving,” erased the violence of incarceration, or distracted from more pressing issues.
Rich-Shea’s refusal was a generative and strategic act, one that opened up space for us to renegotiate the assumptions and key vocabularies underlying our work. This approach is resonant with the way Indigenous scholars have talked about the transformative potential of refusal in other fields such as anthropology. As professor Sarah Wright argues, refusal is “a way of reframing debate, refocusing the terms of engagement, and re-centering it in productive ways.”
So what exactly is recentered when we refuse? In the case of bail reform, we came into the conversation thinking that our task was to help key decision-makers (judges, prosecutors, etc.) distinguish “signal from noise” when making time-sensitive decisions about potentially dangerous individuals. Rich-Shea pushed us to reframe the problem in terms of a runaway courtroom culture that has enabled pretrial detention rates to skyrocket in spite of the rare and declining incidence of violent crime. Rather than focusing exclusively on the behavior of people awaiting trial, we would try to understand why American judges send so many people to jail in spite of state and federal laws protecting against excessive pretrial detention.
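For a sense of how that reframing changes the analysis itself, consider a hedged sketch, with entirely hypothetical file and column names rather than our actual data, of what it looks like to make judges, not defendants, the unit of study:

```python
import pandas as pd

# Hypothetical case-level records; the file and column names are
# placeholders, not the team's actual data.
cases = pd.read_csv("pretrial_cases.csv")

# Instead of scoring defendants for "risk," aggregate outcomes by judge:
# who detains, how often, and at what bail amounts?
by_judge = (
    cases.groupby("judge_id")
         .agg(n_cases=("case_id", "count"),
              detention_rate=("detained_pretrial", "mean"),
              median_bail=("bail_amount", "median"))
         .sort_values("detention_rate", ascending=False)
)
print(by_judge.head(10))
```

The code is trivial; the shift it represents is not. The same records that feed a risk score can just as easily be used to ask who is doing the detaining.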
As Alex Zahara explains, refusal is often “intended to redirect academic analysis away from harmful pain-based narratives that obscure slow violence and towards the structures, institutions, and practices that engender those narratives.” Over the next several months, our team developed a new set of research questions based on this critical reframing. But it was not immediately apparent how best to access the data we needed to support our work. Through a number of conversations with local and state officials, our team quickly realized that we would also need to hone our own skills of refusal in order to steer our research agenda away from harmful modes of knowledge production.
For example, we spent several months negotiating with a state government to gain access to data that we could use to understand how judges were responding to reforms aimed at reducing pretrial jail populations. During these conversations, government officials expressed interest in working with us to understand the impact of supervised conditions of release, such as electronic monitoring and mandatory drug testing. Although this was beyond the scope of our original research question, we felt it was necessary to explore the topic as a stepping stone to acquiring the other data we needed.
After several weeks of careful consideration, we decided not to proceed with the study on supervised conditions of release because doing so would likely have legitimized harmful practices such as electronic monitoring. The limited selection of outcome variables provided to us would have cast practices such as pretrial detention as effective interventions while simultaneously erasing the well-established harms of incarceration. That kind of erasure amounted to a violence we were not willing to participate in.
Why don’t more people refuse?
We were pretty bummed to arrive at the conclusion that we couldn’t move forward with the supervision study. It felt like we were missing out on an opportunity to engage in an important policy conversation, and we had some anxiety about turning down this request while we were still negotiating access to the data we needed for the judge study. But we eventually came to view this refusal as a generative act — it gave us the opportunity to have conversations with key decision-makers from the courts about the limits and opportunities of the data that they had collected and to imagine alternative research questions.
Let’s be real though: These conversations don’t just magically change the way people in positions of power and authority think. Over the last few years, my attempts to engage in generative modes of refusal have been met with mixed results. Emails go unanswered. Polite nods and pregnant silences fill the room. Doors have been closed. It’s easy to slip into the mindset of “If I don’t do this bad study, someone else will do it even worse than me.”
But these experiences have created space for other, more transformative relationships to take root. Our community collaborations have only deepened as we’ve begun to actively participate in the struggle for shared power rather than trying to direct initiatives or evaluate the issues from the outside. Our relationships with directly affected communities are based on the trust and accountability that emerge from the process of navigating refusal. Refusal allows computer scientists to reposition themselves as actively producing the conditions of inquiry — it breaks down carceral, violent modes of knowledge production and opens up a new, reparative role for the field.
Yet, refusal is not something that is rewarded within the academy. There are virtually no venues for scholars to share the essential insights unearthed from the process of deciding not to study or build technology. The academy’s unrelenting appetite for “original research” means that social scientists and computer scientists alike are constantly on the hunt for new objects of study, and the easiest targets are often the poorest and most marginalized among us.
Besides, refusal doesn’t feel good, neither to give nor to receive (at least at first; it gets easier with time). If we’re going to get serious about refusal as an essential design practice, we’ll need to attend to the affective dimensions of engineering, to the ways that desires for productivity, scale, and impact shape our motivations to do certain kinds of work and ask certain kinds of questions.
I am constantly relearning lessons I’ve already learned, undoing desires that I’ve already undone. This work is iterative and never-ending, and it starts with refusal. As Carla Bergman and Nick Montgomery argue in Joyful Militancy, “Undoing Empire means undoing oneself. This is never a purely negative undoing, because it also means becoming capable of something new.”
I keep relearning: Refusal is a beginning that starts with an end.
In spite of the challenges outlined above, progress is underway. I’m encouraged by projects like the Feminist Data Manifest-No, which explicitly outlines ways scholars can refuse harmful data regimes while affirming commitments to more radical and transformative data futures. I’m inspired by the work of people like Gangadharan and Benjamin as well as Jonathan Zong and Nate Matias, who are all actively exploring ways that the concept of refusal fits into the process of technology design. In the classroom, Erhardt Graeff has been charting out ways we can teach students about the “responsibility to not design.” And groups like the Coalition for Critical Technology are committed to developing collective responses to refuse harmful modes of research and technology development within the academy.
As Tuck and Yang argue, “Rather than chasing aims of objectivity, we encourage researchers to take up a stance of objection, one that will interrogate power and privilege, and trace the legacies and enactments of settler colonialism in everyday life.” It’s time for the fields of engineering and technology studies to take up this call for objection. It’s time for us to refuse the default modes of engagement that are handed down to us by the gatekeepers of data. It’s time for us to embrace refusal as a first step toward asking better questions.
*This article was updated on 10/15/20 to specifically name the Stop LAPD Spying Coalition as the leaders of the multiyear campaign against Operation LASER in Los Angeles.
Special thanks to Sarah Hamid for shaping the ideas presented in this essay over the course of several months of dialogue. And thanks to Ethan Zuckerman for the lightning-fast turnaround and keen feedback on the first draft of this essay.