Big Technology

Ex-Googler Meredith Whittaker on Political Power in Tech, the Flaws of ‘The Social Dilemma,’ and More

In a new interview, the tech labor activist and A.I. researcher discusses major issues in Silicon Valley today

Meredith Whittaker

OneZero is partnering with the Big Technology Podcast from Alex Kantrowitz to bring readers exclusive access to interview transcripts with notable figures in and around the tech industry.

This week, Kantrowitz sits down with Meredith Whittaker, an A.I. researcher who helped lead Google’s employee walkout in 2018. This interview, which took place at World Summit A.I., has been edited for length and clarity.

To subscribe to the podcast and hear the interview for yourself, you can check it out on Apple Podcasts, Spotify, and Overcast.

When I interviewed Tristan Harris about The Social Dilemma earlier this month, my mentions filled with people saying, “You should speak to the people who were critical of the social web long before the film.” One name stood out: Meredith Whittaker. An A.I. researcher and former Big Tech employee, Whittaker helped lead Google’s walkout in 2018 amid a season of activism inside the company. On this edition of the Big Technology Podcast, we spoke not only about her views on the film, but also about the future of workplace activism inside tech companies at a moment when some are questioning whether it belongs there at all.

Alex Kantrowitz: It seems like your perspective on The Social Dilemma is a little bit different from Tristan’s. Where do you think it went right and wrong?
Meredith Whittaker:
One of the significant weaknesses of the film was that it sidelined a lot of the people who have been researching and calling out these issues in more nuanced ways for a very long time. There are folks like Safiya Noble, Sara Roberts, and Ruha Benjamin. I’d look at Black women like I’Nasah Crockett and Sydette Harry, who were, in 2014 and before, calling out racist trolls that germinated on message boards like 4chan. There are a lot of people who have been looking at some of the issues that are produced and amplified through social platforms and the consolidation of power that is now represented in a handful of tech firms.

That was one of the primary issues, and along with that erasure, it erased some of the fundamental harm: the way in which a lot of these platforms and algorithmic systems reproduce and amplify histories of racism and histories of misogyny. Who bears the harm of this type of targeted harassment, or of the way in which algorithms represent our world, as Safiya Noble has [pointed out] so brilliantly? And who, frankly, reaps the benefits?

A lot of the people who were being interviewed were people who skipped from the side of reaping the benefits, working at a tech company — which I certainly did — to being critics of this technology. But a lot of the criticism was drawing very heavily on this earlier work. I would love to see a number of these people on your podcast, and I would love to see these critiques enriched with some of those perspectives.

Tristan made the argument that The Social Dilemma was powerful because it showed the people who had built the software saying, “We built a Frankenstein.” What do you think about that?
That is an argument, but it doesn’t obviate the fact that those are the people receiving a platform — that a lot of people who are first learning about some of these issues are learning it from that perspective. The voices you raise up, the people you represent matter a lot in these debates. There was a lot of prior art, frankly.

This is not a brand-new set of problems that just occurred to folks. There have been decades of work and inquiry around these problems that were largely ignored, dismissed, or were just considered the byproduct of “positive disruption.” It wasn’t really taken seriously until wealthy white men, frankly, in Silicon Valley began to feel some of these effects themselves.

I think that’s fair criticism. Let’s get to the arguments against the film itself. What did they miss?
I’ll highlight a couple things that I think are really important in any analysis of tech and its social implications. The first thing that really troubled me was this persistent picture of these types of technologies, these social media feeds, the algorithmic systems that help curate and surface some of this content, etc., as almost superhuman. You heard phrases like “can hack into the brainstem.” Things that really paint this as a feat of superior technology.

I think that ignores the fact that a lot of this isn’t actually the product of innovation. It’s the product of a significant concentration of power and resources. It’s not progress. It’s the fact that we all are now, more or less, conscripted to carry phones as part of interacting in our daily work lives, our social lives, and being part of the world around us. That’s only increased during Covid.

I think this ultimately perpetuates a myth that these companies themselves kind of tell, that this technology is superhuman, that it’s capable of things like hacking into our lizard brains and completely taking over our subjectivities. I think it also paints a picture that this technology is somehow impossible to resist, that we can’t push back against it, that we can’t organize against it.

The film argues that devices and apps are hard to resist. Is that the power they have over us that’s misrepresented?
I’m looking at these firms as centers of almost improbable power in this case. We have about five companies in the Western context that dominate Big Tech. These firms are a product of the commercialization of computational networked infrastructure, meaning the internet, and of the development of advertising technology and other platform services that allowed them to gain a massive foothold with infrastructure and a massive data-collection pipeline.

These companies represent extraordinary powers over our lives — they are not magical. That power is reflected in their ability to give away platforms for education to all of our school districts, to replace other sorts of social forums, to thread themselves through our lives and institutions. They worked to become the spaces for our commerce, for our sociality, etc., and then they financialized and commodified their roles in these spaces. I think we need to analyze the material powers that these firms have and look a little more closely at how these technologies actually work.

The future of workplace activism inside tech companies

You had played a pretty central role in Google’s protests. Now you can take a look a little bit with the benefit of hindsight. Do you think this activism worked inside the company?
Yeah, I certainly think it worked. But again, I don’t frame that type of organizing as having a goal and then subsiding. The goal was both to push back against unethical, immoral business decisions and the dangers…

For the benefit of the audience, we’re talking about use of A.I. in warfare, the decision to pay out $90 million to someone accused of sexual harassment, and then…
Yes. All those things. And the inequitable treatment of the contract workforce, which made up more than half of our colleagues but was not afforded the privileges of full-time work that you think about when you think of a big, glorious tech company workforce.

“It’s staggering to think about how much extraordinarily intimate information [Google] has about billions of people, literally… There is a roomful of Stasi dossiers on each one of us.”

They ended up not renewing the contract with Maven. People who were subsequently accused of misbehavior inside Google were not awarded payouts. They were summarily fired. And on the laborers, I think that is still an open question.
Yeah, they changed some policies in the right direction, resistantly. Again, this is an ongoing struggle. This isn’t something you win once by changing executives’ minds because they finally see the light. Again, we’re dealing with capitalist logic, which dictates that ultimately the objective function, to use an A.I. term, of any firm is continued revenue growth, continued exponential growth forever over time. These are the kinds of impossible goals we’re beginning to question as we see the fragility of the planet we live on and the harm that type of operation has done in the world, to the point that we’re now experiencing the beginnings of climate chaos and the potential end of organized human life.

We want to make sure that all of our colleagues have a dignified job, are not sleeping in their cars, are making a living wage, have health care, and that we erase these two-class worker systems. We really want to do that, because that’s justice now. While doing that, we also want to build the collective muscle to gain the power to make these decisions ourselves, to have more collective decision-making, and not leave these to a handful of people at the very top whose duty is to the board, whose duty is to shareholders, who ultimately are calibrating their decision-making around those capitalist incentives.

Shouldn’t these very powerful companies have a more democratic way of deciding what they should do? Should it be in the hands of this one group of employees?
No. Especially when you’re looking at the outsized power that a company like Google has. It’s staggering to think about how much extraordinarily intimate information that company has about billions of people, literally, right? There is a roomful of Stasi dossiers on each one of us. It’s staggering to think about the way in which that company is able to use that data, that information, to create A.I. models, to create other services that are then making extremely sensitive determinations throughout our social institutions.

So, there are two levels on which I think we really need to take this power seriously and recognize the risks there. What I’m certainly not suggesting is that 100,000 or 200,000 people in the world should be the arbiters of all of those decisions. But part of the work we were doing organizing was also building whistleblower networks, beginning to build the connective tissue between organized workers and other social movements so that we could organize in ways that prefigured broader democratic decision-making. These kinds of broad-based movements that allow democratic control are going to be necessary to create or recreate technology that could serve the public interest.

But I agree with you that the goal is not to build this hermetically sealed workplace democracy at Google. We don’t want a larger collection of the same people making those decisions. What we were doing in our organization was aiming to open up more space for collective discussion and decision making, and to prefigure these broader connections to those directly affected by many of the decisions Google makes. Ultimately the goal would be to link with these other social movements and ensure that those on the ground, who are most at risk from things like Google’s building A.I. for drone targeting, can take the lead.

I think the No Tech for ICE movement is one example. There you saw tech workers across the industry taking the lead from people who do immigration policy and immigration advocacy on the U.S. southern border, people who really understand the context of what it means to be hunted and tracked by this technology. They communicated that to people who don’t have that experience but may understand how these systems work, to build a campaign that is now pushing back against the companies provisioning those types of systems to ICE.

I want to just do a quick digression, because you mentioned Stasi dossiers. Can employees at Google just go into someone’s personal information and access it?
I’m using that as a metaphor, because it’s more easily graspable than different shards of a database, where different parts of that information may not even be identifiable to me without matching them to something else. They do collect that information, and at this point, I will say it is not easy to access. They log that very strenuously, and that’s in part because when I started back in 2006, it was a lot easier to access that information. There were a couple of incidents.

“I think what we’re dealing with right now is an extraordinarily atrophied, if not broken, political system.”

Brian Armstrong, CEO of Coinbase, caused a bit of a stir in Silicon Valley when he banned employee political activism within the company. I thought about that and said to myself, “Maybe it’s better for employees’ political energy to be channeled through normal political channels versus trying to work within their company.”
I have a lot of questions about that framing, because when you think about the traditional political system, I’m like, are you thinking about it without the voter disenfranchisement that has been part of the far-right agenda for 20 years, driven by Karl Rove and others? Are you thinking about it before Citizens United, when corporate donations became corporate speech and you had millions and billions of dollars of corporate dark money flooding into these campaigns, or before that? What conditions of normal politics are we talking about, such that volunteering to get out the vote would be as effective as organizing for worker power?

I guess that’s a question we all have to wrestle with, because I think what we’re dealing with right now is an extraordinarily atrophied, if not broken, political system that has been at the receiving end of legal activism and lobbying and a really organized campaign by the far right for many, many years. Right now, we’re speaking, I think at… the Amy Coney Barrett hearing is happening right now, which could further gut the last threads of voter protections that we have. So, again, I want to be really careful about that frame, and I think—

If folks don’t like the way the political system is operating, it’s one thing to say it’s wrong. It’s another thing to throw up their hands and say, “Well, it’s kind of broken, and we can’t fix it.” I also wonder if the energy would be well spent trying to push back against some of the things that you’re talking about.
Yeah. I think part of pushing back against that would be things like pushing back against the slush money these corporate hacks are funneling into far-right causes that don’t represent the views of the people who are there or, arguably, the best interests of the public. So, again, I think we can’t ignore the outsized influence, vastly outsized, by orders of magnitude, that large corporations have in shaping our political system, and the way in which individual goodwill and volunteerism doesn’t even come close to ranking against the way these companies are able to operate.

So, again, this isn’t saying give up on the political system. Try all tools. Definitely vote. Get your parents to vote. Do the work. But frankly, I think worker organizing is also politics. With the Coinbase story, one of the pieces I often hear missing is that the CEO wrote that sort of polemic blog post after he had been challenged by a number of workers who ultimately staged a walkout because he wouldn’t say “Black lives matter.” So there’s context already for that. What do you make of a CEO who won’t say “Black lives matter” during a time of an unprecedented rise in white supremacy and a veer toward authoritarianism? That is political.

No doubt. That should be an easy one.

Okay, so you were retaliated against. You left Google, and Claire Stapleton, another Walkout organizer, also left. Inside Amazon, another company I cover, Tim Bray, a former Amazon VP, left after whistleblowers there were fired. What’s going to happen to these movements now that a lot of the people who led them have left?
Well, happily there’s still a lot of organizing going on at these companies that I’m aware of. The good news is that there are a lot of ways to organize with your colleagues that don’t involve a public onslaught that engages the media, which was very much a part of our strategy. There are a lot of leaders people don’t know about who were part of that organizing: a lot of people who, for one reason or another, didn’t want to take the risk of being public. We each made our own choice, but I certainly don’t think this diminishes the strength of the organizing, and I would caution against equating visibility with continuity.

There’s a lot of organizing that’s continuing, and as you see with Coinbase, we now have tools in our toolbox across tech that are becoming common sense: the walkout, or the way a number of Facebook workers have blown the whistle and written their stories as they leave. I think that’s one of the ways this type of organizing and this type of consciousness permeates over time, and there’s certainly continuity between the organizing that was happening when I was there and what we’re seeing now. What we’re seeing now is learning from some of the mistakes that I and others made, developing stronger and more precise muscles to continue this work.

A.I. ethics

For a lot of people, the term “A.I. ethics” has become a bit of a lightning rod, a vehicle through which political views are potentially injected into the tech companies. I don’t like his methods, but I found one slide that James O’Keefe unearthed interesting. It talked about how algorithms are programmed, then media is filtered through those algorithms, and then people are programmed. Are algorithms actually programming people?
I find the entire project that he’s a part of extremely problematic and very, very flimsy. I think the critique I offered around The Social Dilemma applies here: this picture of tech as an almost godlike force that’s able to subdue us mere mortals with the power of its algorithms. Again, no, that’s not what’s going on. That bolsters some of the rhetoric that is, ironically, coming from the tech companies themselves, which claim these systems can do a lot of things they’ve never been proven to do.

That was an internal Google slide, I think.
Well, I would disagree with the person who made it. Again, I don’t spend a lot of time digesting Project Veritas.

“I have problems with the term ‘A.I. ethics,’ because I think it’s almost so broad as to be meaningless.”

Again, I’m not a fan of the method. But I feel like there are moments where it’s worth taking a look at some of the material they unearthed, whether the methods are good or not, and then talk about it.
Yeah, I’m less concerned with who said it, although that’s definitely something we need to take into consideration. But the claim itself is not true. That’s not how these things work.

It’s feeding dual narratives, I think—that the tech companies have one interest in presenting this technology as infallible, because it justifies the proliferation of this technology into domains where they’re going to make money. The far right has another interest in presenting these technologies as scary bogeymen that we need to be very frightened of, because it perpetuates a kind of campaign to subdue these companies and ultimately bend them to the will of the far right.

Can you talk a little bit about how the company rhetoric is playing into that far-right campaign, and what do you think the far right’s goals are?
I have problems with the term “A.I. ethics,” because I think it’s almost so broad as to be meaningless. We did organize in opposition to an ethics review board that Google put together, which was part of the company’s attempt to pacify some of the dissent around the choices it was making on A.I. and the military. On that review board they had Kay Coles James, the head of the Heritage Foundation, a deeply far-right organization. She personally, and the organization as a whole, has taken a number of fairly virulent anti-LGBTQ and anti-trans positions.

When you look at the way these A.I. systems constructed through machine learning work, it is very clear to anyone who works with these systems that, again, they aren’t intelligent. What they do is process huge amounts of data, whatever data they have available, and from that data, they build a model of the world. So, if you show them a bunch of data about cats, they’re going to… Here’s 1 million, 20 million pictures of cats. They’re going to get some picture of what a cat looks like. Then, if you show them a picture of a truck, they’re going to be like, “That’s not a cat.” You show them a picture of a cat, they’re going to be like, “I predict that this is a cat.” That’s how they work. So, they’re given data from the world we live in that represents our past and our present, and they very irreducibly encode the values that are in that data.
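Her description of these systems (huge amounts of data in, a statistical model of the world out, predictions that only reflect that data) can be illustrated with a toy nearest-centroid classifier. Everything here, the feature names and the numbers, is invented; this is a minimal sketch of the pattern-matching she describes, not any real system:

```python
# Toy sketch: a classifier "learns" only the statistics of its training data.

def centroid(vectors):
    """Average of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(examples):
    """examples: {label: [feature_vector, ...]} -> {label: centroid}."""
    return {label: centroid(vecs) for label, vecs in examples.items()}

def predict(model, x):
    """Pick the label whose centroid is nearest to x (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    return min(model, key=lambda label: dist(model[label]))

# Pretend features: [furriness, wheel-count] — purely illustrative.
model = train({
    "cat":   [[0.9, 0.0], [0.8, 0.1]],
    "truck": [[0.1, 4.0], [0.0, 6.0]],
})
print(predict(model, [0.85, 0.05]))  # a furry, wheel-less thing -> "cat"
```

The model has no concept of a cat; it only knows which training centroid a new point is closest to. That is the point Whittaker is making: whatever regularities are in the data, good or bad, become the model’s “understanding” of the world.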

So, these systems very often—and there’s research—Joy Buolamwini, Timnit Gebru, Deborah Raji, and others have shown this over and over again that these systems replicate patterns of racism, patterns of misogyny. They encode these assumptions in their understanding of the world, because, of course, their understanding is trained on data that is pulled from that very same world, those very same contexts. So, when you’re looking at the politics of A.I. systems, when you’re looking at their implications, when you’re looking at issues of bias and fairness, and yes, even ethics, you need to be weighting very heavily the views of people who experience those harms of marginalization. You’re looking at people who understand the dynamics of race and racialization. People who understand issues with anti-LGBTQ bias.

A lot of our organizing around these issues was pushing back on the idea that a board like that was suitable to make these decisions, and pushing forward the notion that we really needed to center the voices of people who are most likely to be harmed by these systems.

There was that very famous case inside Amazon where they built this recruiting algorithm, and even when they didn’t tell the recruiting algorithm the gender of the person, it would look for attributes that would indicate the person was a woman and then end up removing them from the search. It ended up becoming so broken that Amazon gave up on even rolling it out.
Yeah, that’s a classic example. I think that is also a pretty interesting diagnostic tool to reflect back the persistent misogyny that was encoded in Amazon’s hiring practices.

We can also think about these systems as showing us some of these uncomfortable and potentially latent issues that are part of the construction of the data—in this case, the résumés, the hiring, the weighting, the performance reviews, etc.—that trained this algorithm.
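The Amazon résumé case she and Kantrowitz describe is essentially a proxy problem, and it can be sketched in a few lines. The data below is entirely made up: removing the explicit gender column does nothing when another feature (say, a women’s-club mention on a résumé) correlates with gender and the historical labels were themselves biased:

```python
# Hypothetical illustration of proxy bias. Each historical record is
# (has_proxy_feature, hired); gender appears nowhere, but the proxy
# correlates with it, and past hiring decisions were biased.
history = [(1, 0), (1, 0), (1, 1), (0, 1), (0, 1), (0, 1), (0, 0)]

def hire_rate(records, feature_value):
    """Fraction hired among records sharing a given proxy-feature value."""
    rows = [hired for feat, hired in records if feat == feature_value]
    return sum(rows) / len(rows)

# A naive "model": score candidates by the historical hire rate of
# everyone sharing their feature value.
with_proxy = hire_rate(history, 1)     # 1/3
without_proxy = hire_rate(history, 0)  # 3/4
print(with_proxy < without_proxy)      # True: the bias survived the column's removal
```

Any learning procedure fit to this history will reproduce that gap, which is why the research she cites emphasizes auditing a system’s outcomes rather than trusting that a sensitive column was deleted.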

“There is zero proof that anti-conservative bias exists. In fact, these companies bend over backwards to not enforce their terms of service for people like President Trump.”

We’re about to see a push from the Department of Justice under Bill Barr to attack Google. Do you worry that even if the field you’re in does good work that the government will come in with blunt force and end up rolling some of it back?
Yeah, absolutely. I think, again, these are political battles. What the far right and the protofascist government are saying right now is that if you deplatform hate speech, if you deplatform Nazi content, if you deplatform the propaganda arm that we have implemented through these systems, then we are going to come after you. There is zero proof that anti-conservative bias exists. In fact, these companies bend over backwards to not enforce their terms of service for people like President Trump. But you see that… Why, during the Big Tech hearings, did Jim Jordan and Representative Gaetz spend so much time just bloviating about this? They’re setting up a narrative that effectively communicates to these companies, “Don’t touch this content.” I think, again, that gets back to the fact that this is organized. There is money going into the propagation of this type of content through these networks. It’s not magic. It’s organization, and it’s funding.

Let’s end with this question. You tweeted that an interview question you’d ask people was, “If you could shut down the internet, would you?” If you could shut down the internet, would you?
Oh, well, one of the things I also tweeted when I shared that interview question, which is something I used to ask at Google when I was interviewing people, was that I was looking for someone who would actually take the question seriously, who would think it through in total: Do I think this is a positive or a negative force? What would be the reason for shutting it down? What would be the reason for keeping it on? All things considered. I would want to work with people who think through that.

I think my view on this is that it’s going to be really hard to ensure that these sorts of computational network technologies ultimately serve democratic ends — meaning the internet, plus everything that’s been built on top of those original protocols and all the infrastructure that was built out and is now mainly privately owned. It’s going to be really hard to repurpose that toward democratic, people-driven ends, given the consolidation of power that right now dominates those infrastructures and given the neoliberal capitalist incentives that drive those who dominate them. But I think it’s worth fighting.

I’m certainly not anti-computers or anti-technology, but I think we have to recognize these as questions of power and control, and not questions of the hypothetical, benevolent uses of these technologies. We need to examine what actually happens when people with a certain set of capitalist incentives are the ones governing this technology’s use and utility.

