Open Dialogue

‘There’s Not a Lot of Self-Reflection in Silicon Valley’: On Big Tech and Ethics

Evan Selinger in conversation with Mary Berk

This is Open Dialogue, an interview series from OneZero about technology and ethics.

I’m thrilled to talk with Mary Berk. Mary has a PhD in philosophy, with a specialization in ethics, but has spent her career working in Silicon Valley. Most recently, she was a product manager at Facebook and Instagram. Previously, she worked at Amazon, Google, Microsoft, and eBay. Given her many years of experience and her disposition for critical thinking, she’s the perfect person to discuss whether Big Tech can care about ethics.

Our conversation has been edited and condensed for clarity.

Evan: What got you interested in philosophy? I was drawn in right away during my first semester in college. My high school mostly offered the typical humanities courses, like English, Communications, and Social Studies. I almost couldn’t believe that a professional discipline exists that’s devoted to critically interrogating fundamental beliefs.

Mary: My fascination with philosophy started in high school. At the time, everyone told me I wasn’t really interested in it, that I was just curious about existential literature. They were wrong. I went into college intending to be a philosophy major. But when I sat down with my advisor freshman year and told him what I wanted to study, he said I didn’t really want to go down this path — and he was head of the philosophy department!

One more thing. I grew up in a very middle-class family. The question was always, “How are you going to support yourself as a philosopher?” So, there was a lot of pressure around that, and I was always keeping in mind a “plan B.”

Evan: Despite the raised eyebrows, you ended up getting a PhD in philosophy from Johns Hopkins University and writing a dissertation on Thomas Hobbes’ conception of rights. Why did you focus on Hobbes?

Mary: During my first year of graduate school, I wrote a paper on Hobbes. One of my professors stopped me in the middle of my presentation on it and said, “No, that’s wrong. Whose presentation is next?” After this treatment, I was determined to figure out what Hobbes was thinking. And in the process, I fell in love with Hobbes.

Evan: I’m sorry you were treated this way. It’s horrible. I know we’re going to focus on working in Silicon Valley and the problematic issues occurring there. But damn, academia can be a toxic environment. It’s so distressing that people keep telling you that you don’t know what you want.

Let’s flip the scenario and put you in the driver’s seat. If Silicon Valley executives shared your appreciation of Hobbes, what would they do differently?

Mary: From a political perspective, Hobbes was always asking, “How might this thing I’m doing today snowball into other actions and consequences later on?” He pointed out that an environment in which everyone has ultimate freedom to pursue their interests maximally, without regard to long-term consequences, leads to disastrous outcomes. Tech companies that took Hobbes to heart would be aware of the potential for anything they do to create massive damage, including ultimately undermining their own self-interest. More specifically, they would consider how a product they’re developing might cause harm — not just now, not only in a few years, but into the distant future. And they would take steps to mitigate those issues now, not just when they’re called on it or when the inevitable problems emerge. To put things in more “tech-friendly” terminology, I like to think Hobbes was also pointing out the effects of scale.

Unfortunately, tech leaders don’t think this way. The dominant attitude is to strive to get a lead on competitors, maximally monetize innovation, avoid creating any limits that could impede progress towards whatever growth metric matters most at the moment, and not worry about cleaning up messes until forced to. And to be clear: I’m not saying these are bad people, necessarily, but that they’re laser-focused on their own interest.

Evan: How did you go from getting a doctorate in philosophy to working in Silicon Valley?

Mary: While I was studying philosophy, I had regular conversations with a friend about the internet, which was then in its early days. We were drawn to tech-ethics issues, although they weren’t called that at the time. She got a job at eBay, and when the company was building out its policy team, she recommended me for a position.

Evan: How did you ace the interview?!? You didn’t have a policy or tech background.

Mary: They weren’t interested in me at first for just this reason. But eventually, the hiring manager called me. He said that after 50 phone interviews, nobody had demonstrated they could think critically about how to calculate the consequences of actions 20 or so steps down the line, or understand the motivations behind someone’s actions, such as those of an individual eBay seller, in a largely unregulated environment. But given my training, I could do that easily!

Evan: Do you remember any of the scenarios you had to consider to prove you could do what they were looking for?

Mary: Yes. I was asked why someone might list the fanciest TV available at the time on eBay with a starting price of one cent and no reserve. This means the highest bidder wins the item, and the seller could theoretically have to sell a $3,000 TV for one cent.

Evan: At face value, it might seem that this person doesn’t understand how to do a sensible auction.

Mary: I thought the answer was obvious. The person found a great way to draw attention to an item. Lots of attention can jack up the price by inviting more bidders and more competition. Today, with people more versed in the internet, the answer might seem more obvious. But at the time, the internet was young and people just didn’t think this way.
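The mechanics make her point concrete. eBay runs proxy auctions: each bidder enters a private maximum, and the closing price settles just above the second-highest maximum. A one-cent start with no reserve is therefore only risky if bidders fail to show up; with even a handful of competitors, the price converges on what the market will bear. Here is a minimal Python sketch of that dynamic (the bidder counts and valuation range are illustrative assumptions, not eBay data):

```python
import random

def closing_price(n_bidders: int) -> float:
    """Closing price of a no-reserve proxy auction with a one-cent start.

    In a proxy auction, the price settles roughly at the second-highest
    bidder's maximum; a lone bidder wins at the starting price.
    """
    if n_bidders < 2:
        return 0.01  # no competition: the item sells for one cent
    # Hypothetical private valuations for a ~$3,000 TV (illustrative only).
    valuations = [random.uniform(1500, 3000) for _ in range(n_bidders)]
    return sorted(valuations)[-2]

random.seed(0)
trials = 10_000
for n in (1, 2, 5, 20):
    avg = sum(closing_price(n) for _ in range(trials)) / trials
    print(f"{n:>2} bidders -> average closing price ~ ${avg:,.2f}")
```

Under these assumptions, the expected closing price climbs from one cent with a single bidder to nearly the top of the valuation range with 20 bidders, which is exactly the dynamic an attention-grabbing one-cent listing exploits.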

Evan: After giving an impressive demonstration of your mind at work, did you take the job?

Mary: I turned it down initially because I had planned to spend one more year in graduate school turning my dissertation into a book. After thinking about it for a while, however, I realized it was a great opportunity. After finishing my dissertation in a Silicon Valley coffee shop, I started at eBay the very next day as a product manager working in the Trust and Safety Department. I focused on intellectual property policy and was the product manager for eBay’s anti-counterfeit program.

Evan: What were the big issues at the time? Were the folks at eBay concerned about being liable for the sale of counterfeit items on the platform?

Mary: Exactly. eBay wanted to protect buyers and avoid being sued and fined for hosting or participating in the sale of a potentially counterfeit item. eBay, like other tech companies, was a platform that brought buyers and sellers together, and it couldn’t really know in great detail what specifically was being sold. This is what was new about the internet at the time.

Evan: How did eBay attempt to figure out which goods were fake? Did they feel they had any prospective obligations to determine what’s fake before a buyer gets duped?

Mary: With the caveat that I’m not a lawyer, I’m going to speak generally about tech companies and the law. If a company knows that fake (illegal) items are being sold, it typically has an obligation to remove them and is liable if it doesn’t. Given this, companies may go to great lengths to avoid having knowledge, or the appearance of knowledge, especially when that “knowledge” is tenuous or obligates them. This is somewhat understandable. The law sets up this structure, and companies that aren’t experts in other companies’ intellectual property really do have a hard time knowing for certain what is fake and what is not. When you can’t be certain, and either your decision to remove an item or to host it can create liability, you just try to avoid knowledge and rely on users and rights owners to tell you what’s happening.

Evan: In other words, there’s a powerful incentive to actively remain ignorant.

Mary: Right. Companies will often create a minimally reactive program, which isn’t to say that such a program doesn’t take a lot of staff, infrastructure, or effort. Companies will do obvious things, like identifying and taking down posts where someone comes right out and states they’re selling counterfeit goods. Or taking action after receiving multiple complaints from buyers that a seller’s goods are fake.

One big risk for companies, including Amazon, where I worked on similar problems, and any companies that own retail platforms, occurs when you have a certain type of knowledge but don’t act on it. For example, being told by a rights owner that an item infringes copyrights or trademarks, but not doing anything to rectify the situation. And so, platforms had a tendency to take their word as gospel. Treating IP owners as experts allowed platforms to maintain the position that they had no real knowledge until an authoritative expert notified them otherwise. It’s important to note that this is also the process that the law — in this case, the Digital Millennium Copyright Act (DMCA) — lays out.

In sum, being proactive can sometimes mean setting a precedent. If a company can do it once, the question becomes “why can’t they always do this?” and the answer is that it’s extremely difficult. Companies typically (though not always) want to protect consumers, but they believe their hands are tied from doing more, or that doing more creates additional problems they’re trying to avoid. From a corporate perspective, being proactive can be a bad idea because it creates additional risk and liability.

Evan: It must have been frustrating to be bound by these constraints.

Mary: It was, but it’s also understandable given the incentives and risks the law creates, so it’s important to be mindful of the larger context here. This is a recurring theme throughout my career in Silicon Valley. Employees often want to do the right thing, but something structural gets in their way. Or, in the case of privacy, for example, the historical lack of legal boundaries creates other incentives and motivations that also do harm.

Evan: Are structural conditions absolute impediments?

Mary: They’re highly influential. But sometimes there’s hope. I have sometimes been able to convince companies to go outside legal best practices when it’s the right thing to do, or in the best interest of the user/customer. It can take a long time to persuade the legal team. They worry about change and their job is to prevent risk.

Evan: Given how entrenched these dynamics are, how did you make headway?

Mary: My typical approach is to focus on the fact that being proactive will create a safer environment in the long run, and to put on my legal hat to provide rationales for why the type of changes I was seeking wouldn’t violate clear legal boundaries. Sometimes giving people a perspective outside of their discipline is helpful, and we all need that. For example, at one company, safety was one of the high-level values expressed in the company’s mission statement, and I tied my arguments to that.

Evan: Are you saying that mission statements and other corporate values proclamations can be sincere and leveraged to create positive change?

Mary: Sometimes. When companies put out value statements, they’re really excited about them and announce the ideas with great fanfare. They typically believe in them upfront. But often, they’ve also gone through 18 layers of PR and legal vetting. These processes are designed to prevent companies from using words or making claims that can be interpreted in ways that might create problems down the line. It really depends: early-stage startups are trying to move fast and get something out there; later-stage companies will be very careful about the values statements they make. And I’ve known some to pull back the effort altogether, out of fear they’d be limited or held to those statements publicly.

Evan: What does it mean, then, that Amazon claims to be the “earth’s most customer-centric company”? Who, exactly, is the customer?

Mary: I left Amazon about 10 years ago, so my comments reflect the company at that time — a place I admired in many ways. When Amazon was a younger company, every single mission statement of every division of the company was crafted to further the overall mission of being “earth’s most customer-centric company.” I worked in the risk management division at the time, and we settled on a mission of creating the world’s safest place to transact online. One of the things I appreciated immensely about Amazon at the time was its genuine customer obsession. If something wasn’t best for the customer, meaning individual buyers and sellers, the message from Jeff Bezos all the way down was to go back to the drawing board. But over time, and I say this from an outside perspective and reading between the lines, it seems like Amazon has changed a lot.

Now, Amazon is serving new customers. For example, the “customer” used to be the buyer on Amazon.com. Now that Amazon has entered the ads business, that original “customer” is, in the advertising context, the “user,” and ad tech has a long history of seeing users as things to be manipulated. Another new set of customers is advertisers and government agencies. Somewhat tangentially, Amazon now seems to be optimizing for growth rather than for the customer (as buyer), and the result is plain: whenever you add something to your Amazon cart, you get bombarded with recommendations, and a large portion of product listings are actually paid advertisements. It looks like the growth team is always trying to optimize around the edges, not simply focus on providing the best experience and the most value.

Evan: Did this shift happen because, to expand on the old adage of absolute power corrupting absolutely, emphasis on growth always corrupts in the end?

Mary: I’ll say it’s not surprising, and it’s common in tech to develop an obsession with increasing metrics around the margins. Rather than an “absolute power corrupts absolutely” approach to diagnosing what’s going wrong, I also see affinities with the Bible verse that “the love of money is the root of all evil.” Still, if I thought anyone could resist the pull, it would have been Amazon. The company has the strongest culture I’ve ever seen, and Bezos even highlights building a strong culture as the single best thing he’s done for Amazon.

What I see a lot in the tech world, including at places like Facebook and Google, is that people start out with excellent intentions and do really good work, but once they get a little bit of power, or start getting ahead of the competition, it goes to their heads. This is understandable but unfortunate. A culture of obsession with growth metrics makes it tempting to ignore the ecosystem. Another side effect is the prevalence of inflated egos. In a culture where growth and “impact” are rewarded, people who hit their numbers assume that they are unique and special. It’s common in Silicon Valley for people to think that because their company is doing well, there is something special about them: that Facebook’s or Google’s success means they’re rock stars who can do no wrong. In reality, if they took the same behaviors to another company, they would likely not have the same success.

Evan: What did you do when you worked at Facebook and Instagram?

Mary: I was in product management. I led product teams to build and deliver new products and improve existing ones. For example, I was originally hired at Facebook to lead the privacy product team. At Instagram, I worked on privacy, integrity, and account security.

Evan: Can you expand on your thesis about egotistical behavior? I would imagine these companies are really good at ensuring that new hires share their beliefs and values. You don’t want people to have to come into work checking their consciences at the door.

Mary: Despite Facebook’s reputation from the outside — one that I have no quibbles with — there were many things I loved about working at the company. For example, I actually loved my colleagues at Facebook. They’re among the best, most humane, incredible, talented, kind, cooperative, collaborative, and thoughtful people I’ve ever worked with. They challenged me, and frankly, I live for that. One thing I noticed and appreciated is that Facebook actively screens for empathy during product management interviews. The company wants to hire empathetic people who can put themselves in the user’s perspective, anticipate what problems users might have, and effectively fix problems when they arise. For example, during an interview, you’ll be asked to design a hypothetical product to show you’re capable of understanding what users will want to accomplish and how they’ll act. In fact, Facebook is generally really good at ensuring its employees think of the people who use their products as real people — not abstract entities like users to be exploited. That isn’t to say that the corporation isn’t structured in a way that designs for dark patterns and user exploitation, but that generally speaking, product teams aim to solve real “people problems,” as they’re called.

Evan: But…

Mary: But what happens in tech companies is that company culture, policy, or executive direction constrains all of this good stuff. The company wants to hire the best in people, but it also wants to keep those virtues within certain boundaries by always putting the company’s goals first. If something conflicts with a company’s goals, it doesn’t go out the door. Leadership sets the goals. A typical goal might be increasing engagement or time spent, and any products that reduce those metrics don’t launch. And leadership, I think, has some pretty serious ego problems, along the lines of what I described earlier. To be clear, this isn’t just a problem at Facebook. It’s fairly typical in Silicon Valley.

Evan: What would Hobbes say about this?

Mary: Hobbes would basically say we all have desires and aversions. We’re basically just bundles of nonrational passions that conflict and change and are ultimately about serving our own interest or survival, even if we still have motivations to bond with others or serve a greater good. Whatever we desire, we call good. Whatever we have an aversion to, we call evil. Good and evil are not absolutes. They’re just whatever gives us pleasure and pain.

In tech culture, senior leadership is going to follow their passions like any other human, but they have a lot more power, and therefore more ability, to achieve the objects of their desire. And as you might expect, when a company’s leadership is frequently criticized or attacked, it’s natural to develop a camaraderie that takes the form of an “us against the world” mentality. Basically, the company is being attacked and, as a result, leadership does whatever it needs to do to avoid that pain and turn it into an advantage — to win. It’s a natural human reaction. When someone comes after you with criticism, you see them as wrong and threatening, as something to be overcome or put down.

Evan: If I understand you correctly, the scrutiny of Big Tech hasn’t motivated companies to find their moral compasses or adopt precautionary policies to minimize future harms at scale. Instead, it’s allowed extremely powerful entities to adopt a victimized mentality filled with grievances.

Mary: You’re correct, although I don’t want to say it’s as simple as just being a persecution complex. But you can kind of understand this happening. People are just trying to grow their companies and build something new, especially in the early stages, and then the entire world comes after them. They’re making money hand over fist. Who are they going to listen to? The people who cheer them on? Their shareholders who love them? Or the people telling them they’re crap? There’s not a lot of self-reflection in Silicon Valley.

Evan: Even with so much money, why doesn’t management value forthright critics?

Mary: There are internal critical voices, but being one takes real courage. More often, companies try to direct that reflective energy toward company goals: challenging themselves to build better toward growth targets, not ethical considerations, or to address ethical considerations strictly within the boundaries of growth goals.

Facebook, like Google, implemented the practice of having a weekly chat with leadership. The entire company comes together. You have some beers and wine. You sit and talk. Your CEO is there. Four years ago, these were much more open, self-critical events, and I thoroughly enjoyed them. But a couple of years ago, after Cambridge Analytica and other major issues, when Facebook took it on the chin, I noticed those internal conversations becoming more like propaganda machines. The goal seemed to be for leadership to provide a platform to make you feel good about your work, to give you the party line about the value of your work, to send you back into your daily job with renewed energy. It became a lot more difficult to voice your true perspective about doing the right thing. This is, of course, just my perception, but I noticed a marked change. Entire leadership teams were present to manage any Q&A that went off the rails, and it was clear that talking points were well established in advance.

Evan: I once gave a talk for a major company and was asked to start off by voicing whatever reservations I might have had. It was a brilliant move. The goal was to make me feel heard without actually caring about my concerns. The technique uses catharsis to get people to lower their guard. Does this type of disingenuousness pervade Big Tech?

Mary: Yes. Many of the posts in company forums amount to, “Well, I got to be heard, and the company did this horrible thing anyway, but they must have had a reason.” Feeling heard is a basic human need, and believing that your leadership is listening and that there’s a chance things will eventually change makes you feel better. But in my experience, things rarely change. Catharsis is the point. It’s very effective.

At this point in a company, you really only survive if you keep your head down, don’t raise concerns to anyone, and let concerns drop when you hear them, even if they’re constructively posed. It’s a corporate playbook that reflects attitudes deeply embedded in the DNA of corporate culture. Google just followed it by ousting its A.I. ethics leads, Margaret Mitchell and Timnit Gebru, for being critical of the company’s internal practices — and that was literally their job. Google wasn’t worried about public backlash, or it assumed any backlash would go away quickly, so the hit was worth it. The company is entirely focused on making the point that it will crush anyone who makes it look bad — on scaring employees into not being so brazen in the future. They figure the public outcry will go away. They bring in new management to bulletproof themselves against further criticism. They do the opposite of becoming self-aware and contrite.

Evan: Would you say that hoping for significant gains in tech ethics or A.I. ethics in Silicon Valley without strong regulation is misplaced idealism?

Mary: I don’t like being quite so hopeless, but I can’t disagree. Shareholders expect revenue growth of 30% to 50% year after year. Twenty years ago, that would have been phenomenal. Now, it’s the norm.

Having worked on privacy at Facebook, I can give you a hypothetical example illustrating how deep the constraints run. Facebook doesn’t give you a choice to make your profile picture private. It’s always publicly visible, enabling bad actors to mark it up, repost it, create impersonation accounts, harass women, and more. The harms are clear.

Evan: They also include making your image vulnerable to being scraped and inserted into a facial recognition database.

Mary: So, why might this be the case? Like many things in tech, there’s a simple answer: system design is typically deliberate. You might imagine that, without public profile pictures, friending goes way down. Friending is one of the primary drivers of engagement on Facebook; this is not a secret. The more users are engaged, the more other growth metrics are hit, and the more money the company makes. Design consequences are typically intentional: why do you think it’s generally so hard to find privacy settings, or that an iPhone’s default is not to show you when an app or service is using your location?

Evan: What’s morale like under these conditions?

Mary: Usually in a tech company, you’re working so hard and running so fast that there’s not a ton of time for reflection. For example, many people set stretch goals, or “50/50” goals: goals you set for yourself that you have a 50% chance of achieving. You hope that by stretching yourself, you will actually achieve 75% of them. So, you’re always in a sprint, trying to do more with less. And when you’re sprinting toward a goal, you don’t have time to question the ecosystem. An important takeaway is that Silicon Valley requires lots of young employees, because if you don’t have the stamina to work nonstop, full time plus overtime, you’re not going to cut it. And in performance reviews, you’re competing against your colleagues, so the pressure is always there to do more. It’s relentless. I never had a real vacation where I could just relax.

On top of this, as I already mentioned, you notice that people who do speak up and ask critical questions end up getting humiliated and fired. Being on board is an unspoken requirement of employment.

Evan: How bad is the burnout in this culture?

Mary: It’s very high. But you do what you need to do because the pay is so good, or because you believe you’re making a difference, or because you just want to keep your job. A regular thing to do in Silicon Valley is work for four years and then take a year off because you’re so tired. Also, you’re constantly monitored and assessed, and you’re only as valuable as your last performance review.

And here, as elsewhere, things can be worse for women and minorities. I’m sure this is typical. You sort of can’t win for losing at times. In one hilarious performance review, I received peer feedback that criticized me both for not smiling enough, which meant I was too serious and unapproachable, and for smiling too much, which meant I wasn’t serious enough. I’ve also been called “difficult” for having even the gentlest of opinions, and was basically told that having any feedback at all meant that I didn’t trust my manager, and that this made me a problem. Even with lots of emotional labor — offering constructive criticism through compliment sandwiches, where you insert carefully worded criticism between praise — it’s very hard for women to be heard. These are the situations and performance reviews I can laugh at. You develop a thick skin and learn to laugh when it hurts.

Prof. Philosophy at RIT. Latest book: “Re-Engineering Humanity.” Bylines everywhere. http://eselinger.org/
