Illustrations: Erik Carter

The Risk Makers

Viral hate, election interference, and hacked accounts: inside the tech industry’s decades-long failure to reckon with risk

One spring day in 2014, Susan Benesch arrived at Facebook’s headquarters in Menlo Park and was ushered into a glass-walled conference room. She’d traveled from Washington, D.C., to meet with Facebook’s Compassion Research Team, a group that included employees, academics, and researchers whose job was to build tools to help users resolve conflicts directly, reducing Facebook’s need to intervene.

Benesch, a human rights lawyer, faculty associate at Harvard, and founder of the Dangerous Speech Project, a nonprofit studying the connection between online speech and real-world violence, worked closely with the Compassion Research Team, and used this meeting to raise a serious issue that had come to her attention: the extensive sectarian violence in Myanmar.

Long before it was headline news, human rights groups were warning that the Burmese military and a segment of the population were orchestrating widescale abuses against civilians, particularly the country’s Muslim Rohingya minority: forced labor, sexual violence, extrajudicial killings, the burning of villages. The attacks, amply documented but denied by the Myanmar government, were being coordinated online and often via Facebook, human rights activists said. Facebook came preinstalled on most mobile phones and, as a result, was the country’s primary news and information source.

Activists in Myanmar and the United States were calling Benesch and asking for help. Facebook, they said, was proliferating dangerous speech without fully grasping the country’s political and cultural divisions, or comprehending the danger — and their efforts to address the problems with the company hadn’t received an adequate response. In her meeting with the compassion team, Benesch relayed their concerns in blunt terms.

“You have this serious problem in Myanmar,” she told the group. “There is an appreciable risk of mass violence.”

To address the issue, Facebook began working directly with activists in Myanmar to flag dangerous content, and the company made some changes, such as translating Facebook’s Community Standards into Burmese and setting up a rapid escalation channel for reports. But these changes were relatively minor, Benesch and human rights activists said, and did little to stem the tide of violence that followed.

It took years before Facebook publicly acknowledged its role in what the United Nations’ top human rights official characterized in 2017 as “a textbook example of ethnic cleansing.” The following year, Yanghee Lee, the UN’s special rapporteur on the situation of human rights in Myanmar, highlighted the social media giant’s influence in the country. “I’m afraid that Facebook has now turned into a beast, and not what it originally intended,” Lee said.

The routine is familiar now: A tech company like Facebook, Snapchat, or Zoom introduces a new product or service without fully anticipating the possibilities for abuse. Then comes an apology and vows “to do better.” It’s a wash-rinse-repeat cycle that spans decades.

“I hope you understand, this is not how I meant for things to go, and I apologize for any harm done as a result of my neglect to consider how quickly the site would spread,” Mark Zuckerberg told a number of his fellow Harvard students in 2003 after he harvested their photos, without consent, to populate facemash.com, the Facebook precursor that invited students to rank classmates à la “hot-or-not.”

“I thought once everybody could speak freely and exchange information and ideas, the world is automatically going to be a better place. I was wrong about that,” Twitter co-founder Ev Williams, who served as the company’s CEO between October 2008 and October 2010, said in a retrospective interview with The New York Times in 2017 that covered violence, harassment, and fake news on the platform, as well as suggestions that Twitter had possibly helped Donald Trump win the presidency. (Williams is also the founder and CEO of OneZero’s parent company, Medium.)

In April 2020, at the height of the pandemic, as internet trolls “Zoombombed” video calls with harassing content, Zoom CEO Eric Yuan apologized too — for “challenges we did not anticipate.”

The failure to properly calculate risk sits at the core of most high-profile tech disasters of the last decade. The problem is endemic to the industry, critics say. “Harmful content, of any category, is not an aberration, but a condition of platforms,” says Tarleton Gillespie, a principal researcher at Microsoft and an adjunct associate professor at Cornell University, and author of the 2018 book Custodians of the Internet.

The internet’s “condition of harm” and its direct relation to risk is structural. The tech industry — from venture capitalists to engineers to creative visionaries — is known for its strike-it-rich Wild West individualistic ethos, swaggering risk-taking, and persistent homogeneity. Some of this may be a direct result of the industry’s whiteness and maleness. For more than two decades, studies have found that a specific subset of men, in the U.S. mostly white, with higher status and a strong belief in individual efficacy, are prone to accept new technologies with greater alacrity while minimizing their potential threats — a phenomenon researchers have called the “white-male effect,” a form of cognition that protects status. In the words of one study, the findings expose “a host of new practical and moral challenges for reconciling the rational regulation of risk with democratic decision making.”

Risk assessment is also often neglected in favor of profit. “In decision-making, attention becomes focused on [profit], and not on human welfare, humanitarian social issues,” Paul Slovic, co-founder of Decision Research and a pioneer in the field of risk assessment, tells OneZero and Type Investigations. “That then can lead to policies that are not necessarily intended to harm, but that do inadvertently.”

According to interviews with more than three dozen tech industry experts, insiders, and researchers, companies across the industry, including Google, YouTube, Lime, and Zoom, repeatedly fail to adequately assess the risks their products introduce. Facebook in particular continues to make headlines, most recently over state actors allegedly misusing the platform for political purposes. A massive high-profile hack this summer underscored how Twitter, too, struggles to reckon with risk. The security and safety functions historically tasked with reducing risk remain siloed, narrowly technical, and ignorant or dismissive of some 50 years of decision science research.

Instead, many tech companies still cause harm and apologize in the aftermath. Multiple sources attributed this dynamic to a “move fast and break things” hangover.

“When you work at these companies, you are constantly moving from emergency to emergency,” Alex Stamos, who worked as a top security executive at Yahoo and Facebook from 2014 to 2018, tells us. “There was never not a fire at either Facebook or Yahoo. I cannot think of one point where I could sit down and do something proactive.”

A daily flood of stories about security breaches, surveillance risks, weaponized data, voter manipulation, disinformation, algorithmic biases, conspiracy theories, hate, and harassment attests to the infinite volume and variety of possible failures. The internet of things, already enmeshed in our day-to-day lives, is rife with profound security risks. From glitchy voting-related apps hastily designed and released, to concerns about dangerous vaping pods, self-driving cars, exploding smartphones, and supposedly revolutionary blood tests, companies routinely release untested, unverified, unregulated, and, sometimes, fraudulent products.

Identifying risk isn’t just a technical problem. Risk assessment is, in simple terms, any action taken, whether by an individual or an institution, to decide what is acceptable risk. It matters who’s in the room identifying acceptable risk and taking action in response. It is political. It is high-stakes. And, according to sources with expertise in the field, it is deeply misunderstood.

The terms “risk” and “threat” are frequently conflated. “The language of risk is fuzzy and confusing,” says Slovic. “We use the word risk to mean different things, often from one sentence to the next and we don’t realize it.” Threats are generally understood as known possibilities, and can be natural (a tornado), unintentional (a coder’s mistake), or intentional (a terrorist attack). Historically, threats have been the bailiwick of security teams.

Risk, on the other hand, can be understood as the likelihood of something bad happening, and it impacts entire organizations.

“Risk management is looking at the macro level of all risks facing an organization,” says Michael Coates, co-founder and CEO of Altitude Networks, and former chief information security officer at Twitter and former security chief at Mozilla.

With the 2020 presidential election looming, the risk of tech going wrong is tangible and immediate. In mid-September, a damning internal memo written by Facebook data scientist Sophie Zhang was leaked to BuzzFeed News.

“I’ve found multiple blatant attempts by foreign national governments to abuse our platform on vast scales to mislead their own citizenry,” she wrote in an exit memo, citing cases in Azerbaijan, Honduras, India, Ukraine, Spain, Brazil, Bolivia, and Ecuador. Zhang described siloing that inhibited communications and a company that prioritized business concerns over “real-world problems.” While she herself was not part of Facebook’s “civic integrity” team, her work regularly exposed civic integrity risks, yet it was downgraded to the status of a pet project performed in her “spare time.” Relative to spam, she wrote, “The civic aspect was discounted because of its small volume, its disproportionate impact ignored.” She was careful to note that there was no conscious malicious intent. Instead, she said, “slapdash and haphazard accidents” were rife. Given the high stakes, her observations are alarming — but not surprising.

OneZero and Type Investigations spoke with academics, activists, and current and former employees at major tech firms, including Google, Microsoft, Twitter, Uber, and YouTube in order to understand how their companies approach risk.

We spent time at Facebook headquarters and held extensive discussions with top-level executives from the company in person and via email. Facebook revealed significant details about how the company addressed risk, but the company’s willingness to respond to questions was not the norm. Beyond saying that they were “monitoring new threats continuously,” as a YouTube spokesperson said, few companies described a specific or formalized process for identifying and assessing risk. Companies such as Twitter and Google provided generalized statements of commitment to reducing risk or links to public communications on their websites. We also reached out to other companies with recent high-profile issues, including Zoom and Snapchat. Spokespeople for the companies offered publicly available policy papers and blog posts.

In interviews, industry executives, technologists, and security experts said that “you can’t know what you don’t know.” They had little to say about decision-making processes designed to identify acceptable risk before harm occurs.

While not all risks can be known and risks created by new technologies are often unprecedented, the science of risk and decision-making is backed by half a century of research. The nuclear power, banking, environmental, food, automotive, aerospace, and medical industries all have formal processes — such as they are — for assessing potential risks before releasing new products or services. Should Silicon Valley’s most prominent companies get a pass?

“We would never allow, for example, a pharmaceutical company to experiment on the public and then, after seeing what happens, withdraw or change a product,” says Safiya Noble, author of Algorithms of Oppression and co-founder of the Center for Critical Internet Inquiry at the University of California, Los Angeles.

So, what would harmful tech products have looked like if their creators had integrated the best of risk science?

It was a gray December afternoon in Arlington, Virginia, day two of the Society for Risk Analysis’ 2019 Annual Meeting, when we first met Paul Slovic. A balding man of 82 in cargo pants, running shoes, and a well-worn wool color-blocked sweater, he was jotting down notes about a presentation on risk communications: “Incomplete. Complex. Multidisciplinary.”

“Fast, intuitive thinking doesn’t scale,” he told us later, especially when it comes to social values and humanitarian concerns. Tech leaders would do well to have more humility “about their ability to anticipate all the problems that might arise from a powerful system in which we don’t have experiential basis for decisions.” The first step, he said, is “to be aware of the tricks that our minds play on us, so we can be alert as to why systems that can protect us can also deceive us.”

Slovic has spent decades studying risk and decision-making, and what happens to organizations encountering unprecedented speed, scale, and harm. “People have to think incredibly hard to imagine undesirable and never-before-seen consequences,” he told us.

In 1957, Slovic was a teenage basketball player who’d enrolled at Stanford with plans to major in math. Soon after, he concluded he was much more interested in the powerful forces that shape human behavior — curiosity, fear, ambition, status, attention, greed — and switched majors to psychology.

While in graduate school at the University of Michigan, Slovic was introduced to research on the psychology of risk and decision-making. Eventually, he and his collaborators would identify a deep well of mechanisms central to what he calls “society’s gambles” — the risks we face every day. His work investigates the cognitive heuristics and biases, grounded in experience, identity, and emotions, that shape decision-making, and over the decades it has grown into a canon of research. Slovic’s theories and methods have been applied to sectors as diverse as public policy, energy, medicine, human rights, law, aerospace, the military, economics, and the environment.

Risk assessment, Slovic says, is challenging because it forces us to confront and overcome our own biases. “Decision-making is also coupled with the fact that if we don’t want to see certain kinds of things, we’re likely not to think of them. If there are outcomes that are problematic or unpleasant, you might not work very hard to anticipate and address them.”

In the 1970s, Slovic was invited to present his work at conferences of nuclear engineers and energy industry executives. “Nuclear power, as with tech today, was a technology driven by engineering science, by great technical knowledge,” Slovic says. The engineers viewed themselves as the smartest guys in the room — not unlike today’s tech engineers and CEOs. They had little understanding of, or interest in, the psychological perspectives on risk that Slovic was describing.

At the time, the nuclear industry was “foundering on the shoals of adverse public opinion,” he and two co-authors wrote in the late 1970s. There were critical discrepancies between expert and lay judgments, he told his audiences of nuclear engineers and scientists, and they’d do well to understand them. Slovic warned that opposition to nuclear power was springing up nationwide and that nuclear scientists should not dismiss public concerns about risk as ignorance and irrationality.

Instead, Slovic says, the engineers regarded him as a troublemaker, focused on subjective unquantifiables — uncertainties, values, politics, issues of trust — in the face of “hard science” and technical expertise. The engineers in attendance could not see themselves — their homogeneity, social status, commercial incentives, feelings, and disdain for “nonexperts” — as risk factors.

“The whole process is subjective and value-laden,” says Slovic, describing his encounters and decades of research. “Defining risk is an exercise of power.”

Then the nuclear industry got a wake-up call. Around 4 a.m. on March 28, 1979, the Unit 2 reactor at the Three Mile Island Nuclear Generating Station in Pennsylvania failed, resulting in a partial meltdown. An investigative report found the plant’s design was flawed, and its staff were poorly trained for an emergency situation. Soon after, the U.S. Nuclear Regulatory Commission provided funding for Slovic, along with colleagues and Decision Research co-founders Baruch Fischhoff and Sarah Lichtenstein, to produce a guide to “acceptable risk.”

“Acceptable-risk problems are decision problems,” they wrote in the resulting report, published in 1980. “They require a choice between alternatives. That choice depends upon the alternatives, values, and beliefs that are considered.”

Risk analysis initially took shape as an “objective” tool for engineers and statesmen who needed more facts to understand and control risks presented by new technologies, in the aerospace and nuclear industries in particular. A community coalesced around the subject through the 1970s and 1980s, in the private sector, industry associations, academia, research centers, and at the federal policy level, with engineers, researchers, and regulators building industry-specific processes and controls.

Today, a fleet of firms provides information security and risk management services to pretty much any collective, sector, or company you can imagine — HP, Nestle, the National Weather Service, and the State of Michigan have all enlisted risk management services. It’s estimated that this industry pulls in upward of $130 billion every year.

Now, critics say that the tech industry’s existing approach to dealing with risk — rooted in narrowly defined practices and relegated to legal and infosecurity departments focused on the quantifiable and discrete, like copyright — is dated. Infosec’s traditional focus on stopping “attacks” on systems by known adversaries has left companies open to misuse. In 2020, critics say, the tech industry’s practice of risk assessment must also take into account the protection of social values, like access to accurate information.

Soliciting input from the public is essential for overcoming internal biases and blind spots, and avoiding problems down the road, Slovic and his colleagues argue. As they put it in 1980: “Early public involvement may lead to decisions that take longer to make, but are more likely to stick.”

Assessing the risks of a partial nuclear meltdown and assessing the risks of a social media platform being used to hack an election may appear vastly different — but there’s common ground. Both industries made big claims about transforming the world; both industries had early critics and civil society advocates warning of potential harms; and both are dominated by white, powerful men disposed to dismissing critics as subjective, emotional, unreasonable, and ill-informed.

And both rushed their products to market, prioritizing technical and business goals over social and humanitarian concerns. Today’s tech leaders still tend to prioritize technical fixes — better algorithms, faster processors, improved features — over efforts to improve the structure of decision-making and to create paths to public engagement that lead to better outcomes.

Is tech now in the midst of its own Three Mile Island moment? There are signs that risk assessment practices are evolving to meet the crisis, as new theories and approaches emerge. The shift is being driven largely by academics and risk experts working outside the biggest tech companies. Their ideas and work — the Center for Humane Technology’s design guide for building “more humane” products and identifying “where investing in a deeper understanding of human nature will yield further benefits,” IEEE Global Initiative’s Ethically Aligned Design, Social Threat Modeling, the safety culture model, the do no harm approach, Ethical Operating Systems, Value-Sensitive Design, Cyber Resilience, and the Cyber-XR Coalition’s recently released guide to ethics and safety in cybersecurity and XR, for instance — represent burgeoning efforts to address risk that go beyond traditional, technical solutions and embrace human and social values.

Michael Coates, the former head of security at Twitter and Mozilla, says that these conversations are starting to have an impact. Civic concerns, he tells us, were “talked about for years in the industry” but “were brushed off by people saying, ‘Well, you’re being a little paranoid.’ Or, ‘Those things won’t happen at all.’ This approach was common among industry experts tackling security issues.”

Today’s tech companies don’t have the luxury of being so complacent. Those problems that all those security experts thought we’d never see? “Well, some of them are now seen,” Coates says.

The violence in Myanmar, and Facebook’s apparent role in fueling it, was a disaster of unprecedented scale for the company, which had reportedly been warned for years about expanding ethnic violence. The problems were routinely exacerbated, critics say, by insufficient planning, translation services, and content moderation in the country.

A source who worked with Facebook on Myanmar policy and safety issues as they developed, and who requested anonymity due to professional concerns, told OneZero and Type Investigations that the decision to add Burmese to Facebook was initiated by engineers on the fly, during a period when Myanmar was liberalizing its telecommunications systems after the 2010 elections. A project like this, this individual said, is a “feel-good thing that sounds like it could only be positive… ‘Hey, well, let’s just support as many languages as possible.’ That turns out to be a really negative thing, in this case.”

An enduring concern in Myanmar was that posts inciting violence against the Rohingya often went undetected — a problem highlighted in a 2018 Reuters investigation. For example, one Facebook post reading “Kill all the kalars [an anti-Rohingya slur] that you see in Myanmar; none of them should be left alive,” was mistranslated by Facebook’s algorithms into benign-sounding gibberish: “I shouldn’t have a rainbow in Myanmar.”

Not only was Facebook limited in its ability to monitor what was being said in Myanmar, but because Zawgyi, rather than Unicode, was predominantly used in Myanmar, users in the country could not easily use Burmese script to type comments or posts, leaving citizens and activists to rely on memes, photography, and images to communicate. In addition, Benesch told us, at the time that she met with Facebook in 2014, the company was relying on an executive translation company in Dublin for its Burmese language needs. At that point, she says, “Facebook did not have a single person who could read Burmese on staff.” (A Facebook spokesperson denied that the company relied on an executive translation firm. According to a Reuters report, the company did not add any Burmese-speaking staff until 2015.)

At one point, prominent activists worked with Facebook, through Benesch, to develop a sticker pack, similar to today’s emoji options, to discourage online hate speech. “On the one hand, this was a tiny little thing scratching on the surface — we weren’t going to forestall genocide with a sticker pack, obviously, but it was a tiny thing that seemed better than nothing,” Benesch says.

It was not until October 2019 that Facebook integrated Unicode font converters into Facebook and Messenger. “These tools will make a big difference for the millions of people in Myanmar who are using our apps to communicate with friends and family,” the company stated.

A Facebook spokesperson acknowledged the company’s shortfalls, but emphasized the particularities of the situation and steps that the company has taken since then. Myanmar is the only country in the world with a significant online presence that hasn’t fully adopted and standardized Unicode, they said. More than 100 native Burmese speakers now review content for Facebook, and the company has the ability to review content in local ethnic languages. The company uses automated systems to identify hate speech in 45 languages, including Burmese, and since 2018 it has identified and disrupted six networks engaging in misinformation campaigns in Myanmar.

“As we’ve said before, we were too slow to act on the abuse on our platform at the time, largely due to challenges around reporting flows, dual font issues, and language capability,” the Facebook spokesperson told OneZero and Type Investigations. “We know that there continue to be challenges in Myanmar, but today we are in a much better position to address those challenges than we were in 2013.”

Similar language issues will continue to be a challenge for Facebook and other platforms, as they continue to expand globally. “The scale is just something that we have to keep in mind,” Necip Fazil Ayan, Facebook’s director of A.I., told us in 2018, pointing out that Facebook worked in 70 languages and served 6 billion translations a day that year. “Our goal is to keep improving quality. And keep adding languages.”

As of 2019, Facebook officially supported 111 languages, with content moderation teams working to identify needs in hundreds more. It’s “a heavy lift to translate into all those different languages,” Monika Bickert, Facebook’s vice president of global policy and management, told Reuters in 2019.

Facebook declined to provide specific details about how the company decides to onboard and support new languages, but a Facebook spokesperson said the company considers “insights from a variety of sources, including policy input or regions where there is an increased potential for harm.”

Launching a product that could impact a community, region, or even a whole country — particularly where history and political context are unfamiliar — without sufficient language resources can be dangerous. Critics argue that the problem is bigger than just a language barrier, and the solution isn’t simply better translations and machine learning. Instead, they say companies should take a more deliberate and reasoned approach when deciding to expand into parts of the world where they don’t fully understand the political and cultural dynamics.

“Adding a bunch of languages is a separate process from, ‘We’re going to move into a country and we’re going to specifically think about the structure of who is in that country,’” a former Facebook executive, who requested anonymity in order to speak candidly, tells OneZero and Type Investigations. “The disconnect between these processes is problematic.”

The move into Myanmar echoed other hasty product developments at Facebook. Facebook Live, which has been used to record and distribute, for example, suicides, rapes, child endangerment, murder, and hate crimes, was also reportedly rushed to market. Zuckerberg cheerfully introduced the service in April 2016. The new feature, he wrote in a Facebook post, will be “like having a TV camera in your pocket.”

At the time, the company had amassed nearly 2 billion monthly users and was managing a 24/7 stream of complaints and problems, including early warnings in 2015 that Cambridge Analytica was helping Ted Cruz’s presidential campaign by forming psychological profiles of potential voters, using data that had been mined from tens of millions of Facebook users.

It’s unclear from OneZero and Type Investigations’ reporting how much prelaunch risk assessment was done around Facebook Live. Stamos, the company’s chief security officer at the time, was told about the launch “a couple months” before the product was pushed to the Facebook app, reportedly in response to growing competitive pressure from Snapchat in particular.

In addition to reportedly keeping its chief security officer in the dark until the release of the product was imminent, the company left the policy team and the trust and safety team similarly out of the loop until a relatively late stage. According to a source familiar with the situation, the teams recommended that the product launch be delayed. But the suggestion was ignored.

A Facebook spokesperson denied the suggestion that the product rollout was done in haste. “When we build a product we always think both about the ways the product can be used for good in the world (the vision of the product) and the types of bad things that can happen through the product,” the spokesperson told OneZero and Type Investigations. “With Facebook Live we did just that.”

But the scene at Facebook headquarters in the days following the launch of Facebook Live was, in the words of a consultant familiar with the incidents and who requested anonymity for professional reasons, a “shitshow.” According to two sources with knowledge of the episode, the Facebook Live team worked around the clock to remove videos of suicides, rapes, and other acts of violence. By 2017, Facebook was struggling daily to contain the damage, as stories of live-broadcasted violence filled the news.

Relative to other companies, Facebook has been open to speaking publicly about how operations are evolving in response to increased awareness and understanding of the risks their products introduce to individuals and to society. When we met in 2018, Guy Rosen, Facebook’s VP of product management at the time, acknowledged the problems associated with the launch of Facebook Live. After “a string of bad things,” he said, “we realized we had to go faster” to address issues with the service. The company pulled together members of various teams, who dropped other priorities and spent two to three months in a lockdown, focused solely on resolving the issues with the new service.

Stamos says there were legitimate arguments over the best way to identify and prioritize problems like those faced by the Facebook Live team at the time. He sketched a simple X-Y coordinate grid, with circles of various sizes representing the prevalence of certain risks, the probability of them occurring, and their potential impact.

He and his team used such grids to evaluate the likelihood of certain harms — child sexual exploitation, terrorism, hate speech, and other abuses of Facebook products — and target their efforts accordingly. “We have a finite amount of resources. How are we going to apply those finite resources?” Stamos says. If safety teams had been looped into Facebook Live’s planning earlier on, his team might have been able to help prevent the problems that occurred.
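The grid Stamos describes amounts to a simple expected-severity ranking: each harm is scored by how likely it is and how bad it would be, and finite resources go to the top of the list. A minimal sketch of the idea, with invented categories and numbers purely for illustration (this is not Facebook’s actual tooling):

```python
# Illustrative sketch of a likelihood/impact risk grid.
# All categories and numbers below are invented for illustration only.
harms = {
    "child_exploitation": {"likelihood": 0.2, "impact": 10},
    "terrorist_content":  {"likelihood": 0.1, "impact": 9},
    "hate_speech":        {"likelihood": 0.8, "impact": 6},
    "spam":               {"likelihood": 0.9, "impact": 2},
}

def risk_score(harm):
    """Expected severity: how likely a harm is, times how bad it would be."""
    return harm["likelihood"] * harm["impact"]

# Rank harms so limited safety resources go to the highest expected severity.
ranked = sorted(harms, key=lambda name: risk_score(harms[name]), reverse=True)
for name in ranked:
    print(name, round(risk_score(harms[name]), 2))
```

On these made-up numbers, a frequent mid-severity harm can outrank a rarer catastrophic one, which is precisely the kind of judgment call that makes “defining risk an exercise of power.”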

In the two years following Facebook Live’s launch, Facebook would reevaluate its approach to risk. In May 2018, Greg Marra, then Facebook’s project management director overseeing News, spoke publicly for the first time about Facebook’s turn to an integrated approach to risk, an attempt to prevent harm through coordinated, cross-functional teams. “There is a lot that we need to do to coordinate internally, from Facebook, Instagram, Messenger, WhatsApp, and Oculus, and we need a standard approach to this,” he said.

Six months later, in November 2018, Zuckerberg also announced a significant shift in how Facebook planned to handle risk. “Moving from reactive to proactive handling of content at scale has only started to become possible recently because of advances in artificial intelligence — and because of the multi-billion dollar annual investments we can now fund,” he said. “For most of our history, the content review process has been very reactive.”

Integrating the work of teams across functions, to move from reactive to proactive, had become the number-one focus of the company. When Stamos left, under fractious circumstances involving disagreements over possible Russian interference in the 2016 U.S. election, his position as chief security officer was not filled. Instead, a Facebook spokesperson told The Verge, the company “embedded […] security engineers, analysts, investigators, and other specialists in our product and engineering teams to better address the emerging security threats we face.”

In 2018, Guy Rosen told us that Zuckerberg had come to him ready to invest substantially. “How much do you need?” he remembered Zuckerberg asking him. “This is the most important thing. We have got to staff this up.”

Over the past two years, the company has more than doubled the number of people working on safety and security, to roughly 35,000. It has formalized the use of cross-functional teams to address siloing and blind spots, and it has consolidated those teams under Rosen, who has taken on the title of VP of integrity. Additionally, in July 2019, Facebook added a new role, naming human rights activist Miranda Sissons as its director of human rights, an acknowledgment of the company’s influence on conflicts and humanitarian crises. Her first official trip was to Myanmar.

In multiple interviews and email exchanges, Facebook executives and spokespeople described the workings of the cross-functional teams, organized into two primary categories: those who identify and mitigate risk, and those who focus on risks related to specific events, such as elections or crises that might prompt spikes in online activity.

Both types of teams conduct proactive and postmortem investigations of risks. They then work with policy and integrity staff. The integrity team — reportedly comprising some 200 employees and overseen by Naomi Gleit, vice president of product and social impact — is made up of cross-functional groups responsible for understanding the political and cultural dynamics and conditions of regions that Facebook operates in. These are the teams that Sophie Zhang often collaborated with.

“I think the public attention has helped motivate engineers to work on this,” Rosen says. “These are really hard problems. People haven’t done this yet.”

While internal sources described to us stubborn difficulties with the process, including a lack of transparency between different functional areas and poor internal communications resulting in duplication of effort, Facebook highlighted to us products and features that it said had improved as a result of the changes implemented to allow Facebook to be more proactive: the use of photo technologies to fight child exploitation, sexual abuse, and the spread of terrorist content; the development of safety measures in Messenger Kids; improved safety, privacy, and authentication processes for Facebook Dating; and heightened election-related security. In 2019, the company said, its teams removed more than 50 operations engaged in “coordinated inauthentic behavior,” compared to one network that it identified as manipulative in 2017.
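The “photo technologies” referenced here generally work by comparing uploads against databases of fingerprints, or hashes, of known harmful images. Production systems such as Microsoft’s PhotoDNA use perceptual hashes that still match after resizing or re-encoding; the exact-match sketch below is a deliberate simplification, and every value in it is hypothetical.

```python
import hashlib

# Hypothetical database of hashes of known-bad images. Real systems
# use perceptual hashing (e.g., PhotoDNA), which tolerates resizing
# and re-encoding; SHA-256 only catches byte-for-byte identical files.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"known-harmful-image-bytes").hexdigest(),
}

def should_block(image_bytes: bytes) -> bool:
    """Flag an upload whose hash matches the known-bad database."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_BAD_HASHES

print(should_block(b"known-harmful-image-bytes"))  # True
print(should_block(b"harmless-cat-photo"))         # False
```

The appeal of this approach is that it is proactive and automatic: previously identified content is stopped at upload, before any human moderator sees it. Its limit is equally clear: it can only catch what has already been catalogued.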


Nonetheless, in March 2019, Brenton Tarrant, a 28-year-old Australian, was able to activate Facebook Live and use it to broadcast his killing of more than 50 people over the course of 17 minutes. Then, 12 minutes after the attack, a user flagged the video as a problem, and it took nearly an hour, after law enforcement contacted Facebook, for the company to remove it. But by that time, the content had reached, and inspired, countless others.

Similarly, in an echo of what happened in the build-up to violence in Myanmar, Cambodian monk and human rights activist Luon Sovath was forced in August 2020 to flee government prosecution. A significant contributor to his exile, according to recent reports, was misinformation and disinformation shared on Facebook. The company took almost a month to remove the page and the four doctored videos that endangered and defamed him.

“As a company, you would think they would want to be more vigilant and not allow their platform to be misused,” said Naly Pilorge, the director of the Cambodian League for the Promotion and Defense of Human Rights, in a recent New York Times interview. “Facebook’s reaction has been like little drops from a sink, so late and so little.”

“The obvious solution is to slow down [user generated content],” says Sarah T. Roberts, a co-founder of UCLA’s Center for Critical Internet Inquiry. Roberts has repeatedly proposed that tech companies build in access tiers to functionalities, such as livestreams, so that users are vetted before they can use a particular product. So far, however, companies have resisted such reforms. “The idea,” she says, “is never seriously discussed.”

“Safety and security are not compromised in the name of profits,” says a Facebook spokesperson. “Mark has made that clear on previous occasions — noting our investment in security is so much that it will impact our profitability, but that protecting our community is more important than maximizing our profits.”

According to Sabrina Hersi Issa, integrated “risk resilient” technology and systems are critical. An activist and angel investor who focuses on the intersection of technology and human rights, Hersi Issa advises companies on how to create, staff, and fund inclusive systems that center human values alongside profits. “It’s often the case when looking at risk, that tech companies see things as products and platforms,” she says. “When I look at a piece of tech, I ask myself, How is this piece of technology facilitating participation in democratic life? That reframing adds layers of complicated considerations that most technologists don’t consider.”

In the words of one recent security report, adding more products and practices “on top of an existing infrastructure” is no longer enough.

What might a more integrated approach to risk look like? Gathering input from various departments and a diverse set of stakeholders is important, but not sufficient on its own. Individuals who are tasked with assessing risk also need the agency and authority to be part of the final decision-making, experts say.

“Even if you have cross-functional teams, the voices that bring these concerns up are sometimes just never heard or heeded, or not given the same gravity,” says Leslie Miley, a former engineering executive at Slack, Twitter, Google, and Apple, and former chief technology officer at the Obama Foundation. “Because people don’t have that lived experience or they just don’t think it’s that big of a deal. This is something that I see regularly.”

In every company we investigated, engineers and product managers — groups overwhelmingly male — hold power. Meanwhile, the leadership of legal, policy, and trust and safety teams — sometimes referred to as “cleanup crews” — skews female and is more diverse, as Sarah Emerson recently observed for OneZero.

“If I had a meeting with Trust and Safety, especially if it was a senior one, I’d be the only man in the room,” says Stamos of his time at Facebook. “Then if I had a meeting on the Infosec side, it would be all guys, or maybe one woman.”

The problem of occupational gender segregation is endemic in tech. “You’d be hard-pressed at Google not to work with a woman. Although I do know groups on Google that are 20, 30, 40, 50 people that have no women,” says a source at Google who requested anonymity. When asked for comment on this description, and after a series of conversations that included requests for detailed information about its approach to risk assessment and mitigation, a Google spokesperson replied via email: “As digital threats evolve, the lines that distinguish traditional security threats from platform and product abuse have become increasingly blurry. The combined expertise of our security and Trust & Safety teams, along with their years-long partnership, have enabled us to develop strong protections for our users.”


Disparities like these are also racialized. A study released this summer, based on 2016 data, found that 10 major tech companies in Silicon Valley had no Black women on staff at all. Three large tech companies had no Black employees in any position, the study found. During the past four years, industry analysts have noted the slow pace of change. In 2019, 64.9% of Facebook’s technical teams were composed of white and Asian employees, 77% of whom were male.

It’s not hard to find pernicious examples of how this homogeneity impacts risk perception and product development.

In 2017, Snapchat released a feature, Snap Maps, that displayed a user’s geolocation and then, based on settings, shared their whereabouts with others. That sparked outrage from advocates who recognized the risks the feature posed to children and targets of stalking and intimate partner abuse.

The following year, Lime, a scooter-share startup, faced a backlash over a security feature designed to protect its scooters. According to news reports, when people in Oakland, California, attempted to handle a scooter without first downloading the app and paying to use it, the scooter would announce, “Unlock me to ride me, or I’ll call the police.” Local activists and a politician — sensitive to issues of overpolicing and discrimination against Black individuals in the law enforcement system — protested, arguing that the announcement endangered Black citizens.

And in 2019, the developers of DeepNude, an app that used A.I. to virtually strip women (it did not work on photos of men), withdrew the app roughly 24 hours after releasing it, amid a widespread outcry. The development team tweeted that “the probability that people will misuse it is too high.”

Each of these failures, says Miley, underscores how lived experience shapes how a person appreciates risk. Or doesn’t. Slovic is among the many researchers who have long documented the risks of having only white men in the room where decisions about safety and harm happen. “Most striking,” reads one finding from 1994, “white males tended to differ from everyone else in their attitudes and perceptions — on average, they perceived risks as much smaller and much more acceptable than did other people. These results suggest that sociopolitical factors such as power, status, alienation, and trust are strong determiners of people’s perception and acceptance of risks.”

To offset cognitive and structural problems, risk assessment requires “a total shift in intentions,” says Ellen Pao, a former VC, Reddit CEO, and now CEO of Project Include. “Security and privacy in tech at least have been male-dominated areas for as long as they’ve been around.”

Like Slovic, Pao believes the tech industry needs to embrace inclusivity and interdisciplinarity as central practices and give more power, status, and compensation to those tasked with traditionally feminized “soft” skills tied to safety and care.

Such a shift is toothless without accountability and leadership, however.

“There’s a lot of public criticism of Facebook that’s really accurate, but there are more people working on the safety of social media at Facebook than probably in the rest of the world combined,” says Stamos. “But then you have these problems where an executive decision is made that just ignores those people, and then it completely blows away all the good work they’re doing.”

When asked about his work with the tech sector, Slovic’s Decision Research co-founder Baruch Fischhoff, an academic who has served as an advisor on risk to a wide array of federal regulatory agencies, including the FDA, DHS, and the EPA, says, “It’s difficult to distinguish malice from ineptitude from cluelessness. I think risk assessment is doomed to fail unless the CEO is deeply invested in it.”

A Facebook spokesperson stressed that no matter how proactive the company’s risk assessment efforts might be, there would always be more work to do.

“In a perfect world, we’d have perfect information and be able to act on it to prevent and mitigate risk and harm,” they said. “We’ve tried to put in place robust risk assessment mechanisms and are always working to anticipate risks, and learn lessons along the way, but we are all operating with less than perfect information.”


More information, better algorithms, and enhanced technology don’t in themselves amount to “a total shift of intentions,” however. Arguably, that approach doubles down on the very tech fixes that fell short in the first place. Data and information gathered after the fact, however necessary as predictive inputs, are not sufficient. “Just imagine if these companies had said, ‘We’re going to hold off on launching this new feature or capability. We need another one and a half years,’” says University of Washington professor Batya Friedman, co-director of the Value Sensitive Design Lab and co-author of the book Value Sensitive Design: Shaping Technology with Moral Imagination. “These systems are being deployed very, very fast and at scale. These are really, really hard problems.”

Moreover, critics say that major social media companies have kept outside researchers at arm's length, resisting efforts to learn more about harmful content and how to prevent it. In 2018, for example, Twitter launched a study designed to promote civility and improve behavior on the platform, and collaborated with Susan Benesch and Cornell’s J. Nathan Matias, founder of the Citizens and Technology Lab. The company ended up abandoning the project, citing coding errors. A follow-up study, which began last summer, operated for only a few days before Benesch says it was shut down internally without any explanation.

“They squandered a really good opportunity to see what could diminish hate and harassment online,” Benesch says. “What a foolish thing to just throw out the window.”

In a statement, Twitter acknowledged that staff turnover and shifting priorities had stymied some research projects, but said it remained committed to working with academics. “We strongly believe in the power of research to advance understanding of the public conversation happening online,” a Twitter spokesperson said.

In the meantime, companies and the public remain exposed.

In July 2020, authorities charged 17-year-old Graham Ivan Clark, of Tampa, Florida, with hacking the Twitter accounts of a number of prominent individuals, including Bill Gates, Elon Musk, and Barack Obama. It was an embarrassing failure for Twitter, whose security team hadn’t recognized that employee accounts were vulnerable to social engineering, one of the best-known attack techniques in security circles.

To Sarah T. Roberts, such problems are a direct consequence of the tech industry’s resistance to outside opinions and expertise, and highlight the need to embrace a more collaborative, transparent, and structural approach to risk assessment.

“We can’t afford the continual use of the public as unwitting beta testers,” Roberts says. “We’re here today because of 40 years of denigration of anything that doesn’t have an immediate industrial application.”

Safiya Noble put it like this: “Paradigm shifts have to be imagined in order to organize our economies and our societies differently. I look at Big Tech in a similar vein to Big Tobacco or Big Cotton and ask, what is the paradigm shift? Is it legitimate to have deep human and civil rights violations that cannot be separated from these technologies? Is that legitimate? To justify their existences? We know better. The challenge here is that the technologies are often rendered in such opaque ways that people can’t see the same level of exploitation that happened in other historical moments. Our job as researchers is to make visible the harms.”

The July arrest of the 17-year-old alleged Twitter hacker prompted many to ask how one of Silicon Valley’s most prominent companies could be so vulnerable. As Clark faced his $725,000 bail, Twitter was crafting yet another apology, rinsing and repeating the tech sector’s nearly 20 years of after-the-fact mea culpas.

“Tough day for us at Twitter,” Jack Dorsey tweeted after the breach. “We all feel terrible this happened. We’re diagnosing and will share everything we can when we have a more complete understanding of exactly what happened. 💙 to our teammates working hard to make this right.”

There are reasons to doubt that tech leaders will, on their own, slow down to adopt the kind of paradigm shift Noble describes. Some 16 years after Facebook’s launch, calls are growing for government regulation of the tech industry and a renunciation of a business model that profits from the idea that content is “neutral” and platforms are objective, a model that, critics point out, cashes in on engagement and extremism.

During a congressional hearing in late July 2020, following a 13-month investigation by the House Judiciary antitrust subcommittee into the business practices of Apple, Facebook, Google, and Amazon, Rep. Hank Johnson, a Democrat from Georgia, questioned Zuckerberg about predatory market behavior. “You tried one thing and then you got caught, made some apologies, then you did it all over again, isn’t that true?” he said.

“Congressman,” replied Zuckerberg, “I respectfully disagree with that characterization.”

Less than two months later, Facebook apologized after it was revealed that the company had failed to remove incendiary and violent posts from the platform in relation to counter-protests in Kenosha, Wisconsin. In what Zuckerberg characterized as an “operational mistake,” contracted moderators unfamiliar with the “militia page” where the comments were made had ignored more than 450 event reports. During one chaotic evening of protests soon after, two protesters were shot and killed.

The July hearings have been described as Big Tech’s “Big Tobacco Moment,” and it is clear that some form of regulatory control is not far down the road. The form it will take — an emphasis on consumer protections, market restrictions, or civil and human rights — has yet to be seen.

“We may decide we’re not wise enough for certain kinds of tools, or certain kinds of companies,” says Friedman. This was borne out last April, when the European Union’s High-Level Expert Group on A.I. published ethical guidelines for the development of artificial intelligence and suggested that certain technologies, such as facial recognition, “must be clearly warranted.” Some 20 U.K. councils came to a similar conclusion, recently withdrawing algorithms used to make decisions about everything from welfare to immigration to child protection, acknowledging that the risk of harm was too high to justify their application. In September, the Portland City Council became the first in the U.S. to restrict the use of facial recognition not only by public agencies, but also by businesses that might seek to use the technology in public settings such as parks, malls, or grocery stores.

To date, harm mitigation, self-regulation, and even the withdrawal of a potentially harmful technology are voluntary — Microsoft’s recent commitments to “integrated security” and its refusal to sell facial recognition technology to U.S. police departments until Congress acts to establish limits, for instance — and, critics say, they don’t go far enough.

“Look at Big Tobacco,” says Noble. “Look at fossil fuels. Where is the evidence of that working except in the interest of these companies?”

Big Tech’s lobbying spending already equals or exceeds that of big banks, pharmaceutical manufacturers, and the oil industry. The largest tech companies all have well-staffed offices in Washington, and yet they are not subject to the same formal federal risk regimes as these other industry sectors. Google and Amazon are among those financing an institute dedicated to “continuing education” for regulators, teaching “a hands-off approach to antitrust.”


Will Silicon Valley be more risk-aware in the future? Only those in power can say. While calls for a more activist public are evergreen, reliance on the demonstrably diminishing power of the people is naive. “The government should be passing laws to discipline profit-maximization behavior,” said Marianne Bertrand, an economics professor at the University of Chicago’s Booth School of Business. “But too many lawmakers have themselves become the employees of the shareholders — their electoral success tied to campaign contributions and other forms of deep-pocketed support.”

Friedman cautioned against oversimplification and polemic. Not all risks can be known. And even the most robust risk assessments and content moderation protocols won’t prevent every instance of harm. “Remember, tool builders aren’t all-powerful, and better tools in and of themselves won’t change the reality of genocide, rape, suicide, and on and on and on,” she said. The goal, she argues, should be improvement, not perfection. “Design is about envisioning an alternative that’s better and moving toward that alternative. That often means breaking and restructuring current conditions.”

“I think we can say we haven’t worked hard enough to develop professional best practice in Big Tech,” Friedman says. Ignoring human values “is not a responsible option.”

There is a growing sense of urgency about these concerns in the lead-up to the U.S. presidential election. Twitter and Facebook have both implemented measures to address political disinformation on their platforms — flagging disinformation or blocking political ads immediately before the election, for example — but these solutions may not go far enough, and the stakes could not be higher.

“Social media companies need to step up and protect our civil rights, our human rights, and our human lives, not to sit on the sideline as the nation drowns in a sea of disinformation,” said Rep. Mike Doyle, D-PA, during a House Energy and Commerce subcommittee hearing on disinformation in June. “Make no mistake, the future of our democracy is at stake, and the status quo is unacceptable.”

Meanwhile, on the other side of the world, Facebook will also have to face its past failures. Myanmar is scheduled to hold a general election this year, on November 8, only its fourth in six decades. It stands to be another major test for Facebook, and the company is working to make sure bad actors don’t use its platform to spread disinformation or otherwise meddle in the democratic process.

One of the Facebook employees we spoke with says the company has been monitoring what is happening on the ground in Myanmar and holding meetings with different groups across the country in order to better understand risks in context. Facebook told us that it is “standing up a multi-disciplinary team,” including engineers and product managers, focused on understanding the impact of the platform in countries in conflict, to develop and scale language capacity and the company’s ability to review potentially harmful content.

That team, however, may not include representatives from the advertising department. Teams responsible for reviewing ads are separate from those who review user-generated content, said a Facebook spokesperson. “People in Facebook’s ad sales department are working to increase ad content and business ads in Myanmar. This is about trying to get more people on the platform,” a knowledgeable Facebook executive told us this spring, later adding, “There was even a time where people internally were proposing to turn off ads in Myanmar because of the upcoming election. Ultimately, that was not chosen, but it was discussed.”

They paused. “This seems really difficult and tone-deaf to [those of us] thinking about risk, because all of those things come down to our human reviewers, and we already have such little capacity. We have confirmed that we won’t be able to get a lot more capacity, we have a very high-stakes election coming up, and a history of real violence in this place. So what are we setting ourselves up for? Some kind of disaster, right?”

This article was reported in partnership with Type Investigations.

Catherine Buni and Soraya Chemaly are award-winning writers and frequent collaborators. Follow their work on Twitter: @ckbuni and @schemaly.
